Research Title
An Application of Stability to Regularization
Author / Editor / Publisher
Kawther Fawzi Hamza Al-Hassan
Citation Information
Kawther Fawzi Hamza Al-Hassan, An Application of Stability to Regularization, College of Education for Pure Sciences, 5/12/2011 5:16:14 PM
Abstract
An Application of Stability to Regularization in Hilbert space. It has long been known that when trying to estimate an unknown function from data ...
Full Abstract
An Application of Stability to Regularization in Hilbert Space
Udie Subre Abdul Razaq, Basic Education College, University of Babylon
Kawther Fawzi Hamza, Education College, University of Babylon
Abstract
In this paper, some definitions and concepts of stability are given, and an application of stability to regularization in Hilbert space is carried out with modification. Some illustrative examples and a conclusion are also presented.
1. Introduction
It has long been known that when trying to estimate an unknown function from data, one needs to find a trade-off between bias and variance. Indeed, on one hand, it is natural to use the largest possible model in order to be able to approximate any function, while on the other hand, if the model is too large, then estimating the best function in the model becomes harder given a restricted amount of data. Several ideas have been proposed to counter this phenomenon. One of them is to perform estimation in several models of increasing size and then to choose the best estimator based on a complexity penalty (e.g., Structural Risk Minimization).
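For squared loss, this trade-off can be made precise by the classical bias-variance decomposition, a standard identity quoted here for illustration rather than taken from the paper. Writing \hat f for the estimator built from a random sample and f for the target function,

\[
\mathbb{E}\big[(\hat f(x)-f(x))^2\big]
= \big(\mathbb{E}[\hat f(x)]-f(x)\big)^2
+ \mathbb{E}\Big[\big(\hat f(x)-\mathbb{E}[\hat f(x)]\big)^2\Big],
\]

where the first term is the squared bias and the second the variance: enlarging the model drives the bias down but typically inflates the variance.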
One such technique is the bagging approach of Breiman (1996)[5], which consists of averaging several estimators built from random subsamples of the data. In the early nineties, concentration inequalities became popular in the probabilistic analysis of algorithms, due to the work of McDiarmid (1989), and started to be used as tools to derive generalization bounds for learning algorithms by Devroye (1991). Building on this technique, Lugosi and Pawlak (1994) obtained new bounds for the k-NN, kernel, and histogram rules.
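As a concrete illustration of the bagging idea, here is a minimal Python sketch, assuming ordinary least squares as a stand-in base learner; the function names and the toy data are hypothetical and do not come from the paper or from Breiman's implementation.

import numpy as np

def fit_base(X, y):
    # Toy base learner: ordinary least squares (a hypothetical stand-in
    # for any regression estimator one might bag).
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda X_new: X_new @ w

def bagging(X, y, n_estimators=25, seed=0):
    # Bagging (Breiman, 1996): fit each base model on a bootstrap
    # subsample of the data and average the resulting predictors.
    rng = np.random.default_rng(seed)
    m = len(y)
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, m, size=m)  # sample m indices with replacement
        models.append(fit_base(X[idx], y[idx]))
    # Averaging reduces the variance of the individual estimators.
    return lambda X_new: np.mean([f(X_new) for f in models], axis=0)

# Usage on noisy linear data:
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
predict = bagging(X, y)
print(predict(X[:5]))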
A key issue in the design of efficient machine learning systems is the estimation of the accuracy of learning algorithms. Among the several approaches that have been proposed for this problem, one of the most prominent is based on the theory of uniform convergence of empirical quantities to their means. This theory provides ways to estimate the risk (or generalization error) of a learning system based on an empirical measurement of its accuracy and a measure of its complexity, such as the Vapnik-Chervonenkis (VC) dimension or the fat-shattering dimension. We explore here a different approach, which is based on sensitivity analysis.
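For background, one standard form of such a uniform-convergence bound (classical VC theory, quoted here for context; the exact constants vary across statements) says that for a hypothesis class of VC dimension d, with probability at least 1 - δ over a sample of size m, every f in the class satisfies

\[
R(f) \;\le\; \hat R(f) + \sqrt{\frac{d\big(\ln(2m/d)+1\big) + \ln(4/\delta)}{m}},
\]

so the complexity measure d directly controls the gap between empirical risk \hat R(f) and true risk R(f).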
Sensitivity analysis aims at determining how much the variation of the input can influence the output of a system. It has been applied to many areas, such as statistics and mathematical programming; in the latter domain, it is often referred to as perturbation analysis.
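Applied to learning algorithms, this sensitivity view is usually formalized as uniform stability in the sense of Bousquet and Elisseeff; the notation below is one common formulation, stated as background rather than as the paper's own definition. An algorithm A has uniform stability β if, for every training set S of size m, every index i, and every test point z,

\[
\big|\,\ell(A_S, z) - \ell(A_{S^{\setminus i}}, z)\,\big| \;\le\; \beta,
\]

where S^{\setminus i} denotes S with the i-th example removed. A small β means that no single training point can change the losses of the learned function by much.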
Uniform stability may appear to be a strict condition. Actually, we will observe that many existing learning methods exhibit a uniform stability which is controlled by the regularization parameter and can thus be very small. Many algorithms, such as Support Vector Machines (SVM) or the classical regularization networks introduced by Poggio and Girosi (1990)[9], perform the minimization of a regularized objective function where the regularizer is a norm in a reproducing kernel Hilbert space (RKHS):
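In the usual notation, this regularized objective reads

\[
\min_{f \in \mathcal{H}} \; \frac{1}{m}\sum_{i=1}^{m} c\big(f(x_i), y_i\big) \;+\; \lambda \,\|f\|_K^2,
\]

where c is the loss function, \|\cdot\|_K the norm of the RKHS \mathcal{H}, and λ > 0 the regularization parameter (this is the standard form of such objectives; the paper's exact notation may differ). For such minimizers, the uniform stability β typically scales as O(1/(λ m)), which is why it can indeed be very small.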