Statistical Science
2006, Vol. 21, No. 3, 352–357
DOI: 10.1214/088342306000000466
Main article DOI: 10.1214/088342306000000493
© Institute of Mathematical Statistics, 2006

Comment

Trevor Hastie and Ji Zhu

We congratulate the authors for a well-written and thoughtful survey of some of the literature in this area. They are mainly concerned with the geometry and the computational learning aspects of the support vector machine (SVM). We will therefore complement their review by discussing it from the statistical function estimation perspective. In particular, we will elaborate on the following points:

• Kernel regularization is essentially a generalized ridge penalty in a certain feature space.
• In practice, the effective dimension of the data kernel matrix is not always equal to n, even when the implicit dimension of the feature space is infinite; hence, the training data are not always perfectly separable.
• Appropriate regularization plays an important role in the success of the SVM ...

... reproducing kernel Hilbert space H_K (RKHS) generated by K(·,·) (see Burges, 1998; Evgeniou, Pontil and Poggio, 2000; and Wahba, 1999, for details).

Suppose the positive definite kernel K(·,·) has a (possibly finite) eigenexpansion

    K(x, x') = \sum_{j=1}^\infty \delta_j \phi_j(x) \phi_j(x'),

where \delta_1 \ge \delta_2 \ge \cdots \ge 0 are the eigenvalues and the \phi_j(x)'s are the corresponding eigenfunctions. Elements of H_K have an expansion in terms of these eigenfunctions,

(2)    f(x) = \sum_{j=1}^\infty \beta_j \phi_j(x),

with the constraint that

    \|f\|_{H_K}^2 \equiv \sum_{j=1}^\infty \beta_j^2 / \delta_j < \infty.
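This constraint is what makes the first bullet point concrete. As a short sketch (the rescaled coordinates \theta_j and features h_j below are our notation for illustration, not taken from the text), rescale the expansion by the eigenvalues:

    \theta_j = \beta_j / \sqrt{\delta_j}, \qquad h_j(x) = \sqrt{\delta_j}\, \phi_j(x),

so that

    f(x) = \sum_{j=1}^\infty \theta_j h_j(x), \qquad \|f\|_{H_K}^2 = \sum_{j=1}^\infty \theta_j^2.

In the \theta_j coordinates the RKHS penalty is an ordinary ridge penalty on the (possibly infinite) feature vector (h_1(x), h_2(x), ...); components with small eigenvalues \delta_j are shrunk the hardest, which is the sense in which kernel regularization is a generalized ridge penalty in a certain feature space.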
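To illustrate the second bullet point numerically, the following is a minimal sketch (our own, not from the article; the RBF kernel, the width sigma = 1 and the 99%-of-trace cutoff are arbitrary illustrative choices). It eigen-decomposes a Gram matrix on a finite sample and counts how many eigenvalues actually carry weight:

    import numpy as np

    # Sketch: even though the RBF feature space is infinite-dimensional,
    # the n x n Gram matrix on a finite sample typically has a rapidly
    # decaying spectrum, so its effective dimension is far below n.
    rng = np.random.default_rng(0)
    n, p = 100, 2
    X = rng.standard_normal((n, p))

    # RBF (Gaussian) kernel: K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2)).
    sigma = 1.0  # illustrative choice
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2.0 * sigma**2))

    # Eigenvalues of the symmetric PSD Gram matrix, largest first.
    eigvals = np.linalg.eigvalsh(K)[::-1]

    # Effective dimension: eigenvalues needed to capture 99% of the trace.
    cum = np.cumsum(eigvals) / np.sum(eigvals)
    eff_dim = int(np.searchsorted(cum, 0.99)) + 1
    print(f"n = {n}, effective dimension (99% of trace): {eff_dim}")

With a moderate kernel width the count is typically a small fraction of n = 100, consistent with the claim that the data kernel matrix often has effective rank well below n, so the training data need not be perfectly separable.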