
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 18, NO. 3, MAY 2007, p. 931


Blind Image Deconvolution Through Support Vector Regression

Dalong Li, Russell M. Mersereau, and Steven Simske

Abstract—This letter introduces a new algorithm for the restoration of a noisy blurred image based on support vector regression (SVR). Experiments show that the performance of the SVR is very robust in blind image deconvolution, where the type of blur, the point spread function (PSF) support, and the noise level are all unknown.

Index Terms—Blind deconvolution, Lucy–Richardson (LR) algorithm, peak signal-to-noise ratio (PSNR), support vector regression (SVR).

Manuscript received April 28, 2006; revised September 7, 2006; accepted November 12, 2006. D. Li was with the Center for Signal and Image Processing, School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332 USA. He is now with the Digital Printing and Imaging Laboratory, Hewlett-Packard Laboratories, Fort Collins, CO 80528 USA (e-mail: dalong.li@hp.com). R. M. Mersereau is with the Center for Signal and Image Processing, School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332 USA. S. Simske is with the Digital Printing and Imaging Laboratory, Hewlett-Packard Laboratories, Fort Collins, CO 80528 USA. Color versions of one or more of the figures in this letter are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TNN.2007.891622

I. INTRODUCTION

The objective of image restoration is to restore the visual information of a degraded image. It has wide applications in photographic deblurring, remote sensing, medical imaging, etc. Blurring may be introduced by the imaging device, as in lens defocusing, or by the medium in which the light propagates, as with atmospheric turbulence. Relative motion between the scene and the camera can also cause blurring. In many applications, an observed discrete image g(x, y) is approximated as the sum of a 2-D convolution of the true image f(x, y) with a linear shift-invariant blur, also known as a point spread function (PSF), h(x, y), and additive noise \eta(x, y). Thus

    g(x, y) = f(x, y) * h(x, y) + \eta(x, y)
            = \sum_{(n, m)} f(n, m) h(x - n, y - m) + \eta(x, y), \qquad x, y, n, m \in \mathbb{Z}

where * denotes the 2-D linear convolution operator, \mathbb{Z} is the set of integers, and \eta is the additive noise. Image deconvolution, or restoration, aims to recover an estimate of the true image f(x, y) from the observed degraded image g(x, y). Classical restoration assumes that complete knowledge of the blur h(x, y) and the statistics of the noise \eta(x, y) is available prior to restoration. In practice, however, it is often impossible or prohibitive to determine these parameters a priori: characterizing atmospheric turbulence in astronomical imaging is difficult, and using stronger incident beams to increase image resolution in medical imaging poses a potential health hazard. In such cases, blind image deconvolution is applicable. Many blind restoration algorithms have been proposed [1]-[3]. Since blind deconvolution is an ill-posed problem, additional assumptions must be made to obtain a solution. For example, independent component analysis (ICA) approaches [4] and modified Bussgang algorithms [5] assume independence of the image pixels. Lucy [6] and Richardson [7] independently developed an algorithm based on maximum likelihood estimation (MLE); this method is effective when the PSF is known but little is known about the additive noise. Ayers and Dainty [8] proposed an iterative blind deconvolution (IBD) method and applied it to the restoration of turbulence-degraded images. The uniqueness and convergence properties of the IBD algorithm have not been validated, and noise degrades its performance. As a way to increase the robustness of IBD, the Lucy–Richardson (LR) algorithm can be used together with it. The double regularization (DR) algorithm [9] is built upon the regularized iterative image restoration algorithm [10].
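The degradation model above is easy to simulate. The sketch below (a minimal illustration assuming NumPy and SciPy are available; the synthetic image, PSF, and noise level are arbitrary choices, not values from the letter) blurs an image with a 3 × 3 averaging PSF and adds zero-mean Gaussian noise:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
f = rng.random((64, 64))               # synthetic "true" image in [0, 1]
h = np.full((3, 3), 1.0 / 9.0)         # 3x3 averaging PSF (sums to 1)
eta = rng.normal(0.0, 0.1, f.shape)    # additive zero-mean Gaussian noise

# g = f * h + eta : 2-D linear convolution plus noise
g = convolve2d(f, h, mode="same", boundary="symm") + eta
print(g.shape)   # same support as f
```

Blind deconvolution is the problem of recovering f from g when h and the noise statistics are unknown.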
In this letter, both the iterative LR maximum likelihood (ML) algorithm and the DR method are compared with the proposed support vector regression (SVR) method. Blurs can differ greatly in their coefficients and PSF support, as well as in noise level. However, images degraded by different blurs and random noise share common characteristics: for example, the blurs are all effectively low-pass filters. Although one image might look significantly different from another, a small block taken from one image often looks similar to a block in the other. This is why different images can be compressed using a common codebook in vector-quantization-based image compression [11]. The basic idea in our approach to blind image deconvolution is to extract the commonality behind the seemingly diverse degradations and images; a common model may then be established that can handle different blurs with different PSF supports. In this letter, we use SVR [12] to obtain an optimized mapping from a (2R + 1) × (2S + 1) neighborhood in a degraded image

1045-9227/$25.00 © 2007 IEEE


    \{ g(i - k, j - l) \mid (k, l) \in \{-R, \ldots, 0, \ldots, R\} \times \{-S, \ldots, 0, \ldots, S\} \}

to the central pixel f(x, y) in the undegraded image. This mapping is learned on an ensemble of degraded images g for which the true images f are available during the training phase; in the test phase, the learned mapping performs the blind deconvolution. Although other learning methods are possible, SVR is easy to use and often yields better results than competing methods. Machine learning approaches have been used on similar problems, such as superresolution by Freeman et al. [13], but significant differences exist between their method and ours. In their method, patches from the training images are stored, and a tree-based approximate nearest-neighbor search finds a high-resolution patch for each low-resolution patch. In our method, we formulate blind deconvolution as a regression problem: SVR finds the mapping function between a low-resolution patch and the central pixel of a high-resolution patch. Because the number of support vectors is much smaller than the number of samples in the training set, our method is more efficient.

The letter is organized as follows. In Section II, linear regression is first reviewed, followed by a discussion of SVR and the details of our algorithm. In Section III, the implementation and experiments are reported; performance is evaluated using the peak signal-to-noise ratio (PSNR) as well as the improvement in signal-to-noise ratio (ISNR). Conclusions are discussed in Section IV.

II. SVR-BASED IMAGE DEBLURRING

Given training data (X_1, y_1), \ldots, (X_l, y_l), where the X_i are input vectors (attributes) and y_i is the output value (target) associated with X_i, traditional linear regression finds a linear function W^T X + b such that (W, b) is the minimum mean-square-error solution of

    \min_{W, b} \sum_{i=1}^{l} \left( y_i - (W^T X_i + b) \right)^2 .    (1)

W^T X + b approximates the training data by minimizing the sum of squared errors. For nonlinearly distributed data, a linear function may not fit well. In such cases, a kernel function \phi(x) can be used to map the data into a higher dimensional space; common kernels include the linear, polynomial, and Gaussian kernels. In the high-dimensional space, overfitting might occur, so the formulation is modified. SVR solves the following optimization problem:

    \min_{W, b, \xi, \xi^*} \; \frac{1}{2} W^T W + C \sum_{i=1}^{l} (\xi_i + \xi_i^*)    (2)

subject to

    y_i - (W^T \phi(X_i) + b) \le \varepsilon + \xi_i
    (W^T \phi(X_i) + b) - y_i \le \varepsilon + \xi_i^*
    \xi_i, \xi_i^* \ge 0, \qquad i = 1, \ldots, l.    (3)

Here \xi_i is the upper training error and \xi_i^* the lower training error with respect to the \varepsilon-insensitive tube |y - (W^T \phi(X) + b)| \le \varepsilon, where \varepsilon is a threshold. In the objective function, (1/2) W^T W is a regularization term that smooths the function W^T \phi(X_i) + b to avoid overfitting, and C > 0 is the penalty parameter on the error term. The default value of C is 1; it can also be optimized by cross validation [14].

Applying SVR to blind image deconvolution is straightforward. The (2n + 1) × (2m + 1) neighborhood of a blurred-image pixel g(x, y) is converted, either row by row or column by column, into a vector. This vector becomes the attribute X for the SVR, and the target y_i is the corresponding pixel of f(x, y). By shifting this sampling window over all positions in the ensemble of blurred images, we obtain a training set for the SVR. To restore a degraded image, the same (2n + 1) × (2m + 1) window is used to create the attribute, the pixel value is predicted by the model obtained in the training step, and the image is deblurred on a pixel-by-pixel basis.

III. BLIND DECONVOLUTION EXPERIMENTS

The experiments are designed to show that the SVR can learn a generally applicable model that can be used to restore new images. The test images are degraded differently from the training images with respect to PSF form, PSF support size, and noise level. The images are LENA, CAMERAMAN, and PEPPER, which are often used for restoration comparisons in the literature. These images serve for both training and testing: for example, the model trained on the LENA image is used to deblur the CAMERAMAN image, and the model trained on the CAMERAMAN image is used to deblur the LENA image. In all of the experiments, the images are scaled into the range [0, 1] and the size of the deblurring PSF support is 7 × 7, so the attribute is a 49 × 1 vector. The LENA and CAMERAMAN images are blurred by a 3 × 3 averaging PSF, and Gaussian random noise (variance 0.01) is added to the blurred images. The noise level in a degraded image is measured by its blurred signal-to-noise ratio (BSNR)

    \mathrm{BSNR} = 10 \log_{10} \left( \sigma_b^2 / \sigma_n^2 \right)

where \sigma_b^2 and \sigma_n^2 are the variances of the blurred image and the noise, respectively. In terms of BSNR, the noise level is 2.94 dB in the LENA image and 7.38 dB in the CAMERAMAN image. These images are shown in Fig. 1.

Fig. 1. (a) Original CAMERAMAN image. (b) Degraded CAMERAMAN image, 3 × 3 averaging filter, BSNR = 7.38 dB. (c) Original LENA image. (d) Degraded LENA image, 3 × 3 averaging filter, BSNR = 2.94 dB. Two SVR deblurring models are learned from these pairs of images.

The pairs of CAMERAMAN images are used to train the SVR and obtain the CAMERAMAN deblurring model; similarly, the pairs of LENA images yield the LENA deblurring model. The LibSVM software package [14] is used with its default radial basis kernel function to perform the SVR. It is also worth mentioning that the statistics of the CAMERAMAN image (bimodal) are quite different from those of the LENA image (unimodal), and the PEPPER image differs from both.
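The window-to-vector training-set construction described in Section II can be sketched as follows. This is a minimal NumPy illustration (the 32 × 32 image, the synthetic blur, and the window size are assumptions for the example, not the letter's data):

```python
import numpy as np

def extract_patches(g, f, R, S):
    """Build the SVR training set: each (2R+1) x (2S+1) window of the
    degraded image g is flattened into an attribute vector; the target
    is the corresponding central pixel of the true image f."""
    H, W = g.shape
    X, y = [], []
    for i in range(R, H - R):
        for j in range(S, W - S):
            X.append(g[i - R:i + R + 1, j - S:j + S + 1].ravel())
            y.append(f[i, j])
    return np.array(X), np.array(y)

# Toy example: degrade a random "true" image with a 3x3 averaging blur.
rng = np.random.default_rng(0)
f = rng.random((32, 32))
h = np.full((3, 3), 1.0 / 9.0)
gpad = np.pad(f, 1, mode="edge")
g = sum(gpad[di:di + 32, dj:dj + 32] * h[di, dj]
        for di in range(3) for dj in range(3))

X, y = extract_patches(g, f, R=3, S=3)  # 7x7 window -> 49-dim attribute
print(X.shape, y.shape)                 # (676, 49) (676,)
```

An epsilon-SVR with an RBF kernel (e.g., LibSVM's default, or scikit-learn's `sklearn.svm.SVR` as a stand-in) could then be fit on (X, y) and applied window by window to a new degraded image, as the letter describes.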


TABLE I ISNR (IN DECIBELS) COMPARISON OF THE CAMERAMAN SVR, LR, AND DR FOR THE DIFFERENT BLURS ON THE LENA IMAGE

For comparison purposes, the same degraded images are also restored using the LR ML algorithm and the DR method. These algorithms require an initial guess of the blur PSF: both its size and its coefficient values must be provided. The size has been observed to have a more significant impact on the ultimate success of the restoration than the coefficient values. Since the PSF support was 3 × 3 when the SVR was trained, the initial PSF was also specified as a 3 × 3 averaging filter when the LR and DR algorithms were used. The default number of iterations is 10. A commonly used measure of restoration quality is the PSNR of the image

    \mathrm{PSNR} = 10 \log_{10} \frac{M N \cdot 255^2}{\sum_{i=1}^{M} \sum_{j=1}^{N} \left( f(i, j) - \hat{f}(i, j) \right)^2}    (4)

where f is the original image and \hat{f} is the restored image. The ISNR of the image is another common measure in image restoration

    \mathrm{ISNR} = 10 \log_{10} \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} \left( f(i, j) - g(i, j) \right)^2}{\sum_{i=1}^{M} \sum_{j=1}^{N} \left( f(i, j) - \hat{f}(i, j) \right)^2}    (5)

where f and \hat{f} are the same as in (4) and g is the degraded image. In the experiments, PSNR and ISNR are used to measure restoration performance.

Fig. 2. (a) Degraded LENA image, Gaussian blur, BSNR = 6.14 dB. (b) Image restored by the CAMERAMAN SVR model, ISNR = 4.46 dB. (c) Image restored by the LR ML algorithm, ISNR = −3.35 dB. (d) Image restored by the DR method, ISNR = −3.47 dB.
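The three quality measures, (4), (5), and the BSNR defined in Section III, translate directly into code. A small sketch with NumPy (the toy 2 × 2 arrays are illustrative; real use would pass full images):

```python
import numpy as np

def psnr(f, fhat, peak=255.0):
    """Eq. (4): peak signal-to-noise ratio of restoration fhat vs. original f."""
    mse = np.mean((f - fhat) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def isnr(f, g, fhat):
    """Eq. (5): improvement in SNR of fhat over the degraded image g;
    positive means the restoration is closer to f than g is."""
    return 10.0 * np.log10(np.sum((f - g) ** 2) / np.sum((f - fhat) ** 2))

def bsnr(blurred, noise):
    """Blurred SNR: 10 log10 of the variance ratio of blurred image to noise."""
    return 10.0 * np.log10(np.var(blurred) / np.var(noise))

f = np.array([[100.0, 110.0], [120.0, 130.0]])
g = f + 20.0          # degraded: uniform error of 20
fhat = f + 10.0       # restored: error halved
print(round(isnr(f, g, fhat), 2))   # halving the error gives +3.01*2 = 6.02 dB
```

Note that halving every pixel error quarters both sums of squares' ratio's denominator, so the ISNR is 10 log10(4) ≈ 6.02 dB.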

TABLE II ISNR (IN DECIBELS) COMPARISON OF THE CAMERAMAN SVR, LR, AND DR FOR THE DIFFERENT BLURS ON THE PEPPER IMAGE
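The untrained test blurs used in the following experiments (rectangular averaging, Gaussian, motion, and circular averaging) can be constructed as in this sketch. These mirror MATLAB-style `fspecial` filters; the Gaussian and disk constructions here are simple approximations written for illustration:

```python
import numpy as np

def gaussian_psf(size=5, sigma=1.0):
    # Separable 2-D Gaussian sampled on a size x size grid, normalized.
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return h / h.sum()

def disk_psf(radius=3):
    # Circular (pillbox) averaging filter: uniform inside the radius.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    h = (xx ** 2 + yy ** 2 <= radius ** 2).astype(float)
    return h / h.sum()

psfs = {
    "avg_4x3":    np.full((4, 3), 1.0 / 12.0),   # rectangular averaging
    "gauss_5x5":  gaussian_psf(5, 1.0),          # Gaussian, sigma = 1
    "motion_5x1": np.full((5, 1), 1.0 / 5.0),    # length-5 linear motion blur
    "disk_r3":    disk_psf(3),                   # circular averaging, radius 3
}
for name, h in psfs.items():
    print(name, h.shape)   # every PSF is normalized to sum to 1
```

Each PSF sums to one so that blurring preserves the mean image intensity.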

A. Blind Deconvolution Test Using the CAMERAMAN Model

1) Robustness to the Untrained Blurs: A number of different types of blurs were applied to the LENA image to make the test data set: a 4 × 3 rectangular averaging blur, a 5 × 5 Gaussian blur (σ = 1), a 5 × 1 motion blur, and a circular averaging filter (radius 3). Gaussian noise (variance 0.005) was then added to the blurred images. Notice that none of these filters is part of the training set; although an averaging filter was also used in training, the PSF support differs (4 × 3 in testing versus 3 × 3 in training). The trained CAMERAMAN SVR model is used to estimate f from the observed degraded image g on a pixel-by-pixel basis. The results are summarized in Table I. In all the experiments, SVR always improves the image quality in terms of ISNR, while the ISNRs for the LR and DR algorithms are always negative. Recall that none of the blurs used in testing was used in training. Fig. 2 shows the restored images in the Gaussian-blur test. The image restored by the LR algorithm is clearly not smoothed properly, while the SVR-restored image looks sharp and most of the noise is removed. The trained CAMERAMAN SVR model is also used to deblur the PEPPER image; the results are summarized in Table II.

2) Robustness to the Varying Noise Level: To test robustness to the noise level, the following experiments were conducted. First, the LENA image was blurred by convolving it with a 5 × 5 Gaussian filter (σ = 1). Then, different levels of zero-mean Gaussian random noise were added to the blurred image, with the noise variance changing from 0.001 to 0.02 in steps of 0.001. Since the noise variance was 0.01 when the models were trained, half of the images have a lower noise level than in training and half have a higher one. The

Fig. 3. Noise robustness test on the LENA image using the CAMERAMAN SVR model.

BSNR is computed at each step to reflect the noise degradation. The CAMERAMAN SVR model is used for the deconvolution of the degraded LENA image. Fig. 3 shows the comparative results: the horizontal axis is the BSNR of the degraded image, and the vertical axis is the PSNR. The PSNR of the SVR-restored image is always higher than


TABLE III ISNR (IN DECIBELS) COMPARISON OF THE LENA SVR, LR, AND DR FOR THE DIFFERENT BLURS ON THE CAMERAMAN IMAGE

TABLE IV ISNR (IN DECIBELS) OF THE LENA SVR FOR THE DIFFERENT BLURS ON THE PEPPER IMAGE

Fig. 5. One of the examples in the noise robustness test (BSNR is 14.58 dB). (a) CAMERAMAN image restored by the LR ML algorithm. (b) Image restored by SVR.

Fig. 4. Noise robustness test on the CAMERAMAN image using the LENA SVR model.

that of the degraded image, while the PSNRs of the LR- and DR-restored images are lower than that of the degraded image.

B. Blind Deconvolution Test Using the LENA SVR Model

The same tests were performed on the CAMERAMAN and PEPPER images using the LENA SVR model.

1) Robustness to the Untrained Blurs: Table III summarizes the comparative results of deblurring the CAMERAMAN image with the LENA SVR model. In terms of ISNR, the improvements on the LENA image using the CAMERAMAN model (Table I) are better than those on the CAMERAMAN image using the LENA model (Table III). This might be because the CAMERAMAN image is a better training image, since it has more varied content (i.e., the cameraman in the foreground and buildings/sky in the background). The PEPPER images restored by the LENA SVR model are similar to those restored by the CAMERAMAN SVR model, as shown in Table IV.

2) Robustness to the Varying Noise Level: The CAMERAMAN image is blurred with a 5 × 5 Gaussian PSF, and Gaussian noise (variance varying from 0.001 to 0.02 in steps of 0.001) is then added. The noise robustness results on the degraded CAMERAMAN images using the LENA SVR model are plotted in Fig. 4. The PSNR of the SVR-restored image begins to fall below the PSNR of the degraded image
Fig. 6. (a) Example of a real blurred image. (b) CAMERAMAN SVR model deblurred. (c) LR algorithm restored. (d) DR result.

after the BSNR becomes higher than 12 dB. To help understand why, Fig. 5 shows the deblurred image when the BSNR is 14.58 dB. The SVR-restored image is overly smoothed (hence the low PSNR), while the LR-restored image is slightly overdeblurred; one of the disadvantages of the LR and DR algorithms is their sensitivity to the iteration number. The LENA SVR model was trained at a much higher noise level (BSNR = 2.94 dB). Not surprisingly, the corresponding SVR deblurring model attempts to perform heavy smoothing (noise reduction).

C. Real Image Example

Fig. 6 shows a comparative result on a real motion-blurred image. The blur was caused by nonlinear motion of the object. Noise is amplified in the LR-deblurred image and the DR result. The CAMERAMAN


SVR result is smoother while the edges are preserved. As in the high-BSNR case, the SVR deblurring has a smoothing effect, since it was trained at a high noise level while the noise level in this real blurred image is low.

An Empirical Evaluation of the Fuzzy Kernel Perceptron


Gavin C. Cawley

IV. CONCLUSION

An SVR-based algorithm for the restoration of a noisy blurred image has been introduced. SVR finds an optimal mapping from the training set, which is then used to perform deconvolution on degraded images unseen in training, even when the blurs, PSF support, and noise level all differ from those in the training set. Information about the blur and/or noise can improve SVR performance, since the training can then be more specific and accurate. The advantages of the proposed algorithm include the following. SVR generalizes to new images degraded by different types of blurs with different PSF supports and varying noise levels. SVR is robust to parameter selection: the only parameter is the size of the deblurring filter support, (2n + 1) × (2m + 1), which does not need to match the true blur PSF support. Upgrading the model simply involves including new examples and training a new SVR model.

Abstract—J.-H. Chen and C.-S. Chen have recently proposed a nonlinear variant of Keller and Hunt's fuzzy perceptron algorithm, based on the now-familiar kernel trick. In this letter, we demonstrate experimentally that J.-H. Chen and C.-S. Chen's assertion that the fuzzy kernel perceptron (FKP) outperforms the support vector machine (SVM) cannot be sustained. A more thorough model comparison exercise, based on a much wider range of benchmark data sets, shows that the FKP algorithm is not competitive with the SVM.

Index Terms—Fuzzy sets, kernel machines.

I. INTRODUCTION

J.-H. Chen and C.-S. Chen [1] intercompare the fuzzy kernel perceptron (FKP), the fuzzy perceptron, the conventional perceptron, and the support vector machine (SVM) on the publicly available spiral, sonar, and ionosphere benchmark data sets. The results presented indicate that the FKP outperforms the SVM on all three data sets. The model comparison exercise performed in [1] unfortunately contains a number of flaws, stemming mainly from the use of the test data in model selection, and, as a result, the conclusions drawn cannot be considered reliable. First, the model selection criterion used to select the optimal hyperparameter values for both the FKP and the SVM was taken to be the average of the error rates on the training and test data. This is a somewhat unusual choice: the training set error is known not to provide a reliable indication of performance on unseen data [2], unlike the test set error, which does provide a good estimate of generalization performance. However, the use of the test set in model selection biases the subsequent model comparison exercise, as the test set is no longer statistically pure. The use of this criterion also seems likely to result in overfitting, since the minimum of the test set error is unlikely to coincide with the minimum of the training set error: a change in the hyperparameters that reduces the test set error will only be accepted provided the corresponding increase in training set error is of smaller magnitude, and it is difficult to explain a mechanism by which this will generally be the case. The FKP algorithm incorporates stochastic elements in the random initialization of the weight vector and bias, as well as in the random permutation of the training patterns at each iteration. In order to account for this random variation in performance, J.-H. Chen and C.-S. Chen train a pool of FKP classifiers, each having different, randomly selected values of w_0.
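The selection-bias argument above can be made concrete: hyperparameters should be chosen by a criterion that never touches the test set, such as k-fold cross-validation on the training data alone. A minimal numerical sketch, with toy data and closed-form ridge regression as a stand-in model (all names and values are illustrative assumptions, not the letter's experiments):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(0.0, 1.0, 200)

def ridge_fit(X, y, lam):
    # Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cv_error(X, y, lam, k=5):
    # k-fold cross-validation error: the hyperparameter lam is scored
    # entirely on held-out folds of the TRAINING data, so the final
    # test set stays statistically pure.
    idx = np.arange(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((y[fold] - X[fold] @ w) ** 2))
    return float(np.mean(errs))

lams = [0.01, 0.1, 1.0, 10.0, 100.0]
best = min(lams, key=lambda lam: cv_error(X, y, lam))
print("selected lambda:", best)
```

Selecting the pool member or hyperparameter that minimizes a criterion involving the test error, by contrast, optimizes the very quantity later reported, which is the bias the letter criticizes.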
Using multiple networks with random initialization is a common practice in training multilayer perceptron networks in order to locate a good local minimum of the training criterion. In that case, it would be acceptable to pick the network from the pool achieving the lowest value of the training criterion. However, if the network is selected from the pool on the basis of the model selection criterion, this is highly likely to result in overfitting of the model selection criterion, as the meaningless variation due to the random initialization may outweigh deterministic improvements
Manuscript received February 3, 2003; revised January 18, 2005 and October 15, 2006; accepted November 19, 2006. This work was supported by the Royal Society under Grant RSRG-22270. The author is with the School of Computing Sciences, University of East Anglia, Norwich, Norfolk NR4 7TJ, U.K. (e-mail: gcc@cmp.uea.ac.uk). Color versions of one or more of the figures in this letter are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TNN.2007.891624

REFERENCES
[1] D. Kundur and D. Hatzinakos, "Blind image deconvolution revisited," IEEE Signal Process. Mag., vol. 13, no. 6, pp. 61-63, Nov. 1996.
[2] L. Chen and K.-H. Yap, "A soft double regularization approach to parametric blind image deconvolution," IEEE Trans. Image Process., vol. 14, no. 5, pp. 624-633, May 2005.
[3] M. M. Bronstein, A. M. Bronstein, M. Zibulevsky, and Y. Y. Zeevi, "Blind deconvolution of images using optimal sparse representations," IEEE Trans. Image Process., vol. 14, no. 6, pp. 726-736, Jun. 2005.
[4] A. J. Bell and T. J. Sejnowski, "An information-maximization approach to blind separation and blind deconvolution," Neural Comput., vol. 7, no. 6, pp. 1129-1159, 1995.
[5] S. Fiori, "A fast fixed-point neural blind deconvolution algorithm," IEEE Trans. Neural Netw., vol. 15, no. 2, pp. 455-459, Mar. 2004.
[6] L. Lucy, "An iterative technique for the rectification of observed distributions," Astron. J., vol. 79, no. 6, pp. 745-754, 1974.
[7] W. H. Richardson, "Bayesian-based iterative method of image restoration," J. Opt. Soc. Amer., vol. 62, pp. 55-59, 1972.
[8] G. R. Ayers and J. C. Dainty, "Iterative blind deconvolution method and its applications," Opt. Lett., vol. 13, pp. 547-549, Jul. 1988.
[9] Y.-L. You and M. Kaveh, "A regularization approach to joint blur identification and image restoration," IEEE Trans. Image Process., vol. 5, no. 3, pp. 416-428, Mar. 1996.
[10] R. L. Lagendijk, J. Biemond, and D. E. Boekee, "Regularized iterative image restoration with ringing reduction," IEEE Trans. Acoust., Speech, Signal Process., vol. 36, no. 12, pp. 1874-1888, Dec. 1988.
[11] D. Salomon, Data Compression. New York: Springer-Verlag, 2004.
[12] C. Cortes and V. Vapnik, "Support-vector networks," Mach. Learn., vol. 20, pp. 273-297, 1995.
[13] W. T. Freeman, T. R. Jones, and E. C. Pasztor, "Example-based super-resolution," IEEE Comput. Graph. Appl., vol. 22, no. 2, pp. 56-65, Mar./Apr. 2002.
[14] C.-C. Chang and C.-J. Lin, LIBSVM: A library for support vector machines, 2001 [Online]. Available: http://www.csie.ntu.edu.tw/~cjlin/libsvm

