Study on the ensemble methods with kernel ridge regression

  • Received : 2012.01.31
  • Accepted : 2012.02.24
  • Published : 2012.03.31

Abstract

The purpose of ensemble methods is to increase prediction accuracy by combining many classifiers. Recent studies have shown that random forests and forward stagewise regression achieve good accuracy in classification problems. However, because they use decision trees as the base learner, they produce large prediction errors near separation boundaries. In this study, we use kernel ridge regression instead of decision trees as the base learner in random forests and boosting. The usefulness of the proposed ensemble methods is demonstrated by simulation results on the prostate cancer and Boston housing data.
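To make the idea concrete, below is a minimal sketch (not the authors' code) of swapping the decision-tree base learner for kernel ridge regression in a simplified bagging loop and a forward-stagewise boosting loop. It assumes scikit-learn's KernelRidge; the synthetic data, kernel choice, and all parameter values are illustrative assumptions.

```python
# Sketch: kernel ridge regression (KRR) as the base learner in bagging and
# boosting. All settings below are illustrative, not the paper's setup.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

LR = 0.1  # shrinkage (learning rate) for the boosting sketch

def bagged_krr(X, y, n_estimators=25):
    """Bagging: fit each KRR learner on a bootstrap resample of the data."""
    models = []
    for _ in range(n_estimators):
        idx = rng.randint(0, len(X), size=len(X))  # bootstrap resample
        models.append(KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)
                      .fit(X[idx], y[idx]))
    return models

def boosted_krr(X, y, n_estimators=25):
    """Forward-stagewise boosting: each KRR learner fits current residuals."""
    models, resid = [], y.astype(float).copy()
    for _ in range(n_estimators):
        m = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5).fit(X, resid)
        resid -= LR * m.predict(X)  # shrink the update, refit the remainder
        models.append(m)
    return models

# Bagging averages its learners; boosting sums their shrunken contributions.
pred_bag = np.mean([m.predict(X_te) for m in bagged_krr(X_tr, y_tr)], axis=0)
pred_boost = LR * np.sum([m.predict(X_te) for m in boosted_krr(X_tr, y_tr)],
                         axis=0)
print("bagged KRR test MSE: ", np.mean((pred_bag - y_te) ** 2))
print("boosted KRR test MSE:", np.mean((pred_boost - y_te) ** 2))
```

Note that a smooth base learner such as KRR behaves quite differently from a tree: bagging gives less variance reduction (KRR is already stable), while the boosting loop benefits from the learner's smoothness near the boundary, which is the motivation stated in the abstract.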

References

  1. Breiman, L. (1996). Bagging predictors. Machine Learning, 24, 123-140.
  2. Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. https://doi.org/10.1023/A:1010933404324
  3. Cho, D. (2010). Mixed-effects LS-SVR for longitudinal data. Journal of the Korean Data & Information Science Society, 21, 363-369.
  4. Cho, D., Shim, J. and Seok, K. H. (2010). Doubly penalized kernel method for heteroscedastic autoregressive data. Journal of the Korean Data & Information Science Society, 21, 155-162.
  5. Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004). Least angle regression. Annals of Statistics, 32, 407-451. https://doi.org/10.1214/009053604000000067
  6. Freund, Y. and Schapire, R. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55, 119-139. https://doi.org/10.1006/jcss.1997.1504
  7. Hastie, T., Taylor, J., Tibshirani, R. and Walther, G. (2007). Forward stagewise regression and the monotone lasso. Electronic Journal of Statistics, 1, 1-29. https://doi.org/10.1214/07-EJS004
  8. Hwang, H. (2010). Variable selection for multiclassification by LS-SVM. Journal of the Korean Data & Information Science Society, 21, 959-965.
  9. Shim, J. (2011). Variable selection in the kernel Cox regression. Journal of the Korean Data & Information Science Society, 22, 795-801.
  10. Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society B, 58, 267-288.

Cited by

  1. The study of foreign exchange trading revenue model using decision tree and gradient boosting vol.24, pp.1, 2013, https://doi.org/10.7465/jkdi.2013.24.1.161
  2. Classification of large-scale data and data batch stream with forward stagewise algorithm vol.25, pp.6, 2014, https://doi.org/10.7465/jkdi.2014.25.6.1283
  3. ANN-Based Estimation of Low-Latitude Monthly Ocean Latent Heat Flux by Ensemble Satellite and Reanalysis Products vol.20, pp.17, 2020, https://doi.org/10.3390/s20174773