Outlier Detection By Clustering-Based Ensemble Model Construction

  • 박정희 (Dept. of Computer Engineering, Chungnam National University) ;
  • 김태공 (Dept. of Computer Engineering, Chungnam National University) ;
  • 김지일 (Dept. of Computer Engineering, Chungnam National University) ;
  • 최세목 (Dept. of Computer Engineering, Chungnam National University) ;
  • 이경훈 (Dept. of Computer Engineering, Chungnam National University)
  • Received : 2018.07.26
  • Accepted : 2018.09.17
  • Published : 2018.11.30

Abstract

Outlier detection aims to identify data samples that deviate significantly from the distribution of normal data. Most outlier detection methods compute an outlier score that indicates how far a sample departs from the normal state and declare the sample an outlier when its score exceeds a given threshold. However, since the range of outlier scores varies across data sets and outliers occur at a much smaller ratio than normal data, it is very difficult to choose an appropriate threshold. Moreover, in practice it is not easy to acquire data containing a sufficient number of outliers for learning. In this paper, we propose a clustering-based outlier detection method that constructs a model of the normal-data region using only normal data and performs binary classification of new samples into outliers and normal data. We then extend it to an ensemble method that partitions the given normal data into chunks, builds a clustering model for each chunk, and combines the decisions of the individual models, and we further adapt it to streaming data with dynamic changes. Experimental results on real and artificial data demonstrate the high performance of the proposed method.
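The chunk-based ensemble described above can be sketched roughly as follows. This is an illustrative sketch, not the paper's exact algorithm: the minimal k-means routine, the 99th-percentile distance threshold defining the normal region, and the majority vote are all assumptions made for the example.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: returns k cluster centroids of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        # assign each point to its nearest centroid, then recompute centroids
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def fit_member(chunk, k=3, q=99):
    """Build one ensemble member from a chunk of normal data."""
    centers = kmeans(chunk, k)
    dists = np.linalg.norm(chunk[:, None] - centers[None], axis=2).min(axis=1)
    threshold = np.percentile(dists, q)  # assumed boundary of the normal region
    return centers, threshold

def is_outlier(x, members):
    """Majority vote over the per-chunk clustering models."""
    votes = sum(np.linalg.norm(c - x, axis=1).min() > t for c, t in members)
    return votes > len(members) / 2

# toy run: 300 normal samples split into 3 chunks, one model per chunk
rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(300, 2))
members = [fit_member(chunk) for chunk in np.array_split(normal, 3)]
print(is_outlier(np.array([8.0, 8.0]), members))   # far from normal region
print(is_outlier(np.array([0.1, -0.2]), members))  # inside normal region
```

A sample is flagged only when a majority of the chunk models place it outside their normal region, which makes the decision less sensitive to any single poorly fitted clustering model.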




Fig. 1. An Illustration of a Clustering-Based Ensemble Method for Outlier Detection


Fig. 2. The Process of a Clustering-Based Ensemble Method for Outlier Detection


Fig. 3. An Illustration of Clustering-Based Ensemble Method for Outlier Detection on Streaming Data
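The streaming variant illustrated in Fig. 3 can be sketched as a sliding window of chunk models: each arriving chunk of data judged normal yields a new clustering model, and the oldest model is retired so the ensemble tracks dynamic changes. The class below is a hypothetical sketch; the window size, k, and percentile threshold are illustrative assumptions, not the paper's settings.

```python
from collections import deque
import numpy as np

class StreamingEnsemble:
    """Sliding window of per-chunk clustering models (illustrative sketch)."""

    def __init__(self, window=3, k=3, q=99):
        self.members = deque(maxlen=window)  # oldest model dropped automatically
        self.k, self.q = k, q

    def add_chunk(self, chunk):
        """Fit one clustering model on a newly arrived chunk."""
        rng = np.random.default_rng(0)
        centers = chunk[rng.choice(len(chunk), self.k, replace=False)].copy()
        for _ in range(10):  # a few k-means iterations per chunk
            d = np.linalg.norm(chunk[:, None] - centers[None], axis=2)
            labels = d.argmin(axis=1)
            for j in range(self.k):
                pts = chunk[labels == j]
                if len(pts):
                    centers[j] = pts.mean(axis=0)
        dists = np.linalg.norm(chunk[:, None] - centers[None], axis=2).min(axis=1)
        self.members.append((centers, np.percentile(dists, self.q)))

    def predict(self, x):
        """Majority vote of the models currently in the window."""
        votes = sum(np.linalg.norm(c - x, axis=1).min() > t
                    for c, t in self.members)
        return votes > len(self.members) / 2

# toy run: the data distribution drifts, and the window follows it
stream = StreamingEnsemble(window=3)
rng = np.random.default_rng(2)
for mean in (0.0, 1.0, 2.0, 3.0):  # gradual concept drift
    stream.add_chunk(rng.normal(mean, 1.0, size=(100, 2)))
print(len(stream.members))                      # only the 3 newest models remain
print(stream.predict(np.array([20.0, 20.0])))   # far from all recent chunks
```

Because `deque(maxlen=window)` discards the oldest model on each append, models fitted before a concept drift age out of the vote automatically.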


Fig. 4. The Performance Comparison with Respect to the k Value in k-means Clustering and the Number of Ensemble Members


Fig. 5. The Effects of Window Size in the Ensemble Outlier Detection Method Using RBFevents_drift and RBFevents_no_drift Data

Table 1. The Description of Data Sets


Table 2. Performance Comparison by Precision (P), Recall (R), and F1-measure of the Compared Methods


Table 3. Performance Comparison of the Ensemble Outlier Detection Method Using RBFevents_drift and RBFevents_no_drift Data

