Detecting Foreground Objects Under Sudden Illumination Change Using Double Background Models

  • Saeed, Mahmoudpour (Department of Computer & Communications Engineering, Kangwon National University) ;
  • Kim, Manbae (Department of Computer & Communications Engineering, Kangwon National University)
  • Received : 2015.12.15
  • Accepted : 2016.02.03
  • Published : 2016.03.30

Abstract

In video sequences, foreground object detection, composed of background modeling and background subtraction, is an important part of diverse computer vision applications. However, object detection may fail under sudden illumination changes. In this letter, an illumination-robust background modeling method is proposed to address this problem. The method adapts quickly to the current illumination condition by using two background models with different adaptation rates. The proposed method is a non-parametric approach, and experimental results show that it outperforms several state-of-the-art non-parametric approaches at low computational cost.

Ⅰ. Introduction

In general, foreground object detection in video frames consists of background modeling followed by background subtraction. A major challenge to accurate foreground detection is a sudden illumination change in the scene. In such situations, the background is no longer stable and can be mistakenly classified as foreground, yielding false positives. Conventional background modeling methods belong to either the parametric or the non-parametric approach. In both, pixels in the current frame that differ significantly from the background image are selected as foreground pixels [1-4].

The performance of foreground detection depends heavily on a reliable background model. State-of-the-art methods [5-11] are mostly devised for gradual illumination changes and fail to handle sudden changes such as a light being switched on or off. An illumination change condition (ICC) can degrade foreground object detection by producing many false alarms and may lead to system malfunction.

In this letter, a non-parametric background modeling method employing double backgrounds and illumination compensation is proposed. Its two main functionalities are the use of double backgrounds and their fast compensation for a new illumination condition. Fig. 1 shows the overall flow of the proposed method.

Fig. 1. The overall block diagram of the proposed method.

 

Ⅱ. Proposed Method

The illumination-robust foreground detection uses two background models with slow and fast adaptation speeds. Let BLt and BSt denote the long-term background model (LTBM) and the short-term background model (STBM), respectively. Comparing the i-th pixel of the t-th frame, It(i), with the LTBM, a foreground binary mask FGLt is obtained as

FGLt(i) = 1 if |It(i) - BLt(i)| > TL, and FGLt(i) = 0 otherwise,

where TL is the long-term background threshold. A similar thresholding is performed for the STBM with a threshold TS, resulting in a binary mask FGSt. Since FGSt is used to extract all pixels with significant temporal activity, a smaller threshold is chosen (TS = 0.4TL). TL = 50 and TS = 20 are used in the experiments.
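As a minimal sketch of the two thresholding steps (grayscale frames as NumPy arrays; the function and variable names are illustrative, not from the letter):

```python
import numpy as np

T_L = 50          # long-term background threshold (value from the letter)
T_S = 0.4 * T_L   # short-term threshold, TS = 0.4*TL = 20

def foreground_mask(frame, background, threshold):
    """Binary mask: 1 where the pixel deviates from the background model."""
    return (np.abs(frame.astype(np.float64) - background) > threshold).astype(np.uint8)

# Illustrative usage with a random frame and two background models
frame = np.random.randint(0, 256, (120, 160)).astype(np.float64)
ltbm = np.full((120, 160), 128.0)  # long-term background model
stbm = frame.copy()                # short-term model tracks the frame closely

fg_long = foreground_mask(frame, ltbm, T_L)
fg_short = foreground_mask(frame, stbm, T_S)
```

Because the STBM here equals the current frame, its mask is empty; the LTBM mask flags pixels far from the mid-gray model.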

The current LTBM is updated by integrating the current frame It(i) into the previous model:

BLt+1(i) = (1 - αL) BLt(i) + αL It(i),

where αL is an adaptation parameter. Similarly, the STBM is updated using an adaptation parameter αS.
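The running-average update can be sketched as follows; the letter does not give the values of αL and αS, so the rates below are illustrative assumptions (αS larger, since the STBM adapts fast):

```python
import numpy as np

ALPHA_L = 0.01  # slow adaptation rate for the LTBM (illustrative value)
ALPHA_S = 0.25  # fast adaptation rate for the STBM (illustrative value)

def update_background(background, frame, alpha):
    """Running-average update: B_{t+1}(i) = (1 - alpha) * B_t(i) + alpha * I_t(i)."""
    return (1.0 - alpha) * background + alpha * frame

# With repeated identical frames, the fast model converges much sooner
ltbm = np.zeros((2, 2))
stbm = np.zeros((2, 2))
frame = np.full((2, 2), 100.0)
for _ in range(20):
    ltbm = update_background(ltbm, frame, ALPHA_L)
    stbm = update_background(stbm, frame, ALPHA_S)
```

After 20 frames the STBM is already close to the new scene while the LTBM still lags, which is exactly the asymmetry the double-background scheme exploits.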

Double backgrounds are utilized to handle an ICC. The proposed method evaluates the responses of the STBM and LTBM masks against thresholds: the proposed updating strategy is used only in an ICC, which is declared when the ratio of the number of foreground pixels to background pixels of the binary mask is higher than a threshold TR. The updating process consists of computing the average illumination change, followed by illumination compensation of the background models.

The selective updating methodology is similarly performed for the STBM. From the updated background, a final foreground mask FGt(i) is obtained.
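A sketch of the ICC-triggered updating described above. The ratio test follows the letter; the value of TR and the additive form of the compensation gain are assumptions made for illustration:

```python
import numpy as np

T_R = 1.0  # illustrative ICC threshold on the foreground/background pixel ratio

def illumination_change_detected(fg_mask, t_r=T_R):
    """Declare an ICC when foreground pixels outnumber background pixels by t_r."""
    n_fg = int(fg_mask.sum())
    n_bg = fg_mask.size - n_fg
    return n_bg > 0 and (n_fg / n_bg) > t_r

def compensate_background(background, frame, fg_mask):
    """Shift the model by the average illumination change measured on pixels
    currently labeled as background (a simple global additive gain)."""
    bg_pixels = fg_mask == 0
    if not bg_pixels.any():
        return background
    delta = float(np.mean(frame[bg_pixels] - background[bg_pixels]))
    return background + delta
```

When a light switches on, most pixels trip the LTBM threshold, the ratio test fires, and the models are shifted toward the new illumination level instead of absorbing it slowly through the running average.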

 

Ⅲ. Experimental Results

The performance of the proposed system is compared with five foreground detection methods: double backgrounds (DBG) [9], Eigen background [4], MoG [5], KDE [2], and ViBe [6]. In Seq1, there are no moving objects during the illumination change and humans enter the room after the sudden change, while in the two other sequences, moving humans are present during subsequent illumination changes. First, we examined the performance of the algorithms during a sudden illumination change in terms of FP (false positive) and TP (true positive) rates. The accuracy of the foreground binary mask is evaluated using Recall = TP/(TP+FN) and Precision = TP/(TP+FP). The F-score, the harmonic mean of precision and recall, compares the binary masks with the ground truth (GT).
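These metrics are standard; a small helper (pure Python, names illustrative) makes the definitions concrete:

```python
def precision_recall_fscore(tp, fp, fn):
    """Recall = TP/(TP+FN), Precision = TP/(TP+FP),
    F-score = 2*P*R/(P+R), the harmonic mean of precision and recall."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_score = 2.0 * precision * recall / (precision + recall)
    return precision, recall, f_score

# Example: 8 true positives, 2 false positives, 2 false negatives
p, r, f = precision_recall_fscore(8, 2, 2)
```

With tp = 8, fp = 2, fn = 2, all three metrics come out to 0.8.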

Table 1 compares the overall performance of the proposed algorithm with the other methods on the three sequences. As shown in the table, the proposed approach significantly outperforms the five methods in all sequences; it detects the moving objects with high accuracy and shows acceptable performance throughout. Fig. 2 shows the foreground objects detected by the five comparative methods and the proposed method; the results confirm that our method outperforms the others. The processing speed of the proposed method is faster than that of the other approaches except DBG (218 fps, vs. 145 fps for Eigen background, 71 fps for MoG, 66 fps for ViBe, 42 fps for KDE, and 296 fps for DBG on Seq3).

Table 1. Performance comparison of different methods

Fig. 2. Foreground objects extracted by five comparative methods and the proposed method. GT = ground truth

Algorithms such as Eigen background and ViBe cannot adapt to a new illumination condition as fast as methods like MoG, due to their updating methodology. Our method adapts the background models to the new illumination condition right after the illumination change occurs. The most important part is the illumination compensation of the background models with an appropriate gain value, EAIC. First, the amount of illumination change is computed with high accuracy by choosing effective pixels; then a pixel-selective background updating is performed, in which a correct gain value is assigned to each pixel. This fast background compensation, assigning a correct compensation gain to each pixel of the background model, is essential.

 

Ⅳ. Conclusion

A novel foreground detection method that addresses the sudden illumination change problem was proposed. The algorithm utilizes two background models with slow and fast adaptation rates for accurate illumination compensation. The proposed method delivers promising detection results under sudden illumination changes and outperforms several state-of-the-art methods.

References

  1. M. Oral and U. Deniz, ″Centre of mass model - A novel approach to background modelling for segmentation of moving objects″, Image and Vision Computing, 25, pp. 1365-1376, 2007. https://doi.org/10.1016/j.imavis.2006.10.001
  2. A. Elgammal, D. Harwood and L. Davis, ″Non-parametric model for background subtraction″, Proc. European Conference on Computer Vision, Dublin, Ireland, pp. 751-767, 2000.
  3. K. Kim, T. Chalidabhongse, D. Harwood and L. Davis, ″Real-time foreground background segmentation using codebook model″, Real-Time Imaging, 11(3), pp. 167-256, 2005. https://doi.org/10.1016/j.rti.2005.06.001
  4. N. Oliver, B. Rosario and A. Pentland, ″A Bayesian computer vision system for modeling human interactions″, IEEE Trans. Pattern Analysis and Machine Intelligence, 22(8), pp. 831-843, 2000. https://doi.org/10.1109/34.868684
  5. Z. Zivkovic and F. Heijden, ″Efficient adaptive density estimation per image pixel for the task of background subtraction″, Pattern Recognition Letters, 27, pp. 773–780, 2006. https://doi.org/10.1016/j.patrec.2005.11.005
  6. O. Barnich and M. Droogenbroeck, ″ViBe: a powerful random technique to estimate the background in video sequences″, Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing, pp. 945-948, 2009.
  7. L. Maddalena and A. Petrosino, ″A self-organizing approach to background subtraction for visual surveillance applications″, IEEE Trans. Image Processing, 17(7), pp. 1168-1177, 2008. https://doi.org/10.1109/TIP.2008.924285
  8. E. Jaraba, C. Urunuela, and J. Senar, ″Detected motion classification with a double-background and a neighborhood based difference″, Pattern Recognition Letters, 24, pp. 2079–2092, 2003. https://doi.org/10.1016/S0167-8655(03)00045-X
  9. S. Gruenwedel, N. Petrovic, L. Jovanov, J. Castaneda, A. Pizurica and W. Philips, ″Efficient foreground detection for real-time surveillance applications″, Electronics Letters, 49(18), 2013. https://doi.org/10.1049/el.2013.1944
  10. H. Sajid and S. S. Cheung, "Background subtraction under sudden illumination change," IEEE International Workshop on Multimedia Signal Processing (MMSP), 2014.
  11. P. Siva, M. J. Shafiee, F. Li, and A. Wong, "PRIM: fast background subtraction under sudden, local illumination changes via probabilistic illumination range modelling," IEEE International Conference on Image Processing (ICIP), 2015.