
No-reference Image Quality Assessment With A Gradient-induced Dictionary

  • Li, Leida (School of Information and Electrical Engineering, China University of Mining and Technology) ;
  • Wu, Dong (School of Information and Electrical Engineering, China University of Mining and Technology) ;
  • Wu, Jinjian (School of Electronic Engineering, Xidian University) ;
  • Qian, Jiansheng (School of Information and Electrical Engineering, China University of Mining and Technology) ;
  • Chen, Beijing (School of Computer and Software, Nanjing University of Information Science and Technology)
  • Received : 2015.08.10
  • Accepted : 2015.11.06
  • Published : 2016.01.31

Abstract

Image distortions are typically characterized by degradations of structures. Dictionaries learned from natural images can capture the underlying structures in images, which are important for image quality assessment (IQA). This paper presents a general-purpose no-reference image quality metric using a GRadient-Induced Dictionary (GRID). A dictionary is first constructed from the gradients of natural images using K-means clustering. Image features are then extracted with this dictionary using Euclidean-norm coding followed by max-pooling. A distortion classification model and several distortion-specific quality regression models are trained with the support vector machine (SVM) by combining the image features with distortion types and subjective scores, respectively. To evaluate the quality of a test image, the classification model estimates the probabilities that the image belongs to the different distortion types, while the regression models predict the corresponding distortion-specific quality scores. The overall quality score is then computed as the probability-weighted sum of the distortion-specific quality scores. The proposed metric evaluates image quality accurately and efficiently using a small dictionary. Its performance is verified on public image quality databases, and experimental results demonstrate that it produces quality scores highly consistent with human perception and outperforms state-of-the-art metrics.
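To make the pipeline in the abstract concrete, the Python sketch below walks through its four stages: K-means dictionary learning on gradient patches, Euclidean-norm coding with max-pooling, and probability-weighted fusion of SVM predictions. The patch size, dictionary size, and the reading of "Euclidean-norm coding" as a negative distance to each atom are illustrative assumptions not taken from the paper, and scikit-learn stands in for the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC, SVR

PATCH = 8        # assumed patch size (not specified in this excerpt)
N_ATOMS = 100    # assumed dictionary size ("small dictionary" in the abstract)

def gradient_magnitude(img):
    """Finite-difference gradient magnitude of a grayscale image."""
    img = np.asarray(img, dtype=float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return np.hypot(gx, gy)

def extract_patches(img, step=PATCH):
    """Non-overlapping PATCH x PATCH gradient patches, one per row."""
    g = gradient_magnitude(img)
    h, w = g.shape
    return np.array([g[i:i + PATCH, j:j + PATCH].ravel()
                     for i in range(0, h - PATCH + 1, step)
                     for j in range(0, w - PATCH + 1, step)])

def learn_dictionary(pristine_images):
    """K-means clustering of gradient patches; cluster centers are the atoms."""
    patches = np.vstack([extract_patches(im) for im in pristine_images])
    return KMeans(n_clusters=N_ATOMS, n_init=10).fit(patches).cluster_centers_

def encode(img, dictionary):
    """Euclidean-norm coding (assumed: negative distance to each atom) + max-pooling."""
    patches = extract_patches(img)
    dists = np.linalg.norm(patches[:, None, :] - dictionary[None, :, :], axis=2)
    codes = -dists               # larger value = patch closer to the atom
    return codes.max(axis=0)     # max-pool over patches -> one feature per atom

def predict_quality(feature, clf, regressors):
    """Probability-weighted fusion of distortion-specific quality predictions."""
    feature = feature.reshape(1, -1)
    probs = clf.predict_proba(feature)[0]                        # P(distortion type k)
    scores = [regressors[k].predict(feature)[0] for k in clf.classes_]
    return float(np.dot(probs, scores))
```

In this sketch, clf would be an SVC trained with probability=True on (feature, distortion-label) pairs, and regressors a dictionary mapping each distortion label to an SVR trained on the features and subjective scores of that distortion, mirroring the distortion-classification and distortion-specific regression models described above.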

Keywords

Cited by

  1. No-reference Image Blur Assessment Based on Multi-scale Spatial Local Features, vol. 14, no. 10, 2020, https://doi.org/10.3837/tiis.2020.10.008