Frontiers of Information Technology & Electronic Engineering

ISSN 2095-9184

Frontiers of Information Technology & Electronic Engineering  2017, Vol. 18 Issue (7): 989-1001   https://doi.org/10.1631/FITEE.1601338
Robust object tracking with RGBD-based sparse learning
Zi-ang MA1(), Zhi-yu XIANG2()
1. College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
2. Zhejiang Provincial Key Laboratory of Information Network Technology, Hangzhou 310027, China

Abstract

Robust object tracking has been an important and challenging research area in computer vision for decades. With the increasing popularity of affordable depth sensors, range data is widely used in visual tracking for its robustness to varying illumination and occlusion. In this paper, a novel RGBD- and sparse-learning-based tracker is proposed. The range data is integrated into the sparse learning framework in three respects. First, a depth view is added to the color-image-based visual features as an independent view for robust appearance modeling. Second, a special occlusion template set is designed to augment the existing dictionary so that various occlusion conditions can be handled. Finally, a depth-based occlusion detection method is proposed to efficiently determine the right time for template updates. Extensive experiments on the KITTI and Princeton datasets demonstrate that the proposed tracker outperforms state-of-the-art tracking algorithms, including both sparse-learning- and RGBD-based methods.
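The sparse-learning core that these three contributions plug into can be illustrated with a small numerical sketch. This is a hedged toy, not the authors' implementation: a candidate observation is ℓ1-coded over a dictionary of target templates augmented with identity "trivial" templates in the style of the classic ℓ1 tracker (Mei and Ling, 2009), standing in here for the paper's occlusion template set. The solver is plain ISTA in numpy, and all dimensions and parameter values below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_sparse_code(D, y, lam=0.01, n_iter=500):
    """Solve min_c 0.5*||D c - y||^2 + lam*||c||_1 with ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ c - y)           # gradient of the quadratic term
        c = soft_threshold(c - grad / L, lam / L)
    return c

rng = np.random.default_rng(0)
d, k = 32, 5                               # feature dimension, number of target templates
T = rng.standard_normal((d, k))            # target templates (e.g. vectorized patches)
T /= np.linalg.norm(T, axis=0)             # unit-norm columns
D = np.hstack([T, np.eye(d)])              # augmented dictionary [T, I]

# A candidate that mostly resembles target template 2, plus small noise.
y = 0.8 * T[:, 2] + 0.05 * rng.standard_normal(d)
c = l1_sparse_code(D, y, lam=0.02)

recon_err = np.linalg.norm(D[:, :k] @ c[:k] - y)   # reconstruction by target part only
occlusion_energy = np.abs(c[k:]).sum()             # coefficient mass on trivial templates
```

A candidate whose target-template reconstruction error is low while little coefficient mass falls on the trivial part is a good match; a large trivial-template energy signals occlusion, which is the intuition the paper's occlusion templates and depth-based occlusion detection refine.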

Key words: Object tracking; Sparse learning; Depth view; Occlusion templates; Occlusion detection
Received: 2016-06-14; Published online: 2017-09-20
Corresponding authors: Zi-ang MA, Zhi-yu XIANG. E-mail: kobebean@zju.edu.cn; xiangzy@zju.edu.cn
Cite this article:
Zi-ang MA, Zhi-yu XIANG. Robust object tracking with RGBD-based sparse learning. Front. Inform. Technol. Electron. Eng., 2017, 18(7): 989-1001.
Links to this article:
https://academic.hep.com.cn/fitee/CN/10.1631/FITEE.1601338
https://academic.hep.com.cn/fitee/CN/Y2017/V18/I7/989