Frontiers of Information Technology & Electronic Engineering

ISSN 2095-9184

Frontiers of Information Technology & Electronic Engineering  2017, Vol. 18 Issue (1): 58-67   https://doi.org/10.1631/FITEE.1601804
Towards human-like and transhuman perception in AI 2.0: a review
Yong-hong TIAN1(),Xi-lin CHEN2,Hong-kai XIONG3,Hong-liang LI4,Li-rong DAI5,Jing CHEN1,Jun-liang XING6,Jing CHEN7,Xi-hong WU1,Wei-min HU6,Yu HU5,Tie-jun HUANG1(),Wen GAO1
1. School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China
2. Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
3. Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
4. School of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu 611730, China
5. Department of Electronic Engineering and Information Sciences, University of Science and Technology of China, Hefei 230027, China
6. Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
7. School of Optoelectronics, Beijing Institute of Technology, Beijing 100081, China

Abstract

Perception is the interaction interface between an intelligent system and the real world. Without sophisticated and flexible perceptual capabilities, it is impossible to create advanced artificial intelligence (AI) systems. For the next-generation AI, called ‘AI 2.0’, one of the most significant features will be that AI is empowered with intelligent perceptual capabilities that can simulate the mechanisms of the human brain and are likely to surpass the human brain in performance. In this paper, we briefly review the state-of-the-art advances across different areas of perception, including visual perception, auditory perception, speech perception, and perceptual information processing and learning engines. On this basis, we envision several R&D trends in intelligent perception for the forthcoming era of AI 2.0, including: (1) human-like and transhuman active vision; (2) auditory perception and computation in an actual auditory setting; (3) speech perception and computation in a natural interaction setting; (4) autonomous learning of perceptual information; (5) large-scale perceptual information processing and learning platforms; and (6) urban omnidirectional intelligent perception and reasoning engines. We believe these research directions should be highlighted in the future plans for AI 2.0.

Key words: Intelligent perception; Active vision; Auditory perception; Speech perception; Autonomous learning
Received: 2016-12-12      Published: 2017-02-27
Corresponding Author(s): Yong-hong TIAN, Tie-jun HUANG   E-mail: yhtian@pku.edu.cn; tjhuang@pku.edu.cn
Cite this article:
Yong-hong TIAN, Xi-lin CHEN, Hong-kai XIONG, Hong-liang LI, Li-rong DAI, Jing CHEN, Jun-liang XING, Jing CHEN, Xi-hong WU, Wei-min HU, Yu HU, Tie-jun HUANG, Wen GAO. Towards human-like and transhuman perception in AI 2.0: a review. Front. Inform. Technol. Electron. Eng., 2017, 18(1): 58-67.
Link to this article:
https://academic.hep.com.cn/fitee/CN/10.1631/FITEE.1601804
https://academic.hep.com.cn/fitee/CN/Y2017/V18/I1/58