On the principles of Parsimony and Self-consistency for the emergence of intelligence
Yi MA¹, Doris TSAO², Heung-Yeung SHUM³
1. Electrical Engineering and Computer Science Department, University of California, Berkeley, CA 94720, USA
2. Department of Molecular & Cell Biology and Howard Hughes Medical Institute, University of California, Berkeley, CA 94720, USA
3. International Digital Economy Academy, Shenzhen 518045, China
Ten years into the revival of deep networks and artificial intelligence, we propose a theoretical framework that sheds light on deep networks within the bigger picture of intelligence in general. We introduce two fundamental principles, Parsimony and Self-consistency, which address two basic questions regarding intelligence: what to learn and how to learn, respectively. We believe the two principles serve as the cornerstone for the emergence of intelligence, artificial or natural. While they have rich classical roots, we argue that they can be restated in entirely measurable and computable ways. More specifically, the two principles lead to an effective and efficient computational framework, compressive closed-loop transcription, which unifies and explains the evolution of modern deep networks and most practices of artificial intelligence. While we mainly use the modeling of visual data as an example, we believe the two principles will unify our understanding of broad families of autonomous intelligent systems and provide a framework for understanding the brain.
Yi MA, Doris TSAO, Heung-Yeung SHUM. On the principles of Parsimony and Self-consistency for the emergence of intelligence. Front. Inform. Technol. Electron. Eng, 2022, 23(9): 1298-1323.