Biography:Juyang Weng

From HandWiki
Juyang (John) Weng
Nationality: Chinese-American
Occupation: Computer engineer, neuroscientist, author, and academic

Academic background
Education: B.Sc., M.Sc., and Ph.D., Computer Science
Alma mater: Fudan University; University of Illinois at Urbana-Champaign
Thesis: (1989)
Doctoral advisors: Thomas S. Huang, Narendra Ahuja

Academic work
Institutions: Brain-Mind Institute; GENISAMA; Michigan State University

Juyang (John) Weng is a Chinese-American computer engineer, neuroscientist, author, and academic. He is a former professor in the Department of Computer Science and Engineering at Michigan State University and the President of the Brain-Mind Institute and GENISAMA.[1]

Weng has conducted research on grounded machine learning at the intersection of computer science and engineering with brain and cognitive science. In collaboration with coworkers, he has explored mental architectures and computational models for autonomous development across domains such as vision, audition, touch, behaviors, and motivational systems, in both biological and engineered systems. He has authored two books, Natural and Artificial Intelligence: Introduction to Computational Brain-Mind and Motion and Structure from Image Sequences, is the editor of the book series New Frontiers in Robotics, and has published over 300 articles.

Weng is a Life Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the Founder and President of the Brain-Mind Institute and the startup GENISAMA. He is also the Founder and Editor-in-Chief of the International Journal of Humanoid Robotics and the Brain-Mind Magazine, and an Associate Editor of the IEEE Transactions on Autonomous Mental Development (now Cognitive and Developmental Systems).[2] He has additionally served as a guest editor for five special issues, including What AI and Neuroscience Can Learn from Each Other: Open Problems in Models and Theories in Cognitive Computation,[3] the special issue on Brain Imaging-informed Multimodal Analysis in IEEE Transactions on Autonomous Mental Development,[4] and the special issue on Autonomous Mental Development in the International Journal of Humanoid Robotics.[5]

Education

Weng obtained his B.S. degree from Fudan University in 1982, and his M.Sc. and Ph.D. degrees in computer science from the University of Illinois at Urbana-Champaign in 1985 and 1989, respectively.[6]

Career

Following his Ph.D., Weng began his academic career in 1990 as a visiting assistant research professor at the Beckman Institute of the University of Illinois at Urbana-Champaign. He served as an assistant professor at Michigan State University from 1992 to 1998, becoming associate professor in 1998 and professor in 2003.[7]

Research

Weng's research revolves around grounded machine learning, spanning vision, audition, natural language understanding, planning, and real-time hardware implementations. He is also involved in technology transfer through his startup, GENISAMA, which focuses on grounded, emergent, natural, incremental, skull-closed, attentive, motivated, and abstract systems. His theoretical contributions include a mathematical proof that the Developmental Networks (DNs) he developed can learn any universal Turing machine, and a theory of Autonomous Programming For General Purposes (APFGP), supporting Conscious Machine Learning.[8][9]

Weng has worked on developmental networks, from Cresceptron to DN3, toward what he describes as the first conscious learning algorithm, one free from "deep learning" misconduct.[10] His research has been featured on the Discovery Channel, Enel, and the BBC.[11]

Motion and structure analysis

From 1983 to 1989, during his master's and Ph.D. studies, Weng's research focused on analyzing the motion of objects and estimating 3D structure from motion.[12] He came to view such model-based approaches as providing piecemeal insights while being too restrictive for understanding how animal brains learn vision and other skills. Soon after completing his Ph.D. work, he started Cresceptron.[13]

Cresceptron

Cresceptron represented a direction that Weng later termed Autonomous Mental Development (AMD). In 1992, he and his collaborators pioneered Cresceptron, a framework for segmenting and recognizing real-world 3D objects from their images through automated learning.[13] The framework was tested on visual recognition, specifically recognizing 3D objects from 2D images and segmenting them from cluttered backgrounds without handcrafted 3D models. It employed techniques such as stochastic distortion modeling, view-based interpolation, and a combination of individual and class-based learning. Cresceptron achieved seven significant accomplishments, including learning large-scale 3D objects with a deep convolutional neural network (CNN) and feature-independent learning for extensive datasets. It was also established that Cresceptron differs significantly from later "deep learning" networks in developing a single network using Hebbian learning (i.e., unsupervised in all hidden layers).[14][15]
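The unsupervised Hebbian learning mentioned above can be illustrated with a minimal sketch: Oja's rule, a normalized Hebbian update under which a single feature neuron converges to the dominant direction of its inputs. This is an illustrative assumption, not Cresceptron's actual code.

```python
import numpy as np

def oja_update(w, x, lr=0.01):
    """One step of Oja's rule: a Hebbian update with a response-weighted
    decay term that keeps the weight vector's norm bounded."""
    y = float(w @ x)                 # neuron response to the input
    return w + lr * y * (x - y * w)  # Hebbian growth minus decay

rng = np.random.default_rng(0)
direction = np.array([0.6, 0.8])     # dominant direction of the input data
w = rng.normal(size=2)
w /= np.linalg.norm(w)               # random unit-norm initial weights
for _ in range(5000):
    # inputs lie mostly along `direction`, plus a little isotropic noise
    x = direction * rng.normal() + 0.05 * rng.normal(size=2)
    w = oja_update(w, x)
# w converges (up to sign) toward the dominant input direction
```

No labels or error signals are involved: the weight vector is shaped purely by input statistics, which is the sense in which hidden-layer learning here is unsupervised.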

SHOSLIF

Weng introduced another framework, SHOSLIF, which provided a unified theory and methodology for comprehensive sensor-actuator learning.[16] It addressed single-sensory problems as well as critical issues Cresceptron faced, such as automatically selecting the most valuable features and automatically organizing sensory and control information in a coarse-to-fine space-partition tree, yielding a remarkably low, logarithmic time complexity for content-based retrieval from extensive visual knowledge bases.[17] It also handles invariance through learning, enables online incremental learning, and facilitates autonomous learning, among other objectives.[18][19]
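The coarse-to-fine partition idea can be sketched with a toy kd-tree-style structure (an assumed simplification, not SHOSLIF's actual tree): greedy descent through a balanced binary partition retrieves a stored item in a logarithmic number of comparisons.

```python
import numpy as np

class PartitionNode:
    """Toy coarse-to-fine split of feature space (kd-tree-style sketch)."""
    def __init__(self, points, labels):
        if len(points) <= 1:
            self.leaf = (points[0], labels[0]) if len(points) else None
            self.dim = None
            return
        self.leaf = None
        self.dim = int(np.argmax(points.var(axis=0)))  # split widest dimension
        order = np.argsort(points[:, self.dim])
        mid = len(points) // 2                         # median split keeps tree balanced
        self.threshold = points[order[mid], self.dim]
        self.left = PartitionNode(points[order[:mid]], labels[order[:mid]])
        self.right = PartitionNode(points[order[mid:]], labels[order[mid:]])

    def retrieve(self, query):
        """Greedy coarse-to-fine descent: O(log n) comparisons per query."""
        if self.leaf is not None or self.dim is None:
            return self.leaf
        branch = self.left if query[self.dim] < self.threshold else self.right
        return branch.retrieve(query)

rng = np.random.default_rng(1)
points = rng.normal(size=(256, 8))   # 256 stored feature vectors
labels = np.arange(256)
tree = PartitionNode(points, labels)
point, label = tree.retrieve(points[42])  # querying a stored vector finds its leaf
```

A balanced median split gives depth about log2(n), so 256 stored items need only about 8 comparisons per retrieval; greedy one-branch descent is approximate for arbitrary queries but exact for stored points here.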

SAIL and Dav robots

From 1998 to 2010, Weng developed the SAIL[20] and Dav robots[21] using sensory-mapping models, including self-aware self-effecting (SASE) architectures, staggered hierarchical mapping (SHM), and incremental hierarchical discriminant regression (IHDR). These methods have been applied to the recognition of occluded objects,[22] speech recognition,[23] vision-guided navigation,[24] and range-based collision avoidance.[25]

Autonomous developmental networks

Since 2005, Weng and his team have worked on brain-like and cortex-like Developmental Networks (DNs) and their embodiments, Where-What Networks (WWNs),[26] using brain-like architectures that model pathways, the laminar six-layer cortex, and brain areas.[27][28] They have also analyzed how the brain deals with modulation, time, and space, producing three versions (DN1 through DN3) by 2023. A significant enhancement in the transition from DN-2 to DN-3 is initiating the brain-sized network from a single-cell zygote, that is, a fully autonomous process for brain patterning from a single cell. The key patterning mechanisms are Lobe Component Analysis (LCA)[29] and Synaptic Maintenance,[30] which automatically maintain the global smoothness of brain representation and the local refinement of area representations. This approach enables the developmental algorithm to progressively develop sensors, a complex brain, and motor functions in a sequential, self-organizing manner, so that wiring and pattern formation occur automatically from conception throughout the entire life of the system.[31]
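The incremental, competitive flavor of Lobe Component Analysis can be conveyed with a toy winner-take-all sketch (an assumed simplification, not the published LCA algorithm): the neuron that wins the competition for an input updates toward it with an age-dependent "amnesic" learning rate.

```python
import numpy as np

def lca_step(neurons, counts, x):
    """Toy winner-take-all update in the spirit of LCA: the best-matching
    neuron moves toward the input; its learning rate shrinks with age but
    stays above 1/n (a crude amnesic average) so old data is forgotten."""
    winner = int(np.argmax(neurons @ x))       # competition: highest response wins
    counts[winner] += 1
    n = counts[winner]
    lr = 3.0 / n if n > 20 else 1.0 / n        # illustrative amnesic schedule
    neurons[winner] = (1 - lr) * neurons[winner] + lr * x
    neurons[winner] /= np.linalg.norm(neurons[winner])
    return winner

rng = np.random.default_rng(2)
targets = np.array([[1.0, 0.0], [0.0, 1.0]])       # two "lobe" directions
neurons = targets + 0.2 * rng.normal(size=(2, 2))  # rough initial estimates
neurons /= np.linalg.norm(neurons, axis=1, keepdims=True)
counts = np.zeros(2, dtype=int)

for t in range(2000):
    d = targets[t % 2]                  # alternate inputs between the two lobes
    x = d + 0.1 * rng.normal(size=2)
    x /= np.linalg.norm(x)
    lca_step(neurons, counts, x)
# each neuron settles near the mean direction of the inputs it wins
```

The update is local and incremental: no input is stored, and each neuron's estimate depends only on its own winning history, which is the property that makes such schemes usable in a lifelong, skull-closed setting.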

These Developmental Networks (DNs) and Where-What Networks (WWNs 1–9) have been developed for versatile visual learning in complex environments.[32] DNs can recognize objects and autonomously determine where and what to attend to using self-generated task context. These networks have been applied to general-purpose vision,[33] temporal visual event recognition,[34] vision-guided navigation,[35] learning audition while learning to speak,[36] and language acquisition as the brain's responses to temporal text events.[37]

Weng was the first to formally propose that robotic consciousness is necessary for AI and that consciousness can and should be learned (i.e., developed), and he proposed a fully implementable algorithm to do so. He proposed DN3[31] as the engine for conscious learning,[38] whereby a robot becomes increasingly conscious, like an infant and then a child, through its 'living' experience in a physical world that typically includes human parents and teachers. There is, however, no central controller within DN3's skull, emphasizing that consciousness should not be statically handcrafted and must encompass elements beyond a programmer's design.[31]

Controversies

Since 2016, Weng has alleged worldwide instances of plagiarism and post-selection misconduct, but the implicated institutions have not acknowledged his allegations.

Plagiarism controversy

Weng alleged that many deep learning networks that use images of 3D objects copied their key idea from Cresceptron,[13] yet almost all later deep learning publications did not cite it. He highlighted that Cresceptron (for 3D) is a fundamental departure from the Neocognitron[39] (for 2D): Cresceptron enables a neural network to grow incrementally from a zero-neuron hierarchy and learn 3D objects from their 2D images in cluttered scenes, unlike the aspect graphs of the 1990s and all other methods that relied on an inside-the-skull human teacher as a central controller.[40] The alleged plagiarism includes HMAX at MIT[41] and the work behind the 2018 ACM Turing Award.[42] Without internal weight supervision such as human manual selection[39][41] or error backpropagation,[42] feature learning and sharing in Cresceptron's hidden areas are based on (unsupervised) Hebbian mechanisms.[43]

Post-Selection controversy

Weng raised the issue of Post-Selection in AI and argued that it constitutes misconduct. He observed that many AI methods require two steps in their training stage. The first step trains multiple systems, each from random initializations, on a fit (training) data set. The second step is Post-Model Selection (Post-Selection): choosing the few luckiest trained systems, or tuning parameters manually, based on the systems' errors on a validation data set. He alleged that Post-Selection in AI involves two types of misconduct: (1) cheating in the absence of a test, because the Post-Selection step belongs to the training stage; and (2) hiding bad-looking data, because the less lucky systems are not reported.[10]

Weng further alleged that additional categories of AI methods suffer from their Post-Selection steps, including the Neocognitron, HMAX, deep learning, long short-term memory networks, extreme learning machines, evolving networks, reservoir computing, Transformers, large language models, ChatGPT, and Bard, whenever they contain a Post-Selection step, whether automatic or manually tuned. He reasoned mathematically that the luckiest system on a validation set gives an expected performance on a future test set that is only near the average performance of all trained systems on the validation set.[10]
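The statistical point behind this claim can be illustrated with a toy simulation (purely illustrative numbers, not Weng's derivation): when each system's validation and test errors are independent draws around the same skill level, the validation winner's expected test error matches that of the average system.

```python
import numpy as np

rng = np.random.default_rng(0)
n_systems, n_runs = 20, 2000

picked_test, avg_test = [], []
for _ in range(n_runs):
    # each system's validation and test errors are independent draws around
    # the same underlying skill (pure "luck" of initialization and data split)
    val_err = rng.normal(0.30, 0.05, n_systems)
    test_err = rng.normal(0.30, 0.05, n_systems)
    lucky = int(np.argmin(val_err))        # Post-Selection: keep the luckiest
    picked_test.append(test_err[lucky])
    avg_test.append(test_err.mean())

# the validation winner's test error averages out to the average system's:
# its validation luck does not transfer to the unseen test set
gap = float(np.mean(picked_test) - np.mean(avg_test))
```

Under these assumptions `gap` hovers near zero: selecting on validation luck buys nothing on a future test set, which is the expectation-level claim sketched above.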

Weng has sued institutions outside academia, including Alphabet, to address the alleged misconduct, in the United States District Court for the Western District of Michigan (Civil Action No. 1:22-cv-998)[44] and the United States Court of Appeals for the Sixth Circuit (Case No. 23-1567).[45]

Awards and honors

  • 1994 – Research Initiation Award, NSF[46]
  • 2009 – Life Fellow, IEEE[47]

Bibliography

Selected books

  • Motion and Structure from Image Sequences (1993) ISBN 978-3642776458
  • Natural and Artificial Intelligence: Introduction to Computational Brain-Mind (2019) ISBN 978-0-985875718

Selected articles

  • Weng, J., Huang, T. S., & Ahuja, N. (1989). Motion and structure from two perspective views: Algorithms, error analysis, and error estimation. IEEE transactions on pattern analysis and machine intelligence, 11(5), 451–476.
  • Weng, J., Cohen, P., & Herniou, M. (1992). Camera calibration with distortion models and accuracy evaluation. IEEE Transactions on pattern analysis and machine intelligence, 14(10), 965–980.
  • Weng, J., Ahuja, N., & Huang, T. S. (1993). Optimal motion and structure estimation. IEEE Transactions on pattern analysis and machine intelligence, 15(9), 864–884.
  • Weng, J., McClelland, J., Pentland, A., Sporns, O., Stockman, I., Sur, M., & Thelen, E. (2001). Autonomous mental development by robots and animals. Science, 291(5504), 599–600.
  • Weng, J., Zhang, Y., & Hwang, W. S. (2003). Candid covariance-free incremental principal component analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(8), 1034–1040.

References

  1. "Weng, Juyang". http://www.cse.msu.edu/~weng/. 
  2. "Brain-Mind Institute: Programs". http://www.brain-mind-institute.org/ICBM-2015/Weng-Juyang.html. 
  3. "Cognitive Computation". https://www.springer.com/journal/12559/updates/20028534. 
  4. "Guest Editorial Multimodal Modeling and Analysis Informed by Brain Imaging—Part I". IEEE Transactions on Autonomous Mental Development 7 (3): 158–161. 2015. doi:10.1109/TAMD.2015.2495698. https://ieeexplore.ieee.org/document/7317842. 
  5. "EDITORIAL". International Journal of Humanoid Robotics 04 (2): 207–210. June 1, 2007. doi:10.1142/S0219843607001047. https://www.worldscientific.com/doi/abs/10.1142/S0219843607001047. 
  6. "The First Conscious Learning Algorithm Avoids 'Deep Learning' Misconduct". https://micdat-conference.com/juyang-weng.html. 
  7. "School of Computer Science". https://gs.fudan.edu.cn/gsenglish/4d/03/c2770a19715/page.htm. 
  8. Weng, Juyang (August 1, 2020). "Autonomous Programming for General Purposes: Theory". International Journal of Humanoid Robotics 17 (4): 2050016. doi:10.1142/S0219843620500164. https://www.worldscientific.com/doi/abs/10.1142/S0219843620500164. 
  9. Weng, Juyang; Zheng, Zejia; Wu, Xiang; Castro-Garcia, Juan (2020). "Autonomous Programming for General Purposes: Theory and Experiments". 2020 International Joint Conference on Neural Networks (IJCNN). pp. 1–8. doi:10.1109/IJCNN48605.2020.9207149. ISBN 978-1-7281-6926-2. https://ieeexplore.ieee.org/document/9207149. 
  10. Weng, Juyang (January 12, 2023). "On "Deep Learning" Misconduct". arXiv:2211.16350 [cs.LG].
  11. "BBC News - SCI/TECH - Time for real intelligence?". http://news.bbc.co.uk/2/hi/science/nature/1136870.stm-. 
  12. Weng, J.; Huang, T.S.; Ahuja, N. (1989). "Motion and structure from two perspective views: algorithms, error analysis, and error estimation". IEEE Transactions on Pattern Analysis and Machine Intelligence 11 (5): 451–476. doi:10.1109/34.24779. https://ieeexplore.ieee.org/document/24779. 
  13. Weng, J.; Ahuja, N.; Huang, T.S. (1992). "Cresceptron: A self-organizing neural network which grows adaptively". [Proceedings 1992] IJCNN International Joint Conference on Neural Networks. 1. pp. 576–581. doi:10.1109/IJCNN.1992.287150. ISBN 0-7803-0559-0. https://ieeexplore.ieee.org/document/287150. 
  14. Weng, J.J.; Ahuja, N.; Huang, T.S. (1993). "Learning recognition and segmentation of 3-D objects from 2-D images". 1993 (4th) International Conference on Computer Vision. pp. 121–128. doi:10.1109/ICCV.1993.378228. ISBN 0-8186-3870-2. https://ieeexplore.ieee.org/document/378228. 
  15. Weng, John (Juyang); Ahuja, Narendra; Huang, Thomas S. (November 1, 1997). "Learning Recognition and Segmentation Using the Cresceptron". International Journal of Computer Vision 25 (2): 109–143. doi:10.1023/A:1007967800668. https://doi.org/10.1023/A:1007967800668. 
  16. "SHOSLIF: A Framework for Sensor-Based Learning for High-Dimensional Complex Systems". https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=8a219af6e7075675ecd13fee237961b798d27c86. 
  17. Weng, Juyang; Chen, Shaoyun (October 1, 1998). "Vision-guided navigation using SHOSLIF". Neural Networks 11 (7): 1511–1529. doi:10.1016/S0893-6080(98)00079-3. PMID 12662765. https://www.sciencedirect.com/science/article/pii/S0893608098000793. 
  18. "Cresceptron and SHOSLIF: Toward Comprehensive Visual Learning". https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=95a40aceea9eb217f08a2dac062d633f90f579d5. 
  19. "On Comprehensive Visual Learning". https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=3855ef92bfabff16b42171bebc64866ec3309e65. 
  20. Yilu Zhang; Juyang Weng (2001). "Grounded auditory development by a developmental robot". IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222). 2. pp. 1059–1064. doi:10.1109/IJCNN.2001.939507. ISBN 0-7803-7044-9. https://ieeexplore.ieee.org/document/939507. 
  21. Zeng, Shuqing; Weng, Juyang (November 1, 2007). "Online-learning and Attention-based Approach to Obstacle Avoidance Using a Range Finder". Journal of Intelligent and Robotic Systems 50 (3): 219–239. doi:10.1007/s10846-007-9162-9. https://doi.org/10.1007/s10846-007-9162-9. 
  22. "A Developing Sensory Mapping for Robots". http://www.cse.msu.edu/~weng/research/SHM.pdf. 
  23. "Auditory Learning: A Developmental Method". http://www.cse.msu.edu/~weng/research/TNNaudition2005.pdf. 
  24. "Incremental Hierarchical Discriminant Regression". http://www.cse.msu.edu/~weng/research/TNN-IHDR.pdf. 
  25. "Online-learning and Attention-based Approach to Obstacle Avoidance Using a Range Finder". http://www.cse.msu.edu/~weng/research/JIRS.pdf. 
  26. Weng, Juyang; Luciw, Matthew D. (November 1, 2014). "Brain-Inspired Concept Networks: Learning Concepts from Cluttered Scenes". IEEE Intelligent Systems 29 (6): 14–22. doi:10.1109/MIS.2014.75. https://ieeexplore.ieee.org/document/6916494. 
  27. Weng, Juyang (August 1, 2007). "On developmental mental architectures". Neurocomputing 70 (13): 2303–2323. doi:10.1016/j.neucom.2006.07.017. https://www.sciencedirect.com/science/article/pii/S0925231206005194. 
  28. Weng, Juyang; Zeng, Shuqing (June 1, 2005). "A Theory of Developmental Mental Architecture and the Dav Architecture Design". International Journal of Humanoid Robotics 02 (2): 145–179. doi:10.1142/S0219843605000454. https://www.worldscientific.com/doi/abs/10.1142/S0219843605000454. 
  29. Juyang Weng; Luciw, M. (2009). "Dually Optimal Neuronal Layers: Lobe Component Analysis". IEEE Transactions on Autonomous Mental Development 1: 68–85. doi:10.1109/TAMD.2009.2021698. https://ieeexplore.ieee.org/document/4895712. 
  30. Wang, Yuekai; Wu, Xiaofeng; Weng, Juyang (2011). "Synapse maintenance in the Where-What Networks". The 2011 International Joint Conference on Neural Networks. pp. 2822–2829. doi:10.1109/IJCNN.2011.6033591. ISBN 978-1-4244-9635-8. https://ieeexplore.ieee.org/document/6033591. 
  31. "A Developmental Network Model of Conscious Learning in Biological Brains". June 7, 2022. https://www.researchsquare.com/. 
  32. Solgi, Mojtaba; Weng, Juyang (January 1, 2015). "WWN-8: Incremental Online Stereo with Shape-from-X Using Life-Long Big Data from Multiple Modalities". Procedia Computer Science 53: 316–326. doi:10.1016/j.procs.2015.07.309. 
  33. Wang, Yuekai; Wu, Xiaofeng; Weng, Juyang (November 1, 2012). "Skull-closed autonomous development: WWN-6 using natural video". The 2012 International Joint Conference on Neural Networks (IJCNN). pp. 1–8. doi:10.1109/IJCNN.2012.6252491. ISBN 978-1-4673-1490-9. https://www.academia.edu/68904111. 
  34. Luciw, Matthew D.; Weng, Juyang; Zeng, Shuqing (2008). "Motor initiated expectation through top-down connections as abstract context in a physical world". 2008 7th IEEE International Conference on Development and Learning. pp. 115–120. doi:10.1109/DEVLRN.2008.4640815. ISBN 978-1-4244-2661-4. https://ieeexplore.ieee.org/document/4640815. 
  35. Zheng, Zejia; He, Xie; Weng, Juyang (January 1, 2015). "Approaching Camera-based Real-World Navigation Using Object Recognition". Procedia Computer Science 53: 428–436. doi:10.1016/j.procs.2015.07.320. 
  36. Wu, Xiang; Weng, Juyang (November 1, 2021). "Learning to recognize while learning to speak: Self-supervision and developing a speaking motor". Neural Networks 143: 28–41. doi:10.1016/j.neunet.2021.05.006. PMID 34082380. https://www.sciencedirect.com/science/article/pii/S0893608021001982. 
  37. "Conjunctive Visual and Auditory Development via Real-Time Dialogue". http://www.cse.msu.edu/~weng/research/EpiRob2003.pdf. 
  38. Weng, Juyang (John) (April 15, 2022). "An Algorithmic Theory for Conscious Learning". 2022 the 3rd International Conference on Artificial Intelligence in Electronics Engineering. Association for Computing Machinery. pp. 1–8. doi:10.1145/3512826.3512827. ISBN 9781450395489. https://doi.org/10.1145/3512826.3512827. 
  39. Fukushima, Kunihiko (April 1, 1980). "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position". Biological Cybernetics 36 (4): 193–202. doi:10.1007/BF00344251. PMID 7370364. https://doi.org/10.1007/BF00344251. 
  40. "Learning Recognition and Segmentation Using the Cresceptron". https://www.cse.msu.edu/~weng/research/cresceptron.html. 
  41. Serre, T.; Wolf, L.; Bileschi, S.; Riesenhuber, M.; Poggio, T. (2007). "Robust Object Recognition with Cortex-Like Mechanisms". IEEE Transactions on Pattern Analysis and Machine Intelligence 29 (3): 411–426. doi:10.1109/TPAMI.2007.56. PMID 17224612. https://ieeexplore.ieee.org/document/4069258. 
  42. LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey (May 1, 2015). "Deep learning". Nature 521 (7553): 436–444. doi:10.1038/nature14539. PMID 26017442. Bibcode2015Natur.521..436L. https://www.nature.com/articles/nature14539. 
  43. Weng, Juyang (2021). "Post-Selections in AI and How to Avoid Them". arXiv:2106.13233v2 [cs.LG].
  44. "Weng v. Nat'l Sci. Found.". https://case-law.vlex.com/vid/weng-v-nat-l-932028002. 
  45. "Juyang Weng, et al v. Natl Science Fndtn, et al". https://dockets.justia.com/docket/circuit-courts/ca6/23-1567. 
  46. "NSF Award Search: Award # 9410741 - RIA: Learning-Based Object Recognition from Images". https://www.nsf.gov/awardsearch/showAward?AWD_ID=9410741&HistoricalAwards=false. 
  47. "IEEE Fellow". https://ieeexplore.ieee.org/author/37269351500.