Neuroscience and AI Share the Same Elegant Mathematical Trap
- DOI
- 10.2991/agi.2009.18
- Abstract
Animals display exceptionally robust recognition abilities when analyzing scenes, compared to artificial means. The prevailing hypothesis in both the neuroscience and AI literatures is that the brain recognizes its environment using optimized connections. These connections are determined through a gradual update of weights mediated by learning. The training and test distributions can be constrained to be similar so that weights can be optimized for any arbitrary pattern. Thus both fields fit a mathematical-statistical framework that is well defined and elegant. Despite its prevalence in the literature, it remains difficult to find strong experimental support for this mechanism within neuroscience. Furthermore, this approach is not ideally suited for novel combinations of previously learned patterns, which typically form a scene; it may require an exponential amount of training data to achieve good precision. The purpose of this paper is to (1) review the difficulties associated with this approach in both neuroscience experiments and AI scenarios, and (2) direct the reader towards 'less elegant', mathematically difficult, inherently nonlinear methods that also address both literatures (better optimized for scenes and emulating experiments) but perform recognition without optimized weight parameters.
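The "gradual update of weights mediated by learning" that the abstract critiques is not spelled out here; as a minimal sketch only, and not the authors' method, the snippet below illustrates the generic optimized-weight framework being referred to: connections tuned by repeated gradient steps on training data drawn from the same distribution as the test data. All names and values (e.g. the data shapes and `learning_rate`) are illustrative assumptions.

```python
# Minimal sketch (not from the paper): recognition via connection weights
# optimized by gradient descent on a fixed training distribution.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 100 patterns with 8 features, 3 target outputs.
x = rng.normal(size=(100, 8))
true_w = rng.normal(size=(8, 3))   # "correct" connections used to generate targets
y = x @ true_w

w = np.zeros((8, 3))               # connection weights to be learned
learning_rate = 0.01               # illustrative value

# Gradual weight update: each step nudges w to reduce squared recognition error.
for _ in range(500):
    error = x @ w - y              # prediction error on the training set
    grad = x.T @ error / len(x)    # gradient of mean squared error w.r.t. w
    w -= learning_rate * grad      # the "gradual update of weights"

print("remaining training error:", np.mean((x @ w - y) ** 2))
```

This kind of scheme works well when test patterns resemble the training patterns, which is the constraint the abstract points out; novel combinations of learned patterns fall outside that assumption.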
- Copyright
- © 2009, the Authors. Published by Atlantis Press.
- Open Access
- This is an open access article distributed under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).
Cite this article
TY  - CONF
AU  - Tsvi Achler
AU  - Eyal Amir
PY  - 2009/06
DA  - 2009/06
TI  - Neuroscience and AI Share the Same Elegant Mathematical Trap
BT  - Proceedings of the 2nd Conference on Artificial General Intelligence (2009)
PB  - Atlantis Press
SP  - 86
EP  - 87
SN  - 1951-6851
UR  - https://doi.org/10.2991/agi.2009.18
DO  - 10.2991/agi.2009.18
ID  - Achler2009/06
ER  -