Confabulation (neural networks)

A confabulation, also known as a false, degraded, or corrupted memory, is a stable pattern of activation in an artificial neural network or neural assembly that does not correspond to any previously learned pattern. The same term is also applied to the (non-artificial) neural mistake-making process in biological brains that leads to a false memory (confabulation).

Cognitive science

In cognitive science, the generation of confabulatory patterns is symptomatic of some forms of brain trauma.[1] In this context, confabulations are pathologically induced neural activation patterns that depart from direct experience and learned relationships. In computational modeling of such damage, related brain pathologies such as dyslexia and hallucination result from simulated lesioning[2] and neuron death.[3] Forms of confabulation in which missing or incomplete information is incorrectly filled in by the brain are generally modeled by the well-known neural network process called pattern completion.[4]
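The following is a minimal sketch of pattern completion in an autoassociative network of the Hopfield type: a corrupted cue settles onto the nearest stored memory, while a cue unlike any stored pattern can instead settle onto a spurious attractor, which in this framing is a confabulation. The network size, corruption level, and function names are illustrative assumptions and are not drawn from the cited studies.

```python
import numpy as np

# Minimal Hopfield-style autoassociative memory illustrating pattern completion:
# a corrupted cue settles onto the nearest stored pattern; a cue unlike any
# stored pattern may settle onto a spurious ("confabulated") attractor.

def train_hopfield(patterns):
    """Hebbian outer-product learning over bipolar (+1/-1) patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / patterns.shape[0]

def recall(W, cue, steps=20):
    """Synchronously update units until the state stops changing."""
    state = cue.copy()
    for _ in range(steps):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

rng = np.random.default_rng(0)
stored = rng.choice([-1, 1], size=(3, 64))      # three learned memories
W = train_hopfield(stored)

# Corrupt one memory by flipping 20% of its bits, then let the net complete it.
cue = stored[0].copy()
flip = rng.choice(64, size=13, replace=False)
cue[flip] *= -1
completed = recall(W, cue)
print("recovered stored pattern:", np.array_equal(completed, stored[0]))
```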

Neural networks

Confabulation is central to a theory of cognition and consciousness proposed by S. L. Thaler, in which thoughts and ideas originate in both biological and synthetic neural networks as false or degraded memories that nucleate upon various forms of neuronal and synaptic fluctuation and damage.[5][6] Such novel patterns of neural activation are promoted to ideas when other neural nets perceive utility or value in them (i.e., the thalamo-cortical loop).[7][8] The exploitation of these false memories by other artificial neural networks forms the basis of inventive artificial intelligence systems used in product design,[9][10] materials discovery,[11] and improvisational military robots.[12] Compound, confabulatory systems of this kind[13] have been used as sensemaking systems for military intelligence and planning,[12] self-organizing control systems for robots and space vehicles,[14] and entertainment.[12] The concept of such opportunistic confabulation grew out of experiments with artificial neural networks that simulated brain cell apoptosis.[15] It was discovered that novel perception, ideation, and motor planning could arise from either reversible or irreversible neurobiological damage.[16][17]
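The following is a minimal sketch of the generator/critic arrangement described above: noise injected into a "trained" network yields degraded-memory outputs, and a second scoring stage keeps only those it judges useful. The networks, noise level, and scoring rule are hypothetical stand-ins for illustration and do not reproduce Thaler's patented implementations.

```python
import numpy as np

# Illustrative generator/critic scheme: synaptic noise injected into a trained
# network produces degraded-memory outputs, and a second stage scores them,
# retaining only candidates judged useful.  All components here are hypothetical
# stand-ins, not Thaler's patented systems.

rng = np.random.default_rng(1)

# "Trained" generator: a linear map whose rows are learned prototype patterns.
prototypes = rng.normal(size=(4, 16))           # pretend these were learned
W_gen = prototypes.copy()

def generate(noise_level):
    """Perturb the generator's weights and emit an activation pattern."""
    W_noisy = W_gen + rng.normal(scale=noise_level, size=W_gen.shape)
    mix = rng.dirichlet(np.ones(len(W_noisy)))  # random blend of prototypes
    return mix @ W_noisy                        # a degraded / novel memory

def critic(pattern):
    """Toy utility score: novelty (distance from prototypes) minus wildness."""
    novelty = np.min(np.linalg.norm(prototypes - pattern, axis=1))
    wildness = abs(np.linalg.norm(pattern)
                   - np.linalg.norm(prototypes, axis=1).mean())
    return novelty - wildness

# Generate many perturbed patterns and keep the one the critic values most.
candidates = [generate(noise_level=0.3) for _ in range(200)]
best = max(candidates, key=critic)
print("critic score of selected confabulation:", round(critic(best), 3))
```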

Computational inductive reasoning

The term confabulation is also used by Robert Hecht-Nielsen to describe inductive reasoning accomplished via Bayesian networks.[18] Here, confabulation selects the concept expected to follow a particular context. This is not an Aristotelian deductive process, although it reduces to simple deduction when memory holds only unique events. Most events and concepts, however, occur in multiple, conflicting contexts, so confabulation yields a consensus expectation that may be only minimally more likely than many alternatives; under the theory's winner-take-all constraint, that event/symbol/concept/attribute is nevertheless the one expected. This parallel computation over many contexts is postulated to occur in less than a tenth of a second. Confabulation grew out of vector analyses of data retrieval such as latent semantic analysis and support vector machines, and it is being implemented computationally on parallel computers.
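The following is a minimal sketch of this winner-take-all selection as commonly summarized: given assumed facts a1, ..., an, the confabulated conclusion is the candidate symbol e that maximizes the cogency product p(a1|e) * ... * p(an|e), estimated here from co-occurrence counts. The toy corpus, smoothing floor, and function names are illustrative assumptions rather than Hecht-Nielsen's implementation.

```python
from collections import Counter
from itertools import combinations

# Winner-take-all confabulation sketch: given context symbols ("assumed facts"),
# pick the candidate e that maximizes the cogency product p(a1|e)*...*p(an|e)
# estimated from co-occurrence counts.  Toy corpus and smoothing are assumptions.

corpus = [
    ["rain", "cloud", "umbrella", "wet"],
    ["rain", "cloud", "storm", "wind"],
    ["sun", "beach", "sand", "dry"],
    ["cloud", "wind", "storm", "rain"],
    ["sun", "sky", "cloud", "warm"],
]

symbol_counts = Counter()
pair_counts = Counter()
for context in corpus:
    symbol_counts.update(set(context))
    for a, b in combinations(sorted(set(context)), 2):
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

def p_given(a, e, eps=1e-6):
    """Estimate p(a | e) from co-occurrence counts, with a small floor."""
    return max(pair_counts[(a, e)] / symbol_counts[e], eps) if symbol_counts[e] else eps

def confabulate(assumed_facts, candidates):
    """Winner-take-all: return the candidate with the highest cogency product."""
    def cogency(e):
        score = 1.0
        for a in assumed_facts:
            score *= p_given(a, e)
        return score
    return max(candidates, key=cogency)

vocabulary = list(symbol_counts)
# Prints a symbol that co-occurs with both cues (ties broken arbitrarily).
print(confabulate(["rain", "cloud"], vocabulary))
```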

References

  1. Gazzaniga, M. S. (1995). "The Cognitive Neurosciences", A Bradford Book, The MIT Press, Cambridge, Massachusetts.
  2. Plaut, D. C. (1993). "Deep Dyslexia: A Case Study of Connectionist Neuropsychology" Archived 2016-03-03 at the Wayback Machine. Cognitive Neuropsychology, 10(5), 377-500.
  3. Yam, P. (1993). "Daisy, Daisy: Do Computers Have Near-Death Experiences?", Scientific American, May 1993.
  4. "Neural associative memory". rni.org. Archived from the original on 2008-10-20. Retrieved 2009-07-22.
  5. Thaler, S. L. (1997a). U.S. 5,659,666, "Device for the Autonomous Generation of Useful Information", Issued 8/19/1997.
  6. Thaler, S. L. (1997b). "A Quantitative Model of Seminal Cognition: the creativity machine paradigm", Proceedings of the Mind II Conference, Dublin, Ireland, 1997.
  7. Thaler, S. L. (2011). "The Creativity Machine Paradigm: Withstanding the Argument from Consciousness". Archived 2012-11-15 at the Wayback Machine.
  8. Thaler, S. L. (2013) "The Creativity Machine Paradigm, Encyclopedia of Creativity, Invention, Innovation, and Entrepreneurship", (ed.) E.G. Carayannis, Springer Science+Business Media
  9. Pickover, C. A. (2005). Sex, Drugs, Einstein, & Elves, SmartPublications, Petaluma, CA.
  10. Plotkin, R. (2009). The Genie in the Machine: How Computer-Automated Inventing is Revolutionizing Law and Business, Stanford University Press
  11. Thaler, S. L. (1998). Predicting ultra-hard binary compounds via cascaded auto- and hetero-associative neural networks, Journal of Alloys and Compounds, 279(1998), 47-59.
  12. Hesman, T. (2004). The Machine That Invents, St. Louis Post-Dispatch, Jan. 25, 2004.
  13. Thaler, S. L. (1996). "A Proposed Symbolism for Network-Implemented Discovery Processes", in Proceedings of the World Congress on Neural Networks (WCNN'96), Lawrence Erlbaum, Mahwah, NJ.
  14. Patrick, M. C., Stevenson-Chavis, K., Thaler, S. L. (2007). "Demonstration of Self-Training Autonomous Neural Networks in Space Vehicle Docking Simulations", Aerospace Conference, 2007, 3–10 March 2007 IEEE
  15. Yam, P. (1995). As They Lay Dying, Scientific American, May 1995.
  16. Thaler, S. L. (1995). Death of a gedanken creature, Journal of Near-Death Studies, 13(3), Spring 1995.
  17. Thaler, S. L. (2012). The Creativity Machine Paradigm: Withstanding the Argument from Consciousness, APA Newsletter, Volume 11, Number 2, Spring 2012.
  18. Hecht-Nielsen, R (2005). "Cogent confabulation" Archived 2016-01-14 at the Wayback Machine. Neural Networks 18:111-115.