Dynamical neuroscience

The dynamical systems approach to neuroscience is a branch of mathematical biology that utilizes nonlinear dynamics to understand and model the nervous system and its functions. In a dynamical system, all possible states are expressed by a phase space.[1] Such systems can undergo bifurcation (a qualitative change in behavior) as a function of their bifurcation parameters and often exhibit chaos.[2] Dynamical neuroscience describes nonlinear dynamics at many levels of the brain, from single neural cells[3] to cognitive processes, sleep states, and the behavior of neurons in large-scale neuronal simulations.[4]

Neurons have been modeled as nonlinear systems for decades, but dynamical systems arise in numerous other ways in the nervous system. From chemistry, chemical species models like the Gray–Scott model exhibit rich, chaotic dynamics.[5][6] Dynamic interactions between extracellular fluid pathways reshape our view of intraneural communication.[7] Information theory draws on thermodynamics in the development of infodynamics, which can involve nonlinear systems, especially with regard to the brain.

History

One of the first well-known instances in which neurons were modeled on a mathematical and physical basis was the integrate-and-fire model, developed by Louis Lapicque in 1907. Decades later, the discovery of the squid giant axon eventually led Alan Hodgkin and Andrew Huxley (half-brother to Aldous Huxley) to develop the Hodgkin–Huxley model of the neuron in 1952.[8] This model was simplified with the FitzHugh–Nagumo model in 1962.[9] By 1981, the Morris–Lecar model had been developed for the barnacle muscle.

These mathematical models proved useful and are still used in biophysics today, but a late-20th-century development propelled the dynamical study of neurons even further: computer technology. The central difficulty with physiological equations like those above is that they are nonlinear, which makes standard analytical solution impossible and leaves advanced analysis with a nearly endless range of possibilities. Computers opened many doors for the hard sciences through their ability to approximate solutions to nonlinear equations. This is the aspect of computational neuroscience that dynamical systems encompasses.

In 2007, Eugene Izhikevich published the canonical textbook Dynamical Systems in Neuroscience, helping to transform an obscure research topic into an established line of academic study.

Neuron dynamics

Single neurons can themselves be treated as nonlinear dynamical systems. The subsections below first describe the electrophysiology that motivates such models and then turn to excitability, the dynamical property that allows a neuron at rest to fire in response to a sufficiently strong input.

Electrophysiology of the neuron

The motivation for a dynamical approach to neuroscience stems from an interest in the physical complexity of neuron behavior. As an example, consider the coupled interaction between a neuron's membrane potential and the activation of ion channels throughout the neuron. As the membrane potential of a neuron increases sufficiently, channels in the membrane open to allow more ions in or out. The ion flux further alters the membrane potential, which further affects the activation of the ion channels, which affects the membrane potential, and so on. This is often the nature of coupled nonlinear equations. A nearly straightforward example of this is the Morris–Lecar model:
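In its standard two-variable form, the model couples the membrane potential V to a potassium recovery variable N (the equations below are a reconstruction in common notation; see the cited paper for the original formulation):

    C \frac{dV}{dt} = I - g_L (V - V_L) - g_{Ca}\, M_{ss}(V)\,(V - V_{Ca}) - g_K\, N\, (V - V_K)

    \frac{dN}{dt} = \frac{N_{ss}(V) - N}{\tau_N(V)}

Here I is the applied current, the g terms are conductances, M_ss(V) and N_ss(V) are sigmoidal steady-state activation curves, and τ_N(V) is a voltage-dependent time constant.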

See the Morris–Lecar paper[10] for an in-depth treatment of the model; a briefer summary of the Morris–Lecar model is given by Scholarpedia.[11]

In this article, the point is to demonstrate the physiological basis of dynamical neuron models, so this discussion covers only the two variables of the equations:

  • V represents the membrane's current potential
  • N is the so-called "recovery variable", which gives the probability that a particular potassium channel is open to allow ion conduction.

Most importantly, the first equation states that the change of V with respect to time depends on both V and N, as does the change of N with respect to time. M_ss and N_ss are both functions of V. So we have two coupled functions, V and N.

Different types of neuron models utilize different channels, depending on the physiology of the organism involved. For instance, the simplified two-dimensional Hodgkin–Huxley model considers sodium channels, while the Morris–Lecar model considers calcium channels. Both models consider potassium and leak currents. Note, however, that the Hodgkin–Huxley model is canonically four-dimensional.[12]

Excitability of neurons

One of the predominant themes in classical neurobiology is the concept of a digital component to neurons. This concept was quickly absorbed by computer scientists, where it evolved into the simple weighting function of coupled artificial neural networks. Neurobiologists call the critical voltage at which neurons fire a threshold. The dynamical criticism of this digital concept is that neurons do not truly exhibit all-or-none firing and should instead be thought of as resonators.[13]

In dynamical systems, this kind of property is known as excitability. An excitable system starts at some stable point. Imagine a lake at the top of a mountain with a ball resting in it. The ball is at a stable point: gravity pulls it down, so it stays fixed at the lake bottom. If we give it a big enough push, it will pop out of the lake and roll down the side of the mountain, gaining momentum and speed. Suppose we fashioned a loop-de-loop around the base of the mountain so that the ball shoots up it and returns to the lake (ignoring rolling friction and air resistance). Now we have a system that stays in its rest state (the ball in the lake) until a perturbation knocks it out (rolling down the hill) but eventually returns to that rest state (back in the lake). In this example, gravity is the driving force and the spatial dimensions x (horizontal) and y (vertical) are the variables. In the Morris–Lecar neuron, the fundamental force is electromagnetic, and V and N form the new phase space, but the dynamical picture is essentially the same. The electromagnetic force acts along V just as gravity acts along y. The shape of the mountain and the loop-de-loop act to couple the y and x dimensions to each other. In the neuron, nature has already decided how V and N are coupled, but the relationship is much more complicated than in the gravitational example.

This property of excitability is what gives neurons the ability to transmit information to each other, so it is important for dynamical neuron networks. The Morris–Lecar model can also operate in another parameter regime, where it exhibits oscillatory behavior, cycling around in phase space indefinitely. This behavior is comparable to that of pacemaker cells in the heart, which do not rely on excitability but may excite neurons that do.
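The excitable regime can be illustrated numerically. The sketch below (in Python) integrates the Morris–Lecar equations using one commonly cited but merely illustrative set of parameter values and applies a brief depolarizing current pulse; the model rests near its stable fixed point, fires when the pulse arrives, and then relaxes back to rest. The pulse timing, pulse amplitude, and parameter values are assumptions chosen for demonstration, not values taken from the text.

    # Minimal Morris-Lecar sketch: an excitable neuron at rest fires a
    # spike in response to a brief current pulse, then returns to rest.
    # Parameter values are illustrative, not authoritative.
    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative parameters (units: mV, ms, uF/cm^2, mS/cm^2, uA/cm^2)
    C = 20.0
    g_Ca, g_K, g_L = 4.4, 8.0, 2.0
    V_Ca, V_K, V_L = 120.0, -84.0, -60.0
    V1, V2, V3, V4 = -1.2, 18.0, 2.0, 30.0
    phi = 0.04

    def m_inf(V):   # steady-state calcium activation, M_ss(V)
        return 0.5 * (1.0 + np.tanh((V - V1) / V2))

    def n_inf(V):   # steady-state potassium activation, N_ss(V)
        return 0.5 * (1.0 + np.tanh((V - V3) / V4))

    def tau_n(V):   # voltage-dependent potassium time constant
        return 1.0 / (phi * np.cosh((V - V3) / (2.0 * V4)))

    def i_app(t):   # brief depolarizing pulse between 50 and 55 ms
        return 200.0 if 50.0 <= t <= 55.0 else 0.0

    def morris_lecar(t, y):
        V, N = y
        dV = (i_app(t)
              - g_L * (V - V_L)
              - g_Ca * m_inf(V) * (V - V_Ca)
              - g_K * N * (V - V_K)) / C
        dN = (n_inf(V) - N) / tau_n(V)
        return [dV, dN]

    sol = solve_ivp(morris_lecar, (0.0, 200.0), [-60.0, 0.0],
                    t_eval=np.linspace(0.0, 200.0, 2000), max_step=0.5)
    print("peak membrane potential (mV):", sol.y[0].max())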

Global neurodynamics

The global dynamics of a network of neurons depend on at least the first three of the following four attributes:

  1. individual neuron dynamics (primarily, their thresholds or excitability)
  2. information transfer between neurons (generally either synapses or gap junctions)
  3. network topology
  4. external forces (such as thermodynamic gradients)

Many different neural networks can be modeled by varying the choices made for these four attributes, resulting in a versatile array of global dynamics.

Biological neural network modeling

Biological neural networks can be modeled by choosing an appropriate biological neuron model to describe the physiology of the organism and appropriate coupling terms to describe the physical interactions between neurons (forming the network). Other global considerations must also be taken into account, such as the initial conditions and parameters of each neuron.

In terms of nonlinear dynamics, this requires evolving the state of the system forward in time through the governing functions. Following from the Morris–Lecar example, the alterations to the equations would be:
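(The following is a reconstruction of the coupled system in the notation used above; the precise form of the coupling term depends on the network being modeled.)

    C \frac{dV_i}{dt} = I - g_L (V_i - V_L) - g_{Ca}\, M_{ss}(V_i)\,(V_i - V_{Ca}) - g_K\, N_i\, (V_i - V_K) + D_i(V_1, \dots, V_n)

    \frac{dN_i}{dt} = \frac{N_{ss}(V_i) - N_i}{\tau_N(V_i)}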

where V now carries the subscript i, indicating that it is the ith neuron in the network, and a coupling function D_i has been added to the first equation. The coupling function D is chosen based on the particular network being modeled. The two major candidates are synaptic junctions and gap junctions.
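As one illustration, gap junctions are often modeled as diffusive (electrical) coupling, in which each neuron receives a current proportional to the voltage difference with its neighbors. The sketch below computes such a coupling term; the ring topology and coupling strength d are assumptions chosen for demonstration, not values specified in the text.

    # Gap-junction (diffusive) coupling sketch: for neuron i the coupling
    # term is d * sum_j A[i, j] * (V[j] - V[i]), where A is the network
    # adjacency matrix. Topology and coupling strength are illustrative.
    import numpy as np

    def gap_junction_coupling(V, A, d):
        """Return the diffusive coupling current for each neuron."""
        return d * (A @ V - A.sum(axis=1) * V)

    # Example: five neurons coupled in a ring
    n = 5
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0

    V = np.array([-60.0, -55.0, -40.0, -58.0, -61.0])  # membrane potentials (mV)
    print(gap_junction_coupling(V, A, d=0.1))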

Attractor network

  • Point attractors – memory, pattern completion, categorizing, noise reduction
  • Line attractors – neural integration: oculomotor control
  • Ring attractors – neural integration: spatial orientation
  • Plane attractors – neural integration: (higher dimension of oculomotor control)
  • Cyclic attractors – central pattern generators
  • Chaotic attractors – possibly involved in the recognition of odors; chaos is often mistaken for random noise.

See Scholarpedia's article for a formal review of attractor networks.[14]

Beyond neurons

While neurons play a lead role in brain dynamics, it is becoming increasingly clear to neuroscientists that neuron behavior is highly dependent on the environment. That environment is not a simple background: there is a great deal happening just outside the neuron membrane, in the extracellular space. Neurons share this space with glial cells, and the extracellular space itself may contain several agents that interact with neurons.[15]

Glia

Glia, once considered a mere support system for neurons, have been found to serve a significant role in the brain.[16][17] How interactions between neurons and glia influence neuronal excitability is a question of dynamics.[18]

Neurochemistry

Like any other cell, neurons operate on an undoubtedly complex set of molecular reactions. Each cell is a tiny community of molecular machinery (organelles) working in tandem and encased in a lipid membrane. These organelles communicate largely via chemicals like G-proteins and neurotransmitters, consuming ATP for energy. Such chemical complexity is of interest to physiological studies of the neuron.

Neuromodulation

Neurons in the brain are surrounded by extracellular fluid, which can propagate both chemical and physical energy through reaction–diffusion processes and bond manipulation that leads to thermal gradients. Volume transmission has been associated with thermal gradients caused by biological reactions in the brain.[19] Such complex transmission has been associated with migraines.[20]

Cognitive neuroscience

Computational approaches to theoretical neuroscience often employ artificial neural networks that simplify the dynamics of single neurons in favor of examining more global dynamics. While neural networks are often associated with artificial intelligence, they have also been productive in the cognitive sciences.[21] Artificial neural networks use simple neuron models, but their global dynamics are capable of exhibiting both Hopfield and attractor-like network dynamics.

Hopfield network

A Lyapunov function is a tool from nonlinear analysis used to establish the stability of the equilibrium solutions of a system of differential equations. Hopfield networks were specifically designed so that their underlying dynamics could be described by a Lyapunov function. Stability in biological systems is called homeostasis. Of particular interest to the cognitive sciences, Hopfield networks have been implicated in associative memory (memory triggered by cues).[22]
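For a network of binary units s_i ∈ {−1, +1} with symmetric weights w_ij = w_ji and no self-connections, the standard Hopfield energy function plays the role of the Lyapunov function:

    E = -\frac{1}{2} \sum_{i \neq j} w_{ij}\, s_i s_j + \sum_i \theta_i s_i

Each asynchronous update of a unit either decreases E or leaves it unchanged, so the network state settles into a local minimum of E, which is interpreted as a stored (associative) memory.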

References

  1. Gerstner, Wulfram; Kistler, Werner M.; Naud, Richard; Paninski, Liam (2014-07-24). Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Cambridge University Press. ISBN 978-1-107-06083-8.
  2. Strogatz, Steven H. (2018-05-04). Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. CRC Press. ISBN 978-0-429-97219-5.
  3. Izhikevich, E. Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. Massachusetts: The MIT Press, 2007.
  4. "Agenda of the Dynamical Neuroscience XVIII: The resting brain: not at rest!". Archived from the original on 2011-07-09. Retrieved 2010-08-07.
  5. Wackerbauer, Renate; Showalter, Kenneth (2003-10-22). "Collapse of Spatiotemporal Chaos". Physical Review Letters. American Physical Society (APS). 91 (17): 174103. Bibcode:2003PhRvL..91q4103W. doi:10.1103/physrevlett.91.174103. ISSN 0031-9007. PMID 14611350.
  6. Lefèvre, Julien; Mangin, Jean-François (2010-04-22). Friston, Karl J. (ed.). "A Reaction-Diffusion Model of Human Brain Development". PLOS Computational Biology. Public Library of Science (PLoS). 6 (4): e1000749. Bibcode:2010PLSCB...6E0749L. doi:10.1371/journal.pcbi.1000749. ISSN 1553-7358. PMC 2858670. PMID 20421989.
  7. Agnati, L.F.; Zoli, M.; Strömberg, I.; Fuxe, K. (1995). "Intercellular communication in the brain: Wiring versus volume transmission". Neuroscience. Elsevier BV. 69 (3): 711–726. doi:10.1016/0306-4522(95)00308-6. ISSN 0306-4522. PMID 8596642. S2CID 9752747.
  8. Hodgkin, A. L.; Huxley, A. F. (1952-08-28). "A quantitative description of membrane current and its application to conduction and excitation in nerve". The Journal of Physiology. 117 (4): 500–544. doi:10.1113/jphysiol.1952.sp004764. ISSN 0022-3751. PMC 1392413. PMID 12991237.
  9. Izhikevich E. and FitzHugh R. (2006), Scholarpedia, 1(9):1349
  10. Morris, C.; Lecar, H. (1981). "Voltage oscillations in the barnacle giant muscle fiber". Biophysical Journal. Elsevier BV. 35 (1): 193–213. Bibcode:1981BpJ....35..193M. doi:10.1016/s0006-3495(81)84782-0. ISSN 0006-3495. PMC 1327511. PMID 7260316.
  11. Lecar, H. (2007), Scholarpedia, 2(10):1333
  12. Hodgkin, A. and Huxley, A. (1952): A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117:500–544. PMID 12991237
  13. Izhikevich, E. Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. Massachusetts: The MIT Press, 2007.
  14. Eliasmith, C. (2007), Scholarpedia, 2(10):1380
  15. Dahlem, Yuliya A.; Dahlem, Markus A.; Mair, Thomas; Braun, Katharina; Müller, Stefan C. (2003-09-01). "Extracellular potassium alters frequency and profile of retinal spreading depression waves". Experimental Brain Research. Springer Science and Business Media LLC. 152 (2): 221–228. doi:10.1007/s00221-003-1545-y. ISSN 0014-4819. PMID 12879176. S2CID 10752622.
  16. Ullian, Erik M.; Christopherson, Karen S.; Barres, Ben A. (2004). "Role for glia in synaptogenesis" (PDF). Glia. Wiley. 47 (3): 209–216. doi:10.1002/glia.20082. ISSN 0894-1491. PMID 15252809. S2CID 7439962. Archived from the original (PDF) on 2011-03-05. Retrieved 2010-08-07.
  17. Keyser, David O.; Pellmar, Terry C. (1994). "Synaptic transmission in the hippocampus: Critical role for glial cells". Glia. Wiley. 10 (4): 237–243. doi:10.1002/glia.440100402. ISSN 0894-1491. PMID 7914511. S2CID 28877566.
  18. Nadkarni, S. (2005) Dynamics of Dressed Neurons: Modeling the Neural-Glial Circuit and Exploring its Normal and Pathological Implications. Doctoral dissertation. Ohio University, Ohio. Archived 2011-07-16 at the Wayback Machine
  19. Fuxe, K., Rivera, A., Jacobsen, K., Hoistad, M., Leo, G., Horvath, T., Stained, W., De la calle, A. and Agnati, L. (2005) Dynamics of volume transmission in the brain. Focus on catecholamine and opioid peptide communication and the role of uncoupling protein 2. Journal of Neural Transmission, 112:1.
  20. "Dahlem, M. (2009) Migraine and Chaos. SciLogs, 25 November". Archived from the original on 2010-06-13. Retrieved 2010-08-07.
  21. Gluck, M. 2001. Gateway to Memory: An Introduction to Neural Network Modeling of the Hippocampus and Learning. Massachusetts: MIT.
  22. Hopfield, J. (2007), Scholarpedia, 2(5):1977