Professor William W. Lytton – Uniting Biology And Maths To Understand The Human Brain

Feb 11, 2017 | Engineering & Computer Science, Life Sciences & Biology, Medical & Health Sciences, Psychology and Neuroscience

Neurologist and computational neuroscientist Professor Bill Lytton and his colleagues at the Neurosimulation Laboratory of the State University of New York in Brooklyn are using computer simulation to investigate brain function and disease. Their research has far-ranging implications in addressing human illness.

Bringing Biology and Math Together

‘Everyone agrees that we need a paradigm shift – really several – before we can begin to understand the brain, and further understand the mind and brain in disease: for example, schizophrenia, stroke, epilepsy, Alzheimer’s and autism,’ Professor Lytton tells us. As with prior paradigm shifts, understanding will come from detailed observation using new technologies, combined with conceptual changes driven by models. Think of how Galileo’s work with the telescope or van Leeuwenhoek’s microscopy advanced the fields of observational astronomy and microbiology respectively. Unlike prior shifts, however, the new models will be based on complex computer simulation rather than on closed-form equations such as those used by Newton or Einstein.

At the end of the Second World War, three strands of brain and neural modelling emerged. One of these was that of McCulloch, Pitts, Turing and von Neumann, who developed ideas of logical units and extended them to a description of neurons. This field then bifurcated into computer science and artificial neural networks, or ANNs – a computational approach based on large collections of neural units that loosely model how the brain solves problems.

Early on, John von Neumann recognised the limitations of what he called the ‘Turing-cum-Pitts-and-McCulloch’ approach. Pointing out that the generality of phenomenological laws limited their usefulness for any particular implementation, he noted that ‘nothing that we may know or learn about the functioning of the organism can give, without “microscopic” cytological work, any clues regarding the further details of the neural mechanism.’ In other words, high-level, ‘top-down’ models of the brain may be correct, but their very generality limits their usefulness for understanding the brain. For example, one large-scale theory suggests that a major role of the cortical feedback loops running from higher to lower centres is to let the brain predict the future, both for planning and for producing augmented ‘active’ perception that depends on knowing what is likely to be seen or heard before it is seen or heard. This is an important brain function, and attributing it to the cortex seems reasonable. The theory is made more concrete by the fact that it can be formalised as Bayes’ theorem, which describes the probability of a hypothesis given prior knowledge and new evidence. The theory fits performance on a variety of cognitive tasks. However, this is not a mechanistic model – it will not tell us what physiological or neurotransmitter anomalies could cause individuals to show poor prediction, or poor judgement, in certain brain diseases.
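For reference, the formula in question is the standard statement of Bayes’ theorem. Written for the perceptual setting sketched above (the labelling of ‘cause’ and ‘sensory input’ is an illustrative gloss, not notation from Professor Lytton’s work), it reads:

```latex
P(\text{cause} \mid \text{sensory input})
  = \frac{P(\text{sensory input} \mid \text{cause}) \, P(\text{cause})}
         {P(\text{sensory input})}
```

The prior, P(cause), captures what the brain expects before the stimulus arrives, which is what allows perception to be ‘active’ rather than purely reactive.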

Simulation of a single cortical layer 5 neuron. The apical dendrite (top) extends upwards through many layers, gathering information from many sources to be conveyed to the soma near the bottom, at the centre of its web. Information then goes out through the axon at the bottom (purple line). The insets show results of parameter explorations and indicate another difficulty of neural simulation: neurons are individuals that themselves learn specific attributes in concert with other neurons.

The descriptive approach to modelling also stems from the long tradition of mathematical modelling in physics, which is necessarily phenomenological rather than mechanistic. Epistemologically, physics is unique in that it treats the most fundamental levels, which, by virtue of being fundamental, are irreducible to a lower level. The problem with descriptive/phenomenological models of the brain is that they do not give us access to the underlying ‘knobs and levers’. Understanding at the knob level will be necessary both intellectually and clinically – it is these knobs and levers that must be turned and pressed to alter the pathologies of the damaged brain or the disturbed mind. Professor Lytton’s work aims to build on von Neumann’s observation of the need for microscopic, cytological work by utilising detailed multiscale mechanistic computer models to determine how dynamic and functional phenomenology arises.

Professor Lytton’s group strives to bring together math and biology by focusing on the details of biology (bottom-up) at the same time as considering model dynamics in terms of functional attributes (top-down). The team develops simulation tools and models in close collaboration with experimentalists who measure wiring and activity in both normal and dysfunctional brain tissue. Looking from the bottom up, the simulations themselves become experimental objects: they are so complicated that one has to manipulate them experimentally to understand them. Through this interactive experimentation, the simulations are repeatedly modified to provide closer representations of the explanandum/simulandum and to seek experimentally verifiable predictions. Looking from the top down, one is then re-modelling the simulations themselves, using tools to understand how their dynamics can be encapsulated and how their activity can be understood in terms of information processing and neural codes. Professor Lytton notes that it is great fun mentally to jump back and forth between the abstractions of math and the intricate details of biology.

Computer models are improved over time with additional mathematics, statistics and computational techniques that incrementally fine-tune them to better fit biological observations at multiple scales. The more the computer simulations allow scientists to find out about the brain, the more data from the brain they can feed into the simulations to make them more exact, and the more they learn about the brain. It’s almost as though Professor Lytton and his fellow researchers are themselves part of an algorithm that investigates their own brains’ ability to function as an algorithm designed to learn about the brain. Circular investigations, perhaps, like being part of the Matrix while trying to find out how the Matrix works. However, whether or not they are themselves part of the system they’re trying to study, they have made some real progress.

Design for a future biomimetic brain-machine interface (BMI) model. Left to right: information about which target to reach is gathered from electrodes in the brain. This modulates ongoing activity in the biomimetic cortical and spinal cord models, which then drive the virtual arm, whose movements are mirrored by the robot arm. Right to left: haptic feedback (touch sensation) could then be delivered back in the other direction so that the user could feel what is being touched.

The Simple is Really Quite Complex

The celebrated successes of artificial neural networks and machine learning in seemingly simulating the human mind – for example, computers playing chess and actually defeating human opponents – are not the things that are really difficult. They are also, of course, largely not the things that evolution has pressured human brains to do. Instead, it’s the effortless things that we share with many other animals – acute visual and auditory perception under varied conditions, control of locomotion across uneven terrain at different speeds – that really bring into play the complex processing for which brain coding is optimised. In fact, the inability to program these complex perceptual and motor skills has been a major impediment to the development of useful robots. News reports are quite glowing when someone exhibits a ponderous metal being that can slowly negotiate complex terrain. It remains a difficult problem for robots and their programmers, even though human infants can usually walk over uneven ground with ease by the age of two, sometimes even earlier.

Consider the motor system in general. Beyond what other animals can do in terms of locomotion and other muscular activities, we humans have added fine motor control of the hands and larynx to the basic mammalian substrate. With these two remarkable innovations in place, humans developed the complex capabilities – namely language and tool-making – that made us human. All of this took place relatively recently and is central to human existence as we know it. After all, without fine dexterity in our fingers and the ability to conceptualise complex language, you wouldn’t be reading this article to begin with.

Professor Lytton’s interest in the brain’s motor cortex stems precisely from its being one of the areas – along with the cerebellum, thalamus, basal ganglia, red nucleus, anterior cord and others – that are responsible for the coordinated control of muscles to effect alterations in the environment. To attempt to understand how all of that works, what else would he do but build a cybernetic arm?

Driving an Arm with Multiscale Simulation!

Professor Lytton and his team put together a biomimetic model of primary motor cortex (M1) linked to a virtual arm, both running in computer simulation, with the whole assembly harnessed to an actual robotic arm driven by servos. The model, which can be seen in action at http://neurosimlab.org/billl/salv_robot.mp4, is trained using a reinforcement learning algorithm.

The biomimetic M1 is programmed as a multiscale simulation. In this version of the model, there are three basic layers in the network that can be grossly mapped onto layers of cortex: a motor layer that outputs muscle excitations; a somatosensory layer that coordinates between the other two; and a proprioceptive layer that feeds back muscle lengths – that is, how long each virtual muscle is as it contracts and relaxes. Each of these layers is further subdivided according to the virtual muscles that control extension and flexion of the shoulder and elbow. Left to its own devices – that is, untrained – the virtual arm simply flails around without purpose, causing the same to happen to its robotic alter ego. With training, however, things are much different.
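To make the layer structure concrete, the sketch below is a highly simplified, rate-based illustration of the architecture just described. It is not the lab’s actual NEURON/NetPyNE code: the population sizes, random weights and names (MUSCLES, UNITS_PER_GROUP, the step function) are all invented for the example.

```python
import numpy as np

# Hypothetical sizes: each layer holds one unit group per virtual muscle
# (shoulder/elbow x flexor/extensor), purely for illustration.
MUSCLES = ["sh_flex", "sh_ext", "el_flex", "el_ext"]
UNITS_PER_GROUP = 25

rng = np.random.default_rng(0)

def make_layer():
    """One group of rate units per virtual muscle."""
    return {m: np.zeros(UNITS_PER_GROUP) for m in MUSCLES}

# Three layers loosely mirroring the description in the text:
# proprioceptive (input), somatosensory (middle), motor (output).
prop_layer = make_layer()
somato_layer = make_layer()
motor_layer = make_layer()

# Random feedforward weights, one matrix per muscle group and projection.
W_prop_to_somato = {m: rng.normal(0, 0.1, (UNITS_PER_GROUP, UNITS_PER_GROUP)) for m in MUSCLES}
W_somato_to_motor = {m: rng.normal(0, 0.1, (UNITS_PER_GROUP, UNITS_PER_GROUP)) for m in MUSCLES}

def step(muscle_lengths):
    """One update: muscle lengths in -> muscle excitations out."""
    excitations = {}
    for m in MUSCLES:
        # Proprioceptive units simply encode the (normalised) muscle length.
        prop_layer[m][:] = muscle_lengths[m]
        # Middle and motor layers are simple rectified linear rate units.
        somato_layer[m][:] = np.maximum(0, W_prop_to_somato[m] @ prop_layer[m])
        motor_layer[m][:] = np.maximum(0, W_somato_to_motor[m] @ somato_layer[m])
        # The mean motor-layer rate stands in for the muscle excitation command.
        excitations[m] = float(motor_layer[m].mean())
    return excitations

# Example: feed in arbitrary muscle lengths and read out excitations.
print(step({m: rng.uniform(0.4, 0.6) for m in MUSCLES}))
```

In the real multiscale model each unit would be a spiking neuron with its own cellular dynamics, and the connection strengths would be shaped by training rather than left random.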

Two levels of complexity: network level (a) and molecular level (b). Molecular interactions are believed to be the basic level at which memories are stored. Networks are believed to be an important level at which memories are accessed and passed on to other networks and to higher levels.

Programming the virtual arm to pick up an object on a table is accomplished by rewarding the arm for getting closer to the object and punishing it for getting farther away. Of course, ‘rewarding’ or ‘punishing’ a computer program is an analogy to human experience. In practice, what Professor Lytton’s team does is to potentiate or depress synapses, based on spike timing, depending on whether the arm wanders closer to or further from the target. This causes the virtual arm to ‘learn’ to get closer and closer to the object. When the hand is stabilised over the object, a grasping function intrinsic to the robot arm is activated and the arm picks up the object. The purpose of this multiscale model is to incorporate multiple levels of biological function into a single computer model: here we have cellular dynamics, immediate network interactions, learning, and interactions between arm and brain. In the long run, the ideal would be to include many more parts in the model – cortex, cerebellum, thalamus, basal ganglia, red nucleus, anterior cord and other relevant areas – some modelled in detail, others only at a coarser scale. Once you can fashion a multiscale model of complex brain pathways, you can start to use it to probe complex brain functioning, both in health and disease.
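A minimal sketch of that kind of reward-modulated, spike-timing-dependent learning rule is given below. It is a toy illustration under our own assumptions, not the published learning rule: the constants, function names and exponential timing window are invented for the example, and the reward is simply the decrease in hand-to-target distance.

```python
import math

# Toy reward-modulated spike-timing-dependent plasticity (STDP) update.
TAU = 20.0        # ms, decay constant of the timing window (illustrative value)
LEARN_RATE = 0.01

def stdp_eligibility(t_pre, t_post):
    """Positive if the presynaptic spike precedes the postsynaptic one."""
    dt = t_post - t_pre  # ms
    if dt >= 0:
        return math.exp(-dt / TAU)   # pre-before-post: candidate potentiation
    return -math.exp(dt / TAU)       # post-before-pre: candidate depression

def update_weight(w, t_pre, t_post, dist_before, dist_after):
    """Reward = how much closer the hand got to the target over this interval."""
    reward = dist_before - dist_after    # >0 closer (reward), <0 farther (punishment)
    w += LEARN_RATE * reward * stdp_eligibility(t_pre, t_post)
    return max(0.0, w)                   # keep the weight non-negative

# Example: a synapse whose pre cell fired 5 ms before the post cell,
# while the virtual hand moved from 30 cm to 27 cm away from the target.
w = 0.5
w = update_weight(w, t_pre=100.0, t_post=105.0, dist_before=0.30, dist_after=0.27)
print(w)
```

In the full simulation this kind of update would be applied across many synapses at once, driven by the actual spike trains of the cells on either side of each connection.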

What’s Next in This Research?

There is a great deal of technical work that Professor Lytton and his colleagues are doing to make massive multiscale simulations and their analysis possible on modern supercomputers. At the same time, they are working to incorporate new streams of data coming in from the US BRAIN Initiative – Brain Research through Advancing Innovative Neurotechnologies – as well as the EU Human Brain Project, the Swiss Blue Brain Project and the Allen Institute brain projects. All of these are collaborative initiatives aimed at understanding brain function.

‘In applying our complex simulations to neurological and psychiatric disease, there is always a trade-off about what to include and what to omit in doing any model,’ Professor Lytton tells us. ‘Models for different purposes – e.g., conceptual versus clinical – need different details, different simulation experiments and different analyses. Schizophrenia is a disease of particular interest to us since it is a brain disease that presents as a disease of the mind. It may, therefore, offer insight into the classical duality of mind and brain.’ He and his team are examining the relationships between brain oscillations – popularly called brain waves – and information flow in the brain, their current, weak proxy for the information processing of mind. Just as the virtual arm models the function of the human arm, they hope to create more complex models for the more complex functions of the brain. In other words, they want to create functional cybernetic androids – brain region by brain region – so they can study biological human beings. Or is it the other way around?
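As a purely illustrative example of what examining brain oscillations can look like in practice – and not the team’s actual analysis pipeline – the sketch below estimates the power spectrum of a synthetic oscillatory signal using Welch’s method from SciPy; the frequencies, noise level and cut-off are arbitrary choices for the example.

```python
import numpy as np
from scipy.signal import welch

# Synthetic 'local field potential': a 10 Hz alpha-like rhythm plus a weaker
# 40 Hz gamma-like rhythm buried in noise (all values arbitrary).
fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)                  # 10 seconds of signal
lfp = (np.sin(2 * np.pi * 10 * t)
       + 0.3 * np.sin(2 * np.pi * 40 * t)
       + 0.5 * np.random.default_rng(0).normal(size=t.size))

# Welch's method gives power as a function of frequency; peaks mark oscillations.
freqs, power = welch(lfp, fs=fs, nperseg=2048)

# Report the strongest rhythm below 100 Hz.
mask = freqs < 100
print(f"dominant rhythm near {freqs[mask][np.argmax(power[mask])]:.1f} Hz")
```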

Meet the researcher

Professor William W. Lytton

Professor of Physiology & Pharmacology

Professor of Neurology

State University of New York, Downstate

Kings County Hospital

Brooklyn, New York, USA

Professor Bill Lytton received his Bachelor’s degree in 1978 from Harvard College and his Medical Degree in 1983 from Columbia’s College of Physicians and Surgeons, followed by a medicine internship at the University of Alabama at Birmingham and a residency in Neurology from 1984 to 1987 at Columbia University’s Neurological Institute in New York. Thereafter, he trained under an NIA fellowship at the Johns Hopkins University in Baltimore, Maryland, and the Salk Institute in La Jolla, California. He was Assistant Professor at the University of Wisconsin, Madison from 1992 to 1998, becoming Tenured Associate Professor in 1998. Professor Lytton joined the Departments of Physiology & Pharmacology and of Neurology at SUNY Downstate, Brooklyn, in 2000, where he is now Professor.

Professor Lytton’s research interest is in Computational Neuroscience, with a focus on the application of Multiscale Modelling to disorders of the brain, including schizophrenia, dystonia, epilepsy, Alzheimer’s Disease and stroke. He has authored or co-authored over 70 articles published in peer-reviewed journals, almost all as first or last author. He has also published multiple review chapters and the book ‘From Computer to Brain’, a basic introduction to the field of Computational Neuroscience. He is licensed to practice medicine in several states, is board certified by the American Board of Psychiatry and Neurology, and works as a clinical neurologist at Kings County Hospital, seeing patients with a variety of brain ailments.

CONTACT

KEY COLLABORATORS

Srdjan Antic, University of Connecticut Health Center

George Augustine, Nanyang Technological University

Joseph T Francis, University of Houston

Michael Hines, Yale University

Andrzej J. Przekwas, CFD Research Corporation

Gordon MG Shepherd, Northwestern University

LAB PROJECT LEADERSHIP

Salvador Dura-Bernal

Robert McDougal

Samuel Neymotin

FUNDING

Brain Research through Advancing Innovative Neurotechnologies

National Institute of Biomedical Imaging and Bioengineering

National Institute of Mental Health

Defense Advanced Research Projects Agency

National Institute of Neurological Disorders and Stroke

REFERENCES

S Dura-Bernal, SA Neymotin, CC Kerr, S Sivagnanam, A Majumdar, JT Francis, WW Lytton, Evolutionary algorithm optimization of biological learning parameters in a biomimetic neuroprosthesis, IBM Journal of Research and Development, 2016.

SA Neymotin, MA Sherif, JQ Jung, JJ Kabariti, WW Lytton, Genome-wide associations of schizophrenia studied with computer simulation, in Hippocampal Microcircuits: A Computational Modeler’s Resource Book, Vol. 2, eds. V Cutsuridis, BP Graham, S Cobb and I Vida, Springer, 2016.

SA Neymotin, S Dura-Bernal, P Lakatos, TD Sanger, WW Lytton, Multitarget multiscale simulation for pharmacological treatment of dystonia in motor cortex, Frontiers in Pharmacology, 2016, 7, 157.

WW Lytton, Computer modelling of epilepsy, Nature Reviews Neuroscience, 2008, 9, 626–637.

WW Lytton, From Computer to Brain; Springer, 2002. Japanese translation 2006.