The Milstein Lab is interested in how the brain rapidly learns and remembers sequences of events based on very little experience, sometimes requiring only a single presentation! For decades, much experimental and theoretical work has sought to pin down the synaptic plasticity rules that store memories by converting patterns of neuronal activity during experience into changes in the strengths of synaptic connections between neurons. This work has led to the widely accepted view that plasticity in most brain regions is driven when connected neurons are co-active nearly synchronously (within ~100 ms). How, then, do inactive neurons become recruited to store new representations of experience “from scratch”, without pre-existing correlations between neuronal input and output? Furthermore, how do neurons determine which synapses to modify during learning when the neuronal activity corresponding to sensory input and behavioral choices is often separated from behavioral outcomes by long delays (~1-10 s)?
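For intuition, here is a minimal Python sketch of a purely coincidence-based (Hebbian) rule; the ~100 ms window and the learning rate are illustrative placeholders, and the example simply shows why such a rule cannot, by itself, recruit a silent neuron.

```python
import numpy as np

# Minimal sketch of a coincidence-based (Hebbian) plasticity rule.
# The ~100 ms window and learning rate are illustrative placeholders.
rng = np.random.default_rng(0)

coincidence_window = 0.1   # s: pre- and postsynaptic spikes must fall within this window
learning_rate = 0.05

def hebbian_update(w, pre_spike_times, post_spike_times):
    """Potentiate a synapse for every pre/post spike pair closer than the window."""
    dw = 0.0
    for t_pre in pre_spike_times:
        for t_post in post_spike_times:
            if abs(t_post - t_pre) < coincidence_window:
                dw += learning_rate
    return w + dw

w = 0.2
pre = rng.uniform(0, 1, size=5)           # presynaptic spikes during a 1 s experience
post_active = rng.uniform(0, 1, size=5)   # an already-active postsynaptic neuron
post_silent = np.array([])                # a silent neuron emits no spikes

print(hebbian_update(w, pre, post_active))  # the synapse can strengthen
print(hebbian_update(w, pre, post_silent))  # unchanged: a purely Hebbian rule cannot recruit a silent neuron
```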
Aaron’s previous work identified a candidate neural circuit mechanism for synaptic plasticity in the hippocampus that appears to solve some of the above problems. The solution depends on the following fundamental principles of neurophysiology:
1) Neurons with extended dendrites receive synaptic input from multiple pathways originating from distinct brain regions and carrying different types of information.
2) While activity from a single input pathway can drive action potential output (spikes), simultaneous activation of multiple input pathways can recruit voltage-dependent NMDA-type glutamate receptors and calcium channels in dendrites to produce long-duration dendritic spikes.
3) While inhibitory neurons that form synapses near neuronal cell bodies modulate the rate and timing of action potential output, distinct classes of inhibitory neurons that synapse onto neuronal dendrites can potently suppress nonlinear synaptic integration and dendritic spiking.
4) A single dendritic spike can completely re-shape the selectivity of a neuron for a long sequence of events by inducing a potent form of bidirectional synaptic plasticity. This non-Hebbian form of plasticity can modify synaptic strengths even when presynaptic spiking is separated in time from a postsynaptic dendritic spike by up to several seconds, as sketched below. This property could allow memory storage of an entire sequence of events between a behavioral choice and a delayed outcome, offering a solution to the long-standing “temporal credit assignment” problem.
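Below is a minimal sketch of how a plateau-gated plasticity rule with a seconds-long eligibility window might be written; the time constants and the bidirectional weight dependence are simplified placeholders rather than the published model.

```python
import numpy as np

# Minimal sketch of a plateau-gated, non-Hebbian plasticity rule with a
# seconds-long eligibility window. Time constants and the bidirectional
# weight dependence are simplified placeholders, not the published model.
tau_eligibility = 2.0   # s: how long presynaptic activity remains "eligible"
eta = 0.5               # learning rate

def plateau_gated_update(weights, pre_spike_times, t_plateau):
    """Update each synapse based on how recently it was active relative to a
    single dendritic plateau/spike, even if that activity was seconds earlier."""
    weights = np.asarray(weights, dtype=float)
    new_weights = weights.copy()
    for i, spikes in enumerate(pre_spike_times):
        if len(spikes) > 0:
            # eligibility decays with the delay between presynaptic spiking and the plateau
            delays = np.abs(t_plateau - np.asarray(spikes))
            eligibility = np.exp(-delays / tau_eligibility).max()
        else:
            eligibility = 0.0
        # bidirectional: eligible synapses move toward a high weight,
        # ineligible synapses slowly drift down
        new_weights[i] = weights[i] + eta * (eligibility * (1.0 - weights[i])
                                             - (1.0 - eligibility) * 0.2 * weights[i])
    return new_weights

weights = [0.1, 0.1, 0.8]
inputs = [np.array([1.0, 1.5]),   # active ~3.5 s before the plateau -> modestly potentiated
          np.array([4.8]),        # active just before the plateau  -> strongly potentiated
          np.array([])]           # never active                    -> depressed
print(plateau_gated_update(weights, inputs, t_plateau=5.0))
```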
The above finding highlights the importance of the organization of a neural circuit, and of its functionally diverse cell types, in shaping the learning rules that it implements. The Milstein Lab plans to use this as a lens to better understand how pathological alterations in neural circuit organization and function result in deficits in learning and memory. To date, very little insight from synaptic and circuit physiology research has been successfully translated to impact clinical outcomes. Common neurodevelopmental disorders, like lissencephaly and polymicrogyria, are associated with abnormal brain wiring and often result in epilepsy and delayed development of motor control and speech. No treatments currently exist for these learning deficits, primarily due to the absence of overarching theories of how information processing and plasticity are altered in miswired cortex. To address this bottleneck, the Milstein Lab is modeling the effects of changes in neural circuit architecture associated with neurodevelopmental disorders on network dynamics and synaptic plasticity, an approach that could identify new avenues to treat the sensorimotor learning deficits associated with these disorders.
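As a toy illustration of this kind of modeling approach, the following hypothetical sketch simulates a small rate network and compares its activity with intact versus weakened inhibitory wiring; the network size, weights, and specific perturbation are assumptions chosen for illustration only.

```python
import numpy as np

# Hypothetical sketch: simulate a small rate network and compare its dynamics
# when the wiring is intact versus perturbed (here, weakened inhibition).
# Sizes, weights, and the specific perturbation are illustrative only.
rng = np.random.default_rng(1)
n = 100                 # 80 excitatory + 20 inhibitory units
dt, tau, steps = 0.001, 0.02, 2000

def simulate(W, drive=1.0):
    r = np.zeros(n)
    for _ in range(steps):
        r += dt / tau * (-r + np.maximum(0.0, W @ r + drive))
    return r.mean()

# random sparse connectivity with signed excitatory/inhibitory columns
W = rng.normal(0.0, 0.02, size=(n, n)) * (rng.random((n, n)) < 0.2)
W[:, :80] = np.abs(W[:, :80])     # excitatory outputs
W[:, 80:] = -np.abs(W[:, 80:])    # inhibitory outputs

W_miswired = W.copy()
W_miswired[:, 80:] *= 0.5         # weaken inhibition, a crude stand-in for miswiring

print("intact mean rate:   ", simulate(W))
print("miswired mean rate: ", simulate(W_miswired))  # elevated activity
```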
Another aspect of the lab will focus on applying the above-described fundamental principles of neurophysiology to design artificial learning systems with more brain-like capabilities. The artificial neural networks that have fueled recent advances in machine learning are only superficially inspired by the brain. Their neuron-like elements have modifiable connections like synapses, are organized in layers, and can be trained to reach or exceed human performance on certain tasks with a well-defined objective, like categorization of pre-labeled visual images. However, neither the architectures of these networks nor the learning rules used to train them are biologically plausible, and they typically require very large datasets containing many examples of each category. We are currently developing a new class of biologically-plausible artificial neural networks with the dual purpose of benchmarking specific neural circuit architectures and synaptic learning rules against well-defined learning tasks, and of achieving artificial systems with new capabilities, like one-shot learning from unlabeled data.
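As a contrast with gradient-based training on large labeled datasets, the sketch below stores an input-output association with a single, purely local (outer-product) weight update; it is an illustrative toy example and not the lab's network model.

```python
import numpy as np

# Minimal sketch of one-shot association with a purely local learning rule:
# a single outer-product weight update, in contrast to gradient-based training
# over many labeled examples. Illustrative only; not the lab's network model.
rng = np.random.default_rng(2)

n_in, n_out = 200, 50
x = (rng.random(n_in) < 0.2).astype(float)    # one unlabeled input pattern
y = np.zeros(n_out)                           # the sparse set of output units that fired
y[rng.choice(n_out, size=5, replace=False)] = 1.0

W = np.outer(y, x)                            # one-shot, local weight update

# recall from a degraded cue: the stored output is recovered after a single exposure
cue = x * (rng.random(n_in) < 0.8)            # drop ~20% of the input
recall = (W @ cue > 0.5 * (W @ x).max()).astype(float)
print("overlap with stored output:", (recall * y).sum() / y.sum())
```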