Model Types Intro


Welcome to Neuromatch Academy! We have prepared a very exciting program for you for the next three weeks, during which you will learn about state-of-the-art approaches to modeling and model-based data analysis. My name is Gunnar Blohm. I'm a co-founder and co-organizer of Neuromatch Academy and a founder of CoSMo. I'm a professor at Queen's University in Canada, I study sensorimotor control in my lab, and in my spare time I'm also big-time into brewing beer.

Scientific progress is made through an interplay of experiments, data analysis, and modeling. Neuromatch Academy is all about data analysis and models, and before we can fully dive into what specific models can do for us, we first need to understand a little more about what models are and how we go about modeling.

So why are models so helpful for scientific progress? Well, they can do many different things for us. For example, they can synthesize knowledge; in other words, they can summarize a large body of findings into a rather small, compact description, which is the model. They allow us to identify hidden assumptions, hypotheses, and unknowns; in fact, once we start modeling, once we think about quantifying our models and writing down the mathematics, we very often come across findings or facts that we need but that are not available, or that we hadn't thought of. Models can provide mechanistic insight; one of the main reasons we might want models is that we cannot measure certain things directly, but we can infer them if we can capture the mechanism that underlies the data. For example, we can retrieve latent information: information that is not directly present in the data but that a model can potentially extract, and I will show you an example of this in a bit. Models can be a test bed for medical interventions: if you have a working model of the brain and you can manipulate it in a way that is interesting for medicine, then you can essentially ask what this manipulation does to the brain. Models allow us to design useful experiments, for example by making quantitative predictions, and this can be particularly important if the experiments involve animals that sacrifice their lives for us. Instead of doing arbitrary experiments that simply follow our inspiration or the mood of the day, we can focus on conducting only the important experiments, the ones that test meaningful predictions of our models or that allow us to decide between different hypotheses or different proposed models. Finally, models can inspire new technologies or applications; again, I will show you an example, but think of the whole field of neural networks, which, as the name already indicates, has been at least roughly inspired by how the brain works.

Okay, so let's look at three specific models: first, the popular cosine-tuning model of neural activity; second, the famous Hodgkin-Huxley model of single-neuron spike generation, for which they received the Nobel Prize; and third, the very fashionable reinforcement learning model. At the end we'll come back to our list of advantages for the different models, and that should give you a bit of an idea of what different models can potentially do for us and why they're useful.

So here we go. Cosine tuning refers to an often-observed spatial response profile of neurons with respect to a stimulus or a movement, one that can be well approximated by a cosine function. Here, for example, we have five repeated movements of a monkey; she moves in each of eight movement directions from a central starting point. The neural activity, recorded from a primary motor cortex pyramidal cell, is shown as a raster plot: each line corresponds to one movement instance, and each tick represents one action potential. Around the movement onset time, denoted by the vertical bar, this neuron's activity is modulated from highly active to silent depending on the movement direction.

You can therefore plot the average spiking rate as a function of movement direction, and this is well described by a simple model containing only one equation. The equation takes the firing rate f as a function of the movement direction s, subtracts the baseline firing rate f0, and divides that difference by the maximum firing rate of that particular neuron. This normalized firing rate, across the different movement directions, is well approximated by a cosine function that peaks at this neuron's preferred movement direction s_p: (f(s) - f0) / f_max ≈ cos(s - s_p).

So why is cosine tuning a helpful model? Well, it provides a really nice, compact summary, a description of the data. It generalizes across movements: you can predict fairly well what would happen if you recorded from the same cell for movements that were not among the eight directions we just saw but somewhere in between. The cosine-tuning model also applies to sensory stimuli, and we can use it for applications, for example to build a brain-machine interface for prosthetics. If we were able to quantify each neuron's cosine-tuning parameters, we could use them to decode the brain's movement intention, and that could potentially work fairly well. However, this is a purely descriptive model: there is no consideration of how or why cosine tuning arises in the brain, and therefore it provides limited scientific insight.

Okay, now let's look at a very different model. In 1939, Alan Hodgkin and Andrew Huxley recorded the intracellular potential of the giant axon of the squid. This resulted in the first recording of an action potential. Unfortunately, the Second World War then happened and both were busy with military duties, but after the end of the war they got back together to understand what mechanisms led to the depolarization at the beginning and the hyperpolarization at the end of the action potential.
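Before turning to the Hodgkin-Huxley mechanism, here is a minimal sketch of the cosine-tuning equation described above, just to make it concrete. The parameter values (baseline rate, modulation depth, preferred direction) are made-up illustrative numbers, not values fitted to the recordings discussed in the lecture.

```python
import math

def cosine_tuning(s, f0=30.0, f_max=25.0, s_p=math.pi / 2):
    """Firing rate (spikes/s) for movement direction s (radians).

    Rearranged from the normalization in the text,
    (f(s) - f0) / f_max = cos(s - s_p), so f_max here acts as the
    modulation depth around the baseline rate f0.
    f0, f_max, and s_p are illustrative values, not fitted ones.
    """
    return f0 + f_max * math.cos(s - s_p)

# Evaluate at the eight movement directions from the experiment.
directions = [i * 2 * math.pi / 8 for i in range(8)]
rates = [cosine_tuning(s) for s in directions]
```

The rate peaks at the preferred direction s_p (here pi/2, the third of the eight directions) and dips to its minimum at the anti-preferred direction, which is the qualitative pattern seen in the raster plots.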
Today we know it is due to the opening and closing of specific ion channels. When a threshold membrane potential is reached, sodium channels open first, raising the potential (that's panel two). The rising potential leads potassium channels to open and the sodium channels to close, which brings the potential back down (panel three). There is an overshoot in the potential because the potassium channels close slowly, resulting in a refractory period during which spiking is suppressed (panel four).

Hodgkin and Huxley knew that there were electrical currents involved, because that is essentially what they measured with their electrodes, and so they used basic electricity theory to formulate their model. The model has the form of a relatively simple first-order differential equation: it abstracts the axon, or neuron, as a capacitor and describes the change of the electrical potential between the intracellular and extracellular media as an equilibrium equation involving sodium ions, potassium ions, the permeability of the membrane through its leakiness, and external currents. The individual ion and leakage channel reversal potentials are represented as batteries, and the corresponding channels as resistors. They then introduced ion-channel opening and closing variables n, m, and h as first-order differential equations, and found that they had to take some power of those variables to accurately describe the channel opening and closing kinetics.

Now, why is the Hodgkin-Huxley model helpful? Well, Hodgkin and Huxley provide a mechanism for the generation of the action potential: the opening and closing sequence of the different ion channels. The model synthesizes large amounts of neural data, and it describes variables that are not easily measurable, latent variables such as the probability of a channel opening or closing, or the currents going through each of those channels. The model also allows us to study the effects of interventions: for example, we could study how a potassium channel blocker would affect the spiking behavior of these neurons. And you can make real predictions, for example about the conditions that control the timing of action potential onsets, such as the threshold or the refractory period. But we really have no idea why ion channels open and close, that is, what the molecular mechanism is that makes them open and close depending on the voltage difference between the intracellular and extracellular media.

All right, here is yet another, very different model, called reinforcement learning. While the previous two models tried to capture spike statistics and spike generation mechanisms, respectively, reinforcement learning asks how agents, people, or animals should act to maximize overall reward. Imagine an agent trying to get out of a labyrinth environment. The agent will take an action, a movement, and then observe the result: the new position it is in after the movement is completed, and a reward if it got out. If the agent receives a reward, that means this movement was a good choice, and the agent will update its policy, the rule by which it chooses movements when it encounters the same situation, or state, again. In this way the agent reinforces actions with rewarding outcomes and thus learns how to move optimally through the environment.

So why is reinforcement learning such a nice model? Well, it provides a normative benchmark for what is best in theory, given certain assumptions, and that includes predictions for optimal behavior, that is, predictions outside of what has been tested experimentally. The reinforcement learning model also allows us to synthesize large amounts of behavioral, and also some neural, data. It describes variables that are not easily measurable, latent variables that are hidden and that we cannot directly measure in experiments, such as the reward prediction error. And reinforcement learning has inspired new technology; in fact, reinforcement learning is central to modern artificial intelligence. But we have no idea how the brain achieves it: how is reinforcement learning implemented in the neural circuits of the brain?

So let us summarize again why we thought these three models were great, going back to our earlier list of why models are useful. We talked about knowledge synthesis, and in fact all three models, cosine tuning, Hodgkin-Huxley, and reinforcement learning, are able to synthesize knowledge by capturing large amounts of data. Hodgkin-Huxley also allowed us to identify hidden assumptions, hypotheses, and unknowns, things like discovering ion channels, for example, and it provided mechanistic insight: why does the action potential actually occur? Because of the opening and closing sequence of the ion channels. The Hodgkin-Huxley and reinforcement learning models allowed us to retrieve latent information, things that are difficult to measure directly; in Hodgkin-Huxley that was, for example, the opening and closing of the channels, and in reinforcement learning it was the reward prediction error. Hodgkin-Huxley could also be used as a test bed for medical interventions; remember, we could apply a potassium channel blocker and see how that affects spiking. The Hodgkin-Huxley and reinforcement learning models could allow us to design new, useful experiments through the quantitative predictions they make. And finally, we said that another benefit of models is that they can inspire new technologies or applications; we said that cosine tuning can do that, for example by helping us design brain-machine interfaces, and so can the reinforcement learning model (remember AI).

Okay. While we have now seen some models and gotten a bit of an appreciation of what different kinds of models can do for us, there is one important thing I have not told you yet, and that is what these things we call models actually are. The answer is quite simple: models are an abstraction of reality. As such, models are partial, imperfect descriptions of the universe, developed by science to aid our understanding of a universe that is otherwise too complex to grasp within the limits of the human mind. That, of course, immediately raises the question: how do I know what the right level of abstraction is? The simple answer is: keep it as simple as possible, but as detailed as needed. This is often known as Occam's razor; in fact, William of Ockham famously said that entities should not be multiplied without necessity. In other words, shave away unnecessary elements. Practically, the right level of abstraction is largely determined by our exact question, the hypotheses we would like to test or implement with the model, and our model goals. That is what tomorrow will be devoted to, and you will learn all about how this works.

Ultimately, we want models to empower us scientifically, to give us something we cannot get directly from the data. There are two big things we cannot get directly from the data: understanding, that is, insight that is not directly accessible from the experiments or the data, and control, in the form of interventions, either experimental or clinical interventions, for example. And of course every model needs to be validated, and model validation is essentially done through experiments. In other words, a model is a hypothesis: a model is a precise mathematical instantiation, a mathematical description, of a hypothesis, and as such, experiments perform hypothesis testing. If you want to compare between multiple models, or multiple hypotheses, that is called model comparison.

Okay, and that brings us back to experiments and data and how they both mutually inform models. That is pretty much the cycle of discovery: you start off with a phenomenon, a question, something unknown. You think about it, you have an idea, a hypothesis, and then ideally you build a model to quantify, to instantiate, this idea you have in your head. The model can then make predictions, so you get a model outcome that you can compare to your original phenomenon, but often you also get predictions for which you do not currently have data, and so you want to run new experiments to collect that data and test those predictions. So models really inspire experiments; but vice versa, of course, it is the data that inspire the models, and most importantly the data constrain the models and provide new questions we would like to understand or be able to answer.

So I showed you three specific examples of models, but there is a whole universe of models out there. One way to classify models is by the scale of the system they describe. Here is a hierarchy of levels of model organization that goes all the way from interactions between multiple people, to the single brain, to individual systems (multiple brain areas working together), to individual brain areas or maps, to networks, to single neurons, to single synapses, to the molecules in a single synapse. Some models bridge multiple levels of organization: for example, you can have models in the form of neural maps that describe behavior very well, or a model that bridges a network to the system level. Such models can be very useful, for example, for understanding brain imaging signals. How do models do that? They do it, for example, by providing relationships between brain signals from different levels of abstraction, or between such brain signals and sensation and behavior. In fact, much of what we will be talking about during Neuromatch Academy is exactly this: what are the modeling and analysis techniques that can give us insight into how the brain reacts to its environment and how it decides on and generates behavior?

Now, it turns out that this universe of models can be summarized in a hierarchy of three levels of generality. At the lowest level we have the descriptive, or "what", models: they provide a compact summary of large amounts of data and essentially ask, what is it that we want to describe? You will see such a model in Tutorial 1. The next level, which is a little more general, is the mechanistic, or "how", level: it shows how neural circuits perform complex functions, and we'll see one of those in Tutorial 2. The highest level of generality is the interpretive or explanatory level, the "why" level, which is concerned with explaining why the brain does something, for example because it is optimal, and you will see such an example in Tutorial 3.
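As a concrete toy version of the reinforcement-learning loop described earlier (take an action, observe the new state and the reward, update the policy), here is a minimal sketch of tabular Q-learning in a one-dimensional corridor. The environment, parameter values, and update rule are standard textbook choices for illustration, not anything specific to the models in this lecture.

```python
import random

random.seed(1)

N = 5                                # corridor states 0..4; state 4 is the exit
alpha, gamma, eps = 0.5, 0.9, 0.2    # learning rate, discount, exploration rate
Q = [[0.0, 0.0] for _ in range(N)]   # Q[state][action]; action 0 = left, 1 = right

for _ in range(300):                 # episodes
    s = 0                            # start at the left end of the corridor
    while s != N - 1:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == N - 1 else 0.0   # reward only on reaching the exit
        # temporal-difference update: nudge Q[s][a] toward reward plus
        # discounted value of the best action in the next state
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next
```

After training, the greedy policy moves right in every state, and the learned action values fall off by roughly a factor of gamma per step away from the exit. This is the "update the policy toward rewarded actions" loop from the labyrinth example in its simplest tabular form.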
So, in order for you to appreciate the differences between such models, you will explore them in the tutorials. As I said, we will see how different questions lead to very different models. As you all know, neurons communicate by sending action potentials to other neurons, and one striking feature is that the time interval between two action potentials emitted by a neuron follows a very specific distribution. As scientists, we're detectives, right? And we can ask different questions about that: we can ask what a good description of that distribution is, or how this distribution is generated in neurons, or why this is the best thing for a neuron to do.

So one thing we can ask is: what distribution best describes the inter-spike interval? That is an interesting question because it could tell us something about the brain's physiology. For most parts of the nervous system, you can capture this distribution with a Poisson process; a Poisson process is what generates a Poisson distribution, which is part of the exponential family of functions. If you include a Gaussian refractory period, to capture the fact that once a neuron has spiked it cannot easily spike again right away because it is hyperpolarized, then you can fit inter-spike intervals well. It turns out that a Poisson process is independent of history. This tells us, as scientific detectives, that we don't have recurrence, meaning neurons do not project back to themselves, which would produce a history effect. So in Tutorial 1 you will develop a simple descriptive model of the inter-spike interval and learn what kind of scientific questions this model can answer.

Then, in Tutorial 2, we will impersonate a very different scientific detective and ask what mechanism gives rise to the inter-spike interval distribution: how does it happen? Remember from the Hodgkin-Huxley model that neurons have a capacitance. That means input currents will accumulate charge on this capacitor, which is akin to integration. But is that integration enough to generate the inter-spike interval distribution? Well, you will see that excitation-inhibition balance is key, and that such a model can answer potentially very different questions than the one from Tutorial 1.

In the third tutorial, we will put on yet another scientific detective hat and ask why this particular inter-spike interval distribution is the best thing a neuron can do. In fact, you will see why this particular spike distribution is optimal for a neuron that wants to maximize information transmission with a given, constant number of spikes in a given time window. And finally, we will have a group discussion to ask what you think the best model is and why.

Again, you will learn the details of these models later during Neuromatch Academy; here, it is important to appreciate how diverse models can answer different questions and how different questions lead to different models. With that, I hope you will enjoy discovering for yourself how different models describing action potentials can answer very different, meaningful scientific questions. Enjoy!
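As a small preview of the Tutorial 1 exercise mentioned above, here is a minimal sketch that draws inter-spike intervals from an exponential (Poisson-process) distribution with an absolute refractory period. Using a fixed dead-time rather than the Gaussian-shaped refractory effect mentioned in the talk is a simplifying assumption, and the rate and refractory values are made up for illustration.

```python
import random

random.seed(0)

rate = 20.0    # assumed mean firing rate of the Poisson process, in Hz
t_ref = 0.002  # assumed absolute refractory period of 2 ms (a fixed dead-time,
               # simpler than the Gaussian refractory effect in the talk)

# Each inter-spike interval = refractory dead-time + exponential waiting time.
isis = [t_ref + random.expovariate(rate) for _ in range(10_000)]

mean_isi = sum(isis) / len(isis)  # close to t_ref + 1/rate = 0.052 s
```

No interval can be shorter than the dead-time, and the exponential tail gives the history-independence mentioned above: the waiting time until the next spike does not depend on how long ago the previous spikes occurred.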

2021-07-13
