These are notes on a Coursera course you can find here. It's called 'Synapses, Neurons and Brains', but I generally call it 'Synapses, Neurons and Brains, Oh My' for the craic.

Week One

What is a neuron? What are the connexions between them (i.e. what are synapses)? How do networks of neurons create behaviors? The goal of the course is to understand these questions. Modern neuroscientists must be multidisciplinary: there is computational-mathematical stuff, anatomy, pathology, child development. Theoretical neuroscience is the foundation to it all. Theoretical neuroscience must give a unified explanation across nanometer scales (DNA and other molecules), cells, networks (millimeter and centimeter scales), up to macroscopic behavior.
Major modern research is happening in areas like these:

  - Connectomics
  - The brainbow
  - Optogenetics
  - Brain-machine interfaces
  - In silico simulation (the Blue Brain project)

Camillo Golgi and Ramón y Cajal around 1900 started neuroanatomy. They stained brain tissue with a silver-based method that stained only about 1% of cells, so rather than a mess of dye, they saw delicate cobwebs of neurons: the first glimpse of neural networks. They couldn't see synapses. Golgi thought the brain was not made of discrete cells. Ramón y Cajal differed.


Connectomics:

Take wafer-thin slices of brain tissue (on the nanometer scale). Scan each slice with an electron microscope. Stack these 2D slices back into a 3D shape. This creates a 3D model, complete at the level of networks/synapses/connexions, of that chunk of the brain. The goal is to understand the connexion between this 3D connectome model and behavior. How do these networks cause behavior, or cognition, or mood?

The 'brainbow':

The brain is grey and uniform in color. The brainbow overcomes this by dyeing neurons fun colors. Developed recently by a group from Harvard, it works by genetically inserting pigments. It is not fine enough to see synaptic connexions, but you can see a strand of, say, blue tissue going towards green tissue. (So connectomics is at a smaller scale than the brainbow.) The brainbow can tell us how learning changes the structure of the brain. Because it is a genetic intervention, it can tell us what genes are expressed in what areas/systems. If we make a particular cell type purple, then the brainbow technique will let us see where that type is. It is useful for discovering the long strands that go from one side of the brain to the other.


Optogenetics:

Use genetic manipulation to implant light-sensitive proteins into neurons. The retina naturally has light-sensitive receptors, so you're turning other brain cells into retina-type, light-responsive cells. They transform photons into electrical activity. Then you use fiberoptics to beam light at these mutant cells and they fire. A particular ion channel called channelrhodopsin makes the cell fire when exposed to blue light. (We can confirm this by recording the cell's activity with a microelectrode.) A protein from Natronomonas pharaonis is sensitive to yellow light, but when exposed to yellow light, the cell DOES NOT fire. Inhibitory effect. This is very specific control: a specific cell type, in a specific region, can be turned either on or off as we wish. Researchers (led by Karel Svoboda) have controlled a mouse's behavior in this way: made it drink or stop drinking. About 100 neurons were under control.

Brain-machine interface:

To build a brain-machine interface, we need to understand the electrical language/electrical code of the brain in real time. Poke a tiny microelectrode into a particular neuron. Read the electrical activity, the highs and lows, going on in it. Do this with a bunch of neurons, and you'll get the music, made of multiple beats, that represents an image, or an intention, or whatever, depending on the region of the brain. Read this concert of electrodes, send it to a machine. The machine controls a robotic arm if you're paralyzed, or a toy helicopter if you're bored. A monkey has successfully used a robotic arm to feed itself using this method (research by Andrew Schwartz of the University of Pittsburgh). The other direxion is from the machine to the brain, e.g. cochlear implants. By recording the basal ganglia's activity with microelectrodes, we can see that in Parkinson's the pattern is very wrong. An implant can be put into the basal ganglia and stimulate it with a more normal, healthy, electrical pattern. (A battery is implanted in the chest.) This is an effective clinical intervention. In the future, we want to use nanotechnology to make a long-term implant that will record the electrical activity in the brain, and can wirelessly transmit recordings, or wirelessly stimulate. The software also needs to be good enough to interpret the signals in real time. Obama's brain mapping project aims to develop the algorithms to read millions of electrodes simultaneously. Sense input could be fed in by implanted electrodes (in the future). Prosthetics could have sensitive skin.

Blue brain project:

In silico simulation/modelling of neuronal circuits using an IBM supercomputer: a full mathematical model of the brain. Record the spiking activity (electrical activity) of neurons, and write an equation that recreates it. This is a mathematical model of one neuron's firing activity. Put a whole heap of these simulated neurons together, link them up, and you have a virtual model of a system. This has been done for about 10,000 cells in about two cubic millimeters of cortex. (There are about ten million times as many neurons in the human brain.) We could make computer models sick with Parkinson's, Alzheimer's etc., then make them well again.

Week Two

This week, we're learning about the neuron, the neuron as an input-output device, the dendrite and dendritic spines (the input part of the neuron), the axon (the output part), types of neurons, synapses (the bridge between two neurons), and two types of electrical signals: spikes (or action potentials) and synaptic potentials. In 1665, Robert Hooke caught the first glimpse of cells. In 1839, Theodor Schwann proposed that all living matter is made of cells. In 1870, Golgi developed the method we mentioned last week for staining a small percentage of neurons. In 1887 Ramón y Cajal proposed the neuron doctrine. In 1891 Wilhelm Waldeyer introduced the word 'neuron'. In 1897 Charles Sherrington coined the word 'synapse'. Sigmund Freud started as a neurobiologist. He drew crayfish neurons in 1882. He followed the neuron doctrine. Ramón y Cajal stained and drew tissues from different parts of different brains from different species. He made the conceptual leap that the networks he drew were conduits of information. Information flows from axon, to dendrite, to cell body, to next axon, and so on.

Theory of dynamic polarization:

The cell receiving the information becomes electrically polarized, and passes on that polarization. Dendrites are input devices. Axons are output devices. Ramón y Cajal got all this right. (Though he did not know that there are inhibitory and excitatory inputs.) Golgi wrongly thought that neurons were not discrete; he thought there was a continuous strand along which information flowed. In 1906, receiving the Nobel prize, they presented their differing views. The axon has varicosities (aka boutons) on it. These contain neurotransmitters. One axon might have 5000. The dendritic tree of a given neuron has many many axons touching it. When a neurotransmitter pings across these synapses, it creates an itty-bitty change in voltage. This itty-bitty change in voltage is called the synaptic potential. There are also axons touching the dendritic tree that ping with neurotransmitters and REDUCE the voltage. These are inhibitory presynaptic neurons. All these pluses and minuses come into the dendritic tree (i.e. the cell's input jack). They all sum up and their resultant voltage reaches the cell body of the postsynaptic neuron.

Axon as output device

The axon is the output device. The exit from the soma is called the axon initial segment. This segment consists of special ion channels that enable the firing of an action potential. Off the action potential goes down the axonal tree. Along the axon are nodes of Ranvier. Between these nodes are internodes. The internodes are wrapped in myelin sheaths. Myelin is a lipid. The nodes of Ranvier are uninsulated by myelin - analogous to a bare piece of copper with the insulation stripped off on a wire. The nodes of Ranvier are similar to the axon initial segment in that they are electrically hot with ion channels that can boost the signal. At the end of the branches of the axonal tree are varicosities. These are the launch pads that send neurotransmitters across synapses. There is no myelin here because you need an uninsulated point to transmit the signal. Where there is myelin (i.e. at the internodes), there are no synapses; and where there are synapses there is no myelin. At the internode, there are special cells that wrap myelin around the axon. These are glial cells called oligodendrocytes. They are the insulators of the brain. They might wrap hundreds of layers of myelin around the axon. Most - but not all - nerve cells are myelinated. (Multiple sclerosis is a disease where myelination is sub-par and signals don't propagate well.) Note that only axons have myelin, not dendrites. Nodes of Ranvier are a few microns wide. They are gaps in myelin. They are excitable. What makes them excitable? Ion channels. Their signal-boosting power allows signals to propagate. Axonal trees can branch locally, or they can branch distally (reaching across the brain). The diameter is about a micrometer, a thousandth of a millimeter. So the axon is an output device that generates a signal (initial segment), conducts it (axonal tree), insulates it (myelin/glial cells), boosts it (nodes of Ranvier), and passes it along (boutons).

Dendrite as input device

Different cell types have differently-shaped dendritic trees. Each dendritic tree will receive inputs from many axons coming from many cells. Axons are interwoven with dendrites. Some cell types have spines on the dendrites. These dendritic spines are where axons dock. You can categorize cell types as spiny and non-spiny. Typical numbers for a cell:
  - dendritic area: 20,000 µm²
  - dendritic spines: 5,000-8,000 (Purkinje cells have 200,000)
  - area of one spine: 1 µm²
  - number of axonal inputs: 10,000
50-60% of the area of the cortex is dendrite. Axons are longer, but dendrites are thicker and have more surface area.

Types of neurons

We've already seen spiny and non-spiny, excitatory and inhibitory. But the categorization is more complex than this. There are about 100 billion neurons, and we have not yet finalized the taxonomy of neuron types. You could say that the cortex has thousands of types; it depends where you want to draw the lines. Principal neurons send excitatory signals to other parts of the brain. Interneurons stay in the circuit, controlling things with inhibitory signals. In the neocortex, excitatory neurons tend to have distal axons, whereas inhibitory neurons tend to have local axons. Grouping interneurons by the shape of the dendritic tree gives: chandelier, large basket, common basket, horsetail, Martinotti, neurogliaform, arcade, Cajal-Retzius (DeFelipe, 2013). We're not sure why we need so many types with different shapes.

The Synapse

A synapse is the gap where a presynaptic axon almost touches a postsynaptic dendrite. The varicosity/bouton of the axon contacts the dendritic spine. In the axon's boutons at the synapse, there are vesicles containing neurotransmitters. There might be 5000 molecules of a neurotransmitter. On the postsynaptic part (the dendrite), there are receptors for these. A spike (which is digital) causes the vesicles at the varicosity to release their neurotransmitter across the gap. When the receptors receive the neurotransmitter, they pass on a voltage to their cell body. This is analog. The synapse can be described as a digital-to-analog converter. There are strong synapses that generate large-amplitude voltages in the dendrite, and weak ones that generate smaller voltages. Axon: digital voltage. Gap: chemical signal. Dendrite: analog voltage. An example spiny stellate cell in layer 4 of the cortex receives:
  - 1430 inputs from others the same as it
  - 3105 inputs from layer 6 pyramidal cells
  - 355 inputs from local smooth cells
  - 360 inputs from synapses faraway in the thalamus
Some of these are excitatory and some are inhibitory. At the cell body, all the inputs are summed. The axon initial segment then decides if the sum reached the threshold. If it did, the neuron fires. If it did not, it does not.

Week Three

A neuron, like any cell, can roughly be thought of as a sphere of membrane. There can be a difference in charge (i.e. a voltage) between the inside and the outside. When they stick an electrode in a cell and inject positive current, the voltage slowly (not sharply) rises. Positive voltage change is called depolarization. Negative voltage change is called hyperpolarization. If the cell were a simple resistor, a sharp current would cause a sharp change in voltage. But the fact that voltage changes smoothly means it is better modelled as a resistance-capacitance device. A basic resistance-capacitance circuit:
  * Capacitative current = C(dV/dt)
  * Resistive current = V/R
  * Adding these two must give you the injected current, because the injected current got split into these two. So I = C(dV/dt) + V/R.
Solve this for voltage and you get V(t) = I·R·(1 - e^(-t/RC)). τm is another way of saying R·C. This is the membrane time constant. It determines the rate at which voltage increases and decreases. Cells have different membrane time constants; some have little memory and gain and lose voltage quickly. In brain cells, it is about 20 milliseconds. If the membrane is particularly leaky, it might be 5 milliseconds. Because I·R is the maximal value of voltage in the equation above, cells with more resistant membranes (i.e. higher R) can reach a higher voltage. If you inject current on, off, on, off, on, off, the ons will summate. Remember we saw that voltage attenuates slowly. If it hasn't gotten back to zero when the second pulse comes, then you see temporal summation. Of course, if you wait too long, it will have attenuated back to zero. How long this is depends on the time constant (τm). The same applies to hyperpolarizing/inhibitory/negative current inputs. Cells normally, in their resting state, have a negative potential of about -70mV. This is called the resting potential or Erest. (This has been measured by microelectrodes.)
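The charging curve above can be sketched numerically. This is a minimal sketch with illustrative values for R, C, and the injected current (not numbers from the course):

```python
# Forward-Euler integration of the passive membrane: C*dV/dt + V/R = I.
R = 100e6    # membrane resistance: 100 megaohms (illustrative)
C = 200e-12  # membrane capacitance: 200 picofarads -> tau = R*C = 20 ms
I = 100e-12  # injected step current: 100 picoamps
tau = R * C
dt = 1e-5    # 0.01 ms time step
V = 0.0      # voltage relative to rest

t = 0.0
while t < 5 * tau:          # integrate for five time constants
    V += dt * (I - V / R) / C
    t += dt

# After ~5 tau the voltage has essentially reached its maximum, I*R = 10 mV.
print(round(V * 1000, 2))   # in millivolts
```

Note how the maximum is set by I·R and the approach to it is governed entirely by τm = R·C, exactly as in the formula above.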
Because of this, we need to adjust our resistor+capacitor model of the neuron, and make it resistor+capacitor+battery. The capacitor is getting charged with negative and positive current when we're in action. When neurotransmitter hits the dendrite's receptors, ion channels open in the postsynaptic cell. These are new paths by which ions can flow. They can flow either into the cell or out of it. These new paths are called 'active' (or 'synaptic'). They exist alongside the 'passive' channels that do the capacitance-resistance stuff. There are two types of active ion channels: potassium channels carry negative current, sodium channels carry positive current. Conductance is the inverse of resistance: g = 1/R. Call the conductance of the passive channels grest. Call the conductance of the active channels gsyn. When the neurotransmitters open the active channels, the cell's conductance is grest + gsyn. There are a lot of Na+ ions outside the cell, few inside. There are a lot of K+ ions inside the cell, few outside. The channels that neurotransmitters open are very specific to the kind of ion. So if a neurotransmitter makes the membrane permeable to sodium ions, they will flow in (making the cell more positive). If the neurotransmitter opens it to potassium ions, they leave the cell and make it more negative. In other words, in addition to the 'battery' part of the passive capacitor-resistor-battery circuit we saw, there's a new ('active' or 'synaptic') battery, and this has a different voltage. So the overall voltage of the cell changes when this comes into play. The passive term grest(V - Erest) states that what flows thru the passive channel depends on the difference between the two sides of the conductor, multiplied by its conductance. Add the synaptic channel: synaptic current = gsyn(V - Esyn). The capacitative, passive, and synaptic currents must sum to zero (there is no extra current apart from these). In other words C(dV/dt) + grest(V - Erest) + gsyn(V - Esyn) = 0.
Solve for V at steady state to get the change in voltage due to the opening of the synaptic channel: V = (grest·Erest + gsyn·Esyn) / (grest + gsyn). If there is no synaptic channel open, gsyn is zero, so it's just grest·Erest / grest, which is your resting potential, -70mV. At the other extreme, as gsyn approaches infinity, i.e. when the synaptic channels are very strong, the equation approaches gsyn·Esyn / gsyn, which simplifies to Esyn. Between these two extremes, i.e. between the resting potential and the synaptic battery, is where you swing, depending on the strength of the synaptic conductance. There is a maximum and a minimum value to the voltages, set by the concentrations of sodium, potassium, and other ions present. From rest at -70mV, it might swing between about -90mV (the potassium battery) and +60mV (the sodium battery).
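The steady-state formula and its two limiting cases can be checked in a few lines. The conductance values are illustrative, not from the course:

```python
# Steady-state membrane voltage with a resting battery and a synaptic battery:
# V = (grest*Erest + gsyn*Esyn) / (grest + gsyn)
def steady_state_v(grest, erest, gsyn, esyn):
    return (grest * erest + gsyn * esyn) / (grest + gsyn)

E_REST, E_SYN = -70.0, 0.0   # mV; Esyn = 0 mV is typical for an excitatory synapse
G_REST = 1.0                 # arbitrary conductance units

print(steady_state_v(G_REST, E_REST, 0.0, E_SYN))     # no synapse open -> -70.0 (rest)
print(steady_state_v(G_REST, E_REST, 1000.0, E_SYN))  # huge gsyn -> close to Esyn (0 mV)
```

In between the two extremes, the voltage is a conductance-weighted average of the two batteries, which is exactly the "swing" described above.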


The spike

Spike is an all-or-nothing thing happening in the presynaptic axon. Let's look at why it's all-or-nothing. Hodgkin & Huxley were the guys who figured out a model for the spike, using the space clamp and the voltage clamp method. They won the Nobel Prize in 1963. They studied the squid because it has big axons, about half a millimeter thick, rather than the few microns of ours. They began this work pre-WW2. A neuron has synaptic potentials in its dendrites, spikes in its axons. Synaptic potential generates spike. H&H poked an electrode into those big squid axons. What they saw: at rest, the inside of the axon is more negative than the environment. At the spike, it becomes suddenly positive. (After, there is a period where it is more negative than at rest.) This lasts about a millisecond, varying a bit depending on temperature. H&H, in 1952, came up with four equations that describe the spike. If a certain current comes from the dendrite, it dissipates. But once the quantity of current hits a threshold, instead of dissipating, it goes up even more. This threshold is about 10mV above rest. (So if you take it from -70mV to -60mV, it'll suddenly go up to +40mV or so.) What machinery in the axonal membrane causes this spike to occur? H&H developed two techniques to study the axon: the space clamp and the voltage clamp. The space clamp makes the axon isopotential (i.e. makes the voltage along the whole axon's length the same by putting a good conductor in there). The voltage clamp is a more sophisticated method that fixes the voltage inside and outside the membrane of the axon at a chosen value. The voltage clamp is a feedback system; it injects current to exactly counterbalance the membrane's own current. It measures how much current it is injecting, and by that you know the current the membrane passes at that voltage. If you set a voltage clamp to hold the voltage above the neuron's threshold, current flows first *into* the neuron, then *out*. It is a biphasic current.
There is a fast inward current first, then after that comes a later, slow outward current. Either phase can be blocked by a drug (inward by TTX, outward by TEA), implying that they are two different currents. By playing with the chemistry, H&H found that the inward current is a sodium current flowing from outside to inside, and the outward current is potassium ions flowing from inside to outside. The inward flow of sodium ions lasts such a short time because it inactivates itself. The cell body has synaptic and passive channels, and capacitance. The axon has an outward active potassium channel, an inward active sodium channel, a passive channel, and capacitance. Because current is conductance times driving force (i.e. the difference between the membrane voltage and the ion's battery), H&H could figure out the conductance of the sodium and potassium ion channels. Holding the membrane at a depolarized voltage with the voltage clamp opens ion channels in the axon, creating a conductance. Sodium channels change conductance quicker than potassium ones. When you keep the voltage clamp on, the sodium conductance fades away (inactivates) after time. So the sodium channel is inactivating, but the potassium channel is not. H&H then faced the challenge of writing equations that modelled how these conductances grew and faded. The rising phase of the K-conductance goes as (1 - e^(-t))^4, and its decay goes as e^(-4t). The potassium conductance in the membrane is gK = ḡK·n^4. n is a number between 0 and 1 that gets higher for higher voltages in the voltage clamp. n depends on time as well. It represents the proportion of K-channel gates in the membrane that are open at that moment. The power of four comes into it because for a potassium ion to cross the membrane, there must be four similar particles in place. You can think of the channel as having four gates in series. They must all be open for a potassium ion to pass thru to the outside. When you depolarize the membrane (i.e. put a positive charge in it), the gates start to open.
Enough depolarization and all four are open. n can be thought of as the probability that a given gate is open. dn/dt = αn(1 - n) - βn·n, where αn and βn are rate parameters that change value depending on voltage. If alpha is big, gates move towards open; if beta is big, gates move towards closed. Sodium requires another variable to account for inactivation: gNa = ḡNa·m^3·h, where h is the inactivation variable. dm/dt = αm(1 - m) - βm·m, and dh/dt = αh(1 - h) - βh·h. A flow of sodium ions requires 3 gates rather than 4 to open. These are called m-gates. There is also an h-gate which closes slowly and inactivates the channel. The reason a threshold stimulus from the presynaptic neurons causes a spike is that it opens sodium channels. The positive current from the inputs opens gates to positive current from the sodium ions - this self-reinforcing loop is what a spike is. The spike is self-limiting because the voltage it creates opens potassium channels, which carry positive charge out of the cell. This is why after a spike, the voltage goes below the resting -70mV; the potassium channels are still open, making things more negative. After a spike, the h-gate has closed, and so another spike can't happen. This is the absolute refractory period. You also have the lingering potassium current making the cell harder to excite at this time, contributing to the relative refractory period. Some cells might have a refractory period of 5 milliseconds, meaning they can fire 200 times a second, but 10ms is more normal.
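The pieces above (n, m, and h gates, their rate equations, and the two batteries) assemble into a small simulation. This is a standard textbook rendering of the Hodgkin-Huxley model; the rate functions and parameter values are the conventional published ones, not taken from these notes:

```python
import math

# Standard Hodgkin-Huxley rate functions (V in mV, rest near -65 mV).
def a_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * math.exp(-(V + 65) / 80)
def a_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * math.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + math.exp(-(V + 35) / 10))

C = 1.0                                   # membrane capacitance, uF/cm^2
gNa, gK, gL = 120.0, 36.0, 0.3            # maximal conductances, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4          # batteries, mV

V, n, m, h = -65.0, 0.317, 0.053, 0.596   # resting state
dt, T = 0.01, 50.0                        # time step and duration, ms
peak = V
t = 0.0
while t < T:
    I = 10.0 if t > 5.0 else 0.0          # step current (uA/cm^2) switched on at 5 ms
    INa = gNa * m**3 * h * (V - ENa)      # inward sodium current: three m-gates, one h-gate
    IK = gK * n**4 * (V - EK)             # outward potassium current: four n-gates
    IL = gL * (V - EL)                    # passive leak
    V += dt * (I - INa - IK - IL) / C
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    peak = max(peak, V)
    t += dt

print(peak > 0)   # the spike overshoots 0 mV: all-or-nothing, as described above
```

Drop the stimulus below threshold and the voltage just sags back to rest; above threshold, the m-gates regenerate the depolarization and you get the full spike.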


Plasticity

Plasticity is what makes us learn. Possibly the fact that memory is stored plastically suggests that memories are unreliable - it suggests that they are not stored statically, but flexibly. (Sensory substitution is a related topic.) Learning implies predicting what is coming, categorizing things, and creating consensus between people. Held and Hein showed in 1963 that in order to learn to see, we must act. Action-perception loop. A cat that was restricted from moving at the beginning of its life became functionally blind. The brain reconstructs reality from very sparse information, by categorizing and remembering. It does this thru mechanisms that have been learned plastically. Practice leads to new intercortical connexions. This leads to memory and learning, and the development of abilities. The hippocampus is important in learning. It is built from a unique type of neuron. Three possible activities underlying learning:
  1. New cells are born, create new networks
  2. New synaptic connexions are formed. New networks made of not-new cells
  3. Existing connexions, existing cells, but new strength of synapses. (Hebbian learning)
Hebb hypothesis, Donald Hebb, 1949: Neurons that fire together wire together. When cell A consistently activates cell B, some growth takes place and the connexion becomes stronger, and cell A can now fire cell B more effectively. The Hebb hypothesis has been shown true at some cortical and hippocampal synapses. The same spike from cell A causes more depolarization in cell B. What change underlies this strengthening? One possibility is that new receptors sprout at the dendrite. So the same spike opens more ion channels post-synaptically. Another possibility: the same spike releases more neurotransmitter from cell A. Spike Timing-Dependent Synaptic Plasticity (STDP): use an electrode to inject a current into the presynaptic cell. Use another electrode to generate another spike in the postsynaptic cell. Do this enough times and you find the presynaptic spike now generates a stronger depolarization in the postsynaptic cell. This potentiation lasts a long time, from minutes to a lifetime. (Long-term potentiation (LTP).) By playing with different timings of these two spikes, they found that if cell B fires first, the synapse becomes weaker. (This is long-term depression (LTD).) The greatest LTD and LTP happened where the interval was about 40ms. This explains some learning, but not things like Pavlovian learning, where the interval is much longer than 40ms. It is only one of many mechanisms of learning. The amount of potentiation/depression at different intervals was modeled by a mathematical formula. This is used in artificial intelligence to make machines that learn. Structural plasticity exists, i.e. there are anatomical changes in the brain when it learns. Close the eye of a cat, and the associated neurons wind up with fewer dendritic spines. Two-photon microscope: a microscope that can show the spines in a living brain. You can see new dendritic spines sprouting and then disappearing.
We don't understand why some spines are stable and some vanish - maybe they never found a friendly axon. We can see that connexions between axons and dendrites come and go from day to day. So, from the evidence of the two-photon microscope, there are new synapses forming between two cells as a result of learning. Plus Hebb's law is also going on: existing synapses are becoming more impactful.
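The timing-dependent formula for potentiation/depression is often written as a pair of exponential curves. This is a common textbook form of the STDP window, sketched here with illustrative parameter values (not from the course):

```python
import math

# Weight change as a function of the pre->post spike interval dt_ms:
# pre before post (dt_ms > 0) -> potentiation; post before pre -> depression.
def stdp(dt_ms, a_plus=1.0, a_minus=1.0, tau_ms=20.0):
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)    # LTP, fading as the interval grows
    else:
        return -a_minus * math.exp(dt_ms / tau_ms)   # LTD, also fading with the interval

print(stdp(10) > 0)          # pre just before post: synapse strengthens
print(stdp(-10) < 0)         # post just before pre: synapse weakens
print(stdp(10) > stdp(40))   # the effect fades as the interval grows
```

Beyond a few tens of milliseconds in either direction, the curve is near zero, which is why this mechanism alone can't account for Pavlovian-style learning over long delays.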


Neurogenesis

Learning about neurogenesis might help us develop therapies for neurodegenerative diseases. In 1985, Pasko Rakic from Yale said that adult brains grow no new neurons. In 1997, Elizabeth Gould from Princeton said that she saw neurogenesis in tree shrews, then in primates in 1998. A paper from 1999 showed neurogenesis with staining techniques. Now with the two-photon microscope, you can see the development of new neurons over days. The more challenging a task given to a mouse, the more new cells are born. The new cells are born in a particular niche in the hippocampus, then shunted to a different part of the hippocampus where they sprout connexions. (This is mice we're talking about.) Neurogenesis happens in the olfactory bulb and the hippocampus in the mouse, not all brain-parts.

Computational neuroscience

Even a single neuron can be considered to compute. Not just a neuron, but even a dendrite can execute some computations, as shown by recent work looking at neurons in the retina with new techniques. What problem needs to be solved by the organism? What mathematical techniques are needed to solve it? What hardware implements these algorithms? Different areas of the brain have different hardware and implement different algorithms to solve different problems. An example would be computing the distance to a cup I want to pick up, based on visual inputs. The brain computes from visual inputs what bits clump together as an object, i.e. figure-ground separation. The brain has an algorithm telling us which parts of a face to point the peepers at. We can see this in eye-tracking experiments - people look at the eyes and the mouth more than would happen by random chance. The visual system also has an algorithm that spits out recognitions: this is a face, this is a house. The visual system also has an algorithm that identifies motion: this is moving left, this is moving towards me, this is not moving. Using the outputs of these computations, we can model our behavior. I know it's a car, and I know it's moving towards me, so I know to stop at the kerb. Hubel & Wiesel won the Nobel Prize in 1981 for experiments where they implanted microelectrodes in the neurons of a living cat. They found a cell that fired when and only when the cat saw a line, at a particular angle, moving in a particular direxion. When a vertical line moved, it fired a lot. When a horizontal line moved, it didn't fire at all. For lines that are pretty vertical, it fired quite a lot. One early theory about the neuron as a computational device was by McCulloch & Pitts in 1943, in a paper called 'A logical calculus of the ideas immanent in nervous activity'. This paper influenced computer science even more than it influenced neuroscience.
It was inspired by the binary/digital nature of the neuron, and by the idea that synapses are either excitatory or inhibitory. Their theory was this: suppose there is a hypothetical neuron with 3 excitatory inputs, and 1 inhibitory one. The E inputs are each +1, the I input is -4. The threshold is 1. This neuron fires if E1, or E2, or E3 is active, and I is not active. Now you have a logical formalism describing the rule controlling the firing of the neuron. Neuron qua logical device. Using logical devices, you can build a complex computer that can compute anything. It's interesting that this idea from neuroscience influenced computing. And ideas from computing often influence neuroscience, and back and forth. Real neurons are different from the hypothetical neuron in McCulloch & Pitts's model. Why? Because they are spread across quite a bit of space, whereas M&P modelled the thing as a point. What are the implications of this? Computational neuroscience is about creating mathematical models of brain activity. If we have a mathematical model of the thing, we understand that thing. This allows us to interpret results, and to predict results of future experiments. When we have a parsimonious mathematical model, it tells us which variables are worth paying attention to (e.g. conductance in the H&H model), and which can be ignored. A mathematical model also allows us to look at the thing as a functional element (e.g. as a computational component in McCulloch & Pitts's model).
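The hypothetical neuron described above translates directly into code. A minimal sketch of the M&P example (weights +1, +1, +1, -4; threshold 1):

```python
# McCulloch-Pitts neuron: weighted sum of binary inputs compared to a threshold.
def mp_neuron(e1, e2, e3, i):
    total = 1 * e1 + 1 * e2 + 1 * e3 - 4 * i   # three excitatory inputs, one inhibitory
    return 1 if total >= 1 else 0              # threshold = 1

print(mp_neuron(1, 0, 0, 0))  # one excitatory input active -> fires (1)
print(mp_neuron(1, 1, 1, 1))  # inhibition (-4) vetoes all three -> silent (0)
print(mp_neuron(0, 0, 0, 0))  # nothing active -> silent (0)
```

Because the inhibitory weight (-4) outweighs all three excitatory inputs combined (+3), this unit computes exactly the logical rule "(E1 OR E2 OR E3) AND NOT I".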

Cable theory of dendrites by Wilfrid Rall

Aims to create a mathematical model of how distant dendrites affect the output of a soma or axon. Rall in 1959-64 put forward a theory expanding on McCulloch & Pitts. He wanted to use a more realistic model of a neuron, taking into account the large extent of the dendritic tree. One thing this implies is that if you inject a current into the soma, most of it actually flows out into the dendrites. Potential changes along the length of the dendrite; it attenuates. There is a time-lag between the synapse receiving the input, and the input reaching the soma. Synapses closer to the soma will have a smaller lag. Model a dendrite as a series of cylinders. These have varying diameters and lengths and conductances. At some point on a cylinder, there is a dendritic spine and a synapse at it. If current is injected here, it won't just flow towards the soma; it will flow in both direxions. As it's flowing, some will leak out; it's attenuating coz of resistance. As well as the attenuation/leak, it will lose a whole lot of current when it reaches a fork in the road. Now, how do we describe mathematically this diminishing current? The axial current is proportional to the derivative of voltage with respect to distance (dV/dx). Axial current that is lost becomes membrane current. (It leaks out thru the membrane.) The change in axial current equals minus the membrane current; in other words, the change in axial current plus the membrane current sums to zero. At branching points, there is a leakiness. If a voltage of 30mV is injected at the distal end of a dendritic tree, 1mV might reach the soma. Because of how current spreads in all direxions (not just somaward), synapses affect other synapses near them on the dendritic tree. There are neighbourhoods of synapses on the dendritic tree, and these neighbourhoods compute. Rall's theory says that with more time, or more distance, the signal diminishes.
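The attenuation with distance can be sketched with the steady-state solution for an infinite passive cable, V(x) = V0·e^(-x/λ), where λ is the space constant. The value of λ used here is illustrative, and a real tree loses extra signal at branch points, which this ignores:

```python
import math

# Steady-state voltage along an infinite passive cable, distance x from the synapse.
def v_at(x_mm, v0_mv=30.0, lam_mm=0.3):
    return v0_mv * math.exp(-x_mm / lam_mm)

# 30 mV injected at a distal synapse; what survives 1 mm away at the soma?
print(round(v_at(1.0), 2))   # on the order of 1 mV - the 30-in/1-out figure above
```

A thicker dendrite has a larger λ and so delivers more of the distal signal to the soma; this is one reason dendritic geometry matters computationally.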
Interestingly, when time is low, distant points on the cable are much less affected by a current, but for larger values of t this is not so significant. Rall's theory allows us to look at a signal reaching the soma and guess how distant its origin is: close synapses will produce narrow (i.e. short-lived) responses, and distal ones broader responses. The idea of neighbourhoods within the dendritic tree allows us to think of two kinds of computation: neurons compute within their dendritic tree, and they compute at the soma. Dendrites classify inputs. Dendrites can compute the direction of motion (e.g. visual motion). They can localize sound. Rall's neurons allow more complicated computation than the neuron of McCulloch & Pitts. The M&P neuron has no subtlety with regard to location. With a bunch of excitors and inhibitors alternating along a cable, there are a lot of IFs and THENs that can veto each other. Summation of inputs happens locally on the tree, and then the results of these also summate and enter the soma. Synapses are clustered on the tree, and that cluster has an output that is different than it would be without this cable interference. An example of the local sensitivity of the synapses: if activation sweeps thru excitatory synapses in a distal-to-proximal order, their effects arrive at the soma almost together and create a larger summated voltage there. (Consider the implications if this additional voltage makes it hit the threshold.) If they fire in the reverse (proximal-to-distal) order, the spike at the soma is broader. Because of this direxional sensitivity (i.e. the fact that proximal-to-distal is different from distal-to-proximal), consider a dendritic tree with sequential inputs from visual receptor neurons. If they sweep left-to-right it may fire, but not if they sweep right-to-left. This is how your brain tells what direxion things are moving. A neuron was recorded in a mouse's brain in vivo that responds only to a line of particular orientation.
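That order-dependence can be illustrated with a toy model: give each synapse an assumed dendrite-to-soma conduction delay, sum alpha-shaped EPSPs at the soma, and compare the peak voltage for the two sweep orders (all numbers here — delays, time constants — are invented for illustration):

```python
import math

def epsp(t, onset, tau=2.0, amp=1.0):
    """Alpha-shaped EPSP at the soma, starting at `onset` ms."""
    s = t - onset
    return amp * (s / tau) * math.exp(1 - s / tau) if s > 0 else 0.0

def soma_peak(stim_order, conduction_delay=(3.0, 2.0, 1.0), dt_stim=1.0):
    """Peak summed voltage at the soma for synapses activated in sequence.
    conduction_delay[i] is the assumed dendrite-to-soma lag of synapse i,
    indexed distal -> proximal; stim_order gives the activation order."""
    onsets = [k * dt_stim + conduction_delay[i] for k, i in enumerate(stim_order)]
    times = [t * 0.1 for t in range(200)]
    return max(sum(epsp(t, o) for o in onsets) for t in times)

distal_first = soma_peak([0, 1, 2])    # stimulus lags cancel the delays
proximal_first = soma_peak([2, 1, 0])  # lags add to the delays: spread out
print(distal_first > proximal_first)   # -> True
```

In the distal-to-proximal sweep the conduction delays exactly cancel the stimulus lags, so the three EPSPs coincide at the soma and the peak is larger; the reverse sweep spreads them out into a broader, lower response.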
It was found that the input neurons respond to lines of various orientations. We don't yet know if the resultant firing at the neuron that was studied is simply a summation of the inputs, or a more interesting dendritic computation of the inputs creating a response to its own specialty orientation. The retina is composed of several layers, starting with receptor cells, then bipolar cells. The ganglion cells output to the optic nerve. These ganglion cells have the direxional selectivity thing going on too. The Reichardt detector was an early theory to explain direxional sensitivity. It states that there is more inhibition in one direxion, more excitation in the other: an asymmetry. They are reconstructing the connectome of the retina from slices to figure this out. First find the orientation a cell is sensitive to, then reconstruct the synapses around it. Inhibitory amacrine cells inhibit these retinal ganglion cells. The connectomics supports the Reichardt theory, because it shows that there are more inhibitory synapses on one side, so light sweeping from that side turns the cell off - direxional selectivity. The complexity and computation of the brain is not the emergent property of simple elements; the elements (i.e. neurons) are complex and capable of computing.
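The core Reichardt idea — correlate each point's signal with a delayed copy of its neighbour's, with opposite signs for the two direxions — can be sketched minimally (a toy correlator, not the actual retinal circuit):

```python
def reichardt(frames, delay=1):
    """Minimal two-point Reichardt correlator over a 1-D stimulus.
    frames: list of [left, right] light intensities over time.
    Output > 0 means motion left -> right (the preferred direction);
    output < 0 means right -> left (the null direction)."""
    left = [f[0] for f in frames]
    right = [f[1] for f in frames]
    out = 0.0
    for t in range(delay, len(frames)):
        # correlate delayed left with current right, minus the mirror term
        out += left[t - delay] * right[t] - right[t - delay] * left[t]
    return out

lr = reichardt([[1, 0], [0, 1], [0, 0]])  # a bar sweeping left -> right
rl = reichardt([[0, 1], [1, 0], [0, 0]])  # the same bar sweeping right -> left
print(lr > 0 > rl)  # -> True
```

The asymmetry is built in by subtracting the mirror-image correlation, which plays the role of the one-sided inhibition the connectomics revealed.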

Mega projects to map the brain

The Allen Institute, Janelia Farm, the EU Human Brain Project, and Obama's 'brain activity map' are 4 big, heavily-funded projects to map brains. The Allen Institute is focusing on the visual system of the mouse. They already made an atlas of gene expression in the mouse's brain. Janelia Farm is in Virginia, near Washington D.C. The EU Human Brain Project grew out of the Blue Brain Project, which is based in Lausanne. Obama's brain activity map (BAM) aims to measure the spiking activity of neurons - millions, and eventually billions, of neurons simultaneously. There are 560 known neurological diseases. There is a project called the 'diseasome' that aims to map them genetically and otherwise. Anatomical or activational screwiness causes disease. Part of the motivation of the Blue Brain Project was to make raw data (not just papers) accessible to scientists. The more controversial aspect of the project is simulation. The Human Brain Project emerged from this. Simulation-based research can teach us some things about the workings of the brain. Connectomics is controversial because it is very labour-intensive and some argue that less detailed modelling could serve the same purpose. In the neocortex of different mammals, just under the skull, we see similar things. There are about 30,000 cells per mm³. There are pyramidal cells in there. There are axonal inputs from distal regions. But the inputs aren't random; they can be organized into six layers. The layers have different cell types (remember there are multiple ways of categorizing cells). Columns in the somatosensory cortex of the mouse look like barrels, and the region is called the barrel cortex. Each barrel computes data coming from a particular whisker. You can see separate barrels in the brain, and these map topographically to the layout of the whiskers. Primarily, each barrel computes for its whisker, but then after that, the information spreads. In the cat's V1, there are whole columns that are sensitive to visual lines of a particular orientation.
The Blue Brain Project is focused on cortical columns for now. By simulation-based research, it aims to take all we have learned about cell types, synapses, and spiking activity, and conclude from that how outputs are formed. Remember that cells have different firing patterns: stuttering, regular, etc. The Blue Brain Project needs to figure out which cells connect to which, and also the firing properties of each cell type. We need a Hodgkin-Huxley equation for each cell type. The Blue Brain Project also uses the passive cable equations we covered (and the active cable equations we did not) to understand other things like dendritic computations. Idan Segev and others extended the Hodgkin-Huxley equations to be an even better fit to measurement. We also need mathematical rules for plasticity. We have spike-timing-dependent plasticity equations, but we need more plasticity equations. The IBM computer can simulate 100,000 cells. Moore's Law says we'll be able to Blue Brain the whole human brain by 2023. In the Netherlands, there is a group analyzing human brain tissue slice-by-slice when chunks o' brain are removed for surgical reasons. The Human Brain Project comprises hundreds of institutions combining their efforts. It is medical, computational, physiological etc.
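The spike-timing-dependent plasticity rule mentioned here has a standard textbook form: exponential potentiation when the presynaptic spike precedes the postsynaptic one, exponential depression otherwise (the constants below are illustrative, not the course's):

```python
import math

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Textbook STDP window. dt_ms = t_post - t_pre.
    Pre-before-post (dt > 0) strengthens the synapse;
    post-before-pre (dt < 0) weakens it.  Constants are assumed."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau)
    return -a_minus * math.exp(dt_ms / tau)

print(stdp_dw(10.0) > 0)   # pre leads post by 10 ms -> potentiation
print(stdp_dw(-10.0) < 0)  # post leads pre -> depression
```

The key property is that the weight change decays exponentially as the spikes drift apart in time, so only near-coincident spike pairs cause much plasticity.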

Sensation, perception

Sensation = translating external events into neural activity. Perception = translating this activity into useful sensory information. A 2D image falls on the retina. The brain must interpret this, to know what is where / to make a 3D model of the scene. Perception-action loop: sensory information is processed (e.g. the binaural gap used to localize sound), and that influences action (e.g. turning the head towards the sound). Sound waves enter the earhole and go down the ear canal to the tympanic membrane. The membrane vibrates, and the vibration is transferred thru 3 tiny bones to the fluid that fills up the cochlea. The waves in the fluid move the basilar membrane. The basilar membrane moves hair cells in the cochlea. The movement opens ion channels. Spikes ensue. The spikes go down the auditory nerve fiber, which leads to the brain. The basilar membrane is arranged like a piano: with a scale responsive to frequency. There are two types of hair cells. One is piezoelectric. Localizing sound is interesting, because nothing about the location/direxion of sound hits the ear. It requires computation, based on the binaural gap. There is a time-lag between the two ears (interaural time difference), but also a difference in amplitude (interaural level difference), e.g. if a sound comes from my left, I hear it louder in my left ear, because the head blocks it. The interaural time difference is less than one millisecond. We can localize something in front of us with about one degree of precision - this is an interaural time difference of a few tens of microseconds. Each ear has an ear canal, tympanic membrane, cochlea, and cochlear nucleus. The two cochlear nuclei both send axons to one structure called the medial superior olive. This is nearer the right ear than the left, so signals from the right should hit it about 300μs earlier.
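The geometry behind these numbers is easy to check with the simple path-difference model ITD = d·sin(θ)/c (the head width and the far-field assumption are mine):

```python
import math

def itd_seconds(azimuth_deg, ear_distance_m=0.2, sound_speed=343.0):
    """Interaural time difference for a far-field source, using the simple
    path-difference model ITD = d * sin(theta) / c.  0 deg = straight ahead."""
    return ear_distance_m * math.sin(math.radians(azimuth_deg)) / sound_speed

print(round(itd_seconds(90) * 1e6))  # source fully to one side: -> 583 (us)
print(round(itd_seconds(1) * 1e6))   # 1 degree off-center: -> 10 (us)
```

So even with a source fully to one side, the lag is well under a millisecond, and a one-degree shift in azimuth changes the ITD by only about ten microseconds, which is why the coincidence-detecting neurons need such extreme temporal precision.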
If they hit simultaneously, that means the sound is to the left (it hit the left ear 300μs prior), and the MSO has simultaneity-sensitive neurons that fire when this happens. In order to pull off this trick, these specialized neurons have to have no dendritic computation, and no temporal summation (because they have to move on to the next calculation so fast). Similar mechanisms are used for localizing in other senses. Barn owls can hunt in complete darkness because they're really good at sound localization. They have one ear higher than the other, so they can localize sound vertically too. Researchers put kinky goggles on barn owls to make everything look 20 degrees right of where it really was. The owls started looking at visual stimuli 20 degrees right of normal, but kept looking at auditory stimuli accurately. But after a few weeks, they started looking at auditory stimuli shifted too; the error in the visual system spread to the auditory responses. Conclusion: vision teaches the auditory system where things are in the world. The barn owl has a brain structure called the optic tectum. This receives inputs from different senses and sends signals that control orientation movements out to the brain stem, to control head movements. In mammals, the superior colliculus plays this role, and controls eye movements as well as head movements (head movements when the turn is to a more peripheral location). The neurons of the cochlea are frequency-sensitive. But to localize sound, we need to eliminate this information; frequency is not relevant to location. The external nucleus of the inferior colliculus does this eliminating. All the neurons sensitive to a particular interaural time difference, regardless of frequency, converge on the same part of the external nucleus. The external nucleus talks to the optic tectum (we're talking about barn owls here).
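Those simultaneity-sensitive neurons are usually described with the Jeffress delay-line model: an array of coincidence detectors, each fed through a different internal delay, where the detector whose delay cancels the ITD fires most. A toy sketch (spike times and candidate delays are invented, in integer microseconds):

```python
def best_delay_us(left_spikes, right_spikes, candidate_delays):
    """Jeffress-style coincidence detection sketch; times in integer us.
    Each candidate internal delay (applied to the left-ear spike train)
    has a detector that counts coincidences with the right-ear train;
    the winning delay is the circuit's estimate of the ITD."""
    def coincidences(d):
        return len({t + d for t in left_spikes} & set(right_spikes))
    return max(candidate_delays, key=coincidences)

# Sound from the right: left-ear spikes lag the right-ear ones by 300 us.
left = [1300, 5300, 9300]
right = [1000, 5000, 9000]
print(best_delay_us(left, right, [-300, -100, 0, 100, 300]))  # -> -300
```

Because each detector only responds when its delay lines up the two trains, the population converts a sub-millisecond time difference into a place code, exactly the kind of labelled-line output the external nucleus can then read off.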
At the optic tectum, neurons with no interaural time difference (meaning straight ahead) converge with neurons from the visual input of a thing that is straight ahead. Something similar has been shown in the colliculus of mammals, including cats. The mystery of this model is this: with the goggles, we saw the auditory system make a plastic change: the neurons (from the cochlea) that used to point to straight ahead (in the external nucleus) now point to 20 degrees left. But how does the visual system cause a plastic change in the connexions between these two parts of the auditory system? The solution is that the tectum does talk to the external nucleus, but this signal is inhibited. When a substance that blocks inhibition is introduced, visual flashes cause reaxions in the external nucleus. So when there is a mismatch (e.g. caused by the goggles) between the tectum and the external nucleus of the inferior colliculus, the inhibition is released, and the tectum can cause plastic change in the nucleus. Things in the world tend to trigger more than one sense at once (e.g. are visible and audible), so it makes sense for sense-channels to interact and be able to change each other. McGurk effect. The fact that we react with surprise to certain stimuli, but not to others, implies that we are predicting the sort of thing that will happen. The brain forms expectations. EEG studies find that there is more negative activity in response to surprising stimuli. This is called mismatch negativity. It occurs in many types of surprise, about 150 milliseconds after the stimulus, well before it reaches consciousness. This happens, for example, when a musical rhythm breaks the beat. Mismatch negativity has been found even in the neurons of animals, even anaesthetized animals. This phenomenon turns out to be quite sensitive to pattern-breaking.
If you play an anaesthetized rat a sequence in which every 4th tone is different, the neurons won't act surprised at that 4th tone, because there is a regularity.


Emotions

Emotions involve physiological activation, like skin conductance. The conscious experience of the emotions is not the whole biological story. One interesting feature of emotions is that they are shared across different animals. The amygdala is one part of the brain that is involved with emotion. It is activated when fear events start. There are many others. In one study, researchers asked people to bring a piece of music that gives them chills. This subjective experience did indeed correlate with heart rate changes. PET scans tracking changes in blood flow found that the amygdala was less activated, while the anterior cingulate and other areas were activated. It's interesting that music activates emotional areas that have little to do with auditory processing. We don't understand well how stimuli, processed by sensory processing areas, then activate emotions.

The future of neuroscience

Two big challenges are relating electrical activity to psychological and phenomenological events, and understanding how electrical faults, as in Parkinson's, lead to problematic behavior. Then there are the neuroethical problems of brain-reading (lie detection, for one thing) and of brain-control. Is it ethical to change my brain if I am sick? To change your brain if you are sick? To change my brain if I am not sick, but just want higher abilities? Who draws the line about how sick is sick enough to require intervention? (We could be talking about pharmacological intervention, electrical, or other.) Some good news might be that there is not enough similarity between brains to read my thoughts without calibrating the reader with a lot of personal data, from me personally. In a study of brain responses to paintings, abstract paintings correlated with a lack of activity in any of the thing-recognizing areas. Both lovely and ugly paintings activated the medial orbito-frontal cortex and the somato-motor cortex, but lovely ones heavier on the former, ugly ones heavier on the latter. fMRI can be used to communicate with locked-in people: teach the locked-in person to voluntarily activate the motor region for yes, some other region for no. Benjamin Libet's experiments could totally change the human view of free will. Subjects pressed a button whenever they felt like pressing it. Libet found the brain signature of making that decision before the subject knew they'd made the decision. Desmurget et al., Science, 2009: electrical stimulation provokes desire, e.g. desire to lick lips, move, talk, etc.