
Hey! Small world!


            A major point of interest for INI-ots one and all is the local arrangement and connectivity of neurons in the neocortex.  But I'm working right now at Kinderspital Zurich, so I won't get into any of that nonsense.  What I'm going to talk about is the long-range connectivity of neurons.  And then I'll get a bit meta, but you don't have to read that part.
            Duncan Watts and Steven Strogatz (the latter's columns for the New York Times are a much better read than this) published their original paper on so-called “small-world networks” in 1998.  Before I explain what they are, I'll explain where they are.  Everywhere.  For those who remain unconvinced, more compelling examples are below.  A small-world network is an attempt to mesh two common network types: those that are completely random and those that are entirely ordered.  Many real-life networks fall somewhere in between.  The brain is a good example: though most connectivity is local, long-range projections are what make quick computations possible.  Computations in a brain composed of purely local connections would take so long that the incoming information would be irrelevant by the time they finished.  Lions as clumsy as cows would stumble after humans lumbering at a snail's pace, in a scene more reminiscent of Absinthe than Attenborough.
            Watts and Strogatz's contribution was the idea that a small amount of randomness in the organization of a network could dramatically decrease the distance between nodes without noticeably harming the network's local clustering.  The sense behind this is apparent: rewiring a few short connections into random long-range ones rarely adds more than one link to paths between neighboring nodes, while it cuts the paths between distant clusters so sharply that the average path length grows only logarithmically with the size of the network.  It is comparable to the advent of the freeway/motorway: whereas one originally had to take whichever local roads were available, being able to drive to a high-speed thoroughfare, even if its access point is in the wrong direction from the start, greatly decreases the time to get across town.  Even if it cuts off a few local streets (apologies to Arthur Dent).
            To show that this concept is not limited to the famed “Six Degrees of Separation”, they applied it to collaborations between film actors (Six Degrees of Kevin Bacon turns out to be an overestimate; 3.65 degrees suffice on average), to the connectivity between generators, transformers, and substations on a power grid, based on the power lines connecting them, and, most notably, to the synapses and gap junctions of the nematode C. elegans.  All showed the same signature of a small-world network: a reasonably short average path length combined with high clustering between neighbors.  While we have yet to map every connection in the human brain (a bit more difficult with more than the 302 neurons of C. elegans), it is nevertheless extremely slick (sorry) that a real-life neural network can be described by such a simple model.



Figure 1: The left-most graph features only local connectivity.  By exchanging one close connection for one random connection, the average path length decreases dramatically, but the local clustering remains high.  (Creative Commons Licensed from http://en.wikipedia.org/wiki/File:Watts-Strogatz-rewire.png)
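The figure's rewiring experiment is easy to reproduce.  Here is a minimal sketch using the networkx library; the network size and rewiring probabilities are my own illustrative choices, not values from the original paper.

```python
# Sketch: rewire a ring lattice and watch path length collapse while
# clustering survives. Parameters are illustrative.
import networkx as nx

n, k = 1000, 10                    # 1000 nodes, each wired to 10 neighbours

for p in [0.0, 0.01, 1.0]:         # ordered, small-world, random
    G = nx.connected_watts_strogatz_graph(n, k, p)
    L = nx.average_shortest_path_length(G)
    C = nx.average_clustering(G)
    print(f"p={p:<5} avg path length={L:6.2f}  clustering={C:.3f}")
```

At p = 0.01, only one edge in a hundred has been rewired, yet the average path length is already close to that of the fully random graph, while the clustering coefficient is barely changed.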

            The gamma rhythm, which many, though not all, link to our ability to bind detached visual percepts into a single object (like your ability to recognize my feet as being associated with my face, even if the stuff in the middle is obscured), is too fast to be possible without such long-range connections.  Furthermore, small-world networks have been shown to be bistable, capable of sustaining stable activity loops across their long-distance connections while also being capable of quiescence.  Memory consolidation and retrieval are often hypothesized to function using such networks: high network activity first imprints a memory in the synaptic strengths of a loop, and the network then lies quiescent until incoming activity causes the memory to be recalled.  So this small-world stuff might be pretty big.
            I promised that this would get a bit meta, so I'll finish with one last example.  As a tongue-in-cheek tribute to the mathematician Paul Erdős, the Erdős number commemorates Erdős's penchant for collaboration by giving an Erdős number of 1 to all of his direct collaborators, a 2 to their collaborators, and so on.  While most scientists haven't worked on graph theory (Erdős and one of his collaborators, Alfréd Rényi, came up with a more random precursor to Watts and Strogatz's model), a very large and even more eclectic spread of scientists have an Erdős number, and its average (per the Erdős Number Project website) is a mere 4.65.  That matches the short path length of a random graph, alongside the high clustering (after all, not many neuroscientists with Erdős numbers collaborate with the Erdős-endowed political scientists) of a tightly ordered graph.  Just 4.65 links between most scientists and one of the forefathers of this concept itself.  A very small world indeed.
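For concreteness, computing an Erdős number is nothing more than a breadth-first search over a co-authorship graph.  The sketch below uses an entirely made-up graph with placeholder names.

```python
# Toy Erdős numbers: breadth-first search over a fictitious
# co-authorship graph.
from collections import deque

coauthors = {
    "Erdős": ["Alice", "Bob"],
    "Alice": ["Erdős", "Carol"],
    "Bob":   ["Erdős"],
    "Carol": ["Alice", "Dave"],
    "Dave":  ["Carol"],
}

def erdos_number(author, graph):
    """Shortest collaboration distance from Erdős (None if unreachable)."""
    dist = {"Erdős": 0}
    queue = deque(["Erdős"])
    while queue:
        current = queue.popleft()
        for neighbour in graph.get(current, []):
            if neighbour not in dist:
                dist[neighbour] = dist[current] + 1
                queue.append(neighbour)
    return dist.get(author)

print(erdos_number("Dave", coauthors))   # -> 3
```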

- G. Lee

Citations:

The original paper:
D.J. Watts and S.H. Strogatz.  Collective dynamics of 'small-world' networks. Nature 393, 440-442 (1998).

An introduction to the Temporal Binding hypothesis:
Wolf Singer (2007) Binding by synchrony. Scholarpedia, 2(12):1657.

And its detractors:
Shadlen, MN and Movshon, JA (1999). "Synchrony unbound: a critical evaluation of the temporal binding hypothesis." Neuron, 24: 67-77.

A quick introduction to bistability in small-world networks:
Cohen, Philip. Small world networks key to memory. New Scientist. 26 May 2004.

Steven Strogatz's Really Quite Well-Written Series, On the Elements of Math:
http://topics.nytimes.com/top/opinion/series/steven_strogatz_on_the_elements_of_math/index.html

The Erdős number project:
http://www.oakland.edu/enp

What is sparse coding for?


While reading David Field’s seminal paper “What is the goal of sensory coding?”, I became aware of compelling evidence supporting the hypothesis that our cortex represents sensory information using a ‘sparse’ code. Sparse coding has its roots in information theory and relates to the optimal coding principle whereby a system aims to represent a signal using as few ‘bits’ as possible. In an optimal code, each bit must be as independent as possible from the others, since any information carried simultaneously by two bits is redundant and decreases the optimality of the code. In the context of neuroscience, this translates to our cortex using as few neurons as possible to encode sensory events, with active neurons carrying mutually independent information. Several findings substantiate this claim, but perhaps the most remarkable comes from computer simulations using unsupervised feature-learning algorithms. Like our sensory cortices, such algorithms aim to learn a representational basis with which to encode sensory stimuli. Simulations have shown that when these algorithms implement sparse coding strategies (i.e., when they are forced to use as few elements as possible to represent a stimulus), they develop ‘receptive fields’ that are strikingly similar to those of neurons in our cortex. The close correspondence between simulation and biology indicates that our cortex might indeed perform a sparse encoding of sensory information.
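To make that simulation result concrete, here is a from-scratch numpy sketch of one classic sparse coding recipe (my own construction, not code from Field's paper): alternate between inferring sparse coefficients by iterative soft-thresholding (ISTA) and taking a small gradient step on the dictionary. All sizes and constants are illustrative, and the random data merely stand in for image patches; trained on real patches, this kind of procedure is what develops the Gabor-like receptive fields mentioned above.

```python
# Minimal sparse coding sketch: learn a dictionary D and sparse codes A
# such that X ~ D A with few non-zero entries in A.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_atoms, n_patches = 64, 128, 500      # 8x8 "patches", 2x overcomplete
X = rng.standard_normal((n_pixels, n_patches))   # stand-in for image patches

D = rng.standard_normal((n_pixels, n_atoms))
D /= np.linalg.norm(D, axis=0)                   # unit-norm dictionary atoms

def sparse_codes(D, X, lam=0.5, n_steps=50):
    """ISTA: minimise 0.5*||X - D A||^2 + lam*||A||_1 over the codes A."""
    step = 1.0 / np.linalg.norm(D.T @ D, 2)      # 1 / Lipschitz constant
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_steps):
        A = A - step * (D.T @ (D @ A - X))                        # descend on the fit
        A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)  # shrink: sparsity
    return A

for _ in range(20):                              # alternate codes <-> dictionary
    A = sparse_codes(D, X)
    D -= (0.1 / n_patches) * (D @ A - X) @ A.T   # gradient step on the dictionary
    D /= np.linalg.norm(D, axis=0)               # keep atoms unit norm

A = sparse_codes(D, X)
print("fraction of active coefficients:", np.mean(np.abs(A) > 1e-8))
```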

Towards the end of his paper, Field mentions several interesting reasons why a sparse code might be beneficial. However, I failed to develop a personal intuition as to why our cortex might implement such a sparse code. After finishing the paper, I looked up from my computer and gazed through the window. At this moment, I saw a gigantic object appear in front of me. It was a very complex stimulus that filled up my whole visual field. It was made of hundreds, if not thousands, of small moving patches, each containing infinite detail. Yet, however intricate this stimulus was, a single percept came to my mind and I thought: “a tree”. It is at this point that I realized what the main advantage of sparse coding is: it reduces our sensory environment to its essential features. I then got up and walked around the Institute. As I looked around me, objects emerged from their background. I didn’t see keys, letters, buttons and cables, but a computer; nor did I see a detailed fabric, wheels and armrests, but simply a chair. I could almost feel my cortex process visual information. It was efficiently summarizing my sensory environment so that I only perceived large-scale objects. In other words, my cortex was sparsely encoding my environment: with just a few concepts (‘chair’, ‘computer’, ‘desk’) it described the information that my retina perceived through millions of photoreceptors.

Is this then the role of sensory coding? To sparsely represent sensory information such that its relevant components are readily accessible? One purpose of cortical circuitry should then be to guide the development of sensory neurons’ receptive fields so that each captures the maximum statistical structure of the environment. During perception, neurons should compete so that only those carrying the most information about a sensory stimulus are allowed to remain active. In this way, a few active neurons carrying mutually independent information can accurately summarize a complex sensory environment.
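That competitive stage can be sketched in a few lines; below is a toy k-winners-take-all step (with invented numbers) that silences all but the most strongly driven neurons.

```python
# Toy competition: keep the k largest responses, silence the rest.
import numpy as np

def k_winners_take_all(responses, k=3):
    out = np.zeros_like(responses)
    winners = np.argsort(responses)[-k:]      # indices of the k strongest
    out[winners] = responses[winners]
    return out

responses = np.array([0.1, 2.3, 0.05, 1.7, 0.4, 3.1, 0.2, 0.9])
print(k_winners_take_all(responses))
# -> only the units responding at 3.1, 2.3 and 1.7 remain active
```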

- R. Holca

H. Keffer Hartline and lateral inhibition



Haldan Keffer Hartline (1903–1983) might always have been destined for a life in biological research [1].  His parents were highly educated teachers at the Bloomsburg State Normal School in Pennsylvania; his father was a field naturalist with a Masters degree who had worked at the Cold Spring Harbor marine biology station and studied for a time in Bonn and Heidelberg.  His mother, an English teacher at the school, also had a keen interest in botany.  From early on, Hartline had a researcher’s attitude towards natural phenomena in which others might have taken only a passing interest: after observing that bright sparks rising from a fire left shorter trails than dim sparks, he began experiments at home to explore persistence of vision.
As an undergraduate at Lafayette College (ca. 1921) he began working with land isopods (woodlice), quantifying their light-avoidance behaviour.  This early research led to his first published paper at the age of twenty [2].  While studying medicine at Johns Hopkins University, he was allowed access to the Department of Physiology’s string galvanometer, an enormous device for recording extremely small electrical currents.  Hartline used the instrument to record retinal action potentials in small vertebrates.  One day he caught a fly that was buzzing around his laboratory and mounted his electrodes to record from an ommatidium; Hartline quickly found that retinal potentials from the fly were ten times larger than those of the cat, and shifted his focus to visual responses in invertebrates.
Hartline began his work with the horseshoe crab Limulus at the Woods Hole marine biological station, where he continued his research on shadow reactions and dark adaptation in small marine creatures.  Unsatisfied with the mechanically-amplified responses of the string galvanometer, he built his own three-valve amplifier from scratch to drive a Matthews oscillograph [3].  The increased sensitivity allowed Hartline’s work on retinal action potentials to progress to recordings from single fibres in the Limulus optic nerve.  After quantifying the responses of single units to spots of illumination, he noticed that ambient light in the laboratory reduced the responses of an ommatidium and attached fibre prepared for recording.  It was a short step to show that shading neighbouring ommatidia restored the activity of the isolated fibre.  Hartline and his lab explored the dynamics and mechanism of this phenomenon as an early description of lateral inhibition in the retina (beaten to print by Kuffler working in the cat and Barlow in the frog) [4, 5].  Such was their zeal in precisely quantifying the effect of lateral interactions that they produced a set of simultaneous equations that accurately describe the responses of a set of nearby ommatidia.
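Those simultaneous equations (often called the Hartline-Ratliff equations) take the form r_p = e_p - sum_j K[p,j] * max(0, r_j - t[p,j]): each ommatidium's response is its own excitation minus inhibition from every sufficiently active neighbour.  They are compact enough to sketch numerically; every excitation, coefficient, and threshold below is invented for illustration, whereas the published work fitted them to Limulus recordings.

```python
# Hartline-Ratliff style mutual inhibition between three ommatidia,
# solved by damped fixed-point iteration. All constants are invented.
import numpy as np

e = np.array([10.0, 10.0, 2.0])       # excitation: two bright, one dim
K = 0.2 * (1.0 - np.eye(3))           # inhibitory coefficients (no self-term)
t = np.ones((3, 3))                   # thresholds for inhibition to act

r = e.copy()
for _ in range(200):
    inhibition = np.sum(K * np.maximum(0.0, r[None, :] - t), axis=1)
    r = 0.5 * r + 0.5 * np.maximum(0.0, e - inhibition)   # damped update

print(r)   # the dimly lit ommatidium is silenced by its bright neighbours
```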
Two productive decades followed their initial observations; their work cemented the idea that the retina does not merely transmit counts of incident photons to the brain, but actively filters and processes incoming stimuli.  The visual phenomenon of Mach bands (see the note below) illustrates that our perception of the visual world is illusory from the moment of sensation.
Based on the work of Hartline and others in the retina, the concept of lateral inhibition has been extraordinarily pervasive in neuroscience.  Models describing not only visual processing [8], but also development of cortical maps [9] and dynamic activity in cortex [10] all stem from Hartline’s observations.  In the retina, lateral inhibition underlies spatial and temporal contrast enhancement, as described in detail by Hartline and his colleagues.  In networks of neurons, lateral inhibition can perform sharpening of neural representation, decision making via competitive mechanisms, auto-associative memory storage, noise rejection, and so on [6].  Many abstract models of cortex adopted widespread inhibition as an anatomical fait accompli, and continue to do so in spite of the weight of evidence against long-range inhibition in cortex.  Aside from consistency with existing theoretical literature, this is probably due to the computational power conferred by lateral inhibitory interactions.
Hartline was awarded the Nobel Prize in Physiology or Medicine in 1967, along with Ragnar Granit and George Wald, for his “discoveries of the most fundamental principles for data processing in neuronal networks which serve sensory functions. In the case of vision they are vital for the understanding of the mechanisms underlying perceptions of brightness, form and movement.” [7]

Mach Bands: In the 1860s, the Austrian physicist Ernst Mach recognised that the illumination at a point on the retina is not perceived objectively, but only in reference to its neighbours.  The classic optical illusion illustrates this phenomenon: although there is a simple linear gradient from light to dark in the centre of the image, we perceive a brighter band at the left edge of the gradient and a darker band at the right edge.  Physiologically this effect is due to the changing influence of surround inhibition across the image.  Mach interpreted this phenomenon quite philosophically, claiming that our perception of the world is always relative to other stimuli; that our impression of unchanging events is actively weakened, with the effect of enhancing the importance of events that change temporally or spatially.
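The effect is easy to reproduce numerically.  The sketch below (my own construction, with arbitrary kernel widths) convolves a luminance ramp with a centre-surround, difference-of-Gaussians kernel: the filtered response undershoots at the dark end of the gradient and overshoots at the bright end, exactly where we perceive the dark and bright bands.

```python
# Mach bands from surround inhibition: filter a luminance ramp with a
# zero-mean centre-surround kernel. Widths and sizes are arbitrary.
import numpy as np

def gaussian(width, sigma):
    x = np.arange(width) - width // 2
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

# dark field, linear ramp, bright field
luminance = np.concatenate([np.zeros(50), np.linspace(0, 1, 50), np.ones(50)])
dog = gaussian(31, 2.0) - gaussian(31, 6.0)   # excitatory centre minus surround

padded = np.pad(luminance, 31, mode="edge")   # avoid zero-padding edge artifacts
response = np.convolve(padded, dog, mode="same")[31:-31]

print("darker band near x =", int(np.argmin(response)))    # start of the ramp
print("brighter band near x =", int(np.argmax(response)))  # end of the ramp
```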
- D.R. Muir 
References and further reading
[1] R Granit and F Ratliff, 1985. Haldan Keffer Hartline. 22 December 1903–18 March 1983. Biographical Memoirs of Fellows of the Royal Society 31, pp 262–292.
[2] HK Hartline, 1923. Influence of light of very low intensity on phototropic reactions of animals, J General Physiology 6 (2), pp 137–152.
[3] HK Hartline, 1974. Foreword. In: Studies on excitation and inhibition in the retina, Eds: F Ratliff and HK Hartline, pp xiii–xx. Rockefeller University Press, New York.
[4] HK Hartline, 1949. Inhibition of activity of visual receptors by illuminating nearby retinal areas in the Limulus eye, Federation Proceedings 8 (1), p 69.
[5] HK Hartline, HG Wagner and F Ratliff, 1956. Inhibition in the eye of Limulus, J General Physiology 39 (5), pp 651–673.
[6] RJ Douglas and KAC Martin, 2007. Recurrent neuronal circuits of the neocortex, Current Biology 17 (13), pp R496–R500.
[7] CG Bernhard, 1968. In: Les Prix Nobel en 1967, Ed: R Granit. Nobel Foundation, Stockholm.
[8] G Sperling, 1970. Model of visual adaptation and contrast detection, Perception and Psychophysics 8 (3), pp 143–157.
[9] C von der Malsburg, 1973. Self-organization of orientation sensitive cells in the striate cortex, Kybernetik 14, pp 85–100.
[10] HR Wilson and JD Cowan, 1973. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue, Kybernetik 13, pp 55–80.

Google Summer of Code 2012

As a PhD student at INI, I have had the pleasure this summer of participating in Google Summer of Code 2012 (http://code.google.com/soc/), a worldwide program run by Google that funds students to work for three months as software developers for open source organisations. Among the 180 open source organisations participating this year, there were many projects related to the development of tools for scientific purposes, such as bioinformatics software, crowdsourced biological games that involve the general public, and libraries for hardware and sensors; even the International Neuroinformatics Coordinating Facility (http://www.incf.org/) hosted six interesting projects.

Personally, I have contributed to the development of an application for Cytoscape 3.0 (http://www.cytoscape.org/) to support the visualisation and analysis of dynamic networks. Cytoscape is a powerful tool for visualising network data in the form of graphs with nodes, edges, and attributes, but until now it has focused mainly on static data. Developing the software infrastructure to deal with dynamic data, that is, graphs that change over time, while the main code of Cytoscape 3.0 was itself under active development, has been an interesting (and successful) challenge.

http://www.youtube.com/watch?v=R6RkMQpOmDs
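To give a flavour of what “dynamic” means here, the sketch below shows the core idea in Python (this is not Cytoscape's actual API, just an illustration built on the networkx library): each edge carries a validity interval, and a snapshot at time t keeps only the edges alive at t.

```python
# A dynamic network as a graph whose edges carry validity intervals.
# Node names and times are invented for illustration.
import networkx as nx

dynamic = nx.Graph()
dynamic.add_edge("geneA", "geneB", start=0, end=5)   # active during [0, 5)
dynamic.add_edge("geneB", "geneC", start=3, end=9)
dynamic.add_edge("geneA", "geneC", start=7, end=9)

def snapshot(graph, t):
    """Static graph containing only the edges that exist at time t."""
    alive = [(u, v) for u, v, d in graph.edges(data=True)
             if d["start"] <= t < d["end"]]
    return nx.Graph(alive)

print(sorted(snapshot(dynamic, 4).edges()))   # the two early edges are alive
```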

The ability to visualize and understand dynamic data is very important in our own research, which addresses how genetic regulatory programs orchestrate the development of cortical architectures in different species. The development of open source tools for this type of data is not only of personal interest; it is also aimed at reaching a broader scientific community, and will hopefully involve more people in the future of this project. I'm looking forward to what the next Google Summer of Code will bring!
 
- S. Pfister 

Two questions on VISION and OLFACTION


As a human, I know that I’m a very visual creature, spending most of my work hours reading papers (cough) and examining interesting objects in my visual field (ahem). The greater proportion of the information that the human brain has to process comes from the visual system, and most of our memories are stored as visual imagery. Taking this into account, when one compares the anatomy and connectivity of the visual system to that of the olfactory system, which processes the sense of smell, one finds a curious imbalance: the olfactory system is directly connected to emotion- and cognition-related brain structures such as the amygdala and the hypothalamus, as well as to the more decision-oriented frontal cortex; even more astonishingly, it is the only sensory system that can replace and renew its sensory neurons, in the sensory epithelium of our nose. The visual system, or for that matter any other sensory system, possesses neither of these special characteristics. The direct connectivity to the amygdala probably allows us to assign positive or negative associations to certain smells, which directly influences our behavior towards them. We know that the smell of something putrid will evoke an immediate aversive response from us (or any other animal), and a unique smell, like that of fresh linen, can instantly and almost involuntarily trigger memories from the remote past. Herein lies the first of my two questions: why does visual input lag behind olfactory input in stimulating behavior so directly and effectively? Does the answer lie in a stronghold maintained by the evolutionarily older olfactory system?

It is obvious that the olfactory system is of great importance to many species of animals: insects such as the widely studied fruit fly, rodents such as mice, and our very own best friend the dog are all highly dependent on their sense of smell for survival, and it is well known that their sense of smell is much more refined and has a much larger range than ours; for example, the sensory epithelium of a dog has a surface area some forty times that of a human’s. Through the course of evolution, as animals moved from water onto land, the greater variety of smells they encountered required a larger and more complex olfactory system to perceive and discriminate them. This is why fish possess far fewer olfactory receptor genes (about 100) than mice (about 1000, per the work of Nobel laureates Axel and Buck). There is a certain trend in our species and related species, such as the gorilla, towards becoming less dependent on olfaction and more dependent on vision; evidence comes from studies of loss of gene function (carta.anthropogeny.org). So it seems that evolution is driving us toward becoming more and more visually oriented animals. Here’s a surprise though: a recent study by Bastir and colleagues (Nature Communications, 2011) reports an increase in olfactory bulb and orbitofrontal cortex size (both involved in olfactory processing) in modern humans compared to Neanderthals. As we all know, Neanderthals are extinct and we are thriving! I wouldn’t want to read too much into the results, but they do seem to muddy the evolutionary trend!

The second question is more of a wild conjecture about the remote, unknown future of humanity: what will our brain look like in another, say, ten thousand years, assuming the rate of evolutionary change in the brain is increasing and we haven’t done anything calamitously stupid? Will the relaxation of evolutionary selection on the olfactory system result in a changed anatomy? Will the visual system start projecting directly to our emotive and cognitive structures? Will the olfactory system shrink in comparison? Will we start regenerating cells in our retina and lose that ability in our nasal epithelium? That last one would be sad, because I definitely love the smell of tea and Indian food, and I don’t think their appearance alone would suffice ;-)

- G. Narula