
Hey! Small world!


            A major point of interest for INI-ots one and all is the local arrangement and connectivity of neurons in the neocortex.  But I'm working right now at Kinderspital Zurich, so I won't get into any of that nonsense.  What I'm going to talk about is the long-range connectivity of neurons.  And then I'll get a bit meta, but you don't have to read that part.
            Duncan Watts and Steven Strogatz (the latter of whom writes columns for the New York Times that are a much better read than this) published their original paper on so-called “Small World Networks” in 1998.  Before I explain what they are, I'll explain where they are.  Everywhere.  For those who remain unconvinced, more compelling examples are below.  A small-world network is an attempt to mesh two common network types: those that are completely random and those that are entirely ordered.  Many real-life networks tend to fall somewhere in between.  The brain is a good example of this: though most connectivity is local, long-range projections are what make quick computations possible.  Computations in a brain composed of purely local connections would take so long that the incoming information would be irrelevant by the time they finished.  Lions as clumsy as cows would stumble after humans lumbering at a snail's pace, in a scene more reminiscent of Absinthe than Attenborough. 
            Watts and Strogatz's contribution was the idea that a small amount of randomness in a network's organization could decrease the distance between nodes without noticeably harming the network's local clustering.  The sense behind this is apparent: rewiring a few short connections into random long-range ones rarely adds more than one link to the path between neighboring nodes, while it dramatically shortens the paths between distant clusters (the average path length ends up growing only logarithmically with the size of the network, as in a random graph).  It's comparable to the advent of the freeway/motorway: whereas one originally had to take whichever local roads were available, being able to drive to a high-speed thoroughfare, even if its access point is in the wrong direction from the start, greatly decreases the time to get across town, even if it cuts off a few local streets (apologies to Arthur Dent). 
            To prove that this concept is not limited to the famed “Six Degrees of Separation” theory, they applied it to collaborations between film actors (Six Degrees of Kevin Bacon seems to be an overestimate; 3.65 degrees would suffice on average), to the connectivity between generators, transformers, and substations on a power grid, based on the power lines connecting them, and, most notably for us, to the synapses and gap junctions linking the neurons of the roundworm C. elegans.  All three showed the same signature: a reasonably short average path length combined with high clustering between neighbors, the defining characteristics of a small-world network.  While we have yet to map every connection in the human brain (a bit more difficult with more than the 302 neurons of C. elegans), it is nevertheless extremely slick (sorry) that a real-life neural network can be described by such a simple model.
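
If you want to see these numbers for yourself, the model is easy to play with.  Below is a minimal sketch in Python using the networkx library; the node count and rewiring probabilities are my own arbitrary choices for illustration, not values from the paper.  It builds the same ring lattice Watts and Strogatz started from, rewires a fraction p of the edges at random, and prints the two statistics that define a small world:

```python
import networkx as nx

# Illustrative parameters (my own choice): 1000 nodes, each initially
# wired to its 10 nearest neighbors on a ring.
n, k = 1000, 10

for name, p in [("ordered lattice", 0.0),
                ("small world", 0.05),
                ("random", 1.0)]:
    # Rewire each edge with probability p; the "connected" variant
    # retries until the result is a single connected component.
    g = nx.connected_watts_strogatz_graph(n, k, p)
    print(f"{name:16s} clustering = {nx.average_clustering(g):.3f}  "
          f"avg path length = {nx.average_shortest_path_length(g):.1f}")
```

At p = 0.05 the rewired graph keeps most of the lattice's clustering while its average path length collapses toward that of the random graph, which is the whole trick.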



Figure 1: The left-most graph features only local connectivity.  By exchanging one close connection for one random connection, the average path length decreases dramatically, but the local clustering remains high.  (Creative Commons Licensed from http://en.wikipedia.org/wiki/File:Watts-Strogatz-rewire.png)

            The gamma rhythm, which many, though not all, link to our ability to bind detached visual percepts into a single object (like your ability to recognize my feet as belonging to my face, even if the stuff in the middle is obscured), is too fast to be possible without such long-range connections.  Furthermore, small-world networks have been shown to be bistable: they can sustain stable loops of activity across their long-distance connections, but they are also capable of lying quiet.  Memory consolidation and retrieval are often hypothesized to work this way; high levels of network activity first imprint a memory in the synaptic strengths of a loop, and the network then lies quiescent until new activity causes the memory to be recalled.  So this small-world stuff might be pretty big.
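
To put a number on that speed argument, here is another toy sketch, again my own construction rather than a simulation from any of the papers cited below.  Treat each connection as one synaptic "hop" and ask how many hops a signal starting at one neuron needs before it can have reached every other neuron:

```python
import networkx as nx

n, k = 1000, 10  # same illustrative parameters as above

for name, p in [("local wiring only", 0.0),
                ("a few long-range links", 0.05)]:
    g = nx.connected_watts_strogatz_graph(n, k, p)
    # Eccentricity of node 0 = hops needed to reach its most distant node.
    print(f"{name:24s} hops to reach everyone = {nx.eccentricity(g, v=0)}")
```

On the purely local ring the worst case is on the order of n/k hops (about a hundred here); a handful of long-range links brings it down to a dozen or so.  That is the difference between a rhythm fast enough to bind percepts on the fly and one that arrives after the lion has already caught up.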
            I promised that this would get a bit meta, so I'll finish with one last example.  As a tongue-in-cheek tribute to the mathematician Paul Erdős, the Erdős number commemorates Erdős's penchant for collaboration by giving an Erdős number of 1 to all of his direct collaborators, a 2 to their collaborators, and so on.  While most scientists haven't worked on graph theory (Erdős and one of his collaborators, Alfréd Rényi, came up with the fully random graph model that preceded Watts and Strogatz's), a very large and even more eclectic spread of scientists have an Erdős number, and its average (per the Erdős number project website) is a mere 4.65, matching the short path length of a random graph while keeping the high clustering (after all, not many neuroscientists with Erdős numbers collaborate with the Erdős-endowed political scientists) of a tightly ordered graph.  Just 4.65 links, on average, between most scientists and one of the forefathers of this very concept.  A very small world indeed.
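
And since an Erdős number is nothing more than a shortest-path length in the collaboration graph, the same machinery computes it.  A toy sketch follows; the graph below is mostly made up, though Erdős, Rényi, and Ernst Straus really did collaborate, and Straus also published with Einstein, which is how Einstein earned an Erdős number of 2:

```python
import networkx as nx

# Each edge means "wrote a paper together".  Mostly fictional.
collab = nx.Graph([
    ("Erdős", "Rényi"),
    ("Erdős", "Straus"),
    ("Straus", "Einstein"),
    ("Rényi", "A. Neuroscientist"),
    ("A. Neuroscientist", "B. Political Scientist"),
])

# An author's Erdős number is their shortest-path distance to Erdős.
lengths = nx.single_source_shortest_path_length(collab, "Erdős")
for author, number in sorted(lengths.items(), key=lambda kv: kv[1]):
    print(f"{author}: Erdős number {number}")
```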

- G. Lee

Citations:

The original paper:
Watts, D.J. and Strogatz, S.H. (1998). Collective dynamics of 'small-world' networks. Nature 393: 440-442.

An introduction to the Temporal Binding hypothesis:
Singer, W. (2007). Binding by synchrony. Scholarpedia 2(12): 1657.

And its detractors:
Shadlen, M.N. and Movshon, J.A. (1999). Synchrony unbound: a critical evaluation of the temporal binding hypothesis. Neuron 24: 67-77.

A quick introduction to bistability in small-world networks:
Cohen, P. (2004). Small world networks key to memory. New Scientist, 26 May 2004.

Steven Strogatz's Really Quite Well-Written Series, On the Elements of Math:
http://topics.nytimes.com/top/opinion/series/steven_strogatz_on_the_elements_of_math/index.html

The Erdős number project:
http://www.oakland.edu/enp