Christopher Soelistyo

Reading "The Idea of the Brain" by Matthew Cobb



Zoology professor Matthew Cobb has delivered an exciting and accessible history of humanity's enduring journey to understand its own inner world. From antiquity to the early twenty-first century, he reveals how the centuries have brought seismic shifts in the way we have thought about mind and brain, and how we have imagined the link between them. The result is a book that is at once comprehensive and highly readable.


Cobb devotes much attention to the discovery of certain physiological features of the nervous system, such as its use of electrical signals, the role of neurotransmitter molecules, the structure of the neuron, and even the centuries-long relocation of the mind from the heart to the brain; Aristotle (384 - 322 BC), after all, claimed that "the motions of pleasure and pain, and generally all sensation plainly have their source in the heart" (p.20).


However, what interested me the most whilst reading this book was Cobb's exploration of our attempts to model and understand the brain in terms of readily available technological metaphors, leading to our modern conception of the brain as a "computer". The constant search for such metaphors reflects a widespread conviction perhaps most clearly articulated by the Danish scientist Nicolaus Steno, whose quote opens the entire book: "the brain indeed being a machine, we must not hope to find its artifice through other ways than those which are used to find the artifice of the other machines. It thus remains to do what we would do for any other machine; I mean to dismantle it piece by piece and to consider what these can do separately and together" (from On the Brain, 1669).


I: The Brain as a Machine


As Steno's remark reveals, the idea of the brain as a machine is far from new. For instance, French philosopher René Descartes (1596 - 1650) attempted to explain muscle reflex in terms of hydraulics. He was building on an established idea that the interaction between mind and body was mediated by "animal spirits" - a sort of fluid that circulates around the body. However, Descartes' contribution was to provide a mechanical explanation for how these animal spirits could produce behaviour, one that was influenced by hydraulic automata - often seen in the French royal gardens of the time - that could "play instruments or even speak as water and air were forced through their metal bodies" (p.34).


In explaining human and animal behaviour, he suggested that "one may compare the nerves of the machine I am describing with the pipes in the works of these fountains, its muscles and tendons with the various devices and springs which serve to set them in motion, its animal spirits with the water which drives them, the heart with the source of the water, and the cavities of the brain with the storage tanks" (p.35).


Despite these speculations, Descartes ultimately believed that the soul consisted of an "immaterial substance" separate from the substance of the physical world. Nevertheless, there were others who adopted a strictly materialist view, such as Thomas Hobbes (1588 - 1679), who asked rhetorically: "For what is the Heart, but a Spring; and the Nerves, but so many Strings; and the Joynts, but so many Wheeles, giving motion to the whole" (p.41).


Yet there were still some who opposed the materialist conception of the mind, such as the German philosopher Gottfried Leibniz (1646 - 1716), who in 1712 presented an argument now known as Leibniz's Mill: "If we pretend that there is a machine whose structure enables it to think, feel and have perception, one could think of it as enlarged yet preserving its same proportions, so that one could enter it as one does a mill. If we did this, we should find nothing within but parts which push upon each other; we should never see anything which would explain a perception" (p.43). How could parts pushing on each other form a mind? This eternal puzzle, which Leibniz interprets as an absurdity, has proved highly influential to the present day. It finds descendants in thought experiments such as the "Chinese Room Argument" concocted by American philosopher John Searle in 1980.


Through the 18th century, the development of miniaturised automata, driven by clockwork mechanisms, kept alive this vision of a brain ruled by mechanistic principles. In 1738, the French inventor Jacques Vaucanson "amazed Parisians with his mechanical flute player, followed a year later by a piper that accompanied itself on a drum, and a device known as the Canard digérateur ('Digesting Duck') that could move, eat and defecate". Perhaps most impressively, in the 1770s the Swiss watchmaker Pierre Jaquet-Droz built an automaton of nearly 6,000 parts named The Writer; this device could "write letters with a quill pen, the glass eyes flicking back and forth, following the movement of the automaton's hand as though it was concentrating". Cobb notes that although nobody thought of these automata as 'alive', their "uncanny ability to reproduce aspects of behaviour suggested that their ticking innards might somehow shed light on how squishy brains and bodies might work" (p.54). Again, there is nothing new in the contemporary idea that identity in behaviour might suggest some kind of identity in internal form.


By the mid-19th century, the metaphor had changed yet again, owing in no small part to the discovery of the role of electricity in nervous function. In 1863, German physician Hermann von Helmholtz (1821 - 1894) claimed that "nerves have often and not unsuitably been compared to telegraph wires ... according to the different kinds of apparatus with which we provide its terminations, we can send telegraphic dispatches, ring bells, explode mines, decompose water, move magnets, magnetise iron, develop light, and so on. So with the nerves" (pp.73, 74). English inventor Alfred Smee (1818 - 1877), who took this idea to its limits by attempting to construct an electrical 'thinking machine', claimed that "In animal bodies we really have electro-telegraphic communication in the nervous system. That which is seen, or felt, or heard is telegraphed to the brain ... and, from the whole our previous ideas being included in the circuit, the act determined takes place momentarily" (p.76).


As Cobb observes, whilst the nervous system was described as some kind of telegraph system, the telegraph system itself was described as "the nervous system of the country" (p.73). The metaphor went both ways.


The discovery of neurons in the mid-to-late 19th century changed the game yet again. The relay-like manner in which neurons transmitted signals back and forth evoked the workings of a telephone exchange, a metaphor that French philosopher Henri Bergson (1859 - 1941) very explicitly employed in 1896: "the brain is no more than some kind of central telephonic exchange: its office is to allow communication, or delay it ... it really constitutes a centre, where the peripheral excitation gets into relation with this or that motor mechanism, chosen and no longer prescribed" (p.145).


Bergson wished to conjure the image of an operator working in a telephone exchange. When the caller picked up the phone, a light would turn on in the exchange over the slot corresponding to the caller; the operator would manually connect a cable to the slot, ask the caller for the number they wanted to connect to, then insert another cable into the slot corresponding to the desired receiver, whether that was the receiver's own line or another, remote telephone exchange. The result was a relay system in which the operator (theoretically) had the ability to 'allow communication, or delay it'.


This metaphor was further popularised in a series of talks given by British anatomist Arthur Keith (1866 - 1955) for the 1916-1917 Royal Institution Christmas Lectures. Keith drew a parallel between neurons in the brain and human operators in a telephone exchange, both of which he considered 'relay units'. To demonstrate this, he described the simple example of someone who had a painful stone stuck in their shoe. The pain signals would travel to the brain, whereupon, "to obtain relief the 'driver cells' of the cortex have to be set in motion; ... they control the driver units in the local exchanges, and combine their actions so that the muscular engines carry out the movements which are determined on by operations effected within the exchange systems of the cortex" (p.146). A neat analogy to be sure, but Keith entirely brushed over the meat of the question: how exactly do those 'operations effected within the exchange systems of the cortex' happen in such a way as to reduce pain?


II: The Machine as a Brain


By the early 20th century, these machine metaphors had become so embedded in scientific consciousness that there were numerous attempts to model the nervous system through devices - both physical and imagined. The goal was not simply to mimic behaviour, but to "gain some insight into the processes and structures that were involved in producing behaviour in a living system" (p.160).


Early attempts to design theoretical 'nervous system machines' were undertaken by figures such as psychologist Max Meyer (1873 - 1967) and engineer Silas Bent Russell (1860 - 1941). Whereas Meyer's machine was fundamentally hydraulic in conception - despite his use of electrical wiring diagrams - Russell set out to design a device that could "simulate the working of nervous discharges by purely mechanical means" (p.160). Russell claimed that his concoction of valves, rods and cylinders could "respond to signals and control movements like a nervous system and ... [possess] associative memory as it can learn by experience".


These theoretical ventures were accompanied by somewhat more practical ones. One example is a machine described as an 'electric dog' - dubbed 'Seleno' - which could use its two light detectors and three wheels to navigate autonomously towards sources of light. This device, developed in 1912 by US radio engineers, was not built primarily for scientific purposes - rather, it demonstrated the principles that would later be put to use in self-directing torpedoes. However, Seleno was also used as proof that organisms functioned along 'behaviourist' principles, i.e., that their actions were determined primarily by external stimuli and individual history, without reference to inner mental life. For example, the German-American physiologist Jacques Loeb (1859 - 1924) argued that "We may feel safe in stating that there is no more reason to ascribe the heliotropic reactions of lower animals to any form of sensation, e.g., of brightness or color or pleasure or curiosity, than it is to ascribe the heliotropic reaction of [Seleno] to such sensations" (p.161). In other words, he had drawn the unjustified conclusion that "because a machine could reproduce the behaviour of an animal, that meant an animal was simply a machine".


The quest to endow machines with apparently purposive behaviour did not end there. In 1925, American mathematician Alfred Lotka described a toy clockwork beetle that was able to sense when it was about to fall off a table and take evasive action. The workings of the beetle were simple - it had 'antennae' at the front, as well as two 'driving' wheels at the back and an additional wheel in the front set at right-angles to the driving wheels. When the beetle was far from an edge, the antennae would touch the ground, the front wheel would be lifted and the beetle would move in a straight line. When the beetle was close to an edge, the antennae would fall, the front wheel would touch the ground, and the beetle's linear motion would transform into circular motion, swerving the beetle away from the edge. Lotka abstracted this design into three kinds of 'organ': a receptor (the antennae), an effector (the driving wheels) and an adjustor (the front wheel). Despite the unmistakably artificial nature of the beetle, Lotka sought to explain how this abstract design could be "applied to a wide variety of situations in which animals responded in an adaptive, apparently purposeful way" (p.164).
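
To make Lotka's abstraction concrete, here is a toy rendering of the receptor-adjustor-effector logic in Python. The distances, margin and names are my own illustrative inventions; only the three-'organ' scheme comes from the description above.

```python
# Toy rendering of Lotka's scheme. The geometry, units and threshold are
# invented for illustration; only the receptor/adjustor/effector logic
# follows the description above.

def beetle_step(distance_to_edge, safe_margin=2.0):
    """One tick of the beetle's three-'organ' logic."""
    antennae_on_table = distance_to_edge > safe_margin        # receptor
    if antennae_on_table:
        return "front wheel lifted", "drive straight ahead"   # adjustor, effector
    return "front wheel engaged", "swerve away from edge"     # adjustor, effector

for d in (10.0, 3.0, 1.0):
    adjustor, effector = beetle_step(d)
    print(f"{d:4.1f} units from edge: {adjustor}; {effector}")
```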


In 1933, University of Washington student Thomas Ross (1909 - 2010) made a further development by designing a robot rat that could 'learn' to find its way through a simple maze (these ideas were published in a Scientific American article titled "Machines That Think"). This robot, constructed with the help of psychology professor Stevenson Smith (1883 - 1950), was able to navigate a series of twelve Y-branches, each of which contained one dead end and one path that led to the next Y-branch. When the robot reached a dead end, a lever would be activated and it would go into reverse until it reached the Y-branch again, whereupon it would take the other path. Furthermore, the robot contained a crude 'memory disc' that allowed it to record the dead ends, such that when placed at the start of the maze a second time, the robot would be able to navigate the maze flawlessly.
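
The rat's learning scheme lends itself to a short sketch. The Python toy below follows the description above: at each Y-branch the rat tries an arm, reverses out of dead ends, and records the outcome on its 'memory disc' so that a second run is error-free. The maze encoding and names are assumptions made for the example.

```python
# A sketch of the rat's trial-and-error logic. The maze encoding (one
# boolean per Y-branch marking the arm that leads onward) is an assumption
# made for the example.
import random

def run_maze(maze, memory):
    """Navigate all Y-branches; return the number of wrong turns."""
    wrong_turns = 0
    for branch, onward_arm in enumerate(maze):
        if branch in memory:
            choice = memory[branch]          # 'memory disc': recall what worked
        else:
            choice = random.choice([True, False])
            if choice != onward_arm:         # dead end: lever trips, reverse,
                wrong_turns += 1             # and take the other arm
                choice = not choice
            memory[branch] = choice          # record the arm that worked
    return wrong_turns

maze = [random.choice([True, False]) for _ in range(12)]   # twelve Y-branches
memory = {}
print("first run, wrong turns: ", run_maze(maze, memory))  # trial and error
print("second run, wrong turns:", run_maze(maze, memory))  # flawless: 0
```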


In describing his robot rat, Ross articulated a project statement that has held as strongly as ever to the present day: "to test the various psychological hypotheses as to the nature of thought by constructing machines in accord with the principles that these hypotheses involve and comparing the behaviour of the machines with that of intelligent creatures" (p.164). In other words, as Ross put it later, "One way to be relatively sure of understanding a mechanism is to make that mechanism". Whereas Nicolaus Steno exhorted us to reverse-engineer the brain, Thomas Ross saw no other option than to design one ourselves.


Despite the impressiveness of these designs, Cobb asserts that all of them were inherently limited because "none of them was based on the real way that nervous systems functioned" (p.165). For example, in explaining the robot rat to Time magazine, Stevenson Smith claimed that "This machine remembers what it has learned far better than any man or animal. No living organism can be depended upon to make no errors of this type after one trial". This very fact meant that the robot rat could shed no light on learning as a process that actually happens in living creatures. It calls into question the very possibility of constructing machines in accord with principles precise and correct enough to allow them to mimic living behaviour in any meaningful way.


III: The Computer as a Brain


Moving into the mid-twentieth century, Cobb explores the various ways in which the brain has been modelled as a computer, the dominant metaphor right up to the present day (so dominant, in fact, that we rarely even talk of it as a metaphor). Yet, intriguingly, Cobb reveals that at first the metaphor went the other way around: the nascent computing machines of the 1940s were conceptualised as brains.


This path arguably begins in 1926, when physiologists Edgar Adrian (1889 - 1977) and Yngve Zotterman (1898 - 1982) made the discovery that sensory neurons responded to stimuli in an all-or-none fashion; they either fired, or they did not - there was no middle ground (pp.168, 169). American neurophysiologist Warren McCulloch (1898 - 1969) later realised that this all-or-none behaviour of the neuron was essentially equivalent to a proposition in logic: a statement that was either True or False. He understood that it would be possible to describe the activity of networks of neurons - "neural nets" - in terms of these propositions. Along with logician Walter Pitts (1923 - 1969), McCulloch showed how neurons linked into these networks could produce quite complex phenomena, all due to the ability of neuronal outputs to partake in Boolean operations (AND, OR and NOT). Their 1943 paper describing this work, "A Logical Calculus of the Ideas Immanent in Nervous Activity", has since proved enormously influential (p.179).
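
The core idea is easy to render in code. Below is a minimal sketch of a McCulloch-Pitts-style unit: an all-or-none output that fires if and only if the weighted sum of its binary inputs reaches a threshold. The particular weights and thresholds are illustrative choices of mine, not notation from the 1943 paper, but they show how single units realise AND, OR and NOT, and how chaining units into a small 'net' computes a compound proposition.

```python
# Minimal McCulloch-Pitts-style units: fire (1) iff the weighted sum of
# binary inputs reaches a threshold. Weights and thresholds are illustrative
# choices, not taken from the 1943 paper.

def mcp_unit(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b): return mcp_unit([a, b], [1, 1], threshold=2)
def OR(a, b):  return mcp_unit([a, b], [1, 1], threshold=1)
def NOT(a):    return mcp_unit([a], [-1], threshold=0)

# Chaining units into a small 'net' computes a compound proposition:
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

assert [XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```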


Interestingly, Cobb contends that the paper's "greatest influence" was not on our understanding of the human brain, but on the budding world of computing - in particular, on the mathematician John von Neumann (1903 - 1957). Von Neumann's contribution to computing has been enormous. In 1945, he proposed the first logical design of a computer that used the "stored-program concept", meaning that programs could be stored in memory rather than fixed in the computer's hardware (as in a desk calculator). Virtually all digital computers today are based on designs that utilise this concept (now known as the "von Neumann architecture").


However, von Neumann's ideas didn't come out of nowhere. In particular, he was directed by the cybernetician Norbert Wiener (1894 - 1964) to the article written by McCulloch and Pitts. If their theories worked for biological substrates, perhaps they could be made to work for mechanical or electrical substrates as well. Indeed, Cobb claims that McCulloch and Pitts' theoretical work on nerve nets lay "at the heart of von Neumann's conception of the structure and logical control of a computing system" (p.182).


Near the beginning of his 1945 document, von Neumann evoked clear biological analogies: "the neurons of the higher animals ... have all-or-none character, that is two states: Quiescent and excited ... following W. S. McCulloch and W. Pitts we ignore the more complicated aspects of neuron functioning: Thresholds, temporal summation, relative inhibition, changes of the threshold by after-effects of stimulation beyond the synaptic delay, etc. ... It is easily seen that these simplified neuron functions can be imitated by telegraph relays [the old metaphor] or by vacuum tubes [the devices on which first-generation computers relied for their logic circuitry]". He continued, "Since these [vacuum] tube arrangements are to handle numbers by means of their digits, it is natural to use a system of arithmetic in which the digits are also two-valued [this is due to the two-valued on/off nature of the vacuum tube, which controls the flow of electrons through an internal vacuum]. This suggests the use of the binary system" (p.182).


Given this clear invocation of neurological analogies, Cobb concludes that "Von Neumann was justifying his choices about how to develop the structure and function of a computer by referring to a biological model. At this moment of its birth, von Neumann's computer was seen as a brain. The direction of the metaphor between machine and brain had switched. Before the metaphor settled into its current form - seeing the brain as a computer - there were a number of years in which studies of brains and computers interacted in the most dynamic fashion possible" (p.183).


IV: The Brain as a Computer


In the 1930s-40s there was a flurry of speculation as brain researchers realised that there might be similarities between these nascent computers and their object of interest. For instance, the Cambridge psychologist Kenneth Craik (1914 - 1945) argued that the fundamental feature of neural machinery was its "power to parallel or model external events ... as a calculating machine can parallel the development of strains in a bridge" (pp.184, 185). More precisely, both brains and machines can run symbolic calculations which allow them to represent aspects of external reality.


This idea was taken up by Edgar Adrian, who argued that "the brain must contrive to model or parallel external events by using something like the kind of symbolism which is employed in a calculating machine to represent a physical structure or process", the implication being that "the organism carries in its head not only a map of external events but a small-scale model of external reality and of its own possible actions" (p.185). According to Adrian, "Images and thoughts are then to be regarded as the finished products of an elaborate machine ... we could tell what someone was thinking if we could watch his brain at work, for we should see how one pattern after another acquired the necessary brilliance and definition" (p.186). This directly contradicted Leibniz's Mill and its successors, such as Searle's Chinese Room argument.


The link between brains and computers was made even more explicit when the work of McCulloch and Pitts interacted with that of British mathematician Alan Turing (1912 - 1954). In 1936, Turing had described a hypothetical device that could compute anything computable (dubbed the "Turing machine" by American logician Alonzo Church). McCulloch and Pitts later pointed out that if connected to suitable input, output and storage components, their neural nets could compute anything that is computable by a Turing machine - despite their structures being so overtly different. As McCulloch explained later, "What we thought we were doing (and I think we succeeded fairly well) was treating the brain as a Turing machine" (p.186).


Turing himself believed that in the near future, computers could exhibit behaviour very similar in some ways to that of humans. In 1950, he published an article titled Computing Machinery and Intelligence in which he considered the question "can machines think?". He then quickly sidestepped this question and replaced it with another problem, which he called the 'Imitation Game'. The goal is for an interrogator to put a series of questions to a human and a machine, without prior knowledge of which is which, and to determine which one is the machine on the basis of their answers. Turing's personal belief was that "in about fifty years' time it will be possible to programme computers ... to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning" (p.196). However, he never went as far as saying this meant that machines could 'think'. Going back to the original question, "can machines think?", Turing simply believed that it was "too meaningless to deserve discussion".


Nevertheless, the speculative ferment continued. In 1948, Norbert Wiener published a best-selling book titled Cybernetics: Or Control and Communication in the Animal and the Machine. Wiener explained the new mathematical concept of information, developed during the war primarily by Wiener himself and Claude Shannon (1916 - 2001), and placed it centre-stage in the operation of both machines and brains. He also emphasised the role of negative feedback in producing apparently purposeful behaviour in both animals and machines. In fact, this was an idea he had developed earlier, in 1943, along with Arturo Rosenblueth and Julian Bigelow in an article titled Behavior, Purpose and Teleology. They explained that if a machine was programmed to stop its activities once it reached a given state, this negative feedback would present "the illusion of purposive behaviour". They also suggested that positive feedback could explain "certain pathological symptoms", such as the tremors that occur in Parkinson's disease (mirroring McCulloch and Pitts' work on "nets with circles") (p.184).
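
The feedback point deserves a tiny illustration. In the sketch below (all gains and targets are invented for the example), a system that cancels a fraction of its error at each step homes in on a goal state, looking 'purposive'; flipping the sign of the correction turns the same loop into runaway positive feedback.

```python
# A system that cancels a fraction of its error each step converges on the
# goal (negative feedback); reverse the sign and it runs away (positive
# feedback). All numbers are invented for illustration.

def feedback_loop(state, target, gain, steps=8):
    trace = [round(state, 2)]
    for _ in range(steps):
        error = target - state
        state += gain * error          # gain > 0: correction opposes the error
        trace.append(round(state, 2))
    return trace

print("negative feedback:", feedback_loop(0.0, 10.0, gain=0.5))
print("positive feedback:", feedback_loop(0.0, 10.0, gain=-0.5))
```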


Wiener's burgeoning conception of the informational brain was widely popularised by English zoologist J.Z. Young (1907 - 1997) in the BBC's radio-broadcast Reith Lectures in 1950. Young explained that "Information reaches the brain in a kind of code ... of impulses passing up the nerve-fibres. Information already received is stored in the brain either by sending impulses round closed circuits, or in some form corresponding to a print. This is just what calculating machines do - they both store old information and receive new information and questions in coded form. The information received in the past forms the machine's rules of action, coded and stored away for reference ... The brain has an even greater number of cells than there are valves in a calculator and it is not at all impossible that it acts quite like an adding machine, in some ways ... However, we still do not know exactly how the brain stores its rules or how it compares the input with them. It may use principles different from those of these machines" (p.198). Here we see a clear articulation of the conception of the brain as a sort of stored-program computer, a device whose "rules of action" are "coded and stored away for reference", just as a program is stored in a computer's memory.


There were some, however, who articulated the limitations of the computer analogy in helping us understand the brain. American physiologist Ralph Gerard (1900 - 1974) emphasised at the 1950 cybernetics conference that despite the digital nature of neuronal firing, the way neurons communicate information was "essentially analogue", and that their functioning was fundamentally different to that of an electronic machine. Von Neumann also eventually expressed doubts, in his unfinished 1958 work The Computer and the Brain, stating that "there exist here different logical structures from the ones we are ordinarily used to in logics and mathematics". He concluded that "the outward forms of our mathematics are not absolutely relevant from the point of view of evaluating what the mathematical or logical language truly used by the central nervous system is". At this point in the narrative, Cobb remarks that "the gulf between the theoreticians and the practical biologists was growing" (p.192).


V: The Computer as a Brain (Redux)


At the beginning of the second part of the book ("Present", whereas the first was "Past"), Cobb makes the astonishing claim that "no major conceptual innovation has been made in our overall understanding of how the brain works for over half a century. This period has seen immense, Nobel Prize winning discoveries ... [and] all of this [has given] us a far richer understanding of what is happening where in the brain ... but we still think about brains in the way our scientific grandparents did" (p.203).


Cobb summarises that view as such: "a brain contains symbolic representations of the outside world that it manipulates to predict what will happen and to produce behaviours; it does this using some kind of computational approach, but it is not like any machine we have yet constructed, because it bathes in a complex system of chemical communication and its activities are partly determined by its own internal states" (p.203). In other words, the brain is not like any computer we have yet built, but it is still a sort of computational machine.


In a chapter titled "Computers", Cobb explores a series of efforts to understand the brain by attempting to mimic its operations in machines. One early attempt at simulating the nervous system came in 1956, when IBM researchers used the company's first commercial computer, the 701, to study the behaviour of neuronal assemblies. They simulated a network of 512 neurons and discovered that, though the neurons were initially unconnected, they "soon formed assemblies that spontaneously synchronised their activity in waves", suggesting that "some [more complex] aspects of nervous system circuits simply emerge from very basic rules" (p.261).
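
Cobb does not detail how the IBM model worked, but the general point, that synchrony can emerge from very basic local rules, is easy to illustrate with a toy of my own: a Kuramoto-style phase model in which each of 64 'neurons' simply nudges its phase toward those of the others. Nothing in the rule mentions synchrony, yet coherence typically climbs from around 0.1 to essentially 1.0.

```python
# Toy illustration (not the IBM model): 64 phase oscillators, each nudged
# toward the phases of the others, synchronise from random starting phases.
import math
import random

N, K = 64, 0.4
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def coherence(ps):
    """1.0 = perfectly synchronised; near 0 = incoherent."""
    x = sum(math.cos(p) for p in ps) / len(ps)
    y = sum(math.sin(p) for p in ps) / len(ps)
    return math.hypot(x, y)

print("coherence before:", round(coherence(phases), 2))
for _ in range(200):
    phases = [p + 0.1 + K * sum(math.sin(q - p) for q in phases) / N
              for p in phases]
print("coherence after: ", round(coherence(phases), 2))
```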


Some highly intriguing efforts attempted to simulate some flavour of the brain's inherent plasticity by using performance-based feedback loops - a strategy that we now call "machine learning". In 1958, mathematician Oliver Selfridge (1926 - 2008) unveiled a "hierarchical processing system" called 'Pandemonium', which could recognise complex patterns and features (e.g. a letter) in image data. Pandemonium was structured into four tiers of units, or 'demons'. The 'data demons' would recognise simple features of the input by "comparing a feature, such as a line, to some predetermined internal template". These demons would then pass their output to the next layer of demons, the 'computational demons', whereupon, in Selfridge's words, "the computational demons or sub-demons perform certain more or less complicated computations on the data and pass the results of these up to the next level, the cognitive demons who weigh the evidence, as it were. Each cognitive demon produces a shriek, and from all the shrieks the highest level demon of all, the decision demon, merely selects the loudest" (pp.262, 263).


The design scheme of Pandemonium

The striking feature of Pandemonium was that it could learn as it went along. The program could continually assess its performance according to the 'ground-truth' (initially determined by human observers) and modify its behaviour via 'natural selection' by retaining those demons whose classifications were correct and culling the others. Over numerous iterations, the machine could become more accurate at its task.
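
As a cartoon of this selection scheme (with every numerical detail invented by me), the sketch below pits template-matching demons against a toy two-class task: the decision demon picks the loudest shriek, and after each generation the weakest demon of each class is culled and replaced by a mutated copy of the best.

```python
# Every numerical detail here is invented; only the shriek-and-select logic
# follows Selfridge's description. Task: tell two 2-pixel 'images' apart.
import random

DATA = [((1, 0), "A"), ((0, 1), "B")] * 10          # toy labelled images

def make_demon(label):
    return {"label": label, "template": [random.random(), random.random()]}

def shriek(demon, image):
    # loudness = similarity between the image and the demon's template
    return sum(t * x for t, x in zip(demon["template"], image))

def fitness(demon):
    own = [shriek(demon, img) for img, lab in DATA if lab == demon["label"]]
    rest = [shriek(demon, img) for img, lab in DATA if lab != demon["label"]]
    return sum(own) / len(own) - sum(rest) / len(rest)

def accuracy(demons):
    hits = sum(max(demons, key=lambda d: shriek(d, img))["label"] == lab
               for img, lab in DATA)               # decision demon: loudest wins
    return hits / len(DATA)

demons = [make_demon(label) for label in "AABB"]
print("accuracy before selection:", accuracy(demons))
for generation in range(100):                      # 'natural selection' loop
    for label in "AB":
        pool = [d for d in demons if d["label"] == label]
        best, worst = max(pool, key=fitness), min(pool, key=fitness)
        # cull the weakest demon; replace it with a mutated copy of the best
        worst["template"] = [t + random.gauss(0, 0.1) for t in best["template"]]
print("accuracy after selection: ", accuracy(demons))
```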


At the same time, American psychologist Frank Rosenblatt (1928 - 1971) presented a similar yet slightly different model, the 'Perceptron'. Like Pandemonium, the Perceptron was hierarchically organised, and was built to recognise ('perceive') patterns in image data. However, unlike Pandemonium, the Perceptron could do this without pre-existing templates, such as those that existed in Pandemonium's 'data demon' layer. Furthermore, the learning algorithm was different: the 'weights' of the inter-neural connections would be gradually shifted according to the difference between the network's output and the desired result. The Perceptron was 'trained' to minimise this difference, in an approach that prefigures the 'backpropagation' algorithm so dominant in deep learning today.


The design scheme of the Perceptron
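
That error-driven rule survives in every machine-learning textbook. Here is a minimal sketch of it (the modern textbook form, not Rosenblatt's full architecture), learning the AND function; the task, learning rate and epoch count are illustrative choices of mine. Unlike Pandemonium's cull-and-replace selection, the change here is gradual and proportional to the error.

```python
# Minimal error-driven weight updates (textbook perceptron rule), learning
# the AND function. Data, learning rate and epoch count are illustrative.

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # AND
weights, bias, lr = [0.0, 0.0], 0.0, 0.1

for epoch in range(50):
    for x, target in data:
        error = target - predict(weights, bias, x)    # desired minus actual
        # shift each weight in proportion to the error and its input
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error

print([predict(weights, bias, x) for x, _ in data])   # expect [0, 0, 0, 1]
```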

A marvellous piece of technology to be sure, but how well could it model the brain? Rosenblatt himself was not so sure, asserting that "Perceptrons are not intended to serve as detailed copies of any actual nervous system. They are simplified networks, designed to permit the study of lawful relationships between the organisation of a nerve net, the organisation of its environment, and the 'psychological' performances of which the network is capable. Perceptrons might actually correspond to parts of more extended networks in biological systems [but] ... More likely, they represent extreme simplifications of the central nervous system, in which some properties are exaggerated, others suppressed" (p.264). How could we know we are not 'suppressing' anything important?


Clearly then, Pandemonium and the Perceptron both possessed inherent limitations in their ability to model the brain - yet this did not stop them influencing entire generations of researchers who would build ever-more-complicated 'neural networks' to accomplish tasks in an ever-growing number of fields.


However, not all were enthusiastic about the potential of artificial neural networks to help us understand the brain. Francis Crick (1916 - 2004), who had worked on this kind of computational approach, lamented their reliance on backpropagation, pointing out that "it seems highly unlikely that this actually happens in the brain" (p.277). Furthermore, these approaches appear to assume that the brain embodies some set of general principles, whereas Crick suspected that the brain "may prefer a series of slick tricks to achieve its aim" (p.278). The problem, as he put it, is that whereas the brain evolved through a tortuous trial-and-error process in which each step was not perfect but merely adequate, these computational modelling approaches treat the brain as an object that was designed: "Constructing a machine that works (such as a highly parallel computer) is an engineering problem. Engineering is often based on science, but its aim is different. A successful piece of engineering is a machine which does something useful. Understanding the brain, on the other hand, is a scientific problem. The brain is given to us, the product of a long evolution. We do not want to know how it might work but how it actually does work" [emphasis added] (p.278).


These criticisms led Crick to later call for the construction of a 'connectional map' of the brain, a 'wiring diagram' of the neuronal connections in the brain, typically called a 'connectome'. Yet how far would this actually get us in understanding how the brain works?


VI: The Brain as a Brain


The title of this section is perhaps a tad facetious, but in the following paragraphs I wish to chart some of Cobb's thoughts about the merits and limitations of connectomes. If we actually simply peer into the brain and describe what we see, how far will that get us in understanding what we're seeing?


In a sense, it is fitting that Crick would promote this approach after questioning the potential of computational models. The two strategies seem like polar opposites: in one, we eschew anatomical details in favour of a general principle, whilst in the other, we seek a full anatomical picture and then attempt to understand how it all works; engineering vs. reverse engineering. Though the former clearly has its problems, much the same could be said about the latter.


The central issue is that the behaviour of even very simple neural circuits can still be incredibly complex. A case in point is the crustacean stomach, a structure which grinds up food using two rhythms that are produced by roughly thirty neurons, organised into three circuits. Each circuit contains a 'central pattern generator', a set of neurons that spontaneously produce a repetitive output with no sensory inputs. The rhythm is not 'stored' anywhere for reference, it simply emerges from the activity of the network (p.252).
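
That claim, a rhythm that is stored nowhere, can be made vivid with a crude half-centre cartoon of my own (emphatically not a model of the real stomatogastric circuit; every constant is invented): two units that inhibit each other and slowly fatigue will alternate rhythmically with no input and no stored beat.

```python
# A crude half-centre cartoon, not a model of the stomatogastric circuit:
# two units with constant drive inhibit each other and slowly fatigue, so
# activity alternates although no component stores the rhythm. All
# constants are invented.

a_act, b_act = 1.0, 0.0          # firing rates of the two units
a_fat, b_fat = 0.0, 0.0          # slowly accumulating fatigue

for step in range(60):
    new_a = max(0.0, 1.0 - 2.0 * b_act - 2.0 * a_fat)   # drive - inhibition - fatigue
    new_b = max(0.0, 1.0 - 2.0 * a_act - 2.0 * b_fat)
    a_act, b_act = new_a, new_b
    a_fat += 0.1 * (a_act - a_fat)                      # fatigue tracks activity
    b_fat += 0.1 * (b_act - b_fat)
    if step % 4 == 0:
        print(f"step {step:2d}   A={a_act:4.2f}   B={b_act:4.2f}")
```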


As has been shown by neuroscientist Eve Marder (1948 - ) of Brandeis University, these simple networks can display a bewildering level of complexity. The activity of the neurons can be affected by 'neuromodulators' - compounds that are secreted alongside neurotransmitters and which function as "relatively slow-acting mini-hormones", altering the activity of neighbouring neurons. Already, this is a level of complexity that exceeds anything derivable from wiring diagrams. Furthermore, the activity of a particular neuron is also affected by its own previous activity, and by its particular pattern of gene expression. Hence, the behaviour of two organisms with identical wiring diagrams can be quite different. Neurons can also change their composition and function over longer timescales, such that "the same neuron in different animals can ... show very different patterns of activity". As Eve Marder puts it, "a neuron is like an aeroplane that is flying at altitude while simultaneously replacing all its pre-manufactured components with elements it has created onboard" (p.253). As a result, despite having established the full connectome of the crustacean stomatogastric ganglion (this particular rhythmic structure), Marder's group cannot yet fully explain how it works.


Marder & co have also shattered the tight link thought to exist between circuit structure and output behaviour. Using computer simulations, they showed that there were many different sets of activity in individual neurons that could produce the same collective output. Furthermore, the same circuit can 'switch' between modes to produce radically different behaviours. As Cobb laments, "decades of work on the connectome of the few dozen neurons that form the central pattern generator in the lobster stomatogastric system, using electrophysiology, cell biology and extensive computer modelling, have still not fully revealed how its limited functions emerge. That brutal, frustrating fact is the benchmark for all claims about understanding the brain" (p.253). And that's before we consider the human brain with its one hundred billion neurons and hundreds of trillions of synapses.


In the book's final pages, Cobb describes one final bit of research that throws into question the entire strategy of reverse-engineering the brain: "Reverse-engineering a computer is often used as a thought experiment to show how, in principle, we might understand the brain. Inevitably, these thought experiments are successful, encouraging us to pursue this way of understanding the squishy organs inside our heads. But in 2017 a pair of neuroscientists decided to actually do the experiment on a real computer chip, which had a real logic and real components with clearly designed functions. Things did not go as expected" (p.378). Well, how did they fare?


As they describe in a paper titled Could a Neuroscientist Understand a Microprocessor?, Eric Jonas and Konrad Paul Kording attempted to employ the techniques typically used to analyse the brain to understand the function of a MOS 6507 processor (widely used to run video games in the late 1970s and early 1980s). First, they obtained the 'connectome' of the chip by scanning its 3510 transistors and then simulated the device on a modern computer (e.g. running the programs for the games the chip was used for, such as Donkey Kong or Space Invaders). They then used the "full range" of neuroscientific techniques, such as inflicting 'lesions' (removing transistors from the simulation), analysing the 'spiking' activity of the transistors (akin to studying the action potentials generated by neurons) and "observing the effect of various manipulations on the behaviour of the system, as measured by its ability to launch each of the games" (p.378).


Deleting transistors ('lesioning' a brain area) produced "seductively clear results". There was a subset of ninety-eight transistors whose individual deletion prevented the system from booting up Donkey Kong, but not Space Invaders or Pitfall. But of course, this did not imply there was some special relation between these transistors and Donkey Kong, merely that "each of these components ... carried out a simple, basic function, which was required for Donkey Kong but not for the other two games" (p.378).
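
The logic of the lesion experiment is simple enough to sketch. The toy below invents its own ground truth (a random set of 'needed' transistors per game) in place of Jonas and Kording's actual transistor-level simulation, but it reproduces the protocol: knock out each transistor in turn, test every game, and collect the 'seductively clear' single-game failures.

```python
# The simulator and ground truth here are invented stand-ins for Jonas and
# Kording's transistor-level simulation; only the lesioning protocol itself
# is the point.
import random

GAMES = ["Donkey Kong", "Space Invaders", "Pitfall"]
N_TRANSISTORS = 200        # toy chip; the real study scanned all 3,510

# Invented ground truth: each game exercises a random subset of transistors,
# standing in for the basic functions those transistors implement.
needed_by = {g: set(random.sample(range(N_TRANSISTORS), 120)) for g in GAMES}

def boots(game, lesioned):
    """Stand-in for the simulator: does the game still run?"""
    return lesioned not in needed_by[game]

# The protocol: knock out each transistor in turn and test every game.
breaks_only = {g: 0 for g in GAMES}
for t in range(N_TRANSISTORS):
    broken = [g for g in GAMES if not boots(g, t)]
    if len(broken) == 1:   # the 'seductively clear' single-game failures
        breaks_only[broken[0]] += 1

for game, count in breaks_only.items():
    print(f"{count:3d} transistors whose loss breaks only {game}")
# None of these are 'Donkey Kong transistors': each just performs a basic
# function that one game happens to exercise and the others do not.
```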


In the end, the study failed to produce an understanding of the inner workings of the chip, not even detecting the hierarchy of its information processing. The duo concluded that "the problem is not that neuroscientists could not understand a microprocessor, the problem is that they would not understand it given the approaches they are currently taking" (p.379).


VII: Reflections


Will we ever 'understand' the brain? In some sense, the answer to that question must depend on what we mean by 'understand the brain'. We have a rich understanding of the basic components of the brain - neurons, neurotransmitters, and the like - yet as we study systems of neurons, even very simple ones - as in the lobster stomach - our understanding quickly fails. Are there any approaches that could rectify that?


The approach that I found most intriguing whilst reading this book was that articulated by Thomas Ross: "One way to be relatively sure of understanding a mechanism is to make that mechanism". Inevitably, the strategy for 'making the mechanism' has varied over the ages depending on the dominant technological metaphor of the time, whether it's a hydraulic system, a clock, a telegraph network, a telephone exchange, or a computer. Recently, it has meant that neuroscientists and computer scientists have built ever more complex and impressive computational models of mind and brain, many of which can be encapsulated in the fashionable term "artificial intelligence".


Another approach follows Nicolaus Steno's suggestion to "dismantle [the brain] piece by piece and to consider what these can do separately and together". The gathering of anatomical data and mapping of 'connectomes' might fit under this general mission.


In my opinion, the key strength of Cobb's book is in its exploration of the limitations of both these approaches. The issue with finding general principles - or a general logic - behind overall brain function is that such a logic might not exist. As Francis Crick pointed out, the brain evolved in a messy series of steps in which each gradual change was never perfect, but just good enough. Thus, in Cobb's words, "attempts to find an overall explanation of how the brain works that flow logically from its structure would be doomed to failure, [Crick] argued, because the starting point is almost certainly wrong - there is no overall logic" (p.378).


Furthermore, Cobb points out, in connection with the microprocessor experiment, that "even if our brains were designed along logical lines, which they are not, our present conceptual and analytical tools would be completely inadequate for the task of explaining them". In some sense, therefore, our lack of a theoretical understanding of the brain dooms our attempts to reverse-engineer it by peering inside. We can see the brain's myriad components and draw a physical map of what we see, but that does not mean we would understand how these parts all work together. Cobb concludes that "we still need to make significant theoretical breakthroughs" (p.379).


Another perceptive criticism of both approaches recognises the fact that brains are not objects that work in isolation - they are inherent parts of bodies: "the physiological reality of all brains is that they interact with the body and the external environment from the moment they begin to develop. Excluding these aspects from the model, or from the experimental set-up, will lead at best to an inadequate understanding". Cobb provides another example: "imagine that the MOS 6507 chip and its associated components were ... a device found on a Martian spaceship that fell to earth. A full analysis of its components would reveal that inputs from the exterior could alter its function, but it seems unlikely that we would realise that a Martian would use the device to play a game. ... In the absence of that decisive external element both the meaning and the mode of functioning of the device would be obscure" (p.381). In other words, any meaningful attempt to understand the brain must treat it as a part of an integrated body.


Cobb's own suggestion for how to proceed is to "pour resources into discrete, doable projects [such as understanding the brain of a fruit fly] able to provide insight that can be subsequently integrated into a more global approach". He predicts that attempting to understand simple animal brains "will keep us busy for the rest of the [twenty-first] century" (p.384).


This review covers a mere subset of the discoveries and insights that Cobb shares with his readers, skipping over practically everything to do with physiology and biochemistry and focusing on the theoretical and the computational, which interested me the most. Yet this book is nothing if not comprehensive, and even those who derived no pleasure from this review will find much to enjoy in The Idea of the Brain.
