This week, a piece on Aeon by psychologist Robert Epstein made the rounds on social media. In it, he argues that understanding the brain as a computer is a fundamentally misguided approach.

My favourite example of the dramatic difference between the IP [information processing] perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.

That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.

It’s a well-written article, and I agree with him on a number of points, including that the second description of how the baseball player catches the ball is much closer to how the human brain works. And I guess it’s true that there are still too many neuroscientists who subscribe to the first notion. However, none of this means that the brain cannot be understood as a computer.
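To make the contrast concrete, here is a toy simulation of that second strategy (my own sketch, using the closely related optical-acceleration-cancellation heuristic in 2D rather than McBeath’s full linear-optical-trajectory account; the gain, speeds and distances are arbitrary, illustrative values). The fielder never predicts a landing point; they just keep adjusting their running speed so that the image of the ball keeps rising at a steady rate:

```python
import math

DT = 0.01          # simulation time step (s)
G = 9.81           # gravity (m/s^2)
GAIN = 30.0        # feedback gain (arbitrary, illustrative)
MAX_SPEED = 9.0    # rough cap on running speed (m/s)

def catch(ball_speed=28.0, launch_angle_deg=45.0, fielder_x=70.0):
    """Toy 2-D fly ball: the fielder keeps tan(elevation angle) rising at a
    constant rate instead of predicting where the ball will land."""
    angle = math.radians(launch_angle_deg)
    bx, by = 0.0, 1.0                                  # ball position (m)
    vx, vy = ball_speed * math.cos(angle), ball_speed * math.sin(angle)
    fx, fv = fielder_x, 0.0                            # fielder position, speed
    prev_tan, target_rate = None, None

    while by > 0.0:
        # the ball flies ballistically; nobody models this explicitly
        bx += vx * DT
        vy -= G * DT
        by += vy * DT

        # all the fielder "measures": the tangent of the ball's elevation angle
        horiz = abs(bx - fx)
        tan_elev = by / horiz if horiz > 1e-6 else float("inf")

        if (prev_tan is not None and math.isfinite(prev_tan)
                and math.isfinite(tan_elev)):
            rate = (tan_elev - prev_tan) / DT
            if target_rate is None:
                target_rate = rate                     # lock in the initial rate
            # image rising too fast -> back up; too slowly -> run in
            fv = max(-MAX_SPEED, min(MAX_SPEED, GAIN * (rate - target_rate)))
        prev_tan = tan_elev
        fx += fv * DT

    return bx, fx                                      # landing spot vs. fielder

if __name__ == "__main__":
    landing, fielder = catch()
    print(f"ball lands at {landing:.1f} m, fielder ends up at {fielder:.1f} m")
```

The loop only ever looks at the current elevation angle and two remembered numbers (the previous reading and the target rate); there is no model of the ball’s trajectory anywhere in the fielder’s part of the code.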

Let’s have a look at the Oxford Dictionary definition of the word computer:

An electronic device which is capable of receiving information (data) in a particular form and of performing a sequence of operations in accordance with a predetermined but variable set of procedural instructions (program) to produce a result in the form of information or signals.

Considering that before the advent of electronics a computer was simply a person carrying out computations, and that nowadays there are computers operating in parallel, I would simplify and generalize the definition to the following:

A computer is an object or person capable of receiving information and of performing operations on that input according to a set of instructions (i.e. a program, which may consist of a predetermined and a variable portion), to produce a result in the form of information.
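As a rough sketch of what that definition amounts to (my own illustration, nothing more):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Computer:
    """Anything matching the generalized definition above."""
    fixed_program: Callable[[Any, Any], Any]   # the predetermined portion
    variable_program: Any                      # the portion that can change

    def run(self, input_information: Any) -> Any:
        # receive information, operate on it per the program, return information
        return self.fixed_program(self.variable_program, input_information)

# A pocket calculator, a human clerk doing sums, and (I would argue) a brain
# all fit this interface; what differs wildly is the hardware underneath.
add_five = Computer(fixed_program=lambda state, x: x + state, variable_program=5)
print(add_five.run(10))   # -> 15
```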

Using that definition of a computer, how is the brain not a computer? It takes input in the form of electrochemical signals transmitted through nerves, and gives output in the form of electrochemical signals through nerves. And those signals carry information.

As such, I would argue that there’s nothing wrong in principle with trying to understand the brain as an information processor, i.e. a computer. What is wrong is thinking of it as a computer with a von Neumann architecture or any similar stored-program architecture. The brain doesn’t contain addressable memory. As the drawing-a-dollar-bill-from-memory example in Epstein’s article illustrates well, the brain doesn’t store the full informational content of the inputs it receives (you’re unable to reproduce much, if any, of the detail of the printing and markings on a dollar bill). The brain merely saves a kind of hash of the input (similar to an acoustic fingerprint), which can be used to detect a similar input again (i.e. when you see another dollar bill, you recognize it as similar to the bill you’ve seen before).
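To give a rough feel for what ‘a kind of hash’ could look like (my own sketch, not a claim about the brain’s actual mechanism), here is an average-hash-style image fingerprint that keeps just 64 bits per input: plenty to recognize a similar input later, hopeless for redrawing the original.

```python
import random

def fingerprint(gray, grid=8):
    """Reduce a grayscale image (a list of rows of 0-255 values, at least
    grid x grid pixels) to grid*grid bits: 1 if a cell is brighter than average."""
    h, w = len(gray), len(gray[0])
    cells = []
    for gy in range(grid):
        for gx in range(grid):
            ys = range(gy * h // grid, (gy + 1) * h // grid)
            xs = range(gx * w // grid, (gx + 1) * w // grid)
            vals = [gray[y][x] for y in ys for x in xs]
            cells.append(sum(vals) / len(vals))
    mean = sum(cells) / len(cells)
    return [1 if c > mean else 0 for c in cells]

def similarity(fp_a, fp_b):
    """Fraction of matching bits (1.0 means identical fingerprints)."""
    return sum(a == b for a, b in zip(fp_a, fp_b)) / len(fp_a)

# recognizing a noisy copy of a 16x16 gradient "image"
original = [[x * 16 + y for x in range(16)] for y in range(16)]
noisy = [[max(0, min(255, v + random.randint(-5, 5))) for v in row]
         for row in original]
print(similarity(fingerprint(original), fingerprint(noisy)))   # close to 1.0
```

You can tell two inputs apart with 64 bits, but you could never reconstruct the engraving on a dollar bill from them, which is roughly the point of Epstein’s example.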

Since the brain’s hardware is so radically different from that of current (and historic) electronic computers, the brain’s software needs to use radically different algorithms as well. This shouldn’t come as a surprise: every major shift in computer hardware platforms has resulted in a shift in the algorithms that make sense to run on top of it. Think of the shift from magnetic tape to hard disks, the shift from spinning disks to random-access solid-state drives, the shift to multi-core CPUs, the use of GPUs for certain workloads, and so on. And all of these are tiny changes compared to what quantum computers will do to the usefulness of algorithms written for classical computers.
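A toy cost model makes the point (the numbers are invented, only the shape of the argument matters): searching sorted records favors a plain sequential scan on a tape-like medium, where seeks are ruinously expensive, and binary search on random-access memory, where every access costs about the same.

```python
import math

def linear_scan_cost(n, read_cost):
    # read records in order until the target turns up (~n/2 reads on average)
    return (n / 2) * read_cost

def binary_search_cost(n, access_cost):
    # ~log2(n) probes, each one a random jump into the data
    return math.log2(n) * access_cost

n = 1_000_000
# tape-ish medium: sequential read ~1 unit, random seek ~100,000 units (assumed)
print("tape:", linear_scan_cost(n, 1), "vs", binary_search_cost(n, 100_000))
# RAM-ish medium: every access ~1 unit
print("ram: ", linear_scan_cost(n, 1), "vs", binary_search_cost(n, 1))
```

Same task, same data, opposite winner, purely because the hardware underneath changed.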

Nonetheless, I see tremendous value in thinking in terms of software and sometimes ignoring the messy details of the hardware below. Yes, it’s an abstraction, and like all abstractions it is leaky. But, also like all abstractions, it helps us reason and think clearly when used under the right circumstances.

Using a metaphor is basically calling on the audience to use abstraction to see the shared properties of two different things. To quote Epstein’s article one last time:

But the IP [information processing] metaphor is, after all, just another metaphor – a story we tell to make sense of something we don’t actually understand. And like all the metaphors that preceded it, it will certainly be cast aside at some point – either replaced by another metaphor or, in the end, replaced by actual knowledge.

As if there were something like actual knowledge or objective truth.