Thoughts on The Empty Brain - aeon.co. I had a very strong negative reaction to this article, and wanted to let it out. This is long, rambling, and unedited.
To see how vacuous this idea is, consider the brains of babies.
The first section lays out several semantic arguments about the components of a computer. The author claims that our brains do not use symbols, algorithms, buffers, software, data, etc. etc. etc. He then claims that because of this, the metaphor is not useful.
This argument has two critical assumptions that do not hold: 1) because we have not discovered the presence of those components, they do not exist in the brain, and 2) these components are necessary to information processing. The author spends a great deal of time describing the way computers store symbols in physical memory and follow algorithms while humans do not. This is misdirection. It distracts you from the fact that he presents no evidence whatsoever that the brain does not have these structures or processes, or that these are critical components of an information processing device.
To compound the issue, the author's model of the infant's brain fails to distinguish itself from a computer. According to the author, humans are born with "senses, reflexes and learning mechanisms." The interaction of senses and reflexes with the outside world informs learning mechanisms that change the brain. These are analogous to Turing's well-known abstract model of a computer. A Turing machine has a tape (external stimuli), a read-write head to interact with it (senses for reading, reflexes for writing), a state, and an instruction table for updating the state (learning mechanisms update instructions and state).
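To make the analogy concrete, here's a minimal Turing machine with the author's components labelled. The mapping in the comments and the bit-flipping program are my own illustration, not anything from the article:

```python
# Minimal Turing machine. The analogy's components are labelled in comments:
# tape = external stimuli; reading it = senses; writing it = reflexes;
# state + instruction table = learning mechanisms.

def run_turing_machine(tape, table, state="start", head=0, max_steps=100):
    """Run until the machine halts or max_steps is exceeded."""
    tape = dict(enumerate(tape))          # sparse tape; "_" means blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")      # "senses": read the current cell
        write, move, state = table[(state, symbol)]
        tape[head] = write                # "reflexes": act on the world
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Instruction table for a machine that flips 0s and 1s, then halts on blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("0110", flip))  # -> 1001_
```

Trivial, yes, but that is the point: the abstraction doesn't care whether the read-write head is made of silicon or tissue.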
This section attempts to mask a deep misunderstanding of the nature of information processing. The author discusses implementation details to present himself as an expert in a field he has not studied. He then offers a model of the brain which is compatible with abstract information processors.
The author claims that the information processing metaphor is not useful. Obviously, brains are not, mechanically speaking, computers. If I open my skull, no transistors spill out. But that is no reason to suppose that brains are not information processors. We don't care about the physical mechanism; we care about the results. If, as the author repeatedly claims, a brain relies on inputs, outputs, and state updates, then it is useful to think of it as an information processor.
Science does not seek proof. It demands disproof. The IP metaphor has been deemed useful by thousands of people. Any explanation that fits the evidence is prima facie reasonable until demonstrated otherwise. The author has advanced no function of the human brain that could not be replicated by a computer. Thus the scientific imperative is to believe the brain may be a computer while seeking evidence that it is not.
By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines.
I really love Descartes. Two major issues here: 1) the author misrepresents Descartes's beliefs about the brain and the mind, and 2) the author fails to mention or deal with Descartes's Evil Demon.
Descartes was a dualist. He believed that the mind exists separately from the body; which is to say, he believed that intelligence and will stem from an immaterial spirit, which drove the body via the pineal gland. The body, on the other hand (including the brain), was a machine. The author is concerned with the ability of man to learn, which in Descartes's philosophy was explicitly non-mechanical.
This is representative of the author's confusion of mind and body. The article seems to advocate a physicalist monist point of view: the brain is the mind, and it changes in response to external stimuli. The author's "metaphor-free" framework of 1) observe, 2) pair, 3) punish/reward requires this. Its feedback loop allows only exterior physical causes to influence the brain (and thereby the mind).
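Incidentally, that observe/pair/punish-reward loop is easy to write down as a bog-standard reward-learning update, which is itself an information-processing algorithm. A toy sketch, where every name, number, and the toy environment are mine rather than the author's:

```python
import random

# The "metaphor-free" loop - 1) observe, 2) pair, 3) punish/reward -
# sketched as a minimal reward-learning update.

def learn(stimuli, reward_fn, episodes=1000, lr=0.1, seed=0):
    """Pair each stimulus with the response that external feedback favours."""
    rng = random.Random(seed)
    value = {}                                  # (stimulus, response) -> value
    for _ in range(episodes):
        s = rng.choice(stimuli)                 # 1) observe a stimulus
        r = rng.choice(["approach", "avoid"])   # try a response
        v = value.get((s, r), 0.0)              # 2) pair stimulus and response
        value[(s, r)] = v + lr * (reward_fn(s, r) - v)  # 3) punish/reward
    return value

# Toy environment: approaching "food" is rewarded, approaching "fire" punished.
reward = lambda s, r: 1.0 if (s == "food") == (r == "approach") else -1.0
v = learn(["food", "fire"], reward)
print(v[("food", "approach")] > v[("food", "avoid")])  # -> True
```

If this loop is "metaphor-free," it is only because it's not a metaphor at all: it's the thing itself.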
This is consistent with our current knowledge of physics, and I respect that, but the article mixes it with magical thinking. The assertion that the brain is meaningless without the "entire life history" of its owner relies on the idea that the brain interacts atemporally with some external past. The statement "either the brain keeps functioning, or we disappear" is not compatible with physicalism. From a physicalist point of view, it is only the arrangement of matter that matters. Offline periods are allowable, and physics, rather than life history, is the necessary context. In other words, the author ascribes many contradictory properties to the brain.
Descartes's Evil Demon thought experiment (these days better known as the brain in a vat) is not mentioned in the article at all. It poses questions about the boundaries of mind and body that are incredibly relevant to any discussion about brain simulation. The author references Descartes and even discusses the difficulty of interacting with brains removed from their body while ignoring the most important and influential work on the subject.
The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.
[ . . . ]
McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
This second description of a method to catch the ball is an algorithm. It is a set of concrete steps to transform inputs into outputs. The visual input of the ball in flight informs the player whether to move left, right, forward, or back, and how to position their hand. The output of the algorithm is either a ball in hand, or not. This is specifically a heuristic algorithm, in that it is not guaranteed to be perfect (like precise computation of force of impact, angle of trajectory, etc.), but is sufficient for most purposes.
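To make that explicit, here is the "computation-free" heuristic written out as the feedback rule it is. The projectile physics is textbook; the launch numbers and thresholds are illustrative. The rule works because a fielder standing exactly at the landing spot sees tan(θ) grow exactly linearly, so the strategy amounts to "cancel the optical acceleration":

```python
# Ballistic ball; the fielder watches tan(theta), the ball's elevation angle.
G, VZ, VX = 9.8, 20.0, 10.0          # gravity, launch speeds (illustrative)
T_LAND = 2 * VZ / G                  # time aloft
X_LAND = VX * T_LAND                 # where the ball comes down

def tan_theta(t, fielder_x):
    """Tangent of the ball's elevation angle as seen from fielder_x."""
    z = VZ * t - 0.5 * G * t * t     # height
    x = VX * t                       # horizontal position
    return z / (fielder_x - x)

def step_direction(a, b, c):
    """The heuristic, from three successive samples of tan(theta):
    accelerating image -> ball lands behind you -> run back;
    decelerating image -> ball lands in front   -> run forward."""
    accel = (c - b) - (b - a)
    if accel > 1e-6:
        return "back"
    if accel < -1e-6:
        return "forward"
    return "hold"

# At the landing spot tan(theta) is linear in t, so the fielder holds still.
print(step_direction(*[tan_theta(t, X_LAND) for t in (1.0, 1.5, 2.0)]))
# Standing too close, the image accelerates: the rule says run back.
print(step_direction(*[tan_theta(t, X_LAND - 10) for t in (1.0, 1.5, 2.0)]))
```

Inputs (visual samples), a decision procedure, outputs (movement commands). Call it whatever you like; it's an algorithm.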
The brain is amazing at producing effective heuristics, far better than any software we've produced so far. It is also amazing at incorporating useful heuristics into its structure. Walking starts as an unbalanced mess, but becomes completely automatic within a few years. But it still produces and relies on heuristic algorithms. Running an algorithm on specialized wetware rather than silicon does not make it not an algorithm.
Brief aside on something I agree with
there is no reason to believe that any two of us are changed the same way by the same experience.
Darn right. It's incredibly likely that our brains have completely different structures handling the same tasks, and that translating from one to the other would be extremely difficult. There's no reason to believe that Matrix-style knowledge-transfer will be practical, as it requires reverse-engineering the function of multiple brains.
However, don't interpret this as an argument against brain uploading. Total brain simulation is a far, far easier problem. It requires only that we can build a high-fidelity physics simulator. We don't even have to understand how the brain works. You don't need a general solution to the three-body problem to accurately simulate it.
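For instance, a few dozen lines of numerical integration step a three-body system forward with no closed-form solution in sight. A minimal sketch (the masses, units, and initial conditions are arbitrary choices of mine):

```python
# Step a three-body gravitational system numerically - no closed-form
# solution required, only local physics applied step by step.

def accelerations(pos, masses, g=1.0):
    """Pairwise Newtonian gravity in 2D."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += g * masses[j] * dx / r3
            acc[i][1] += g * masses[j] * dy / r3
    return acc

def simulate(pos, vel, masses, dt=0.001, steps=2000):
    """Velocity-Verlet integration: good energy behaviour, dead simple."""
    acc = accelerations(pos, masses)
    for _ in range(steps):
        for i in range(len(pos)):
            pos[i][0] += vel[i][0] * dt + 0.5 * acc[i][0] * dt * dt
            pos[i][1] += vel[i][1] * dt + 0.5 * acc[i][1] * dt * dt
        new_acc = accelerations(pos, masses)
        for i in range(len(pos)):
            vel[i][0] += 0.5 * (acc[i][0] + new_acc[i][0]) * dt
            vel[i][1] += 0.5 * (acc[i][1] + new_acc[i][1]) * dt
        acc = new_acc
    return pos, vel

masses = [1.0, 1.0, 1.0]
pos = [[-1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
vel = [[0.0, -0.5], [0.0, 0.5], [0.5, 0.0]]
pos, vel = simulate(pos, vel, masses)

# Sanity check: total momentum (initially 0.5, 0) is conserved by the
# pairwise equal-and-opposite forces, even though the orbits are chaotic.
px = sum(m * v[0] for m, v in zip(masses, vel))
py = sum(m * v[1] for m, v in zip(masses, vel))
print(abs(px - 0.5) < 1e-9, abs(py) < 1e-9)  # -> True True
```

The simulator never "understands" the three-body problem; it just applies local rules faithfully. The bet behind brain simulation is exactly analogous.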
Trivially refuted statements
The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions.
The computer's state has simply changed in an orderly way that now allows it to play the song or display the poem under certain conditions.
Whereas computers do store exact copies of data – copies that can persist unchanged for long periods of time, even if the power has been turned off – the brain maintains our intellect only as long as it remains alive.
Durable storage is not a requirement of a computer or an information processing device. A computer without it maintains its state only as long as it remains powered on.
The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous; if anything, that assertion just pushes the problem of memory to an even more challenging level: how and where, after all, is the memory stored in the cell?
Just because a problem becomes more challenging doesn't mean it's intractable. Let's not throw out relativistic mechanics simply because it's more challenging than classical mechanics. This is a poor strawman.
Fortunately, because the IP metaphor is not even slightly valid, we will never have to worry about a human mind going amok in cyberspace; alas, we will also never achieve immortality through downloading.
The author has failed to provide any evidence that the IP metaphor is not valid, or propose a more valid metaphor. But even accepting that premise, the conclusion doesn't follow. Even if the brain is not a computer, there is no evidence that a computer cannot simulate a brain.
What is the problem? Don’t we have a ‘representation’ of the dollar bill ‘stored’ in a ‘memory register’ in our brains? Can’t we just ‘retrieve’ it and use it to make our drawing?
Imperfect recall is not evidence that the brain is not an information processor. Lossy encoding has many benefits. Example: MP3s changed music forever. The first drawing contains all essential properties of a dollar bill. Why should the brain store any more than that?
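Here's a toy version of lossy encoding, assuming nothing about real codecs beyond uniform quantisation: throw away precision, keep the shape of the signal.

```python
import math

# Lossy encoding sketch: quantise a waveform to 8 bits. (Real perceptual
# codecs like MP3 are vastly more sophisticated; this is the bare idea.)

def quantize(samples, bits=8, lo=-1.0, hi=1.0):
    """Map floats in [lo, hi] to integers in [0, 2**bits - 1]."""
    levels = (1 << bits) - 1
    return [round((s - lo) / (hi - lo) * levels) for s in samples]

def dequantize(codes, bits=8, lo=-1.0, hi=1.0):
    """Map integer codes back to approximate floats in [lo, hi]."""
    levels = (1 << bits) - 1
    return [lo + c / levels * (hi - lo) for c in codes]

signal = [math.sin(2 * math.pi * t / 100) for t in range(100)]
restored = dequantize(quantize(signal))

# The copy is inexact, but every essential feature of the wave survives:
# the reconstruction error never exceeds half a quantisation step.
max_err = max(abs(a - b) for a, b in zip(signal, restored))
print(max_err <= 1 / 255)  # -> True
```

An information processor that stores 8 bits per sample instead of 64 is still an information processor. So is one that stores "green, has a picture of Washington, says ONE" instead of a pixel-perfect bill.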
It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.
This paragraph is a mess. It fails to define intelligence, even though the definition is crucial to the evaluation. It reduces a complex view to two premises. Worst of all, its central claim is that users of the IP metaphor are incapable of simple deductive reasoning. This is not useful, and has no bearing on the matter at hand.
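For what it's worth, the invalidity of that form is a one-model exercise: exhibit a single counterexample where both premises hold and the conclusion fails. The sets below are my own stand-ins:

```python
# The syllogism's form: all C are I; all C are P; therefore all I are P.
# One counterexample model shows the form is invalid.

computers   = {"laptop"}
intelligent = {"laptop", "octopus"}   # something intelligent, not a computer
processors  = {"laptop"}

premise_1  = computers <= intelligent       # all computers are intelligent
premise_2  = computers <= processors        # all computers are processors
conclusion = intelligent <= processors      # all intelligent are processors

print(premise_1, premise_2, conclusion)     # -> True True False
```

Nobody serious is arguing from that form in the first place, which is what makes the strawman so tiresome.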
This article is a regression to mysticism. It relies on semantic arguments to push human exceptionalism in the absence of evidence. It argues from physicalism while relying on magical physics. It misrepresents or ignores prior work. It betrays a deep misunderstanding of information processing. The author is seeking arguments to confirm his belief that human brains are special.
Tl;dr: That guy is talking out of his ass.