Last week, I was privileged to be a respondent to a lecture entitled “The End of the World As We Know It: Neuroscience and the Semantic Apocalypse”. (Held at Canada’s premier interdisciplinary department: The University of Western Ontario’s Centre for the Study of Theory and Criticism.) Thanks to the lecturer, Scott Bakker, and the other respondent, Ali McMillan, I’m happy to post the entire lecture here as well as the responses.
Scott’s lecture aimed to provoke high-minded critical theorists out of their self-contentment, arguing that the results of neuroscience have far more radical implications for philosophy, the subject, and meaning than any poststructuralist critique. As the author of a recent fictional psychothriller (Neuropath) – about which Metzinger has said, “This book has emotionally hurt and disturbed me in a way none have done in many years. You should think twice before reading this – there could be some scientific and philosophical possibilities you don’t want to know!” – Scott is well equipped to explore the apocalyptic implications of neuroscience.
My own response came next and should be somewhat familiar to readers of this blog. It was based on an earlier post of mine, and aimed (unsurprisingly) to resist some of the dire conclusions Scott draws. It also, secondarily, acted as an intro to speculative realism for the uninitiated – including brief summaries of Brassier’s and Meillassoux’s projects. Lastly, I tried to broach the question of the political implications of neuroscience – but, squeezed for time, only managed to touch upon it briefly.
Ali’s response came last, and used insights from analytic philosophy to try and counter Scott’s lecture. He argued for a compatibilist vision of free will, and used some of Benjamin Libet’s famous experiments as evidence for his point. More optimistic about philosophy’s chances than either Scott or myself, Ali tried to revive some traditional philosophical concepts, while still acknowledging the significance of neuroscience.
I believe all three lectures together present an interesting starting point for thinking about the relation between neuroscience and philosophy. And while none of the questions between our respective positions were really resolved in the debates afterwards (even after a few beers), it was clear that we all agreed neuroscience needs to be taken seriously by philosophy. If we can minimally agree that we’re not disembodied abstract beings, then the fundamental constraints of our material selves are of the utmost importance for philosophy.
Since the lectures are rather lengthy, they’ve been posted below the fold…
Scott Bakker, “The End of the World As We Knew It: Neuroscience and the Semantic Apocalypse”
I need to begin by saying that I am a writer, not an academic, and certainly not a researcher. And if this wasn’t enough for you to sprinkle a little sceptical salt across the salad of ideas I will be presenting, you should know that I write not literary fiction, but the lowest form of commercial fiction short of Harlequin romances, epic fantasy. Given that epic fantasy was the genre most likely to be dismissed or lampooned by academic specialists, by ‘serious people in the know,’ I figured that was where the action had to be. Because I think we’re trapped in a game theory nightmare, because I think that we–whatever we are–are doomed even if the technological optimists are right, I see myself as a ‘post-posterity writer,’ as part of the first generation of writers who cannot pretend that subsequent generations will redeem the esotericism of their works. The only literature I’m interested in, indeed the only literature I think has positive social value, is literature that reaches beyond the narrow circle of the educated classes, and so reaches those who do not already share the bulk of a writer’s values and attitudes. Literature that actually argues, actually provokes, rather than doing so virtually in the imaginations of the like-minded.
One of the downsides of being kicked out of your philosophy PhD program is that you can no longer avail yourself of the many self-congratulatory myths provided by the academy. I’ve had to make up new ones. So I’ve become exceedingly fond of seeing myself as a ‘thinker.’ As much as I would love to put a capital T on the term, I’ve yet to summon the hubris to do so. But even still, I’ve been telling myself that the world needs crackpots, and that institutions like yours, cleaving to outdated pseudo-cognitive scruples, are dedicated to rubbing us out. You see, I really am free to think whatever the hell I want, so long as I continue telling rip-roaring yarns. I can pursue any and all the ideas that used to cause me so many institutional and interpersonal headaches when I was still pursuing my degree.
So I’m going to write as I think a thinking writer should write, as someone who can perhaps offer a fresh perspective precisely because they are an institutional outsider, unconstrained by the various path dependencies that so often deliver us to dead ends.
And as a writer and thinker both, the thing I am most interested in is this… this very moment now…
Whatever the hell it is.
I came up with the idea for my last book, Neuropath, in the course of several conversations with my wife. She’s never particularly cared for epic fantasy as a genre, not even the kind that features lawn ornaments for characters, so I thought it would be nice to write something in her preferred ‘guilty pleasure’ genre, the psycho-thriller. I had recently finished teaching a Pop Culture course where, given my growing contempt for semiotics, I decided to take an evolutionary biological approach, to look at mass mainstream culture as a modern prosthesis for various aspects of our stone age minds. So my head was swimming with nifty analogies and formulations.
We thought we were the centre of the universe–science showed us wrong. We thought we were made in God’s image–science showed us wrong. We still think we’re the great ‘meaning maker’–and now science seems to be showing us wrong, that this is simply another conceit of our limited perspective.
What an awesome premise for a hack-and-slash sexploitation piece.
The idea was to write something set in a near-future where now nascent technologies of the brain had reached technical, and more importantly, social maturity, a time where the crossroads facing us–the utter divergence of knowledge and experience–had become a matter of daily fact. A time when governments regularly use non-invasive neurosurgical techniques in interrogations. A time when retail giants use biometric surveillance to catalogue their customers, and to ensure that their employees continually smile.
A time after the apocalypse.
Truth be told, this talk represents something of a homecoming for me–I am extremely grateful to Nandita Biswas-Mellamphy and the Centre for the Study of Theory and Criticism for affording me this opportunity. Neuropath represents both how far I have and have not travelled from the things I once believed as a student here. Man, did I think I was a radical badass. I’ve migrated from an odd brand of post-structuralism to an odd brand of contextualism to a downright bizarre species of sceptical naturalism. I am half mad for interdisciplinarity.
Which is why I offer this general discussion of the novel’s philosophical underpinnings, both as a cautionary tale and an act of provocation.
You are not so radical as you think. In fact, you are nothing at all.
THE ARGUMENT AND THE ARGUMENTS
Ostensibly, the narrative of Neuropath is structured around something called ‘The Argument,’ which is simply that humans are fundamentally biomechanical, such that intentionality can only be explained away. Rather than enter the conceptual jungle of the determinism/compatibilism debate–where interpretative ambiguity and ‘death by a thousand qualifications’ allows every position to think itself right–I try to steer the dilemma away from intractable metaphysical grounds. The dilemma simply does not need guesses regarding materialism or the fundamental nature of causation or what have you to bite. Whatever a mechanism is ‘fundamentally,’ it obviously strikes us as incompatible with any number of intentional concepts. The Argument is something that people tend to ‘get’ even in the absence of specialized training, such as the kind we all suffer.
Personally, I stumbled onto it as a fourteen year old.
But aside from the Argument, which I don’t think requires rehearsing here, the narrative presents several secondary arguments, which taken as a whole seem to paint mind and meaning into an exceedingly difficult corner.
The first is a straightforward pessimistic induction. Historically, science tends to replace intentional explanations of natural phenomena with functional explanations. Since humans are a natural phenomenon we can presume, all things being equal, that science will continue in the same vein, that intentional phenomena are simply the last of the ancient delusions soon to be debunked. Of course, it seems pretty clear that all things are not equal, that humans, and consciousness in particular, are decidedly not one more natural phenomenon among others.
The second involves what might be called ‘Cognitive Closure FAPP.’ This argument turns on the established fact that humans are out and out stupid, that the only thing that makes us seem smart is that our nearest competitors are still sniffing each other’s asses to say hello. In the humanities in particular, we seem to forget that science is an accomplishment, and a slow and painful one at that. The corollary of this, of course, is that humans are chronic bullshitters. I’m still astounded at how after decades of rhetoric regarding critical thinking, despite millennia of suffering our own stupidity, despite pretty much everything you see on the evening news, our culture has managed to suppress the bare fact of our cognitive shortcomings, let alone consider it in any sustained fashion. Out of the dozen or so instructors of practical reasoning courses that I have met, not one of them has done any reading on the topic.
The fact is we all suffer from cognitive egocentrism. We all seem to intuitively assume that we have won what I call the ‘Magical Belief Lottery.’ We cherry pick confirming evidence and utterly overlook disconfirming evidence. We automatically assume that our sources are more reliable than the sources cited by others. We think we are more intelligent than we in fact are. We rewrite memories to minimize the threat of inconsistencies. We mistake claims repeated three or more times as fact. We continually revise our beliefs to preempt in-group criticism. We regularly confabulate. We congenitally use our conclusions to determine the cogency of our premises. The list goes on and on, believe you me. Add to this the problem of Interpretative Underdetermination, the simple fact that our three pound brains are so dreadfully overmatched by the complexities of the world…
Maybe we will discover Adorno’s ‘Messianic moment’–more importantly, maybe we already have. But the fact is we simply lack the capacity to collectively recognize it. As Richard Dawkins is prone to point out in his interviews, the thing that distinguishes scientists is that even if they disagree, they do tend to agree on what would change their minds.
This is what cripples the pre-emptive and ‘separate but equal’ approaches that were my favourite theoretical security blankets back when I was first a Heideggerean and then a Wittgensteinian. In the first instance, I was inclined to believe that science, since it lacked the conceptual resources to examine its own assumptions, was simply a kind of bad philosophy in desperate need of diagnostic interpretation to straighten itself out. In the second instance, I was inclined to think that science was simply another language game which, despite the obvious power of its domain specific normative yardsticks, didn’t necessarily carry reductive water in other language games.
Now, I no longer pretend to know What Science Is. Maybe it is a kind of bad philosophy. Maybe it is a kind of language game or normative context or whatever your unexplained explainer happens to be. But since we are such theoretical bunglers outside the institutional confines of science as a matter of fact, it strikes me as more than a little inconsistent to use exclusive commitments to any of these speculative interpretations to then condition my commitments to scientific claims–a little too like using a cognitive Ted Bundy’s testimony to convict a cognitive Mother Teresa.
Some people believe the earth is flat. Some people believe the earth is young. Some believe that the earth is hollow and that Hitler hides within it, waiting for the day to sort things out. Still others believe the earth is a social construct. Beliefs are so cheap it’s amazing they don’t sell them at Walmart. Cognitive Closure FAPP, the fact that we are theoretical half-wits outside of science, is what forces the issue, what closes the sophistical door.
What warrants a long, hard, and most importantly, honest look at the troubling implications of science.
CONSCIOUSNESS AS COIN TRICK: THE BLIND BRAIN HYPOTHESIS
What if we’ve been duped, not simply here and there, but all the way down, when it comes to experience? What if consciousness were some bizarre kind of hoax?
The final secondary argument offered in the novel is based on something called the ‘Blind Brain Hypothesis.’ Consciousness is so strange, so little understood, that anything might result from the current research in neuroscience and cognitive science. We could literally discover that we are little more than epiphenomenal figments, dreams that our brains have cooked up in the absence of any viable alternatives. Science is ever the cruel stranger, the one who spares no feelings, concedes no conceits no matter how essential. In the near future world of Neuropath, this is precisely what has happened under the guise of the Blind Brain Hypothesis, the theoretical brainchild of the story’s hero, Thomas Bible.
Consider coin tricks. Why do coin tricks strike us as ‘magic’? When describing them, we say things like “poof, there it was.” The coin, we claim, “materialized from thin air” or “appeared from nowhere.” We tend, in other words, to focus on the lack of causal precursors, on the beforelessness of the coin’s appearance, as the amazing thing. But why should ‘beforelessness’ strike us as remarkable to the point of magic?
From an evolutionary standpoint, the uncanniness of things appearing from nowhere seems easy enough to understand. Our brains are adaptive artifacts of environments where natural objects such as coins generally didn’t ‘pop into existence.’ Our brains have evolved to process causal environments possessing natural objects with interrelated causal histories. When natural objects appear without any apparent causal history, as in a coin trick, our brains are confronted by something largely without evolutionary precedent. Instances of apparent beforelessness defeat our brains’ routine environmental processing.
The magic of coin tricks, one might say, is a function of our brains’ hardwired abhorrence of causal vacuums in local environments. The integration of natural objects into causal backgrounds is the default, which is why, we might suppose, the sense of magic immediately evaporates when we look over the magician’s shoulder and the causal history of the coin is revealed. The magic of coin tricks, in other words, depends on our brains’ relation to the coin’s causal history. Expose that causal history, and the appearing coin seems a natural object like any other. Suppress that causal history (through misdirection, sleight of hand, etc.), and the appearing coin exhibits beforelessness. It seems like magic.
I bring this up because so many intentional phenomena exhibit an eerily similar structure. Consider, for instance, your present experience of listening. The words you hear ‘are simply there.’ You experience me speaking; nowhere does the neurophysiology–the causal history–of your experience enter into that experience as something experienced. You have no inkling of sound waves striking your eardrum. You have no intuitive awareness of your cochlea or auditory cortex. Like the coin, this experience seems to arise ‘ready made.’
The Blind Brain Hypothesis proposes that this is no accident. Various experiential phenomena, it suggests, are best understood as a kind of magic trick–only one that we cannot see through or around because our brain itself is the magician.
Whether or not the so-called ‘thalamocortical system’ turns out to be the ‘seat of consciousness,’ one thing is clear: the information that finds its way to consciousness represents only a small fraction of the brain’s overall information load. This means that at any given moment, the brain’s consciousness systems possess a kind of (fixed or dynamic) information horizon. What falls outside this information horizon, we are inclined to either overlook completely or attribute to the so-called ‘unconscious’–a problematic intentional metaphor if there ever was one.
Just as the magic of coin tricks is a function of our brains’ blinkered relation to the coin’s causal history, the Blind Brain Hypothesis suggests that many central structural characteristics of consciousness are expressions of our brains’ blinkered relation to their own causal histories, an artifact of the thalamocortical information horizon.
Given that our brains are in fact largely blind to their own neurophysiological processing, it seems clear that an information horizon exists in some form. Structurally, the brain is simply too complicated to track itself. Developmentally, the brain lacked both the time and the evolutionary impetus to track itself.
When we access our brain ‘from the outside,’ we’re exploiting circuits developed over millions and millions of years of evolution. Our brains are primarily environmental processors, exquisitely adapted to how things are in their environments. As a result, when we access our brains as another object in our environment, we have tremendous success ‘seeing how things are’ with our brains. When we access our brain ‘from the inside,’ however, we’re forced to completely forgo all this powerful circuitry. Instead, we’re limited to what seem to be relatively recent evolutionary adaptations, the ‘wiring of conscious experience.’ Our brains are not primarily brain processors, and as a result, we have tremendous difficulty ‘seeing how things are’ with our brains–so much so that we cannot even see ourselves as anything remotely resembling the brains we encounter in our environment.
Given these structural and developmental handicaps, information horizons have to exist. The real question is one of how they impact consciousness.
That the absence of information does affect experience becomes immediately clear if you simply attend to your visual field. You can actually track the falling off of information from your fovea–a spot the size of your thumbnail held out arm’s length–across your periphery and into …
In fact, the point at which your visual field trails away lies outside of the very possibility of seeing. Sight simply does not exist on the far side of your ‘visual information horizon.’ We rely on other, non-visual systems to stitch successive visual fields into a coherent spatial environment, and so tend to ‘overlook’ the limits of our looking.
As mundane as this might sound, this example actually underscores something truly remarkable. It seems clear that the ‘trailing away’ of our visual field is a basic structural feature of visual experience, a positive feature. So does this mean it possesses neural correlates? Does it make sense to infer the existence of ‘visual trailing’ circuits? If not, this suggests that a neurophysiological lack can manifest itself as a positive feature of experience, in this case, the closure of our visual field. In other words, not all experience possesses functional correlates–at least not in the straightforward way we think.
Consider the ex nihilo character of volition, or the way want and desire simply ‘come upon us.’ According to the Blind Brain Hypothesis, decisions and affects simply arise at the point where they cross the information horizon and are taken up by the thalamocortical system.
Or consider the so-called ‘transparency’ of experience, the fact that we see trees, not trees causing us to see trees. Since the processing involved in modeling environments falls outside the information horizon, all we access is the model and none of its constitutive neurological antecedents.
Intentionality also seems to fit. Since the processing behind our recollection of trees, say, falls outside the information horizon, our brain substitutes an abbreviated synchronic relation, what we have conceptualized as ‘aboutness,’ for a diachronic one, the particular causal provenance of our recollection.
The structure of normativity provides another potential candidate: since the processing involved in the bottom-up generation of behavioural outputs largely falls outside the information horizon, the thalamocortical system can only use the ‘tail end’ of regularities, so to speak. The brain only gives consciousness ‘right’ or ‘wrong,’ and nothing of the actual processing involved in testing.
Something similar might be said of purposiveness and the way teleology turns causality on its head: the information horizon encloses the circuitry involved in hypothetical modeling, but not much else, so even though our brain generates behavioural outputs bottom up, we perform actions for this or that–under the guise of bottomlessness. ‘Goals,’ the suggestion is, are what’s left when the bulk of the brain’s behavioural processing falls outside the thalamocortical information horizon, save those involved in anticipation.
On this account, consciousness is a perpetual and quite impoverished middle-man, accessing, thanks to the information horizon, only opportunistic fragments of more global processes. We’re like a lone audience member, chained in front of the magician of our brain. We can intellectually theorize the causal provenances that make the tricks possible, but because we are ‘hardwired into’ our perspective, we are nevertheless forced to experience the ‘magic.’
Nothing, I think, illustrates this forced magic quite like the experiential present, the Now. Recall what we discussed earlier regarding the visual field. Although it’s true that you can never explicitly ‘see the limits of seeing’–no matter how fast you move your head–those limits are nonetheless a central structural feature of seeing. The way your visual field simply ‘runs out’ without edge or demarcation is implicit in all seeing–and, I suspect, without the benefit of any ‘visual run off’ circuits. Your field of vision simply hangs in a kind of blindness you cannot see.
This, the Blind Brain Hypothesis suggests, is what the now is: a temporal analogue to the edgelessness of vision, an implicit structural artifact of the way our ‘temporal field’–what James called the ‘specious present’–hangs in a kind of temporal hyper-blindness. Time passes in experience, sure, but thanks to the information horizon of the thalamocortical system, experience itself stands still, and with nary a neural circuit to send a Christmas card to. There is time in experience, but no time of experience. The same way seeing relies on secondary systems to stitch our keyhole glimpses into a visual world, timing relies on things like narrative and long term memory to situate our present within a greater temporal context.
Given the Blind Brain Hypothesis, you would expect the thalamocortical system to track time against a background of temporal oblivion. You would expect something like the Now. Perhaps this is why, no matter where we find ourselves on the line of history, we always stand at the beginning. Thus the paradoxical structure of sayings like, “Today is the first day of the rest of your life.” We’re not simply running on hamster wheels, we are hamster wheels, traveling lifetimes without moving at all.
Which is to say that the Blind Brain Hypothesis offers possible theoretical purchase on the apparent absurdity of conscious existence, the way a life of differences can be crammed into a singular moment.
But I’m getting carried away.
If our brains were somehow, impossibly, wired to process themselves from the inside (as the subject of introspection) with the same fidelity with which they process themselves from the outside (as the object of neuroscience), then one might expect the generation of ‘action’ to be experienced as one more thing within the great causal circuit of the environment. Rather than experiencing desires ‘motivating’ those actions, our brains would simply experience the translation of environmental inputs into behavioural outputs in toto. There would be no desire, only behaviour arising as another natural event. Rather than experiencing norms constraining those actions, our brains would experience the processing of behavioural outputs against ongoing environmental input. There would be no ‘right or wrong,’ no ‘corrections,’ only attenuations of behaviour in response to real-time environmental feedback. Rather than experiencing purposes guiding those actions, our brains would experience the processing of behavioural outputs against past environmental feedback. There would be no ‘point’ to our actions, only behaviour reinforced by previous environmental interactions.
But the ‘wiring of consciousness’ is far from complete, perhaps necessarily so. And given evolutionary imperatives, it stands to reason that the thalamocortical system would exploit its own limitations, leverage its own information horizon. If the processing behind our environmental interventions is inaccessible, and if the ‘ownership of actions’ pays reproductive dividends, then the development of something like the ‘feeling of willing’ makes a strange kind of sense. Since the greater brain behind the information horizon simply does not exist for the thalamocortical system, it has to cobble things together and make evolutionary do.
This, the Blind Brain Hypothesis suggests, could be the case for the ‘feeling of aboutness,’ the ‘feeling of forness,’ the ‘feeling of rightness,’ and so on. None of these things feel like coin tricks, like magic, simply because they are the mandatory constant, not defections from an otherwise causal background. But they seem to vanish when we look over the brain’s shoulder–they share the same antipathy to causal cognition–because they are, in a strange way, artifacts of an analogous limitation of our perspective, more the result of what we lack than what we possess.
If the Blind Brain Hypothesis turns out to be true (and heaven help us if it does), then consciousness could be–basically, fundamentally–a kind of coin trick. The so-called ‘hard problem,’ the problem of explaining consciousness in naturalistic terms, could be insoluble simply because there’s no such natural phenomenon as ‘consciousness.’ The magic can only vanish as soon as the coin trick is explained. In this case, we are the magic.
For me, this is where the plank in reason breaks.
Where things become apocalyptic.
As a former graduate of the Theory Centre, my suspicion is that many of you might interpret these speculative ramblings as a kind of naturalistic distortion/vindication of the ‘post-modern subject.’ The ‘fragmentary subject’ is old hat in circles such as these, old enough to have long ceased being radical (though for some strange reason I still regularly encounter people who insist on talking about it in radical tones of voice). This is the reason, I think, some are initially underwhelmed by the implications of the Blind Brain Hypothesis.
First, we need to appreciate that the institutional migration of these concerns from armchairs (or couches, as the case might be) to research centres is as drastic as can be. People who do not appreciate the distinction between philosophy considering these possibilities and science considering them, it seems to me, are typically those who, despite all reason, think that their theoretical philosophical positions warrant exclusive commitment–who think they’ve won the Magical Belief Lottery. Since they already think the post-modern subject true, further confirmation strikes them as superfluous. But as I said, few things are quite as cheap as belief. Hopefully my earlier discussion of our cognitive shortcomings makes the irrationality of exclusive commitment to any of these speculative forays clear. We are theoretical cripples.
More importantly, scientific claims tend to be socially actionable, immediately, particularly given science’s thoroughgoing integration with capital. Nielsen’s recent billion dollar plus investment in NeuroFocus is but the beginning of the so-called neuromarketing revolution. And given that technological advantages without obvious near-term deleterious effects always seem to be exploited in capitalist societies, the technologization of the brain, ranging from the therapeutic to the ‘neurocosmetic,’ seems inevitable.
Second, I think that if you look closely at those discourses that turn on some notion of decentred subjectivity, either in various ‘philosophies of difference’ or elsewhere, you will notice a kind of inconsistency. No matter how radical the revision, thinkers of difference generally treat the components of that subjectivity–affects, meanings, purposes, morals, moments–as wholes.
The fragmentation, I’m suggesting, goes all the way down. It’s not simply that we are not the self-present subjects of early enlightenment myth–an illusion easily explained by the invisibility of ignorance. It’s that we are not subjects at all.
Even though I refuse to believe the Blind Brain Hypothesis, I often find myself terrified–and I mean this quite literally–by the strange, inside-out sense it seems to make once you grasp its central intuition. Even the way it seems to confound reason possesses a peculiar explanatory force.
Consider what might be called the ‘Bottleneck Thesis,’ which might be expressed as: we are natural in such a way that it is impossible to fully conceive of ourselves as natural. In other words, we are our brains in such a way that we can only understand ourselves as something other than our brains. Expressed in this way, the thesis is not overtly contradictory. It possesses an ontological component, that we are fundamentally ‘physical’ (whatever this means), and an epistemological component, that we cannot know ourselves as such. The plank in reason breaks when we probe the significance of the claim–step inside it as it were. If we cannot understand ourselves as natural, then we must understand ourselves as something else. And indeed we do, as we must, understand ourselves as agents, knowers, sinners, and so on. We may define this ‘something else’ in any number of ways, but they all share one thing in common: a commitment to a spooky bottomless ontology, be it social, existential, or otherwise, that is fundamentally incompatible with naturalism. We can disenchant the world, but not ourselves.
Although not contradictory, the Bottleneck Thesis does place us in a powerful cognitive double-bind. Despite the sheen of philosophical respectability, when we speak of the irreducibility of consciousness and norms as a way to secure the priority of life-worlds and language-games as ‘unexplained explainers,’ we are claiming an exemption from the natural. How could this not be tendentious? The only thing that separates our supra-natural posits from supernatural things such as souls, angels, and psychic abilities is the rigour of our philosophical rationale. Not a comforting thought, given philosophy’s track record. Moreover, these supra-natural posits are in fact fundamentally natural. Their apparent irreducibility is merely a subreptive artifact of our natural inability to understand them as such in the first instance. But then, once again, the only way we can assert this is by presupposing the very irreducibility we are attempting to explain away. We simply cannot be fundamentally natural because of the way we are fundamentally natural.
Given the absurdity of this, should we not just dismiss the Bottleneck out of hand? Perhaps, but at least two considerations should give us pause.
First, there is a sense in which the Bottleneck Thesis is justified as an inference to the best explanation for the cognitive disarray that is our bread and butter.
Say sentients belonging to an advanced alien civilization found some dead human astronauts and studied their neurophysiology. Say these sentients were similar to us in every physiological respect save that evolution was far kinder to them, allowing them to neurophysiologically process their own neurophysiology the way they process environmental inputs, such that for them introspection was a viable mode of scientific investigation. Where we simply see trees in the first instance, they see trees as neurophysiological results in the first instance.
Studying the astronauts, these alien researchers discover a whole array of neuro-functional similarities, so that they can reliably conclude that this does that and that does this and so on. The primary difference they find, however, is that our thalamocortical systems have a relatively limited information horizon. After intensive debate they conclude that human brains likely lack the ability to process themselves as something belonging to the causal order of their environment. Human brains, they realize, probably understand themselves in noncausal terms. They then begin speculating about what it would be like to be human. What, they wonder, would noncausal phenomenal awareness look like? They cannot imagine this, so they shift to less taxing speculations.
On the issue of human self-understanding, the alien researchers suggest that with the early development of their scientific understanding, humans, remarkably, would begin to see themselves as an exception to the natural order of things, as something apart from their brains, and would be unable, no matter what the evidence to the contrary, to divest themselves of the intuition. ‘There would be much controversy’ they suggest, ‘regarding what they are.’
On the issue of social coordination, the alien researchers conclude that humans would be forced to specify their behaviours in noncausal terms, as behaviour somehow exempt from the etiology of behaviour, and as a result would be unable to reconcile this intuitive understanding with their scientific understanding of the world. Given that humans are capable of scientific understanding (the specimens were, after all, astronauts), the aliens assume humans would perhaps attempt to regiment their understanding of their behaviour in a scientific manner, perhaps elaborate a kind of ‘noncausal ethology’ (what we call ‘psychology’), but they would be perpetually perplexed by their inability to reconcile that understanding with their science proper.
Human understanding of their linguistic behavioural outputs, the alien researchers assume, would likewise be characterized by confusion. Once again the humans' intuitive understanding would be noncausal, and given the maturation of their science, they might begin to question the reality of their hardwired default assumptions–their 'intuitive sense' of what was happening as far as language was concerned. 'There might be some noncausal X,' the aliens conclude, 'that for them constitutes the heart of their immediate linguistic understanding, but it would seem to vanish every time they searched for it.' (The X here, of course, would be what we call 'aboutness'). Some more daring researchers suggest humans might eventually abandon this X, and attempt to understand language in thoroughly causal terms. But this would provide no escape from their dilemma, since such an understanding would seem to elide obvious phenomenal features that not only seem to belong to language, but to be constitutive of it. (And here, of course, I'm talking about normativity).
And so the aliens continue speculating, all the while marvelling at the poor blinkered creatures, and at the capricious whim of evolutionary fate that perpetually prevents them from effectively rationalizing their neurophysiological resources.
Is this story that farfetched? Could aliens, given intact specimens, predict things like the mind/body problem, the problem of moral cognitivism, the problem of meaning, and the like? With enough patience and ingenuity, I suspect they could. The Bottleneck Thesis, I think, provides the framework for a very plausible explanation of the intractable difficulties associated with these and other issues. The theoretical uroboros of the intentional and the physical, the human and the natural, has a long and hoary history, repeated time and again in drastically different forms through a variety of contexts. It is as though we continually find ourselves, in Foucault’s evocative words, at once “bound to the back of a tiger” and “in the place belonging to the king.” This apparent paradox is a fact of our intellectual history, one that requires explanation.
As an adjunct to the Blind Brain Hypothesis, the Bottleneck Thesis not only explains why we seem to have so much difficulty with intentional phenomena in general, it explains why those difficulties take the forms they do across an array of different manifestations.
The second thing that should give us pause before rejecting the Bottleneck Thesis is that it constitutes a bet made on an eminently plausible neuro-evolutionary hypothesis: that our neurophysiology did not evolve to process itself the way it processes environmental inputs–that our brains are blind to themselves as brains. Given evolution's penchant for shortcuts and morphological malapropisms, the possibility of such a neurophysiologically entrenched blind-spot, although grounds for consternation, should not be grounds for surprise. So we have evolved, and so long as we continue to reproduce, our genes simply will not give a damn. It would be pie-eyed optimism to assume otherwise.
There are cogent empirical and conceptual grounds, then, to think the Bottleneck Thesis might be true. And short of actually discovering intentionality in nature, there is no way to rule it out as a possibility. Certainly the absurdity of its consequences cannot tell against it, because such absurdity is precisely what one would expect given the truth of the Bottleneck. If we have in fact evolved in such a way that we cannot understand ourselves as part of nature, then we should expect to be afflicted by cognitive difficulties at crucial junctures in our thought.
Nick Srnicek, “Neuroscience, The Apocalypse, and Speculative Realism”
Clearly the scenario Scott raises is a depressing one. So what I want to do in my response here is to try and resist these conclusions. My general approach is not going to be to directly attack your argument at the level of neuroscience or try to redefine free will in a way that makes it compatible with the natural sciences. Instead, I'm going to try and indirectly undermine these arguments by arguing against the absolute validity of neuroscientific knowledge. In other words, I want to argue that neuroscience is incapable of being a self-sufficient body of knowledge, and it therefore crucially relies upon philosophy to complete it. I certainly don't think I definitively refute your argument, but I hope to at least make it less overwhelming. To give away the ending, I am going to argue that neuroscience is presently bound to a correlationist interpretation and that it requires a new realist interpretation in order for its full power to be realized. The side effect of such a reinterpretation will hopefully be to also limit the epistemological certainty of neuroscience that Scott's argument is premised upon.
So to begin with, I want to outline what I mean by a ‘correlationist’ interpretation – a term coined by Quentin Meillassoux in his recent diagnosis of post-Kantian philosophy. He argues that ever since Kant’s original prohibition, philosophy has resolutely avoided attempting to know the absolute, to know the in-itself. Since then, all that has existed for philosophy is the correlation between thinking and being. Any thought of an object is always the thought of that object as correlated to consciousness or language. Speaking of the object-in-itself, speaking of a reality existing independently of thought, is denounced as naïve and a return to pre-critical metaphysics. And this prohibition has continued on throughout the continental tradition, arguably even Deleuze, with each progressive philosopher claiming to have discerned a more fundamental variant of the correlation. The result being that the findings of science have never been taken literally. The statements of science – which ostensibly refer to objects-in-themselves – have always been crippled by philosophy’s interpretations of them. The correlationist always adds (at least implicitly) the crucial caveat that while “X may be true, it is true for us“. The explicit realism of the scientific statement is always implicitly neutered and philosophy is once again given the upper hand. The primary problem with such a shift simply lies in its making reality centered on us, on humans. We can see here that far from being philosophy’s Copernican revolution, Kant’s transcendental subject was in fact the negation of science’s incipient inhumanism. And we’ve never been able to fully extricate ourselves from this since. To escape correlationism then, what is required is to take science’s declarations at their literal face value – science truly speaks of an inhuman world. In doing so, though, we must be careful not to return to a naïve realism; what is required instead is a new type of realism.
With that in mind, let's see how neuroscience fares. Is it the case that the typical interpretation of neuroscience is one that subtly embeds the science into a correlationist philosophy? My answer will be yes, but to see why, I'll need to take a brief detour through Thomas Metzinger's work. I take him as an exemplar because he has systematized the neurophilosophical project – specifically, the project that Scott is interested in, to reduce the self to the brain. For Metzinger, simply put, subjectivity is nothing more than a phenomenal appearance, fundamentally the same as any other phenomenal appearance. It has characteristics that distinguish it from other phenomena, but it's still an appearance, a product of the brain. There is no subject that it appears to; the subject simply is the appearance. In order to account for this appearance, he attempts to show how subjectivity emerges as the product of particular constraints being fulfilled by a neuro-informational system. What does this mean? An example might help. There are multiple constraints, but let's look at a constraint he calls 'global availability'. This refers simply to the ability of our brain to take information and make it available for the system itself. This can occur on three analytically distinct levels – the behavioural, the attentional, and the cognitive. Phenomenologically, these refer, respectively, to our ability to react to information, our ability to focus our attention on particular things, and our ability to form concepts of our experience. Straightforward enough, but where it gets interesting is in the fact that each constraint can be fulfilled or left unfulfilled, and to varying degrees. For example, let's look at what's called blindsight. As the result of specific lesions in their visual cortex, individuals with blindsight have lost the ability to see a particular portion of their world. There's literally a blind spot in their visual field.
The visual information for that section has, in Metzinger's terminology, lost its global availability. Yet, when placed into specific experimental set-ups, these same individuals can still show signs that they – on some level – still have the information from that visual blind spot. They can, for example, correctly tell whether an object is within their blind spot. Yet when presented with their ability to continually guess correctly, they vigorously deny having seen anything, chalking it up to luck or a hunch. What has occurred here is a separation between the functional and the phenomenological aspects of the brain. The phenomenology of global availability is gone, destroyed by the lesion. Yet the sensory inputs are still importing the information into the brain. And so long as that information bypasses the lesion, it can still be used by the system. So while these individuals can never consciously see the object, they can still unconsciously react to it. Blindsight, therefore, is an example of where one constraint has gone unfulfilled. There are about ten other constraints, but it's not important to outline them here.
Instead we can skip directly to what’s interesting about this way of talking about neurophenomenology: specifically, that Metzinger’s search for constraints mirrors Kant’s own search for the transcendental conditions of experience. Both Kant and Metzinger are asking what conditions are required for experience to be possible. But of course, rather than ultimately finding the source of these conditions within a transcendental subject, Metzinger finds them in the brain. And rather than describing experience as a single formal structure comprised of intuitions and categories, Metzinger offers a much more nuanced view of experience. Despite these advances though, in framing the interpretation of neuroscience this way, Metzinger still seems to place neurology in the clutches of a classic Kantian problem. And Metzinger himself even seems somewhat aware of it, as he will repeatedly argue that phenomenal immediacy is not epistemic immediacy, or as Kant might have put it – the phenomenal is not the noumenal. What appears as immediately and intuitively given has no necessary relation with an independent world. We have no way of verifying that our experience truthfully reproduces an external world. And since our own subjectivity is simply an appearance, this applies to it too. Now this claim that phenomenal immediacy is not epistemic immediacy certainly is supported by the neuroscientific evidence. Our brain takes in sensory information and processes it in very specific ways; our experience is simply the end-product of these subpersonal processes – a movie without an audience. The problem is that our scientific knowledge is based upon our experiences; yet neuroscience appears to have debunked experience as being a reliable ground for knowledge. The very empirical grounding of neuroscience seems to be called into question by neuroscience itself. 
Unlike other sciences that study phenomena ‘out in the world’, neuroscience has the unique characteristic that it studies, and eventually undermines, its own empirical premises.
So our question is: if neuroscience can't ground its own knowledge, if neuroscience alone is not self-sufficient or self-postulating, then where does this leave us? It seems as though we are stuck once again within the correlationist circle. Any knowledge of the real-in-itself is forbidden, except as an empty formal postulate. (Metzinger, for example, will insist that we are embodied beings – but he admits that we're incapable of knowing whether or not we're simply brains in a vat.) What this entails is that, contra Scott's argument, we can never have absolute certainty that free will is a total illusion. We just can't have knowledge of what goes on outside of our form of experience. The thing to resist here, however, is the opposite reaction, which would be to simply say "Aha! Neuroscience is insufficient and therefore we can continue doing philosophy as we always have!" As I tried to argue earlier, such a move is a dead-end that ignores science's effectiveness. What we need to do is to find a way to escape the choice between on the one hand, the uncritical acceptance of neuroscience, and on the other hand, the idealism of correlationism. At the very least, this means finally removing the remnants of idealist philosophy. We must accept that we are embodied beings living in a fundamentally inhuman world – one that is indifferent both to our conceptualizations and to our needs. There can be no recourse to a transcendental subjectivity, and there can be no recourse to a theological being. What philosophy needs to do is to actually carry out a Copernican revolution and to face up to the desolate and indifferent nature of reality. This, at the very least, should be neuroscience's effect on philosophy.
THE GREAT ESCAPE
But what can we do to break out of the correlationist circle? There are two options, based on two philosophers, that I want to suggest as alternatives to your apocalyptic vision, Scott. I'll necessarily have to go through them quickly, but after that I want to finish by noting the possible political implications of neuroscience. The philosophers I have in mind are Ray Brassier and Quentin Meillassoux, two philosophers who are associated with what's been called speculative realism. As is clear from that moniker, both are avowed realists, but theirs is a realism that has passed through Kant's critical turn. Speaking schematically, we could distinguish them by saying that Brassier will begin his project from outside of the correlationist circle, while Meillassoux aims to open the circle from within.
I’ll begin with Brassier, since he’s the philosopher currently taking the nihilistic conclusions of Scott’s presentation the furthest. As Brassier will say in his book, “the disenchantment of the world deserves to be celebrated as an achievement of intellectual maturity, not bewailed as a debilitating impoverishment.” (NU, xi) In this attempt to take nihilism to its limits, Brassier brings together an impressive collection of philosophers to support his arguments, but for our purposes it is the French philosopher, Francois Laruelle, who is the most relevant. Now, Laruelle is famously obscure and difficult, so I will unfortunately have to pave over some of the nuances here, but we can basically see his project as an attempt to uncover a realist ontology unbound from any humanistic conceits. The key notion here is that instead of believing us to be stuck within the correlationist circle, where everything is already tainted by us humans, Laruelle will begin from the fact that we are already within the immanence of the real. Our thoughts, our experiences are, like Metzinger argues, already determined by something logically and temporally prior to them. Whereas for Metzinger, this is the physical world of neurons, brains and the evolutionary environment, for Laruelle, these objectifying entities already concede too much. Laruelle’s real is unobjectifiable, and moreover, foreclosed to any possible conceptual discrimination. Rather than thought or the correlation between thought and being constituting the real, it is the real itself as an unobjectifiable immanence that unilaterally determines-in-the-last-instance thought. This unilateralizing action is key – it destroys any idea of a reciprocal relation between thought and being, such as we find in correlationism. The real determines thought. Technically, we can’t even say there’s a distinction between the real and the thought that it determines, since this distinction is itself a product of thought. 
For Laruelle, all relationality, all distinctions, lie on the side of thought; the real is a type of identity foreclosed to such things. The real is, as Brassier will argue, a 'being-nothing' – a nothing even more radical than Badiou's inconsistent void, which still requires an idealist inscription to signify it. Contra Badiou, Laruelle's real qua being-nothing is entirely sufficient-in-itself, completely indifferent to thought or any inscription. Now, two comments on this: one, such an "idea" of the real has the benefit of giving neuroscience the realist interpretation we were looking for. Much like Scott laid out, thought is determined by something outside of it; but in this case, the foreclosed outside is also outside of science's empirical basis. In that way we can escape both neuroscience's claim to total validity and correlationism's claustrophobic circle. My second point, though, is that we are still left in the apocalyptic situation Scott outlined. Thought retains no agency and phenomenal experience is a mere epiphenomenal illusion. That doesn't discredit Brassier's arguments, but let's see if another philosopher can offer a more hopeful option.
Unlike Brassier, Quentin Meillassoux begins by accepting the correlationist problematic. All we epistemologically have access to is the correlation between thought and being. What distinguishes Meillassoux from the correlationist, however, is that he uses its own arguments against it, in order to show how correlationism in fact already has knowledge of the real-in-itself. The argument is quite extensive, so I apologize in advance for having to condense it so much. To start with, he begins from the fact that the correlationist has two adversaries: the naïve realist, and the absolute correlationist. The latter is represented most prominently by Hegel who takes the correlation to be all that there is. For the absolute correlationist, the postulate of an in-itself inaccessible to thought is unnecessary, and so he simply does away with it. For the standard correlationist, however, this assumes too much. Whereas Hegel proposes to determine the internal logic of the shifts in correlations, Kant argues that this is impossible. All that we can do is describe the present correlation. If even our logical categories can shift, then we have no basis for assuming that the present internal logic is the absolute internal logic. Against the absolute correlationist, therefore, Kant will argue for the facticity of the correlation. Facticity, in this case, states that the given correlationist form – the logical invariants that permit experience to appear – is a simple fact; a fact that is without reason and without necessity. This facticity of the correlation also means that there is no way we can say with certainty that the correlation won’t shift at some future point. As Meillassoux will say, “correlationism can be summed up in the following thesis: it is unthinkable that the unthinkable be impossible.” (AF, 41) But this inability to ban the unthinkable is not merely a negative limit to our knowledge. 
Rather, it is the positive knowledge of the absolute that we have been looking for – the piece of knowledge that will allow us to emerge from the correlationist circle and begin to think the absolute outside. For this contingency, this facticity of our present correlation, is a necessary feature. And since by accepting facticity, we accept that everything is contingent, the only necessity that remains is contingency. What does this mean? It means that any existing entity can pass away, and any non-existing entity can come to be. There simply is no reason; it is the destruction of the principle of sufficient reason and the unleashing of chaos. Now obviously, such a philosophical idea is wildly at odds with our everyday experience. So I'll mention two quick points in regards to that. One, Meillassoux devotes an entire chapter to refuting this objection, but I unfortunately don't have time to cover it here. I'll simply recommend it to anyone who's interested. And two, it's important to note that this absolute chaos that Meillassoux argues for doesn't entail constant change. Change and movement are not necessary; they are just as contingent as stability is. So the fact that our everyday lives are mostly stable and structured doesn't refute Meillassoux's philosophical point.
OK, so to sum up: from within the correlationist circle, Meillassoux has argued that the necessity of contingency constitutes knowledge of the absolute. What does this mean for neuroscience then? Well, one of the intriguing side effects of necessary contingency is that it makes emergence a viable possibility. Since anything can come from anything without reason, emergence is even in a sense required. And since a proliferation of independent ontological levels is permitted by Meillassoux’s realist system, the idea that the mind could be at least relatively independent of the brain is entirely plausible. In fact, Meillassoux at one point even claims that the emergence of subjectivity is the ultimate example of contingency. So the benefits of treating neuroscience in this framework are threefold: one, it potentially recovers a space for agency independent of the brain; two, it offers a realist interpretation of scientific findings instead of subjecting them to yet another idealist framework; and three, it ultimately removes neuroscience’s certainty by making even natural laws contingent. Of course this is all at the cost of killing off the principle of sufficient reason, but hey, you can’t have everything.
So that’s the two options I see as alternatives to Scott’s reduction of everything to neuroscience. It is my contention that the self-reflexive gesture, whereby neuroscience cancels its own empirical basis, requires us to turn to philosophy. Science on its own is incapable of grounding its own knowledge.
Now lastly, I want to conclude by briefly trying to take the apocalyptic tone of Scott's talk and invert it into a potentially hopeful and liberating politics. I can only be very brief here, but I figure it might provide a good starting point for discussion. My guiding intuition here is that this period of horror and revulsion at neuroscience's implications seems to mirror the depression and meaninglessness of the existentialist movement. And just as post-existentialism turned to philosophies of affirmation and play and ultimately turned existentialism's absurdity into a positive condition for liberation, so too it seems as though future philosophers might take neuroscience as offering hope and freedom from folk psychology's constraints. Where might this hope come from? Scott actually points to one example in his book, Neuropath, although he frames it there in a negative light. The basic idea is that the greater our knowledge of the brain and how it affects the mind, the greater our capacity to modify the brain. Not merely in a therapeutic sense of healing pathologies, but in a sense of taking our existing capacities and heightening them. Or, even more radically, we might look to experiment with the constraints that Metzinger outlines. We could imagine undoing particular constraints in order to experience entirely new phenomena. Or we could create entirely new constraints to form seemingly impossible experiences. The possibilities really are endless. Now such neurodesign is clearly an ethically precarious area, but it's certainly not a priori evil. I would like to caution against one thing though. As Catherine Malabou points out, what we must be careful to remember is that the brain is encased not only within a biological environment, but also within a cultural one. And in our present world, this cultural sphere is dominated by the logic of capital.
In order to resist this logic, Malabou says, we must distinguish between flexibility and plasticity. The former – flexibility – is a capitalist value, and refers simply to the ability of something to adapt and consent to a new situation. It is the mantra of post-industrial capitalism – constantly creating ever more flexible workers. On the other hand, plasticity not only receives form, but also actively resists it. Unlike flexibility, plasticity is not infinitely malleable. The risk is that in a capitalist world, it may be too easy for us to take our neurological knowledge and simply use it to adapt to capitalist dictates without ever offering a positive resistance. Without plasticity, without its resisting force, capitalism may end up subsuming the human, and shifting from a vampiric entity feeding off the living, to an undead self-perpetuating machine. The semantic apocalypse, in other words, may end up being the capitalist apocalypse.
Ali McMillan, “Compatibilism and Free Will”
So there’s a lot to like about this paper. I applaud Scott for setting aside certain thorny philosophical problems and for having laid out neuroscience’s apocalyptic line of reasoning in such an intuitive fashion. Certainly I also appreciate the opening injunction to take the hypotheses presented with a measure of ‘skeptical salt,’ and most of all the fact that the implications of neuroscience for philosophy are taken very seriously. This is true of both Scott’s paper and Nick’s response, of course.
But while I appreciate that Scott’s project is to lay out provocative questions rather than take firm positions, there are a few problems that I have with his eschatological vision. In the interest of explaining these in a rudimentary way, I’ll take as my starting point some arguments from the conclusions of both pieces. First, I was very interested in the connection you made, Scott, to what you call the ‘post-modern subject’; similarly, I thought Nick’s implicit invocation of Derrida and Deleuze toward the end of the paper was rather significant. I hope we can take this parallel up more thoroughly in the discussion to come.
While I appreciate the arguments laid out, and absolutely recognize the difference between abstract speculation and a scientific research program, I ultimately count myself among those ‘underwhelmed’ by the implications of Scott’s hypotheses. My problem is less with the neuroscientific critique of subjectivity, than with the apocalyptic tone, and with the fearful ideas of an utter ‘end of subjectivity,’ or ‘selfhood,’ or (as the title of the paper implies), of ‘meaning.’ While I agree with Nick that the scenario as framed in the paper is rather depressing, I simply don’t believe the conclusions we derive from neuroscience must be of this type.
There’s a curious oscillation in this piece between a fearful eliminativism and a transcendental mysterianism: so we move from ‘consciousness-as-hoax’ to talk of souls and cognitive bottlenecks, as though the latter was somehow the ‘solution’ to the spiraling despair provoked by the first. To me, this doesn’t seem like a very productive way to think about neuroscience.
As I mentioned a moment ago, I applaud Scott for setting aside in the first place the somewhat sophistical question of compatibilism contra determinism; yet I feel as though my own argument won’t make sense unless I specify that I’m a thoroughgoing compatibilist. I mean this not only in the strict sense of believing that our free will and the strictest physical determinism are not mutually exclusive; I also believe a version of the same compatibility holds true with respect to the ‘existence’ of phenomena like minds, selves, morals and meaning. Neuroscience should lead us to redefine these terms and reappraise their ontology, but it seems unlikely to be capable of somehow demonstrating their inexistence.
So I feel that the apparently radical hypotheses raised in this paper are no more frightening, and take us no further than the fragmented, decentred subjects already proposed by so many philosophers and theorists. In my response I just want to unpack this assertion a bit, and explain why I think this jaded, ostensibly ‘post-modern’ point of view is dismissed perhaps a bit too quickly in your paper. And in fact, I’m going to make little reference to any of the major theorists of this version of subjectivity; Nick has already given us a great introduction to some really contemporary work in this area, work by philosophers who would perhaps reject any association with postmodernism. I want to focus more on the core of the ‘Blind Brain’ hypothesis, and ask a fundamental question: could neuroscience ever prove that consciousness is, as you put it, “some bizarre kind of hoax”?
The first, and perhaps most fundamental problem I have with your ‘Blind Brain’ hypothesis starts in ‘The Argument,’ with the imputation that we can somehow explain away intentionality through neuroscience. Mechanism may strike us as ‘intuitively incompatible’ with intentional concepts, but I believe this intuition is just plain wrong.
This of course made me think right away of Dan Dennett’s formulation of the ‘intentional stance’: intentional concepts such as belief and desire, on his view, are nothing more than an interpretive stance, a level of description we use both when the physical level is unavailable to us and when it’s simply not useful. So Dennett claims we can interpret a thermostat from the intentional stance, as having beliefs and desires, though doing so is silly; conversely, we could in principle interpret all of a human’s actions from a physical stance, but it’s impracticable and in fact misses the point in its own way. I was particularly intrigued by your ‘alien’ story, insofar as it’s very similar to a thought-experiment taken up by both Dennett and Robert Nozick, though you use it to draw very different conclusions. What this suggests, then, is that we at least ‘are’ something, in that our talk of belief and desire – ‘folk psychology,’ as the eliminativists would have it – is in fact a very powerful, economical way of predicting the course of events and modeling possible worlds. The idea that intentional concepts could be replaced by factual descriptions of neural states is prima facie acceptable and somewhat persuasive, but ultimately implausible. Cognitive economy – along with many of the marginal details of Scott’s story, the odd behaviours of the Martians that seem to require intentional concepts – would seem to demand that we continue to speak as though the mind and the self were real, causally effective entities.
But of course we don’t ‘feel’ like modes of interpreting a set of physical, causal mechanisms; we feel like a unified, conscious self in full control of our own actions. We feel like our goals and beliefs are the true, perhaps even the ‘necessary and sufficient’ causes of our actions. Here, of course, is where Scott’s ‘Blind Brain’ idea really has some bite. The idea of an ‘information horizon’ is very sensible, and you’re of course right to note that our beliefs and desires often seem to ‘come to us’ from some mysterious place on the other side of this horizon. I disagree with few of the substantial claims in this section. Still I believe that all they really negate is the Cartesian version of the subject as an absolutely, transcendentally free decision maker. In fact, you’re showing us that neuroscience is very likely to refute the assumption that our conscious self is the sovereign author of all our actions. I completely agree. But this assumption has been under attack since Freud and even well before that! (And as Francisco Varela and many others have observed, this transcendental self never even was an assumption in some Eastern traditions.)
Neuroscience might give anti-subjectivity arguments a bit of new ‘bite,’ as you put it, but the core of the argument is nothing radically novel. We should remember that proclamations about the inexistence of free will were the norm rather than the exception in the early days of psychology, coming from both Freud and the behaviorists. It’s only recently, with the rise of cognitive science, that psychology has opened up once again to talk of the ‘mind’! I make this historical point only to emphasize that philosophers and scientists have had lots of time to engage with this critique of subjectivity in creative ways, and that some of these ways make being ‘underwhelmed’ in the face of your apocalypse a legitimate response. In this respect I think Nick is right to point to Deleuze and Derrida’s philosophies of affirmation and play, and I’d like to close by noting some interpretations of neuroscientific data that leave plenty of room for an intentional self!
My cases in point are two experiments conducted by the late Benjamin Libet. Specifically, I want to point to a radical difference between the two, and then propose an argument that resolves it. The difference is a very interesting one, because the first experiment is usually cited as the knock-down argument for epiphenomenalism (epiphenomenalism being eliminativism lite: the brain causes mental phenomena, but the mind itself has no causal powers). The other is cited in support of precisely the opposite view, as borderline-mystical evidence for an inexplicable, irreducible capacity of consciousness. Specifically – and sorry if I lose anyone here – the first experiment shows that the readiness potential for a volitional movement occurs about 800 milliseconds before we can consciously report that we’ve chosen to make the movement: thus by the time we’re fully conscious of having made a decision to move, it’s already in a certain sense been made by our brain. The second experiment shows that when we experience a sensation, it takes about 500 milliseconds for it to enter our conscious awareness. Consciousness, however, somehow antedates the sensory experience, referring it back to a moment much closer to the time of the stimulus: so this experiment suggests that we’re capable of experiencing a stimulus before we’ve really, consciously experienced it.
So these are remarkably different results, leading to diametrically opposed philosophical interpretations. The common interpretation of the first experiment corresponds nicely with ‘The Argument’ proposed by Scott and its frightening conclusion; the second by contrast leads to a rebuttal that goes much further than Scott’s ‘Bottleneck’ thesis, and actually gets taken up by certain demagogues as evidence for a transcendental soul. To me this is the greatest thing about scientific research: one guy can carry out two equally famous experiments which lead to mutually exclusive conclusions. How many philosophers have argued for such radically different theses? A few, probably, but very few. I just think that this is a great thing about science, directly tied both to Scott’s point (about how science subverts our naive beliefs about the ‘Belief Lottery’) and possibly also to what Nick draws out in his discussion of Brassier and Laruelle, as the stance of being in a sense always-already engaged with the Real itself.
But our engagement with reality is necessarily mediated by any number of factors. (Perceptual, technological, linguistic, etc.; although this invocation of the ‘medium’ is emphatically not to suggest that the real on the other side is any less real!) In the first place, what Libet’s experiments suggest to me is that neuroscientific ‘proof’ is very much a matter of interpretation. Libet himself saw the first experiment as proving that free will was an illusion, and that the only agency we could have as conscious beings was the ability to occasionally interrupt the automatic functions of the brain. In other words, consciousness is the ‘impoverished middleman’ Scott describes, possessing only a kind of limited ‘veto power.’ Yet Libet suggests that the other experiment implies “serious though not insurmountable difficulties” for the physicalist theory of mind… leading naturally to the idea that consciousness and its freedom are somehow transcendent, quite simply inaccessible to neuroscience! So the conclusions to be drawn from neuroscience with respect to the existence and causal powers of consciousness are profoundly uncertain.
Clearly, the best interpretation of these results is one that resolves the opposition between these conclusions (and between the two equally dissatisfying poles of the oscillation I feel in Scott’s text). To my mind, Dennett offers a wonderful candidate for such an interpretation, and one that leaves plenty of room for the existence of real intentionality. (And ‘my mind!’)
I’ll just summarize the rough outlines of his reading. We may, as Libet does, artificially isolate the conscious self, and in an experiment we may attempt to pin down the instant of a free decision to the level of the millisecond. In the process, we may refute a certain concept of the self: Dennett calls it ‘Self-Contained You,’ but it’s essentially the same old Cartesian self where everything important – from perception to understanding to decision – happens all in one place and all at once. This is the same transcendental concept of the self that’s ruled out by Scott’s Blind Brain hypothesis, and that has been under attack for centuries.
But refuting this version of the self doesn’t mean that Libet’s results or any others prove free will to be an illusion. As Dennett puts it, these experiments actually show us that “our free will, like all our other mental powers, has to be smeared out over time, not measured in instants” (242). Agency feels wholly unified, but it’s actually distributed both in time and throughout the brain. Thus in a single conceptual movement we can get a more robust version of free will than Libet’s ‘veto power’ and we can account for the antedating of sense experience: all we have to do is recognize that consciousness doesn’t really inhabit the ‘specious present’ and that it’s not absolutely sovereign. The intentional self, then, is not a Cartesian homunculus autonomously orchestrating all of our actions from Central Command. Instead, it is best conceived as a diachronic assemblage of top-down and bottom-up processes situated in a brain and a body. So by a detour through neuroscience and analytic philosophy, we’re back to the fragmentary subject of a certain ‘postmodernism.’
As I see it, then, Scott’s paper shows us that neuroscience is well on its way to refuting certain antique notions of self, freedom, intentionality, and so on. But I still don’t think we need to worry about neuroscience proving that consciousness is a ‘hoax,’ that we ‘ourselves’ don’t exist, or that we’re altogether lacking in agency. Our ‘selves’ are very different phenomena from what they appear to be in introspection, but we can always remake our conceptual and theoretical frameworks. This allows us to integrate neuroscientific results into refined concepts of self and subjectivity, and, as Dennett puts it, to ground these concepts in a way that “metaphysical myths fail to do.” This, to me, undercuts the apocalyptic tone of the paper, and opens up precisely the space for philosophy that both of you have gestured toward in your own ways…