Wittgenstein and The Matrix

In the movie The Matrix, future human beings live in cages where the energy from their bodies powers the robotic overlords who rule a dystopic world. While locked in their cages, however, people remain docile thanks to a grand simulator called The Matrix. The Matrix lulls these people into believing that they are citizens of late 20th century America, doing all the things normal Americans do.

The film is enjoyable hokum. Yet one might think it raises a serious philosophical question. For how do we know that we are not victims of the Matrix? That is, rather than being alive in 2018, enjoying (?) the undoubted madness of Brexit and Donald Trump, perhaps we are instead all trapped in electrified cages in some apocalyptic future. We believe the world around us is real and not simulated, so the skeptical worry goes, because this is what the Matrix wants us to believe.

It is a frightening thought. But is it one we should take seriously? Wittgenstein shows us why we don’t have to.

Consider the following example. I remark to you, “I saw John yesterday. He looked upset.” Suppose that, unbeknownst to me, you too saw John yesterday but you don’t recall him looking upset. You might then ask me, “Are you sure? Did John say something?” If I were to reply, “he told me his wife is seriously ill”, then this should settle or end your doubt.

Everyday exchanges like this, I claim, support one of Wittgenstein’s crucial insights, namely that doubts have ends. You doubted my statement about John. My follow-up then settled or ended your doubt. This is how our normal practice of doubting operates.

Contrast this with the skeptical worry. The skeptic asks: how do you know you are not a victim of the Matrix? You might respond by recounting the sorts of facts you take as confirmation that the world around you is real and not a simulation. Facts such as where you were born, who your parents were, whether or not you are married or have children etc. Yet the skeptic will reply that none of these facts answers their question. For all these facts could be part and parcel of the ‘user illusion’ constructed by the Matrix. If so, then the possibility that you are a victim of the Matrix remains a live one.

However, in his book On Certainty, Wittgenstein points out that, “[a] doubt that doubted everything would not be a doubt” (OC 450), and “doubt without end is not even a doubt” (OC 625). The skeptical worry, at least as reconstructed above, appears to fit both of these descriptions. The worry seeks to doubt everything, in the sense that no matter what facts are offered as evidence for the reality of the world, these facts are dismissed as illusory. The worry is without end, in the sense that once raised, no facts are taken to settle or end it. Yet, if a doubt without end is not a doubt, then whatever else the skeptic is doing, their worry is not on a par with our normal practice of doubting (see our example of John looking upset). As such, it is a mistake to even call what the skeptic is doing ‘doubting’.

Is it then false to claim that we are victims of the Matrix? No. But neither is this claim true. Wittgenstein’s insight into our normal practice of doubting instead reveals something else, namely the utter senselessness of the skeptic’s worry. Consequently, despite the visuals supplied by Hollywood, it is not a worry that we need take seriously.

Getting outside our heads

You are walking down a long tunnel. The tunnel is lit but only dimly. You grab your flashlight. The light reveals rows of pipes lining the walls. Where do these pipes come from? Where do they end? You continue to explore. You learn lots more details about the tunnel. Will this help answer your questions? Perhaps. But perhaps not. Maybe it is only when you step outside and examine the surrounding environment that the purpose of the tunnel becomes clear.

With an important caveat, I think something similar is true when it comes to investigating the brain. For while the brain is obviously not a tunnel, examining the brain, like wandering through the tunnel, can only tell us so much. For some questions, we are simply going to need to step outside people’s heads.

What is the mind? Imagine lying on a sandy beach, warming yourself in the morning sun. Where is your imagined thought? Is it located inside your head? If I were to (somehow) open your head and probe your brain, would I find it?

The question itself is confused. Imagining, like thinking, is a capacity, something we are able to do. And while it obviously occurs somewhere, such a capacity does not (nor could it) occur inside your brain. Instead, you realise your capacity to imagine when, for example, you describe to me your imagined thought (you are on a beach in Greece, you have just swum, the sand feels itchy, etc).

Importantly, this acknowledges the crucial role of the brain. A functioning brain is quite clearly a necessary component for imagining or thinking. Further, depending on the questions that interest you, examining the brain may give you the answers you seek (tunnel specialists are always needed).

Still, for other questions, we will need to do something very different, namely take seriously the reminder that it is what we say and do that reveals our mental lives. The advantage of doing so is that it clarifies why we examine the brain: not in order to settle thorny questions about what the mind is, but because the brain is part of those material underpinnings that make our imagining or thinking possible. And investigating those underpinnings can tell us new and surprising things.

Does your mind extend?

Where is my mind? Is it located inside my head? Or could my mind instead extend into the environment around me?

Andy Clark once remarked that we make the world smart so we don’t have to be. What he meant was that we (along with many other animals) alter and transform our environments in ways that enable us to do things that would prove difficult or indeed impossible without such transformations.

For creatures like us, these transformations can range from the trivial (think about how the layout of a kitchen facilitates the cooking of food) to the truly profound (notations such as alphabets).

Yet if the environment is not simply a passive player in our lives but can instead be structured and modified so as to play an active and driving role, then this has significant consequences for how we understand ourselves. In particular, it impacts on how we understand mind and experience.

For example, think of someone whose reliance on their smartphone or other piece of technological equipment is so constant that they appear to do little without it. If Clark is right and the environment can play an active, driving role, then such a device could be regarded as more than simply a tool used by the person to perform a task. Rather this device could be understood to be a genuine part of that person’s mind.

This striking claim has been given eloquent treatment by Clark and Chalmers. Christening their view Active Externalism (or what is more commonly known as Extended Mind), Clark and Chalmers have argued that objects and structures in our environments could be as deserving of the title of mentality as anything found inside the skull. If so, then mental states and cognitive processes can, on certain occasions and under particular circumstances, be understood as partially extending into the environment. In short, your mind is not confined to your skull.

In a new paper, my colleague (Karim Zahidi) and I criticise this view. In particular, we argue that both those who think minds do extend into the environment and those who think minds are confined to the skull confuse what are otherwise separate and distinct questions.

Consider, for example, writing a letter. Most people would agree (I hope) that writing a letter is a mental activity. You need to think about what to write. You also usually have a reason for writing (perhaps you are writing to your boss to tell her why you are quitting your job). Yet we can ask: what is it about the act of putting pen to paper that explains why this act is a mental or cognitive one?


On the other hand, we can also wonder about the causal processes at work whenever you or I write a letter. For example, we can take it as given that, when I am writing a letter, there must be a whole set of neural processes active inside my head. I also have to pick up a pen and use it to make marks on a page. In which case, we can ask: is there something in common between these ‘in-the-head’ processes and those ‘outside-the-head’ processes?

Curiously, both defenders and detractors of Active Externalism think that answering the second question helps answer the first.

Defenders claim, for example, that if there is something in common between the causal processes inside my head and those processes outside the head, then Active Externalism can be true.

Alternatively, detractors claim that if there is nothing in common between these bodily internal processes and bodily external processes, then Active Externalism is false.

By contrast, we present an alternative ‘wide view’. We argue that what makes an activity cognitive or mental is that it agrees with the sorts of wider, public practices and conventions within which most of us spend our waking lives.

What makes writing a letter a mental activity, for example, is that it fits the criteria that you, I or anyone else use to determine what counts as writing a letter. This is true regardless of whether there is or there is not something in common between bodily internal and bodily external processes (though we tend to think there will not be something in common between such processes).

If our alternative ‘wide view’ is correct, then your mind is neither confined to your skull nor does it extend into your environment. Instead, mentality is an unbounded, non-localisable phenomenon.


One consequence of our ‘wide view’ is that the debate over Active Externalism requires, not solution, but rather dissolution.

There of course remain interesting questions to be asked about how we explain mentality. Perhaps explaining how I am able to remember the time of the last train does require appealing to how I use my smartphone. This doesn’t demonstrate Active Externalism, however (at least as most people understand the term ‘Active Externalism’). It rather demonstrates that explaining our mental abilities can (on some occasions but not on others) require appealing to ‘outside-the-head’ factors.

Going wide about mentality, as we do, thus offers the flexibility to explain mentality in ways that can (and should) make liberal use of whatever factors, be they internal, external or some combination of the two, can help unravel some of the complexities of human thought.

The full paper can be read here.

Jakob Hohwy and The Predictive Mind

Review of Hohwy, The Predictive Mind for Phenomenology and the Cognitive Sciences. Please quote from the published version of this review, which can be accessed here.


In his book, The Predictive Mind, Jakob Hohwy builds a case for the increasingly popular idea that perception is a matter of prediction. You do not passively perceive the world around you. Instead you predict what you are going to see. For Hohwy, this translates into the brain having internal models or representations about the world, which the brain updates by generating predictions about the world and then modifying those predictions in the light of future input.

The unique selling point of this view of the brain, insists Hohwy, lies in both its simplicity (the brain is a hypothesis-tester, nothing more, nothing less) and its explanatory reach (it explains perception and action and everything in between). Yet while it is fairly uncontroversial to claim that perception can involve prediction – most are likely to agree that your previous expectations can influence what you currently see – it is much more controversial to claim, as Hohwy seemingly does, that minimizing prediction error is all the brain ever does, or that this is an expression of how creatures like us self-organise, which is in turn to be understood via the information-theoretic notion of minimizing surprisal. Hohwy’s aim in the book is to convince us that we should take these further claims seriously.

Hohwy divides his book into three parts. In Part 1, he details the sort of prediction error minimization (PEM) mechanism at the heart of his account of the brain. In Part 2, he describes how this mechanism helps advance debates within cognitive science, such as those over the binding problem, cognitive penetrability and impenetrability, misperception and representation. In Part 3, Hohwy examines what PEM means for attention, perceptual unity and the fragility of perception. He also applies PEM to other aspects of mentality, like emotions, introspection, the privacy of consciousness, agency and the self.

Hohwy’s book is a rich resource, ranging over many and varied issues. In this review, I will not attempt to cover all these issues. Instead, I will zero in on what Hohwy calls the “problem of perception” and how he thinks his version of PEM can solve this problem. Once done, I will detail some concerns I have with Hohwy’s proposed solution and suggest how these concerns might impact on our understanding of the problem.

1: The problem of perception

Picture your brain tucked away inside your skull. Now consider that your brain is constantly on the receiving end of sensory input from the outside world (and produces output as a result). Given the brain’s isolation inside the head, how does the brain make sense of all this input? More importantly, how does the brain get things right? This is the “problem of perception”. Hohwy states: “The problem of perception is how the right hypothesis about the world is shaped and selected [by the brain].” (p15)

Solving this problem thus requires explaining, not only how the brain helps us perceive, but also how the brain helps us perceive in the right way.

Hohwy’s solution is to argue that the brain is a Bayesian mechanism. Very briefly, Bayes’ rule is a way of calculating the probability of a hypothesis given the current evidence. The rule does this by combining: (1) the likelihood of a hypothesis, that is, the probability of the current evidence given that hypothesis, and (2) the prior probability of that hypothesis, that is, how probable that hypothesis is independent of the current evidence. The result is the posterior probability of the hypothesis given the evidence.
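
To make the arithmetic vivid, here is a minimal worked sketch of my own (the numbers and the sheep/dog hypotheses are invented purely for illustration; they are not Hohwy’s):

```python
# Bayes' rule: posterior is proportional to likelihood times prior.
# Hypothetical example: is the animal glimpsed in the field a sheep or a dog?

priors = {"sheep": 0.9, "dog": 0.1}        # plausibility of each hypothesis before the evidence
likelihoods = {"sheep": 0.3, "dog": 0.7}   # probability of the current (rather dog-like) evidence under each hypothesis

unnormalised = {h: likelihoods[h] * priors[h] for h in priors}
evidence = sum(unnormalised.values())      # total probability of the evidence
posteriors = {h: unnormalised[h] / evidence for h in priors}

print(posteriors)  # {'sheep': ~0.79, 'dog': ~0.21} -- the strong prior still favours 'sheep'
```

Even rather dog-like evidence leaves ‘sheep’ the more probable hypothesis here, because the prior does much of the work; this is the sense in which prior and likelihood are weighed against one another.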

According to Hohwy, the brain is a mechanism that realizes Bayes’ rule. So, for example, the brain determines the posterior probability of a hypothesis through the use of a perceptual hierarchy. At the top of this hierarchy, the brain’s internal prior constraints generate predictions, which then flow down through the hierarchy. At the bottom of the hierarchy, the brain receives sensory input, which in turn generates a prediction error signal, which is then sent back up through the hierarchy. This top-down, bottom-up process is then iterated, such that the prediction error signal generated by the sensory input is minimized and hence the discrepancy between the brain’s internal prediction and the received sensory input is reduced.

This ensures that the world acts as a supervisory signal for the hierarchy, enabling the brain to acquire feedback for its predictions and so allowing its own internal models or representations of the world to become better or more accurate models or representations of that world. The prediction error signal is thus understood as an “objective corrective caused in us by objects in the world” (p45).
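
To give a feel for the shape of this top-down, bottom-up exchange, here is a deliberately toy sketch of my own, reduced to a single level of the hierarchy (it is not Hohwy’s formal model): a top-down prediction is repeatedly nudged in proportion to the bottom-up error it generates against the sensory input.

```python
# Toy prediction error minimization at a single level of the hierarchy.
# The 'world' supplies a noisy sensory input; the model's prediction is
# repeatedly revised in the direction that reduces the resulting error.

import random

true_cause = 5.0        # hidden worldly cause (unknown to the model)
prediction = 0.0        # the model's initial top-down prediction
learning_rate = 0.1     # how strongly each error signal revises the prediction

for step in range(100):
    sensory_input = true_cause + random.gauss(0, 0.5)   # noisy bottom-up signal
    prediction_error = sensory_input - prediction       # discrepancy sent back up
    prediction += learning_rate * prediction_error      # top-down revision

print(round(prediction, 2))  # settles near 5.0 as prediction error is minimized
```

In this crude sense the input from the world ‘supervises’ the model, which is the point of the supervisory-signal metaphor above.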

Now, the world is often a highly uncertain and noisy place. It follows that if the brain is to accurately model or represent the world, then it needs a way to accommodate this uncertainty and noise. It does so, claims Hohwy, by factoring in the precision of the prediction error signal. If the precision of the prediction error signal is taken to be low, i.e. there is an expectation of a lot of noise, then less weight is accorded to that signal. By contrast, if the precision of the prediction error signal is taken to be high, i.e. there is an expectation of little noise, then the prediction error signal is given greater weight. By such means the brain recapitulates the causal structure of the world and does so in a context-sensitive manner.
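
Continuing the same toy idiom (again a sketch of mine, not the book’s formalism), precision-weighting can be pictured as scaling the error before it is allowed to revise the prediction:

```python
# Precision-weighted updating: the same prediction error revises the
# prediction less when the error signal is expected to be noisy.

def update(prediction, sensory_input, precision, learning_rate=0.5):
    prediction_error = sensory_input - prediction
    return prediction + learning_rate * precision * prediction_error

print(update(0.0, 4.0, precision=0.9))  # 1.8 -- reliable signal, large revision
print(update(0.0, 4.0, precision=0.1))  # 0.2 -- noisy signal, small revision
```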

Within this hierarchical structure, there is a complex relationship between the different levels: when one level accords less weight to the prediction error signal, that level or the level below will do more of the revision of that error signal; alternatively, when a level accords greater weight to the prediction error signal, higher levels will do more of the revision. There is thus a delicate balancing act to be struck across the levels.

Further, the prediction error signal can be minimized in more than one way. For example, the brain’s models or representations of the world can be updated by changing the predictions generated by the brain’s internal priors in such a way as to match the sensory input. This is termed “perceptual inference” and has a mind-to-world direction of fit, that is, the mind (the prediction) is adjusted to match the input from the world. However, these models or representations can also be updated by changing the input to match the predictions generated by those priors. This is termed “active inference” and has a world-to-mind direction of fit, that is, the world or the sensory input is adjusted, through action or active sampling of the world, to match the predictions generated by the brain’s internal priors.
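
The difference between the two routes can be put crudely as follows (a schematic sketch of mine rather than anything from the book): the same error can be reduced either by moving the prediction towards the input, or by acting so that the input moves towards the prediction.

```python
# Two ways of reducing one and the same prediction error.

prediction, sensory_input = 2.0, 5.0
error = sensory_input - prediction   # 3.0

# Perceptual inference: revise the prediction towards the input (mind-to-world fit).
prediction_after_perception = prediction + 0.5 * error   # 3.5

# Active inference: act on (or re-sample) the world so that the input
# moves towards the prediction (world-to-mind fit).
input_after_action = sensory_input - 0.5 * error          # 3.5

print(prediction_after_perception, input_after_action)
```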

Hohwy insists that the attractive feature of all this is that both perception and action can now be explained in terms of minimizing the prediction error signal (p88). Another allegedly attractive feature is that this understanding of the brain offers ways to tackle long-standing debates within cognitive science, such as that over the binding problem.

Suppose, for example, I see a red ball bouncing towards me. My visual percepts are seemingly ‘unbound’ (something red, something bouncing, something ball-shaped, etc). Yet if so, then why do I see a ‘bound’ visual percept, that is, a ‘red ball bouncing’? That is, how does my brain bind together the various unbound percepts that arrive through my senses? This is known as the binding problem.

Hohwy’s solution is to invert this problem (p101). He claims it is not the case that the brain must somehow bind together what is otherwise an unbound sensory signal. Instead, the brain predicts that the sensory signal is bound and then looks for confirmation of this prediction in the input. In other words, the brain assumes bound attributes and then queries the sensory input on the basis of this assumption. Binding is thus “essentially a statistically based inner fantasy, shaped by the brain’s generative model of the world” (p115).

However, if the variability of the input signal is sufficiently high, then this can result in illusory binding. According to Hohwy, this is what happens in the rubber hand illusion. This illusion occurs when an experimenter strokes a subject’s unseen hand, while simultaneously stroking a rubber hand that the subject is looking at. After a suitable period of time, subjects begin to report experiencing the rubber hand as their own hand. Hohwy explains this illusion by saying that the brain assumes a common hidden cause when there is a strong spatiotemporal overlap in our sensations (p105-106). Yet due to the variability in the sensory input, predicting that the rubber hand is the subject’s own hand best explains away that input, that is, reduces prediction error.

Consideration of illusory binding might however lead one to wonder why Hohwy’s account of PEM doesn’t simply result in self-fulfilling prophecies. For if the brain is constantly seeking to minimize the prediction error signal, then why don’t we just spend our lives living in, say, darkened rooms, since doing so would enable us to predict perfect darkness and so effectively minimize any residual prediction error?

By way of reply, Hohwy acknowledges that the brain does engage in self-fulfilling prophecies but this is not a problem because underlying PEM is a drive to minimize surprisal, that is, to keep the organism within a certain range of expected states. The greater the discrepancy between the predictions generated and the error signal propagated up through the hierarchy, the greater the surprisal. Conversely, the less the discrepancy between the predictions and the error signal, the less the surprisal. Hohwy argues that this is linked to the phenotype of the organism. For the phenotype of the organism will ensure that certain states will be expected, that is, will have less surprisal.

In his book, Hohwy provides no further details about what exactly those expected states are. Nonetheless, the idea here seems to be that we don’t spend our lives, say, living in darkened rooms, since, unlike other kinds of organisms, doing so would not keep us in the sorts of expected states that we can reasonably take to define us as the organisms we are. I shall leave it to others to decide how satisfactory an idea this is. Still, it reveals the central role that minimizing surprisal plays within Hohwy’s account. Indeed, Hohwy goes so far as to remark: “the only reason for minimizing the error through perception and action is that this implicitly minimizes surprisal.” (p87)
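
For readers unfamiliar with the term, surprisal is the standard information-theoretic quantity: the negative log probability of a state, so improbable (unexpected) states carry high surprisal. A minimal illustration (mine, not Hohwy’s):

```python
import math

def surprisal(probability):
    # Information-theoretic surprisal: the less probable a state, the more surprising it is.
    return -math.log(probability)

print(round(surprisal(0.9), 2))   # 0.11 -- an expected state, low surprisal
print(round(surprisal(0.01), 2))  # 4.61 -- an unexpected state, high surprisal
```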

So, to recap: the “problem of perception” was to explain, not only how the brain helps us perceive, but also how the brain helps us perceive in the right way. For Hohwy, the mechanism that solves this problem is prediction error minimization (PEM). PEM is what enables creatures like us to hone and refine our internal models or representations about the world such that we can then acquire the right hypotheses about the world.

2: Does PEM get things right?

I have outlined Hohwy’s proposed solution to the problem of perception. I turn now to some concerns I have with Hohwy’s solution and how I think these concerns might impact on the problem itself.

At one point in his book, Hohwy appeals to a probability distribution, that is, to the claim that when a number of visual percepts are averaged across time, this provides a standard against which future percepts can be judged ‘true’ or ‘false’. Hohwy then uses this probability distribution to explain misperception. To borrow one of Hohwy’s examples, I see a sheep but because of funny lighting I think I see a dog. Why is my percept of a dog false? A probability distribution ensures that my visual percepts of sheep will likely carry more information about sheep than about dogs. Hence, I can expect to see sheep-as-sheep and not sheep-as-dogs. If I then see a sheep-as-a-dog, then my percept is false because it misaligns with that probability distribution, that is, it “pair[s] percepts with environmental causes that are not on average best paired with” (p174). Misperceptions are thus “inferences that undermine average, long-term prediction error minimization” (p176). Note that, as we saw with Hohwy’s account of binding, this then ties the brain’s models or representations of the world directly to statistical physics (p180).

All of this raises a serious worry, however. Consider that most philosophers now accept that not all information is created equal. For example, some worldly states of affairs can co-vary with other worldly states of affairs such that the co-variances between such states of affairs can be used to reveal information about those states of affairs (think of how the rings of a tree can be used to determine the age of a tree). This is called “informational covariance”. However, other worldly states, like my stated desire to go to Greece on holiday, can have properties like true or false (I may in fact desire to go to Turkey on holiday and so my stated desire is false). This is called “informational content”. Crucially, properties like true or false ensure that informational content is logically distinct from informational covariance (Hutto and Myin, 2013).

The distinction between covariance and content is not controversial. What is controversial, however, is determining if or how covariance can constitute content. This controversy impacts on Hohwy’s account of misperception. For a probability distribution simply is a set of covariances between, on the one hand, states of the PEM hierarchy in the brain, and on the other, external worldly states as revealed through sensory input. Yet if content is logically distinct from covariance, then it is not clear how Hohwy’s proposed probability distribution can ensure that, say, my visual percept of a sheep-as-a-dog has the contentful property of being false. Or, to put the same point another way, if my percept is to be false, then Hohwy needs to show how the sorts of covariances displayed by a probability distribution can in fact rise to the level of content. This will be no easy task (indeed, Hutto and Myin label this the “Hard Problem of Content”). Importantly, Hohwy does not tackle it in his book. Yet until he does so, all his account succeeds in showing is that my percept of a sheep-as-a-dog is statistically unusual, since statistically I can expect to see sheep-as-sheep and not sheep-as-dogs. It does not also show that my percept has the property of being false.

Now, you might think: so much the worse for Hohwy’s account of misperception. But the problem identified here generalizes. For a related issue emerges when Hohwy claims that the brain “mirrors” the world (p228). This mind-world relation, insists Hohwy, is both direct and indirect. It is direct in the sense that the brain’s models or representations are, as we have seen, honed and refined by the prediction error signal. Yet it is also indirect in the sense that PEM is always a fragile, non-robust process. This is because the sorts of fine-tuning needed to maintain a balance between perceptual inference and active inference can easily break down (or be broken down, as in settings like the rubber hand experiment).

 However, mirrors simply co-vary with what they mirror. If covariance and content are logically distinct, then it is not obvious that mirrors do in fact have any contentful properties. As such, even if Hohwy is right to claim that the brain is “like a mirror of the causal structure of the world” (p228), this still leaves entirely unexplained how or why the brain is also a “truth-tracker” (p229).

On the other hand, Hohwy seems to simply assume that the brain minimizes prediction error and as a consequence gains truth. But one could endorse the idea that (1) the brain is a mechanism for the manipulation of causal regularities as revealed through sensory input, while nonetheless not endorsing the further idea that (2) there is any informational content inside the brain. [1] Indeed, given the sorts of heavy lifting that would need to be done to show that a probability distribution can have any contentful properties, then there may be good reason not to endorse (2).

Hohwy might respond to all this by claiming that we could not successfully engage with our environment unless our senses told us something true about the world around us. And this successful engagement can be cashed out in terms of maintaining integrity and avoiding entropy. Yet this just pushes the problem back a stage. For what exactly is the link between minimizing surprisal and an organism having internal states that are true or false, right or wrong? It seems whichever way you cut it, this question encounters a Hard Problem, one that needs to be dealt with before Hohwy can claim that PEM solves the “problem of perception”. We can still ask: how does minimizing prediction error enable the brain to get things right?

However, perhaps what is really at fault here is the problem itself. Recall how we initially set up the problem: we have a picture of an isolated brain within a skull trying to make sense of raw sensory input. Maybe what is problematic is this picture. If so, then rather than trying to make sense of this picture, we might get further by asking ourselves, not, how does the brain get things right, but instead, how do we get things right? And answering that question will require extending our explanatory focus way beyond the narrow confines of the skull.

As stated earlier, all of this is to skip over plenty of the detail within Hohwy’s book. And it would be remiss of me if I ended without pointing out that Hohwy does successfully demonstrate the sheer scope of PEM, that is, how PEM does seem to have the resources to tackle lots of the issues currently confronting cognitive science. Nonetheless, an important lesson Hohwy’s version of PEM might teach is that not all the answers to the questions that interest us are going to be found by looking inside our heads. [2]


Bibliography

Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181-204.

Clark, A. (2015). Predicting Peace: The End of the Representation Wars. A Reply to Michael Madary. In T. Metzinger & J. M. Windt (Eds.), Open MIND: 7(R). Frankfurt am Main: MIND Group.

Hohwy, J. (2013). The Predictive Mind. Oxford: Oxford University Press.

Hutto, D. & Myin, E. (2013). Radicalizing Enactivism: Basic Minds without Content. Cambridge, MA: MIT Press.

[1] Clark (2015) seems to come close to endorsing this view. For example, he claims that states of the brain are action-oriented. That is, the brain’s internal model is geared towards “delivering a grip on the patterns that matter for the interactions that matter” (p5, italics in original). As a result, even high-level states of a PEM mechanism do not describe the world, that is, have anything like recognizably contentful properties. Yet Clark also sees this as entirely compatible with contentful talk. However, how or why this should be so is, in my opinion, much less clear.

[2] It is worth pointing out that PEM is compatible with both externalist and internalist views about mentality. For example, Clark sees PEM as revealing those “key aspects of neural functioning that makes structuring our worlds genuinely continuous with structuring our brains and sculpting our actions” (2013, p194). Hence, PEM supports the situatedness of mentality, that is, it supports the view that the environment plays more than simply a causal role in underscoring human mentality. This is an externalist view. Hohwy, on the other hand, acknowledges that mind and world are “genuinely continuous”, but understands this as revealing the fragility of our perceptual inferences, and so as evidence of our attempts to compensate for such fragility. That is, we structure our environments and/or our engagements with those environments so as to optimize the incoming sensory signal i.e. make it more precise and so improve our internal predictions. This ensures that the situatedness of mentality is, pace Clark, confirmation of the brain’s seclusion from the hidden causes of the world. As such, the sensory boundary between brain and world is not malleable but rather “principled, indispensable, and epistemically critical” (Hohwy, 2013, p239). In other words, PEM supports an internalist view.

Wittgenstein and Extended Mind

When I write, the process of writing often reveals my thoughts to me. I may start with a vague idea of what I want to say. But it is usually during the actual process of writing, that is, during the correcting, editing, re-writing etc, that the point I want to make slowly emerges. I seem to think by writing. And the more I write, the more thinking I do.

I doubt that I am unusual in this. I rather suspect that many of us think in this way. But what does this mean for mentality? For example, where does my thinking occur when I write? Two possibilities suggest themselves. On the one hand, maybe my thoughts are somehow stored inside my head and it is the process of writing that causes them to emerge. On the other, maybe it is the very act of writing itself that realises my thinking, that makes it happen.

Changing tack slightly, I am now old enough to remember a time when everyone was not glued to their screens – a time when, walking down the street, I did not constantly see people glancing at their iphones or holding up their tablets. (I am now no different of course. I am as addicted to new technology as the next person.)

Yet this ubiquity of technology has led some to claim that, like many other creatures, we human beings transform our environments in ways that enable us to do things that would prove difficult or even impossible without such transformations. As Andy Clark has put it, we make the world smart so we don’t have to be. But if this is true, and objects like iphones and tablets can indeed play an active, driving role in our thinking, then maybe such objects are not simply tools. Maybe our thinking can in fact extend to include such objects. Put another way, perhaps my iphone can be as much a part of the machinery of my mind as anything inside my skull. While it may have a modern gloss, this is an old idea. In philosophy circles, it is now called The Extended Mind.

But what exactly is the mind?

I like to read popular accounts of neuroscience. But I would be lying if I said I had anything other than a very basic grasp of brain function. Still, I often wonder, is the brain the mind? One reason to think not is that thoughts seem to be nothing like anything inside the head. For one thing, everything inside the head is spatially and temporally locatable. You can dissect the brain. You can even prize apart a neuron. Yet if you were to open up my skull and poke around in my brain, you would not find my imagined thought of, say, sitting on a beach in Greece.

However, this disconnect between the brain and the mind only seems to magnify if we accept that the mind can extend. For while iphones and tablets are obviously made of very different stuff from neurons and cellular structures, all are still spatially and temporally locatable. But then they are nothing like my imagined holiday in Greece. So, how can the mind include neurons, iphones, tablets and imagined holidays?

Lots of ink has of course been spilled answering this question. I won’t canvass such answers here. Rather, I am going to change tack again. For here is another puzzling claim.

Some have suggested that the very fact that we can extend the concept “mind” to include iphones and tablets is itself an indication that something is amiss with this concept. For once we allow that, under the right conditions, a person’s iphone can be mental, it seems to follow that, as long as those conditions are met, potentially any object a person uses could be mental. But if any object can be mental, then maybe there is in fact nothing substantial to the concept “mind”. Perhaps the very idea of a mind is a relic of a bygone era, a concept past its sell-by date, one that now needs to be eliminated if we are to have a proper, that is, scientific understanding of ourselves.

But consider just how radical this claim is. Eschewing talk of the mental from our everyday vocabulary would, at the very least, seem to entail a full-scale revision of how we understand one another. Indeed, it is hard to see how we could discard the concept “mind” without completely reframing the concept “human being”. Perhaps this is why this claim has so few admirers. But perhaps it should also cause us to reflect that maybe somewhere along the line, our thinking on these matters has become seriously confused.

A Wittgensteinian remedy

Ludwig Wittgenstein was an Austrian philosopher who, in his later work, devoted considerable attention to issues in philosophical psychology. For example, he remarked:

“I really do think with my pen, for my head often knows nothing of what my hand is writing.”  (C & V, p24e).

He thus seemed to agree with the idea that we can think by writing. However, Wittgenstein also had a very distinctive approach to philosophical problems. He thought that when we sit down to reflect on a topic, we are liable to fall victim to pictures contained within our language, that is, ways of thinking that convince us how something must be. As he put it, “the picture seems to spare us this work: it already points to a particular use. This is how it takes us in” (PPF, vii, 55). Such pictures often generate puzzles (paradoxes, confusions). According to Wittgenstein, the remedy or cure for such puzzles is to remind ourselves of how we ordinarily use words, that is, “bring words back from their metaphysical to their everyday use.” (PI 116)

So, for example, it does seem plausible to claim that thoughts have a location. After all, it is I who imagines sitting on a beach in Greece and I clearly do have a spatial and temporal location. But if I combine this claim with the further claim that thoughts can’t have a location in the same sense as, say, neurons (if you open up my skull, you won’t find my imagined holiday), then it now seems I need to posit some sort of medium within which thoughts are stored. Yet this leaves me with a real puzzle. For the problem I hoped to avoid by positing such a medium (where are my thoughts?) now simply transfers to the medium itself. For where exactly is this medium located? More worrisome, how can it be located anywhere, since it needs to store things like thoughts, which are so unlike other spatially and temporally located objects?

The Wittgensteinian diagnosis, I think, would be that this puzzle emerges because we are in the grip of a picture, the picture that all words (what Wittgenstein calls substantives) denote things. We think that the word “thought” must denote a thing, something that I have. Since things tend to have locations, so too must thoughts, hence we need a medium within which to locate them. The remedy for this puzzle however is not to speculate further about this medium but rather to bring words back from their metaphysical use to their everyday use. Doing so will enable us to see that not all words work by denoting things and even when some words do work in this way, not all words denote things in the same way.

For example, I often say, “I am in two minds about this”, meaning that I feel uncertain about something. I have also said, “I am losing my mind”, when I feel stressed or unduly anxious. Others have said to me, “mind your head”, meaning watch out for the low-hanging ceiling. I have watched my dad silently smooth down a surface and carefully apply a coat of paint. I thought his actions “thoughtful”, by which I simply meant that if I asked him why he was doing what he was doing, he could give me his reasons.

There is nothing exhaustive about these examples. I’m sure you could provide me with many more. Nonetheless, I think they are illustrative of the diverse work we do with words like “mind” or “thought”, a reminder that such words can be expressive or performative, serve as part of a warning to someone else, or be how we characterise someone else’s behaviour. In which case, they often do not denote anything at all. And even if, on occasion, the word “mind” does seem to denote a thing (“my mind is full of foolish thoughts today”), this need not entail that it does so in an analogous way to how the word “table” denotes a table or how the word “writing” denotes writing. For while the concepts at work here may “touch ... and run side by side” (PPF 108), this need not ensure that they also overlap. To evoke one of Wittgenstein’s many phrases, there may instead be something like a ‘family resemblance’ among their uses.

Thus, what initially appeared to be a mystery about a medium turns out to in fact be a confusion about language, one which we remedy or cure by reminding ourselves of the diversity of language and thereby loosening the hold that a picture contained within our language has over us.

Wittgenstein and Extended Mind

Now, you might think: why does it matter what Wittgenstein said? One reason it matters, I think, is that it puts a very different spin on Extended Mind.

As we have seen, Wittgenstein seemingly had no problem with the idea that we can think by writing. I suspect he would also have had no problem with the idea that someone can think by using their iphone or typing on their tablet. However, I think he would have taken real issue with those who want to invest such everyday observations with a larger metaphysical significance.

My hunch is that Wittgenstein would criticise proponents of Extended Mind for having confused an issue about language with a puzzle about a medium (BB p6). For underpinning Extended Mind is a particular picture of thinking, i.e. the picture that thinking is an “auxiliary activity”, a stream which must be flowing beneath the surface of our actions (Z 107). While this picture appears attractive, it encourages the thought that there must be some medium that realises such activity. And once this is accepted, then it is but a short hop, skip and a jump to claims about mechanisms, such that we can now say that the machinery of the mind can extend to include iphones or tablets. The problem however is that we are now faced with a similar puzzle as before, namely how can a causal mechanism, that is, something which quite clearly is spatially and temporally locatable, realise what equally clearly is not spatially and temporally locatable, namely thought?

As is often the way with a question like this, a vast philosophical industry has emerged to solve it. Yet where others might recommend solutions, Wittgenstein instead recommends therapy. For he would likely regard this question as one that only emerges because we are in the grip of a picture. If however we were to reject that picture, then we can also reject the question that this picture seems to generate. And we reject that picture by reminding ourselves that:

“‘Thinking’ is a widely ramified concept. A concept that comprises many manifestations of life. The phenomena of thinking are widely scattered.” (Z 110)

In other words, we remind ourselves of the sorts of work we do with the word “think”. By doing so, we will see that our everyday use of this word is not motivated by a reference to accompanying causal mechanisms. To return to a previous example, when I called my dad’s actions “thoughtful”, this word had meaning, not because it referenced some state or process currently inside or outside my father’s head, but because of the particular work the word does in that context, namely to characterise his behaviour. That is, what matters is the ‘language-game’, the ‘hurly burly’ within which words are spun and cast, not some underlying medium. This reminder should help free us from conceiving of thinking as an auxiliary activity, something flowing beneath the surface of our actions.

Thus, while you can think by writing, by using your iphone, or even by typing on your tablet, none of these commonplace observations need offer support (even indirectly) for metaphysical claims about the mind. Hence, contrary to proponents of Extended Mind, these observations need not be taken as evidence of extending mechanisms. And none of this need be viewed as a reason to eliminate the concept “mind”.

Internalisation: what exactly is it good for?


Here’s the thing. We need to know what happens inside people’s heads. Damage our brains and this will almost certainly affect our mental and bodily health. Yet looking inside people’s heads won’t tell us all we want to know about the mind. Crucially, in order to understand what we mean when we talk about mentality or cognition, we need to instead examine the sorts of things people say and do and the types of practices and contexts that shape people’s behaviours. Calling yourself an enactivist means signing up to this idea.

I’m just back from an excellent workshop on enactivism organized by Fred Muller at Erasmus University Rotterdam. Catarina Dutilh Novaes (University of Groningen) and Karim Zahidi (University of Antwerp) both gave excellent presentations which, in very different ways, raised the vexed issue of internalization, the idea that we internalize what is initially an external phenomenon.

For example, Catarina (https://sites.google.com/site/catarinadutilhnovaes/papers) defended her view that mathematical proof is a dialogical notion. The point of a proof, she claimed, is explanatory persuasion. Proofs can thus be characterized as having two participants with opposing goals: on the one hand, the prover, whose goal is to establish a conclusion, and on the other hand, the skeptic, whose goal (predictably enough) is to block that conclusion. Yet why don’t proofs look like dialogues? This is because the job of the skeptic has become part of the method of writing a proof. I understood this to mean that, according to Catarina, the role of the skeptic has been internalized during the formulating and writing of the proof.

In his talk, Karim (https://www.academia.edu/16884109/Radically_Enactive_Numerical_Cognition) defended an enactive account of numerical concepts. First, he laid out his view that concepts are particular types of abilities. Second, he challenged, among other things, the claim that a great variety of animal species have numerical abilities and numerical concepts. For example, he distinguished number sensitivity from number concept possession, and argued that while many animals certainly have the former, it is much less clear which creatures, apart from humans, actually have the latter. He also offered a direct perception account of the ability to sum sequences of stimuli across different modalities, as well as describing how one could begin to give a natural history of number concepts.

Karim also discussed calculation. He argued that calculation is an activity constituted by the manipulation of public representations (symbols on a page, for example). Mental calculation, on the other hand, is what occurs when we leave these public representations out. So even if something is in fact internalized when we mentally calculate, this something is not representational. A question which then came up in discussion was: what then is internalized?

As a card-carrying enactivist, my own view is that talk of internalization, no matter from what quarter, is deeply problematic. But it occurs to me (as it has occurred to others) that ‘going wide’ about mentality will always meet resistance as long as it is thought that some mental phenomena just have to be internal. Mental calculation looks to be a prime example of this. The later Wittgenstein, however, can help enactivists fight this resistance.

[Image: Your man himself, as they say in Northern Ireland: Ludwig Wittgenstein.]

Consider one of Wittgenstein’s many thought experiments:

“Let us imagine a god creating a country instantaneously in the middle of the wilderness, which exists for two minutes and is an exact reproduction of a part of England, with everything going on there in two minutes. Just like those in England, the people are pursuing a variety of occupations. Children are in school. Some people are doing mathematics. Now let us contemplate the activity of some humans during these two minutes. One is doing exactly what a mathematician in England is doing, who is just doing a calculation. – Ought we to say that this two-minute-man is calculating? Could we for example not imagine a past and a continuation of these two minutes, which would make us call the processes something quite different?” (Remarks on the Foundations of Mathematics, VI -34)

Klagge’s (1995) description of Wittgenstein is helpful here:

“By pressing the question of what various human activities consist in, Wittgenstein hopes to demystify the mental – not by denying its existence, but by diagnosing and transcending our conception of it as an invisible reservoir. Instead of looking within the person, at the moment, for (the essence of) what constitutes, e.g. intending, expecting, or reading, we should concentrate on what leads to, surrounds, and follows from the experiences and movements with which we usually associate the activity.” (p472)

The idea then is this: when we wonder what it is that makes a given activity calculating, we encounter the primitive notion that there must be something occurring now (usually in our heads) that somehow infuses that activity with ‘calculating qualities’. Yet as Wittgenstein’s two-minute-man illustrates, strip away all context, past and future, and it becomes evident that nothing happening now (in the head, in the body, or even in the environment) makes an activity an example of calculation.

The later Wittgenstein also discusses at length the differences between calculation and mental calculation. For Wittgenstein, calculating with pen and paper and mental calculation are distinguished by the different things people do and say. This of course renders the difference between these two activities one of behaviour. The twist Wittgenstein adds is that the concepts ‘mental calculation’ or ‘calculating in the head’, like many other concepts, are not about behaviour. As is well known, Wittgenstein understands the link between some concepts and behaviours as logical or grammatical. One way to understand this is to say that some concepts characterize behaviour in the sense of situating that behaviour within a particular historical socio-cultural practice. Concepts, moreover, can have primary and secondary uses. The secondary use shares important resemblances (‘looks’) with the primary use. Fogelin (1977) understands Wittgenstein to be saying that this is true of calculation, such that ‘mental calculation’ or ‘calculating in the head’ are secondary uses of the primary concept of calculation.

Together, these points – anti-presentism, the logico-grammatical link between concepts and behaviour, primary and secondary uses of concepts – provide a means to deny that mental calculation is ever internal. So, for example, when I am asked to compute a sum in my head, and in response, I furrow my brow, look pensive and give an answer, nothing that has happened inside my head at that moment explains what I just did. This is not to deny that changes may have occurred in my brain (or in my body) at that moment. What it does deny, however, is that such changes capture why what I just did can be called calculating. Rather, my action counts as a case of mental calculation because the secondary use of the concept of calculation is applicable to it. This secondary use characterizes my action, not through the identification of some narrow characteristic happening in me or near me at that moment, but by situating my action within the appropriate set of broader practices and techniques. This is a robustly naturalist account – it is firmly rooted in what we do and say when we calculate.

Of course, this is only the beginnings of such an account. Much more still needs to be said. Still, it confirms the merit in asking: internalisation, what exactly is it good for? The answer, I would suggest, is not much. Which raises the further question: how far can we apply this anti-internalisation agenda? To all mental phenomena? What about, say, dreams? I suspect that killing cognitivism once and for all is going to require us enactivists to get really radical…


Removing the mind from the head. Part 1.

[The following post is the first in a series which will describe my philosophical project for the Research Foundation Flanders (FWO).]


What is the mind? Is it a thing? Can it be located? Many philosophers have endorsed the distinct but related ideas that the mind is a thing and that it is located inside your head. The standard view seems to be that the mind is identical to the brain or, more simply, that the mind is the brain. Not everyone is convinced, however.

For example, philosophy of mind and cognitive science have recently seen the emergence of two paradigms that challenge mind/brain identity. The first, enactivism (Varela, Thompson and Rosch, 1991), contends that minds are “enacted”. You think, believe, desire etc in the ways that you do because of the activities you engage in. The second, extended mind (Clark and Chalmers, 1998), claims that, under certain circumstances, minds can be “extended”. That is, the states or processes that make up a mind can, on occasion, be partially located in the environment.

Given that both paradigms challenge the standard view, it is perhaps not surprising that both have come under sustained attack. This has led to a heated back-and-forth with, on the one side, internalists, who defend the orthodox position that the mind is an internal, brain-bound phenomenon, and on the other, externalists, who counter that the mind is variously an enacted phenomenon (enactivism) or an internal-external phenomenon (extended mind). At present, the literature remains divided, with an apparent standoff between the two sides. All of which raises the question: can this internalist/externalist issue actually be resolved?

In this post, I will consider one strategy that has been favoured by both internalists and externalists alike. I will suggest that consideration of this strategy indicates that we should be sceptical that this internalist/externalist issue can in fact be resolved.

The strategy in question involves appealing to what I am going to call ‘special mechanisms’. Special mechanisms are causal states or processes that constitute or compose a given cognitive and/or mental state or process. Interestingly, appeals to such mechanisms have been used to vindicate both internalist and externalist positions.

For example, Clark (2009) has argued that there are special mechanisms in the brain that demonstrate internalism about consciousness. He claims that consciousness depends on high speed (or high bandwidth) information processing. Such processing is brain-bound, according to Clark, because, first, this processing depends on the synchronous activation of neural populations in the brain (Singer, 2003), and, second, the body acts as a low-pass filter and so slows down the transfer of information (Eliasmith, 2008). If consciousness is dependent on high-speed information transfer and, as a matter of contingent fact, this can only occur inside the brain, then the states or processes that realise or make up consciousness do not extend outside the skull. Simply put, consciousness is an internal phenomenon.

However, Thompson and Varela (2001) use similar empirical data i.e. data about the synchronisation and de-synchronisation of oscillating neural populations, to claim that consciousness is in fact an interactive, brain-body-world affair. They argue that brain mechanisms are “a paradigmatic example of self-organisation” (ibid, p419). That is, they understand the synchronous activation of neural populations in the brain, not in terms of the binding together of bodies of information, but rather as a self-organising emergent feature of the brain. They claim that emergence through self-organisation entails that neural, bodily and worldly elements can interact to produce emergent global organism-environment processes. Consciousness is consequently an emergent phenomenon, one that “cut[s] across brain-body-world divisions” (ibid, pp421-424). In other words, consciousness is external.

I offer this brief comparison between Clark and Thompson and Varela to illustrate the point that the same empirical data, e.g. data about neural synchrony, can be used to defend opposing views on consciousness. Yet if appeals to such data can support such opposing views, then arguably this data cannot be used to settle the internalist/externalist debate.

(A further example of the same back-and-forth about special mechanisms could be recent Predictive Coding accounts of the brain, since such accounts have been used both to vindicate an externalist position (e.g. Clark, 2015) and various internalist positions (e.g. Milkowski, 2015; Gladziejewski, 2015)).

This offers us grounds to think that appeals to special mechanisms are not decisive in the internalist/externalist debate. I think this should encourage a certain scepticism that the debate can be resolved. But we can motivate this scepticism further by considering another means of settling this debate.

For one might take the alternative view that rather than going ‘top down’ – that is, from the mind down to the causal states or processes that accompany mentality – we should instead go ‘bottom up’ – that is, from the necessary and sufficient conditions needed to regard any state or process as mental or cognitive to the mind itself. This would then enable us to demarcate the boundaries of the mind and so answer the question as to where the mind is located. I shall consider this view in my next blog post. However, as with the appeal to special mechanisms, I shall argue that this view encounters its own set of problems.

Enactivism and Endoscopes

What you do determines what you perceive. If I close my eyes, my world goes dark. If I move from a darkened room to a room filled with light, what I see changes, sometimes dramatically. However, what you do may in fact also ‘constitute’ (be the realiser of) what you perceive. This claim is much more controversial, at least within the confines of philosophy. It is a shibboleth within much contemporary philosophy of mind that perception must be representational, that is, that in order to perceive, you must first have representational states in your mind or brain that ‘stand in’ or ‘stand for’ some feature of the world.

Enactivists, however, demur. For enactivists, mind and experience are not heady affairs. Perceiving (and thinking, feeling, even imagining) are all things we do, rather than things that happen inside of us. And whilst enactivists disagree about how to characterize the enacted nature of perception (some think it involves know-how, others think it a fully embodied and embedded affair), they all insist that understanding perception requires focusing on action.

So what, you might think. This is just a lot of philosophical talk. Yet the reach of enactivism extends far beyond the doors of the academy. A case in point is the following.

During a colonoscopy, the patient lies on their side and the doctor inserts the endoscope into the patient. The progress of the scope is, counter-intuitively, guided not by looking directly at the patient but by monitoring changes to an image on a screen. Junior doctors, however, often have great difficulty making sense of what they are seeing on the screen, since the image can be both inverted and reversed at different times. Moreover, the doctor has to learn how their physical manipulation of the endoscope affects the image on the screen, rather than how that manipulation affects the progress of the scope inside the patient.

Enactivism could help explain this difficulty. On the enactivist account, in learning how their physical manipulation of the endoscope affects the image on the screen, the doctor has to learn a new set of what are called ‘sensorimotor contingencies’: lawful relations whereby what is perceived changes with bodily movement. These contingencies are unique to using an endoscope, hence the need for training. However, individuals with greater experience of manipulating images on screens (e.g. gamers) may learn these contingencies, and so learn to use endoscopes, faster. If so, then training with endoscopes could potentially be done with software alone; it need not be done in the presence of a patient.
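
To give a flavour of what a sensorimotor contingency might look like once written down, here is a deliberately toy sketch in Python. The names (ScopeState, screen_motion) and the numbers are made up for illustration; this is not a model of any real endoscope or training package. The point is simply that the mapping from hand movement to screen movement is lawful but unfamiliar (here, mirrored when the scope is looped), and that such a mapping can in principle be rehearsed in simulation.

```python
from dataclasses import dataclass

@dataclass
class ScopeState:
    insertion_cm: float = 0.0   # how far the (imaginary) scope has been advanced
    rotation_deg: float = 0.0   # torque applied at the handle
    looped: bool = False        # in this toy, a looped scope mirrors the image

def screen_motion(state: ScopeState, push_cm: float, twist_deg: float):
    """Toy sensorimotor contingency: given a manipulation of the scope,
    return how a target on the screen appears to move (dx, dy).
    The gains (0.1 and 1.0) are arbitrary illustrative values."""
    state.insertion_cm += push_cm
    state.rotation_deg += twist_deg
    dx, dy = 0.1 * twist_deg, 1.0 * push_cm
    if state.looped:
        dx, dy = -dx, -dy   # same hand movement, opposite screen motion
    return dx, dy

# A trainee could rehearse the unfamiliar mapping entirely in software:
scope = ScopeState(looped=True)
print(screen_motion(scope, push_cm=2.0, twist_deg=30.0))  # mirrored: (-3.0, -2.0)
```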

This illustrates how beneficial a philosophical idea like enactivism can be. If doctors can get better at using endoscopes without first having to train on patients, then patients need not be exposed to the sorts of difficulties that every doctor is likely to encounter when they first use an endoscope. Enactivism, in this instance, can be used to help improve patient care.

Why Jennifer Aniston is (probably) not hard-wired into your brain

Lots of money is currently being devoted to investigating the brain, e.g. the BRAIN Initiative and the Human Connectome Project. Like most people, I think the more we know about the brain the better. But more data about the brain is one thing. Clarifying what we are trying to investigate is another. For example, what, exactly, does the brain do? The prevailing view seems to be that the brain is an information processor. The many cells in your brain (neurons and the like) engage in highly complex patterns of electro-chemical signaling. This signaling is, in turn, understood in terms of information. Putting it crudely, brain cells shuttle information around the brain, and this is why you can think, feel, imagine, remember, etc. This, however, raises an important question: what do we mean when we talk about ‘information’? For not all notions of information are the same. Indeed, there may be good reasons to think that only some notions of information have fully paid-up naturalist credentials, that is, are compatible with brain science. And this has ramifications for how we understand what the brain does.

For example, some neuroscientists claim that there are “concept neurons” in the brain. These are neurons that signal in response to highly particular stimuli. The claim is that such neurons literally “encode […] a concept” (Koch, 2012, p. 65). For example, if you are a fan of Friends, then (an image of? the idea of?) Jennifer Aniston may be literally hard-wired into your brain.

However, the availability of different notions of information ensures that there is an alternative view. For one could instead think that what has been identified is an important statistical correlation between a certain stimulus and the firing of a particular neuron or neurons (which would still be a significant finding). Moreover, one could add that there is a conceptual gap between, on the one hand, a correlation between a stimulus and the activity of a neuron or neurons, and, on the other, the claim that the neuron or neurons actually encode informational content (that is, that the neuron or neurons store information about, or refer to, a particular stimulus). One could also add that there may be no easy way to close this gap. As the philosophers Dan Hutto and Erik Myin have argued, statistical correlations (or what are called informational covariances) do not give you informational content. This puts a heavy price tag on the idea of concept neurons. For if one is going to endorse this idea, then one is going to have to show how a statistical correlation demonstrates that cells in the brain actually store content (about, say, Jennifer Aniston).
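
As a rough illustration of the gap being pointed to, here is a small Python sketch with invented numbers (not data from any concept-neuron study). It simulates a neuron that fires more when a target image is shown and then computes the stimulus/spike-count correlation. The correlation, i.e. the informational covariance, is real and measurable; the further claim that the neuron stores content about Jennifer Aniston is not something the calculation itself delivers.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200
# 1 = the target image (say, Jennifer Aniston) is shown, 0 = some other image
stimulus = rng.integers(0, 2, n_trials)
# A 'selective' neuron: higher average spike count when the target image is shown
spikes = rng.poisson(lam=np.where(stimulus == 1, 8.0, 1.0))

# What the experiment yields is a statistical relationship: the spike count
# covaries with the stimulus category.
r = np.corrcoef(stimulus, spikes)[0, 1]
print(f"stimulus/spike-count correlation: {r:.2f}")

# Nothing in this calculation shows that the neuron stores or refers to
# content about the stimulus; that further step is precisely what Hutto and
# Myin argue cannot be read off the covariance.
```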

My point in discussing this is to show that understanding the brain as an information processor raises key conceptual difficulties. Resolving those difficulties is necessary if we are to determine what the brain does. Conceptual issues, whether you like them, loathe them, or just want to ignore them, matter. The problem, however, is that as brain technology becomes ever more sophisticated, we are apt to forget this. Perhaps more worryingly, these big brain projects may actually encourage us to forget this.

Words, Bloody Words

In arguing for the importance of conceptual issues, I sometimes hear the reply: “Well, that’s just semantics”. The objection (if it can be regarded as such) is then generalized: “Philosophers just play with words.” To which it is sometimes added: “Scientists, on the other hand, do the real work.” Variants of these phrases include: “Why should we care?” and “Where’s the evidence?”

There are, of course, many ways of responding to these phrases. But I’ve often wondered why some people find such phrases so appealing. My impression is that these phrases beguile because they suggest a certain picture of the world. According to this picture, the world seemingly divides into, on the one hand, stuff (e.g. the medium-sized entities you see around you – bodies, trees, etc.) and, on the other, the words we use to categorise that stuff (e.g. concepts like “person”, “human being”, “oak”, “elm”). This picture looks attractive because it suggests that while you or I may disagree about how exactly to categorise the stuff we see around us, there nonetheless remains some fact of the matter that can settle our (or anyone else’s) disagreement. Hence, arguments about concepts are just so much playing with words. What really counts is the stuff.

It is worth recognizing, however, that this is only a picture, and not a very sophisticated one at that. Indeed, this picture can be replaced. For one could take the alternative view that conceptual issues matter, not because words are all there is, but rather because our concepts shape and structure our engagements with the world around us and hence investigating that world inevitably spins us back upon our own concepts. One of the advantages of this alternative view is that it regards science and philosophy as mutual (and not sparring) partners. Bad philosophy can be rectified with good science. Equally, bad science can be rectified with good philosophy. None of this is to play with words. Rather it is to take our investigative commitments seriously.