Removing the mind from the head. Part 1.

[The following post is the first in a series which will describe my philosophical project for the Research Foundation Flanders (FWO).]

 

What is the mind? Is it a thing? Can it be located? Many philosophers have endorsed the distinct but related ideas that the mind is a thing and that it is located inside your head. The standard view seems to be that the mind is identical to the brain, or, more simply, that the mind is the brain. Not everyone is convinced, however.

For example, philosophy of mind and cognitive science have recently seen the emergence of two paradigms that challenge mind/brain identity. The first, enactivism (Varela, Thompson and Rosch, 1991), contends that minds are “enacted”. You think, believe, desire etc. in the ways that you do because of the activities you engage in. The second, extended mind (Clark and Chalmers, 1998), claims that, under certain circumstances, minds can be “extended”. That is, the states or processes that make up a mind can, on occasion, be partially located in the environment.

Given that both paradigms challenge the standard view, it is perhaps not surprising that both have come under sustained attack. This has led to a heated back-and-forth with, on one side, internalists, who defend the orthodox position that the mind is an internal, brain-bound phenomenon, and, on the other, externalists, who counter that the mind is variously an enacted phenomenon (enactivism) or an internal-external phenomenon (extended mind). At present, the literature remains divided, with an apparent standoff between the two sides. All of which raises the question: can this internalist/externalist issue actually be resolved?

In this post, I will consider one strategy that has been favoured by internalists and externalists alike. I will suggest that consideration of this strategy indicates that we should be sceptical that the internalist/externalist issue can in fact be resolved.

The strategy in question involves appealing to what I am going to call ‘special mechanisms’. Special mechanisms are causal states or processes that constitute or compose a given cognitive and/or mental state or process. Interestingly, appeals to such mechanisms have been used to vindicate both internalist and externalist positions.

For example, Clark (2009) has argued that there are special mechanisms in the brain that demonstrate internalism about consciousness. He claims that consciousness depends on high-speed (or high-bandwidth) information processing. Such processing is brain-bound, according to Clark, because, first, this processing depends on the synchronous activation of neural populations in the brain (Singer, 2003), and, second, the body acts as a low-pass filter and so slows down the transfer of information (Eliasmith, 2008). If consciousness depends on high-speed information transfer and, as a matter of contingent fact, such transfer can only occur inside the brain, then the states or processes that realise or make up consciousness do not extend outside the skull. Simply put, consciousness is an internal phenomenon.
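(For readers who like a concrete illustration: the contrast Clark draws can be sketched, very roughly, in code. The toy Python example below is my own and is not Clark’s or Eliasmith’s model; it simply treats the body as a smoothing channel and shows that such a channel passes slow signals largely intact while heavily attenuating fast, high-bandwidth ones.)

```python
# A toy illustration (not Clark's or Eliasmith's model) of the "body as a
# low-pass filter" idea: a channel that smooths its input passes slow
# signals largely intact but attenuates fast, high-bandwidth ones.
import numpy as np

fs = 1000                                   # sampling rate in Hz (arbitrary)
t = np.arange(0, 1, 1 / fs)
slow = np.sin(2 * np.pi * 2 * t)            # 2 Hz component
fast = np.sin(2 * np.pi * 80 * t)           # 80 Hz, "high-bandwidth" component
signal = slow + fast

def low_pass(x, alpha=0.05):
    """First-order exponential smoothing: a crude low-pass channel."""
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y

def band_power(x, f_lo, f_hi):
    """Power of x in the frequency band [f_lo, f_hi], via the FFT."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spectrum[(freqs >= f_lo) & (freqs <= f_hi)].sum()

filtered = low_pass(signal)
print("fast-band power before:", round(band_power(signal, 70, 90), 1))
print("fast-band power after: ", round(band_power(filtered, 70, 90), 1))
# The fast component is heavily attenuated; the slow one largely survives.
```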

However, Thompson and Varela (2001) use similar empirical data, i.e. data about the synchronisation and de-synchronisation of oscillating neural populations, to claim that consciousness is in fact an interactive, brain-body-world affair. They argue that brain mechanisms are “a paradigmatic example of self-organisation” (ibid., p. 419). That is, they understand the synchronous activation of neural populations in the brain not in terms of the binding together of bodies of information, but rather as a self-organising, emergent feature of the brain. They claim that emergence through self-organisation entails that neural, bodily and worldly elements can interact to produce emergent global organism-environment processes. Consciousness is consequently an emergent phenomenon, one that “cut[s] across brain-body-world divisions” (ibid., pp. 421-424). In other words, consciousness is external.
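(Again, a rough illustration may help. Self-organised synchrony of the kind Thompson and Varela appeal to is often illustrated with the Kuramoto model of coupled oscillators. The sketch below is my own toy example, not anything taken from their paper: a population of oscillators with different natural frequencies falls into a collective rhythm purely through local coupling, with no oscillator directing the rest.)

```python
# A toy Kuramoto model of coupled oscillators: a standard minimal
# illustration of synchrony emerging through self-organisation.
import numpy as np

rng = np.random.default_rng(1)
n = 100                                    # number of oscillators ("neurons")
K = 2.0                                    # coupling strength
dt = 0.01
omega = rng.normal(10.0, 1.0, n)           # natural frequencies (rad/s)
theta = rng.uniform(0, 2 * np.pi, n)       # random initial phases

def order_parameter(phases):
    """Degree of synchrony: ~0 = incoherent, 1 = fully phase-locked."""
    return np.abs(np.exp(1j * phases).mean())

print("synchrony before:", round(order_parameter(theta), 2))
for _ in range(5000):
    # Each oscillator is pulled toward the phases of the others.
    coupling = (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += (omega + coupling) * dt
print("synchrony after: ", round(order_parameter(theta), 2))
# No oscillator is "in charge": the collective rhythm emerges from local
# interactions, which is the sense of self-organisation at issue above.
```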

I offer this brief comparison between Clark, Thompson and Varela to illustrate the point that the same empirical data, e.g. data about neural synchrony, can be used to defend opposing views on consciousness. Yet if appeals to such data can support such opposing views, then arguably this data cannot be used to settle the internalist/externalist debate.

(A further example of the same back-and-forth about special mechanisms is provided by recent Predictive Coding accounts of the brain, since such accounts have been used both to vindicate an externalist position (e.g. Clark, 2015) and various internalist positions (e.g. Milkowski, 2015; Gladziejewski, 2015).)

This offers us grounds to think that appeals to special mechanisms are not decisive in the internalist/externalist debate. I think this should encourage a certain scepticism that the debate can be resolved. But we can motivate this scepticism further by considering another means of settling this debate.

For one might take the alternative view that rather than going ‘top down’ – that is, from the mind down to the causal states or processes that accompany mentality – we should instead go ‘bottom up’ – that is, from the necessary and sufficient conditions needed to regard any state or process as mental or cognitive to the mind itself. This would then enable us to demarcate the boundaries of the mind and so answer the question as to where the mind is located. I shall consider this view in my next blog post. However, as with the appeal to special mechanisms, I shall argue that this view encounters its own set of problems.

 

 

Enactivism and Endoscopes

What you do determines what you perceive. If I close my eyes, my world goes dark. If I move from a darkened room to a room filled with light, what I see changes, sometimes dramatically. However, what you do may in fact also ‘constitute’ (be the realiser of) what you perceive. This claim is much more controversial, at least within the confines of philosophy. It is a shibboleth within much contemporary philosophy of mind that perception must be representational, that is, in order to perceive, you must first have a representational state (or states) in your mind or brain that ‘stands in’ or ‘stands for’ some feature of the world.

Enactivists, however, demur. For enactivists, mind and experience are not heady affairs. Perceiving (and thinking, feeling, even imagining) are all things we do, rather than things that happen inside of us. And whilst enactivists disagree about how to characterize the enacted nature of perception (some think it involves know-how, others think it a fully embodied and embedded affair), they all insist that understanding perception requires focusing on action.

So what, you might think. This is just a lot of philosophical talk. Yet the reach of enactivism extends far beyond the doors of the academy. A case in point is the following.

During a colonoscopy, the patient lies on their side and the doctor inserts the endoscope into the patient. The progress of the scope is, counter-intuitively, monitored not by looking directly at the patient, but by watching changes to an image on a screen. Junior doctors, however, often have great difficulty making sense of what they are seeing on the screen, since the image can be both inverted and reversed at different times. Moreover, the doctor has to learn how their physical manipulation of the endoscope affects the image on the screen, not how it affects the progress of the scope inside the patient.

Enactivism could help explain this difficulty. The doctor has to learn how their physical manipulation of the endoscope affects the image on the screen. The enactivist explanation is that the doctor has to learn a new set of what are called ‘sensorimotor contingencies’: lawful relations whereby perception changes with bodily movement. These contingencies are unique to the use of an endoscope, hence the need for training. However, individuals with greater experience of manipulating images on screens (e.g. gamers) may learn how to use endoscopes faster. If so, then training with endoscopes could potentially be done with software alone; it need not be done in the presence of a patient.
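(To make the software-only suggestion a little more concrete, here is a minimal, entirely hypothetical sketch in Python. It is not an existing endoscopy trainer; it simply models the screen as responding to scope manipulations through a fixed but initially unknown mapping, which is the kind of sensorimotor contingency the trainee has to learn.)

```python
# A hypothetical sketch of a software-only trainer. The "screen" responds to
# scope movements through a hidden mapping (possibly mirrored), and the
# trainee learns the contingency by probing and observing.
from dataclasses import dataclass

@dataclass
class ScopeSimulator:
    # The hidden contingency: does pushing the control one way move the
    # on-screen image the same way (+1) or the opposite way (-1)?
    flip_x: int = -1   # image is mirrored horizontally
    flip_y: int = 1
    x: int = 0
    y: int = 0

    def manipulate(self, dx: int, dy: int) -> tuple[int, int]:
        """Apply a scope movement and return the new on-screen position."""
        self.x += self.flip_x * dx
        self.y += self.flip_y * dy
        return self.x, self.y

sim = ScopeSimulator()
target = (3, 2)  # where the trainee wants the image to end up

# The trainee discovers the contingency by trial and error: probe each axis,
# observe how the screen responds, undo the probe, then act accordingly.
probe_x, _ = sim.manipulate(1, 0)
learned_flip_x = 1 if probe_x > 0 else -1
sim.manipulate(-1, 0)
_, probe_y = sim.manipulate(0, 1)
learned_flip_y = 1 if probe_y > 0 else -1
sim.manipulate(0, -1)

sim.manipulate(learned_flip_x * target[0], learned_flip_y * target[1])
print("reached:", (sim.x, sim.y), "target:", target)
```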

This illustrates how beneficial a philosophical idea like enactivism can be. If doctors can get better at using endoscopes without first having to train on patients, then patients need not be exposed to the sorts of difficulties that every doctor is likely to encounter when they first use an endoscope. Enactivism, in this instance, can be used to help improve patient care.

Why Jennifer Aniston is (probably) not hard-wired into your brain

Lots of money is currently being devoted to investigating the brain, e.g. the Brain Initiative, the Connectome Project. Like most people, I think the more we know about the brain the better. But more data about the brain is one thing. Clarifying what we are trying to investigate is another. For example, what, exactly, does the brain do? The prevailing view seems to be that the brain is an information processor. The many cells in your brain (neurons and the like) engage in highly complex patterns of electro-chemical signaling. This signaling is, in turn, understood in terms of information. Putting it crudely, brain cells shuttle information around the brain, and this is why you can think, feel, imagine, remember etc. This, however, raises an important question: what do we mean when we talk about ‘information’? For not all notions of information are the same. Indeed, there may be good reasons to think that only some notions of information actually have fully paid-up naturalist credentials, that is, are compatible with brain science. And this has ramifications for how we understand what the brain does.

For example, some neuroscientists claim that there are “concept neurons” in the brain. These are neurons that signal in response to highly particular stimuli. The claim is that such neurons literally “encode […] a concept” (Koch, 2012, p. 65). For example, if you are a fan of Friends, then (an image of? the idea of?) Jennifer Aniston may be literally hard-wired into your brain. However, the availability of different notions of information ensures that there is an alternative view. For one could instead think that what has been identified is an important statistical correlation between a certain stimulus and the firing of a particular neuron or neurons (which would still be a significant finding). Moreover, one could add that there is a conceptual gap between, on the one hand, a correlation between a stimulus and the activity of a neuron or neurons, and, on the other, the claim that the neuron or neurons actually encode informational content (that is, that they store information about, or refer to, a particular stimulus). One could also add that there may be no easy way to close this gap. As the philosophers Dan Hutto and Erik Myin have argued, statistical correlations (or what are called informational covariances) do not give you informational content. This puts a heavy price tag on the idea of concept neurons. For if one is going to endorse this idea, then one is going to have to show how a statistical correlation demonstrates that cells in the brain actually store content (about, say, Jennifer Aniston).
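(The gap at issue can be made vivid with a toy simulation. The Python sketch below is my own illustration, not the actual experimental analysis: a simulated neuron’s firing covaries reliably with the presence of a particular face, and that covariance supports prediction, but the statistics deliver only a correlation. Whether they also license talk of stored content is precisely what is in dispute.)

```python
# A toy illustration of the covariance/content gap. Simulated spike counts
# from a single neuron correlate strongly with the presence of a particular
# face, but what the analysis delivers is a statistic, not a stored concept.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200
face_shown = rng.integers(0, 2, n_trials)      # 1 = the face is shown
# Baseline firing ~2 spikes per trial, rising to ~10 when the face is shown.
spike_counts = rng.poisson(2 + 8 * face_shown)

r = np.corrcoef(face_shown, spike_counts)[0, 1]
print(f"stimulus-firing correlation: r = {r:.2f}")

# The correlation licenses prediction (reliable covariance)...
predicted = (spike_counts > 6).astype(int)
accuracy = (predicted == face_shown).mean()
print(f"decoding accuracy from a simple threshold: {accuracy:.0%}")

# ...but nothing in r or in the decoder shows that the neuron *stores content
# about* the face. That further, semantic claim is what the "concept neuron"
# interpretation needs and what the statistics alone do not provide.
```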

My point in discussing this is to show that understanding the brain as an information processor raises key conceptual difficulties. Resolving those difficulties is needed if we are to determine what the brain does. Conceptual issues, whether you like them, loathe them, or just want to ignore them, matter. The problem, however, is that as brain technology becomes ever more sophisticated, we are apt to forget this. Perhaps more worryingly, these big brain projects may actually encourage us to forget it.

Words, Bloody Words

In arguing for the importance of conceptual issues, I sometimes hear the reply: “Well, that’s just semantics”. The objection (if it can be regarded as such) is then generalized: “Philosophers just play with words.” To which it is sometimes added: “Scientists, on the other hand, do the real work.” Variants of these phrases include: “Why should we care?” and “Where’s the evidence?”

There are, of course, many ways of responding to these phrases. But I’ve often wondered why some people find such phrases so appealing. My impression is that these phrases beguile because they suggest a certain picture of the world. According to this picture, the world seemingly divides into, on the one hand, stuff (e.g. the medium-sized entities you see around you – bodies, trees, etc.) and, on the other, the words we use to categorise that stuff (e.g. concepts like “person”, “human being”, “oak”, “elm”). This picture looks attractive because it suggests that while you or I may disagree about how exactly to categorise the stuff we see around us, there nonetheless remains some fact of the matter that can settle our (or anyone else’s) disagreement. Hence, arguments about concepts are just so much playing with words. What really counts is the stuff.

It is worth recognizing, however, that this is only a picture, and not a very sophisticated one at that. Indeed, this picture can be replaced. For one could take the alternative view that conceptual issues matter, not because words are all there is, but rather because our concepts shape and structure our engagements with the world around us and hence investigating that world inevitably spins us back upon our own concepts. One of the advantages of this alternative view is that it regards science and philosophy as mutual (and not sparring) partners. Bad philosophy can be rectified with good science. Equally, bad science can be rectified with good philosophy. None of this is to play with words. Rather it is to take our investigative commitments seriously.