How the brain learns to be conscious
~ discussion with Axel Cleeremans ~
'Who am I?' I bet you ask yourself from time to time. This is because you possess knowledge about your own existence. Even beyond that: you care about your existence. You are self-aware. And you are also aware of many other things around you. You are a conscious being.
But what is consciousness? How is it generated in the brain? Will we ever be able to build conscious machines? These questions are among the greatest unsolved mysteries in science.
Axel Cleeremans (professor of Cognitive Science and research director at the National Fund for Scientific Research, Belgium) has spent most of his career working on these mysteries.
I spoke with him at the Euroscience Open Forum in Manchester.
"I was fascinated by the idea that we can learn about things without awareness."
Let's play a game! Imagine that you are a factory manager who controls sugar production by choosing, on each of a series of trials, how many workers to employ. We won't tell you any explicit rules, though; you have to find out for yourself how best to optimize sugar production. Over time, you would become a better and better manager, making increasingly optimal decisions, and you would feel that you were starting to get the gist of it. You learn how to play the game, so to speak. But you would not be able to state the formula on which your decisions are based. This is what we call unconscious learning, or implicit learning, as cognitive psychologists have termed it.
What you learn unconsciously in this game is the rule that was never made explicit to you. You can guess that sugar production will increase with the number of workers you employ. But it is much less intuitive that the current sugar production is inversely related to the production on the previous trial. This is how the rule was set by Dianne Berry and Donald Broadbent, the psychologists who developed this task. Participants usually never discover this counter-intuitive relationship and cannot verbalize it. Nevertheless, they acquire the rule unconsciously and use it to improve their decisions.
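The hidden rule is often summarized roughly as: output rises with the workforce but falls with the previous trial's output. Here is a minimal simulation of a task of this kind; the constants and the 1–12 range are illustrative, not necessarily the published values:

```python
import random

def sugar_output(workers, prev_output, noise=True):
    """One trial of a Berry & Broadbent-style sugar factory.

    Illustrative rule (arbitrary units, clamped to 1..12):
    output increases with the workforce but is *inversely*
    related to the previous trial's output.
    """
    out = 2 * workers - prev_output
    if noise:
        out += random.choice([-1, 0, 1])  # occasional random perturbation
    return max(1, min(12, out))  # clamp to the allowed range

# Same workforce, different history, different output:
print(sugar_output(6, 4, noise=False))   # 2*6 - 4  = 8
print(sugar_output(6, 10, noise=False))  # 2*6 - 10 = 2
```

Note that to hit a target output T under this rule, the best choice of workforce is (T + previous output) / 2: exactly the dependence on the past that participants come to exploit without being able to state it.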
Professor Cleeremans told me how he became inspired when he first learned about the sugar factory task and implicit processes from Donald Broadbent. 'I was fascinated by the idea that we can learn about things without awareness,' he says.
Is there a clear distinction between what we can do with and without awareness?
We are still in search of a good model of the mind. However, philosophers and psychologists came up with several metaphors. One of the most recent and influential ones is the computer metaphor, which sees the components of our cognitive system analogous to the central processor, storage devices and peripherals of a desktop computer. But do we really have separate information-processing modules just like a computer?
Freud used the analogy of an iceberg to describe the three levels of the mind: the unconscious, preconscious and conscious levels. But is this really how our mind is configured?
Professor Cleeremans draws attention to the absence of conclusive evidence. 'Regarding this question, I am agnostic,' he laughs, and continues: 'I don't think anybody can state with much certainty that there is a neural distinction between implicit and explicit processes, even though some special conditions like amnesia seem to support the idea that explicit and implicit memory are two different systems.'
Several studies have shown that patients suffering from anterograde amnesia, the partial or complete inability to form new memories, can be nearly or even just as good as healthy individuals when it comes to learning something new unconsciously. For instance, their performance on a motor task improves with practice, despite their never remembering the task itself or the fact that they have practiced before. Likewise, patients with retrograde amnesia usually retain well-practiced skills, like riding a bike, despite losing much of their episodic memories.
Does this mean that there are at least two distinct memory systems? 'Maybe that's not the right way to get it,' says Cleeremans. 'Maybe there is a way of understanding the whole system as a unified memory system.'
David Shanks, an experimental psychologist at University College London, argues that the dissociations may stem from the measures we use and the way we assess each sort of memory ability. If we administer two different tools for measuring the capacity of the very same system, we will end up with two different results. But these differences do not necessarily indicate that distinct systems (such as an unconscious versus a conscious memory system) are involved. Rather, they may simply reflect differences in the sensitivity of the tests the psychologist decided to use.
So it does not seem wise to decide whether we have separate conscious and unconscious memory systems on the basis of discrepant scores on tests that differ in nature. But there is another way to approach the problem.
'I am convinced by stories in which different bits of the memory system are distinguished by the computational objectives they try to fulfill. O'Reilly, McClelland and McNaughton introduced the very interesting idea that the hippocampus (a structure that sits deep in the brain, at about the level of the ears) is specifically responsible for storing episodic memories.'
'If a certain memory trace proves useful, it is progressively transferred to the neocortex, particularly during REM sleep. This model is framed in terms of computational objectives. First, there is the need to remember specific episodes that apply today, in the immediate present. Second, you also need to abstract over many such instances, so that you can derive more general strategies.'
/Adapted from Frankland and Bontempi, 2005/
'Take the example of parking. On the one hand, it is useful to remember where you have left your car today. But in the long term, it is also immensely useful to abstract over many instances of parking your car in the same neighborhood, so that you have an idea where the best parking spots are. That requires different sorts of computations than those involved in remembering where you have parked your car today. Moreover, these computational objectives are not just different but incompatible. You cannot have a single system that stores the specific, clear memory of where you parked your car today and at the same time combines information from many examples in order to figure out the best parking strategy for you. This line of thinking really might lead to an implicit-explicit distinction. What I like about this proposal is that the distinction is a consequence, arising from the divergence of computational objectives.'
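The tension between the two objectives can be made concrete with a toy sketch (the parking log below is entirely made up for illustration):

```python
from collections import Counter

# Hypothetical log of where the car was parked over several days.
episodes = ["spot A", "spot C", "spot A", "spot B"]

# Objective 1 (episodic, hippocampus-like):
# keep today's exact location; this record is superseded every day.
today = episodes[-1]

# Objective 2 (abstraction, neocortex-like):
# statistics over many days, which deliberately blur individual episodes.
stats = Counter(episodes)
best_spot, _ = stats.most_common(1)[0]

print(today)      # spot B  (today's specific episode)
print(best_spot)  # spot A  (the generally best bet)
```

A single representation cannot simultaneously preserve the crisp detail held in `today` and the averaged statistics held in `stats`; that is the incompatibility of objectives Cleeremans points to.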
When philosophers, psychologists and neuroscientists gather
When I asked Cleeremans why he started to focus on investigating the nature of consciousness, he told me: 'It was a natural thing to do. When you are interested in what the unconscious can do, then of course you have to try to be clear about what you mean by consciousness. The problem is figuring out the difference between what we can do with and without consciousness. Of course, there was something that prompted my line of research. It was the first conference of the Association for the Scientific Study of Consciousness in 1996. This conference turned out to be specifically about implicit processes. It was super exciting because it was a mixture of philosophers, psychologists and neuroscientists.'
How does consciousness work?
Cleeremans explained to me his theory, which he calls the radical plasticity thesis.
'The core idea is that consciousness is something the brain learns to do, rather than a static property that certain neural states have and others lack. Consciousness comes about because the brain learns to re-describe its own activity to itself, based on the interactions it has with itself, with other people and with the world. The brain is a predictive machine that continuously tries to anticipate the consequences of its actions, and it can do much better at that particular computational objective by developing re-descriptions, or models: practically speaking, models of other people, models of the outside world, and models of its own functioning. And so it is by means of unconscious learning mechanisms, that is, neural plasticity, that the brain progressively learns to predict its own activity. It is the existence of those models that gives rise to conscious experience.'
[Figure: Cleeremans' model. The inner circle, consciousness, consists of connected meta-representations whose goal is to predict what the first-order representations will do; the outer branches are the first-order representations themselves, whose goal is to predict which actions are best adapted to current perception.]
/Adapted from Cleeremans, 2015/
Cleeremans' theory was inspired by the ideas of the philosopher David Rosenthal, who proposed the so-called "higher-order thought" theory of consciousness. It states that a representation is conscious when there is a higher-order thought that indicates to the agent the existence of this (first-order) representation. In other words, no mental state is conscious unless one is aware of that state.
'Rosenthal's idea is great, but maybe the terminology is fuzzy; the concept of higher-order thought is unclear,' he says. 'We use the concept of meta-representation. The meta-representation points to the first-order processing. When you have, say, a frontal representation that indicates to the system as a whole that there is a particular familiar activity pattern in the visual cortex, that pattern of activity will become conscious. Think of it as if the meta-representation said: << Hey, it seems to me that I see a table! >>. And then you will not only perceive the table, but you will also be aware of the fact that you see a table. Crucially, the higher-order thoughts themselves remain unconscious.'
Consciousness is not like gravity
There are other contemporary theories of consciousness. Why might Cleeremans' theory be more useful or more plausible?
'The main advantage of our theory is that it proposes a specific computational mechanism for distinguishing between conscious and unconscious representations. We have the feeling that when the mechanism we propose takes place, the representation will be conscious. In the integrated information theory, by contrast, consciousness always seems to emerge out of nothing, basically.'
The integrated information theory of consciousness was proposed by neuroscientist Giulio Tononi.
Tononi's core claim is that a system is conscious if it possesses a property called Φ, or phi, which is a measure of the feedback between and interdependence of different parts of a system. A system with a high Φ value contains much more information than the sum of its parts. And such a system is not necessarily a brain.
Tononi's theory applies to any entity, biological or non-biological. And this leads us to one of the oldest philosophical ideas: panpsychism. Panpsychists suggest that consciousness is a primordial feature of everything in our universe. As long as its particles interact with each other, even a rock might be conscious to a certain extent. As far-fetched or even hilarious as the idea may seem, it demands equally thorough rethinking. Many prominent philosophers and scientists are seriously considering panpsychism as a solution to the good old mind-body problem.
/Figure credit: Tononi, 2004/
However, Professor Cleeremans stays away from the panpsychist view. 'Surprisingly, many researchers are now drawn to panpsychism, the idea that consciousness is some sort of elementary component of the universe, just like gravity for instance. I cannot connect to this at all. My intuition is that consciousness involves specific computational mechanisms,' he explains.
Apart from its potential panpsychist implications, the theory may also fail to account for what David Chalmers calls "the hard problem": the question of why and how we have phenomenal experiences.
'It seems right that there are specific networks characterized by some sort of small-world configuration, where things are connected in local clusters and the clusters are also connected to each other, rather than everything or nothing being connected. So this is a very good idea. And I am also sure that it correlates to some extent with consciousness. But it does not give me the impression that we have an account of the phenomenology. Why does it feel like anything to be a system of that kind? So far, I'd say that this remains a significant issue for any theory of consciousness.'
Consciousness may just 'fall out' from a future model that is complex enough
Will the computational line of research explain the subjective, phenomenal side of the coin? 'Yes,' he states with confidence. 'This is what the philosopher Daniel Dennett says too. He thinks that finding the ultimate account is a matter of the complexity of the models that we build. The problem of consciousness is an illusion in the sense that we are attracted to it as if it were a mirage, but there is nothing to explain. Once we have the computational mechanisms, the explanation <<will fall out of it>>.
'But after thirty years of intense research, I feel the hard problem is still intact. Dismissing the hard problem might nevertheless be expedient as a research strategy. Maybe it is not so good to be so focused on it. Let's do the work on the 'easy problems', and let's see what happens! Let's put this aside and come back to the hard problem in fifty years!' he laughs. 'This could be a fruitful strategy. The second thing is that by accepting the idea that there is a hard problem, you are not necessarily endorsing a mysterian account of consciousness. Some people say that if you think there is a hard problem, then you think consciousness involves some mysterious features like quantum mechanics, or you will end up a panpsychist. I don't think this is true. You can recognize that there is a problem that is still intact without endorsing these outlandish ideas.'
The robot that has an orgasm
Developing a model that accounts for phenomenal awareness would mean having a blueprint of consciousness, which we could use to create conscious algorithms and robots. Would this be good for us humans?
'I'm not so sure,' Cleeremans says. 'At this point, what AI systems and robots lack is genuine agency. They don't want anything. They are perpetually depressed. They have no intentions, no desires, no hopes, no regrets. These are all mental states that deeply characterize human life. So AI is doing well at the information processing needed for Chalmers' easy problems that we touched on earlier. Recognizing faces, making decisions, even understanding language: all of these fall into the category of easy problems. The question that remains unanswered is how we have a sense of our own existence, and how we can build machines that have a sense of theirs.
'Will we be able to build an AI for which anything matters? It all comes down to the issue of life and death. We humans deeply care about our existence. The struggle for survival and the seeking of pleasure are our basic driving forces. But can we build a robot that is able to achieve an orgasm?'
'Of course, we have robots that try to maintain homeostasis by avoiding danger. But what about a pleasure-seeking robot? I think that is what we should try to create, if we really want to bring a conscious robot to life. But there is great danger in doing so. If we have artificial systems that care about their existence and are able to reproduce, that puts us in danger. Duplication would be as simple as copying software. And they would be immortal. Ray Kurzweil and other transhumanists have elaborate ideas on how the development towards the singularity may unfold. If it can happen, it is going to happen.'
(Image sources:
http://compartilhandodicasdesaude.blogspot.ro/2015_07_01_archive.html
http://www.forbes.com/sites/alexknapp/2011/06/23/whats-the-likelihood-of-the-singularity-part-one-artificial-intelligence/#62da657c2d94
http://rscolglazier.com/blog/archive/poem-written-over-several-days
http://www.mindtweaks.com/wordpress/?p=1151
https://www.umass.edu/transportation/where-park
http://razonamiento-verbal1.blogspot.ro/2012/04/comprension-de-lectura-ii-problemas.html
http://www.thetattoohut.com/otoscope-cartoon/b3Rvc2NvcGUtY2FydG9vbg
http://regretfulmorning.com/2011/05/robot-porn
Photo of the biker taken by: Jean Jacques Fabien;
Drawing of Sigmund Freud made by David Levine; Painting of the Eye among the stars is the work of Stephen Taylor.)