
Thursday, February 23, 2012

The Mind-Body Problem of Consciousness


A Theoretical Essay: The Mind-Body Problem of Consciousness
By Jodie Stewart (2011) ©
One of the most debated and stubbornly ambiguous terms society has sought to understand is the nature of consciousness. It is one of the most perplexing areas of philosophy, and the most complex aspect of the mind-body problem, as human beings strive to understand the subtleties of existence. For centuries, humankind has sought a definitive meaning for ‘consciousness’, proposing a range of forms and types and asking questions such as: What is consciousness? Is consciousness purely subjective, accessible only from a first-person point of view? How do human beings recognise that a conscious mental state correlates with the body? The contemporary literature holds that consciousness refers to the subjective awareness of mental events and serves at least two functions: monitoring the self and the environment, and controlling thought and behaviour (Westen, Burton, & Kowalski, 2006). Providing a definitive account of the mental state of consciousness has proven problematic over time, and it remains one of the most essential topics in the study of the mind.
The term ‘consciousness’ conjures up images of vastly complex mental activities such as self-realisation and self-analysis, and perhaps also something more phenomenal, such as seeing the colour red or feeling pain. In this sense, consciousness corresponds closely to ‘awareness’ and ‘experience’. Sigmund Freud made clear that much of emotional life is unconscious, and that the emotions stirring within us do not always bridge the gap into awareness (Goleman, 1995), or into our ‘consciousness’. According to Carruthers (2000), whether there can be unconscious experiences depends on what ‘experience’ is taken to mean. Many different theories of consciousness have been proposed, and an abundance of material has been written on the matter throughout human history, yet a clear definition remains elusive.
The two broad traditional theories of the mind-body problem are dualism and materialism. Although there are many versions of each, the principal idea of dualism is that the mind and body are different substances, independent of one another, whereas materialism holds that everything that exists is physical (Kalat, 2009). The history of dualism extends back to the French philosopher René Descartes, who defended the idea that the mind and brain interact at a point in space. Interestingly, this notion conflicts with the laws of physics: the total amount of matter and energy in the universe has been fixed since the big bang, so a non-physical mind could not alter the activity of a physical brain without violating that principle (Kalat, 2009).
According to Kalat (2009), several subtypes come under the umbrella of monism, the view that everything on earth and in the universe consists of only one kind of substance. These subtypes include the materialist position, that everything that exists is physical; the mentalist position, that only the mind truly exists and the physical world exists only insofar as some mind is aware of it; and the identity position, that mental processes and brain activity are the same thing, described in different terms, so that all subjective experiences stem from some kind of brain activity (Kalat, 2009).
Many psychologists have argued that many animal species are capable of subjective experiences, and more is known today about the biological similarities between humans and other animals, such as brain structure and DNA, than ever before (Gennaro, 2005). There are, of course, many similarities and differences; however, most philosophers accept that a large portion of the animal world is capable of perceptual states of consciousness (Carruthers, 2000), such as self-recognition and feeling pain.
Self-recognition in animals was examined in Gallup’s (1977) study of orangutans and chimpanzees. Gallup (1977) aimed to determine whether these animals could recognise themselves by giving them extended exposure to mirrors and then placing a mark on each animal that was identifiable only when viewed in the mirror. The animals displayed self-directed behaviour and self-recognition after only a few days; however, the ability to recognise oneself did appear to be affected by early social relationships (Gallup, 1977).
In a study of the neurobehavioural nature of fishes with reference to awareness and pain, Rose (2002) examined whether fish are capable of pain and suffering. In a neurobehavioural sense, the common assumption is that fish suffer pain much as humans do (Bateson, 1992; Gregory, 1999). Surprisingly, however, Rose (2002) found that fish lack the brain structures that make the conscious experience of pain possible in human beings, in whom the regions that generate awareness of pain lie in the cerebral cortex; these regions are present in humans but appear to be absent in fish (Rose, 2002). So although it is unlikely that fish consciously experience pain and fear, they do display non-conscious physiological stress responses to harmful stimuli. While it is plausible to believe animals can and do display perceptual states of consciousness, such as feeling pain, showing a fear response, and recognising oneself, the extent to which these occur is still under investigation. According to Kalat (2009), none of us knows for certain whether another species, or even another human being, is conscious, because consciousness cannot be directly observed.
Owing to remarkable advances in technology, scientists and philosophers alike have marvelled at the possibility of creating a mechanical or physical system that is truly conscious and able to perform all the behavioural functions and relevant tasks of an animal or human mind. For decades, philosophers have been fascinated by the question: could a conscious robot ever be created? Could a robot or machine have subjective experiences, such as seeing a colour and recognising it, or feeling a stab of pain and recognising it as a painful stimulus? Computers are unlike any other technology, and scientists, philosophers, neurobiologists and non-scientists alike believe a conscious machine may someday be created as the intelligent behaviour of computers continues to progress (Koch & Tononi, 2011). As mentioned earlier in this essay, consciousness refers to the subjective awareness of mental events and serves at least two main functions: monitoring the self and the environment, and controlling thought and behaviour (Westen, Burton, & Kowalski, 2006). By this rationale, a conscious robot could be created if the machine could genuinely understand, and be aware of, its experiences and environment.
In a thought experiment by John Searle (1980), known as the Chinese room argument, Searle imagined following English instructions for manipulating Chinese symbols in order to answer questions posed in Chinese. Searle (1980) concluded that computers simply follow a program, and therefore a machine does not really understand anything the way a human brain does. From the outside, an onlooker would think Searle could read and understand Chinese; in fact, he completed the task on syntax alone, by manipulating and arranging the symbols. Although the argument has attracted much criticism, Searle (1980) maintained that a machine cannot generate meaning from symbols, or understand how they relate to one another, the way the neural networks of the human mind do.
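To make the point concrete, here is a minimal sketch in Python (not from Searle’s paper; the rule book and symbols are invented for illustration). The program answers questions by pure symbol lookup, exactly the kind of rule-following Searle describes, and nothing in it understands Chinese:

```python
# A toy "Chinese room": the program maps symbol strings to symbol strings
# by rule alone. Nothing here understands Chinese; it only matches shapes.
# (Illustrative sketch only; the rule book and symbols are invented.)

RULE_BOOK = {
    "你好吗？": "我很好。",      # "How are you?" -> "I am well."
    "你会说中文吗？": "会。",    # "Do you speak Chinese?" -> "Yes."
}

def chinese_room(question: str) -> str:
    """Return the scripted reply for a question, purely by symbol lookup."""
    return RULE_BOOK.get(question, "请再说一遍。")  # default: "Please repeat."

if __name__ == "__main__":
    # Prints a fluent reply, yet no understanding has occurred anywhere.
    print(chinese_room("你好吗？"))
```

From the outside, the replies look competent; inside, there is only syntax, which is precisely Searle’s point.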
According to Koch and Tononi (2011), a conscious machine would need to have acquired knowledge (which would have to be programmed) and be able to demonstrate a subjective understanding of whether an image is ‘right’ or ‘wrong’. A dolphin perched on top of a garden fence, for example, is ‘wrong’: the machine must be able to recognise the individual facts in the picture and be aware that, taken together, they do not make sense. Integrating facts in this way is an essential part of a conscious mental state. Kalat (2009) notes, though, that human beings are also ‘programmed’, by their genes and past experiences.
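As a loose sketch of such a test (the fact base and scene format below are invented; Koch and Tononi’s proposal would require vastly more world knowledge than two lookup tables), a program could check whether the facts in a scene cohere:

```python
# Toy version of the "does this scene make sense?" check described by
# Koch and Tononi (2011). The fact base and scene encoding are invented
# for illustration only.

HABITAT = {"dolphin": "water", "cat": "land", "seagull": "air"}
LOCATION_MEDIUM = {"garden fence": "land", "ocean": "water", "sky": "air"}

def scene_makes_sense(animal: str, location: str) -> bool:
    """A scene is 'right' only if the animal's habitat matches the location."""
    return HABITAT.get(animal) == LOCATION_MEDIUM.get(location)

print(scene_makes_sense("dolphin", "garden fence"))  # False: the 'wrong' image
print(scene_makes_sense("cat", "garden fence"))      # True
```

The hard part, of course, is that every such fact had to be hand-programmed, which is Kalat’s point about genes and experience in reverse.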
As neurobiologists, Koch and Tononi (2011) are interested in the inner workings of the brain and in how subjective experiences arise. Their integrated information theory of consciousness proposes that most living beings have subjective states that make up experiences, such as feeling pain or recalling a thought (Koch & Tononi, 2011). The brain processes these experiences by merging incoming sensory information with information stored in memory to make sense of the surrounding environment. Searle’s (1980) Chinese room argument illustrates, then, that a computer merely following a program lacks genuine comprehension.
Koch and Tononi (2011) emphasise that the information processed in our brains is integrated. For example, when a family member shows a saddened face and tears, the brain takes in that information as a whole; it cannot be divided into separate, unrelated mechanisms each processed on its own, and so what we experience, we experience as a whole. A conscious mental state, then, is one in which a myriad of interactions takes place among the relevant parts of the brain (Koch & Tononi, 2011). If a machine is able to distinguish between different experiences, or even emotions, is that merely a simulation of mental activity, or a genuine duplication and assimilation of its own?
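Integrated information theory formalises this idea with a measure called phi. Phi itself is involved, but as a crude numerical stand-in (my illustration, not Tononi’s actual measure, and with invented distributions), mutual information already captures the contrast between parts that are independent and parts bound into a whole:

```python
import math

# Loose numerical illustration of "integration" (NOT Tononi's phi):
# mutual information between two parts of a system is zero when the parts
# are independent, and positive when the whole carries information that
# neither part carries alone.

def mutual_information(joint):
    """I(X;Y) in bits, from a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

independent = {(0, 0): .25, (0, 1): .25, (1, 0): .25, (1, 1): .25}
integrated  = {(0, 0): .5, (1, 1): .5}   # parts always agree: bound as a whole

print(mutual_information(independent))  # 0.0 bits: no integration
print(mutual_information(integrated))   # 1.0 bits: fully integrated
```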
If consciousness relies on subjective experience and on integration between different parts of the brain, then a programmed machine cannot replicate that process. If a scientist were to create a machine with ‘consciousness’, it would merely embody the scientist’s own view of what a conscious mental state constitutes, which may differ from another person’s view, because the meaning of consciousness is itself subjective. The machine’s programmed experiences would not reflect its own experiences and years of evolution, but would imitate someone else’s: whatever the programmer believed to be the most important and relevant subunits of the mind that create consciousness. Only a limited number of variables can be programmed into a machine, so perhaps the question comes down to the biological makeup and general physiology underlying the nature of consciousness in human and animal existence.
Despite the many questions and theories raised in determining whether a machine could be conscious and have subjective experiences, the main point to consider is that a computer hard drive full of stored information does not interact with new sensory input the way the brain interacts with the body. As Koch and Tononi (2011) note, although a computer can store more information than a lifetime of memories, that information remains static, whereas the human mind is constantly dynamic and evolving. Unlike a machine, whose stored information stays largely disconnected, the human brain’s subconscious continually evolves along with the conscious; programmed neural networks would therefore need to be created to imitate this.