Artificial intelligence (AI) is a branch of computer science dealing with the simulation of intelligent behavior in computers: a computer system able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. The first wave of AI was rule-based; the second wave was based on statistical learning. Machine learning (ML) methods have demonstrated outstanding recent progress, and as a result AI systems can now be found in myriad applications, including autonomous vehicles, industrial applications, search engines, computer gaming, health record automation, and big data analysis.
Machine learning is a subset of AI: all machine learning counts as AI, but not all AI counts as machine learning. For example, symbolic logic (rules engines, expert systems, and knowledge graphs) can all be described as AI, and none of it is machine learning. Deep learning, by contrast, is a black box, which means it is very difficult to investigate the reasoning behind the decisions it makes. This opacity complicates the use of AI algorithms, especially where mistakes can have severe impacts. For instance, a doctor who wants to trust a treatment recommendation made by an AI algorithm has to know the reasoning behind it. The same goes for a judge who wants to pass sentence based on a recidivism prediction made by a deep learning application.
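The contrast between an inspectable rules engine and an opaque learned score can be sketched in a few lines of Python. Everything below (the rules, the weights, the thresholds, the patient fields) is invented purely for illustration, not drawn from any real system:

```python
def rules_engine_recommend(patient):
    """A toy symbolic system: every decision carries a readable trace."""
    trace = []
    if patient["age"] > 65:
        trace.append("age > 65 -> elevated risk")
    if patient["blood_pressure"] > 140:
        trace.append("blood_pressure > 140 -> hypertension flag")
    recommendation = "refer to specialist" if len(trace) >= 2 else "routine follow-up"
    return recommendation, trace

def black_box_recommend(patient):
    """A toy opaque scorer: the weights offer no human-readable rationale."""
    score = 0.031 * patient["age"] + 0.017 * patient["blood_pressure"]
    return "refer to specialist" if score > 4.0 else "routine follow-up"

patient = {"age": 70, "blood_pressure": 150}
rec, trace = rules_engine_recommend(patient)
print(rec, trace)                    # the rules engine can show *why*
print(black_box_recommend(patient))  # same answer, no explanation
```

The doctor in the example above could audit the first system's trace rule by rule; the second offers only a number.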
In February 2020, the European Commission unveiled its plan to strictly regulate artificial intelligence (AI), distinguishing itself from the more freewheeling approaches to the technology in the United States and China. The Commission will draft new laws, including a ban on “black box” AI systems that humans can’t interpret, to govern high-risk uses of the technology, such as in medical devices and self-driving cars. Europe is taking a more cautious approach to AI than the United States and China, where policymakers are reluctant to impose restrictions in their race for AI supremacy. But EU officials hope regulation will help Europe compete by winning consumers’ trust, thereby driving wider adoption of AI.
Futurist Ray Kurzweil famously predicted that “by 2029, computers will have emotional intelligence and be convincing as people.” We don’t know how accurate this prediction will turn out to be. If the machines Kurzweil describes say they’re conscious, does that mean they actually are? The prediction implies a move from narrow AI (systems that perform specific tasks, which is where the technology stands today) to artificial general intelligence: systems that possess the same intelligence level and learning capabilities as humans.
This also means that we will have solved the brain’s inner workings, which remain a deep, dark mystery. “We’re starting to pair our brains with computers, but brains don’t understand computers and computers don’t understand brains,” Stone said. Dr. Heather Berlin, cognitive neuroscientist and professor of psychiatry at the Icahn School of Medicine at Mount Sinai, agreed. “It’s still one of the greatest mysteries how this three-pound piece of matter can give us all our subjective experiences, thoughts, and emotions,” she said.
Artificial consciousness (AC), also known as machine consciousness (MC) or synthetic consciousness, is a field related to artificial intelligence and cognitive robotics. The aim of the theory of artificial consciousness is to “define that which would have to be synthesized were consciousness to be found in an engineered artifact”. As there are many hypothesized types of consciousness, there are many potential implementations of artificial consciousness. If it were suspected that a particular machine was conscious, its rights would become an ethical issue that would need to be assessed (e.g., what rights it would have under law). Ever since the idea of thinking machines first took hold, humanity has both dreamed of, and had collective nightmares about, a future where machines are more human than humans: not smarter than humans, which these intelligences already are in many ways, but more neurotic, violent, warlike, obsessed, devious, creative, passionate, amorous, and so on.
To understand human consciousness, one needs to dive deep into the study of Theory of Mind. Theory of Mind is the attempt by one brain to ascertain the contents of another brain. It is Sue wondering what in the world Juan is thinking. Sue creates theories about the current state of Juan’s mind. She does this in order to guess what Juan might do next. Sue guessing what Juan is thinking is known as First Order Theory of Mind. It gets more complex. Sue might also be curious about what Juan thinks of her. This is Second Order Theory of Mind, and it is the root of most of our neuroses and perseverative thinking. “Does Juan think I’m smart?” “Does Juan like me?” “Does Juan wish me harm?” “Is Juan in a good or bad mood because of something I did?”
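The "order" of a theory of mind is just the depth of nesting of one mind's model of another. As a loose sketch (the belief contents are invented; Sue and Juan follow the article's example), nested dictionaries capture the idea:

```python
# First order: Sue's theory about the contents of Juan's mind.
sue_first_order = {"juan_thinks": "it will rain"}

# Second order: Sue's theory about Juan's theory about Sue.
sue_second_order = {"juan_thinks": {"sue_is": "smart"}}

def order_of(belief):
    """Depth of nesting = order of the theory of mind (0 for a plain fact)."""
    if not isinstance(belief, dict):
        return 0
    return 1 + max(order_of(v) for v in belief.values())

print(order_of(sue_first_order))   # 1
print(order_of(sue_second_order))  # 2
```

Each extra level of "what does he think that I think…" adds one layer of nesting, which is why second-order reasoning is where the neuroses the article lists begin.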
What separates us from all the other life forms on earth is the degree to which we are self-aware. Most animals are conscious, and there is plenty of research to suggest that many display varying degrees of self-consciousness: animals that know a spot of color seen in a mirror is in fact on their own heads; animals that tell other animals how to solve a puzzle so that both get a reward. Even octopuses show considerable evidence of being self-conscious. But just as the cheetah is the fastest animal on land, humans are the queens and kings of Theory of Mind.
There are at least three types of awareness: agency awareness, goal awareness, and sensorimotor awareness, each of which may also be conscious or not. For example, in agency awareness, you may be aware that you performed a certain action yesterday without now being conscious of it. In goal awareness, you may be aware that you must search for a lost object without now being conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object without now being conscious of it. Because objects of awareness are often conscious, the distinction between awareness and consciousness is frequently blurred, or the two terms are used as synonyms.
As valuable as the knowledge we’ve accumulated about the brain is, it seems like nothing more than a collection of disparate facts when we try to put it all together to understand consciousness. “If you can replace one neuron with a silicon chip that can do the same function, then replace another neuron, and another—at what point are you still you?” Berlin asked. “These systems will be able to pass the Turing test, so we’re going to need another concept of how to measure consciousness.”
Aspects of consciousness
There are various aspects of consciousness generally deemed necessary for a machine to be artificially conscious. A variety of functions in which consciousness plays a role were suggested by Bernard Baars (Baars 1988) and others: Definition and Context Setting; Adaptation and Learning; Editing, Flagging, and Debugging; Recruiting and Control; Prioritizing and Access-Control; Decision-making or Executive Function; Analogy-forming; Metacognitive and Self-monitoring; and Autoprogramming and Self-maintenance.
Igor Aleksander suggested 12 principles for artificial consciousness (Aleksander 1995): The Brain is a State Machine, Inner Neuron Partitioning, Conscious and Unconscious States, Perceptual Learning and Memory, Prediction, The Awareness of Self, Representation of Meaning, Learning Utterances, Learning Language, Will, Instinct, and Emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer.
Berlin said scientists now consider consciousness a measurable phenomenon. “We can measure changes in neural pathways. It’s subjective, but depends on reportability.” She described three levels of consciousness: pure subjective experience (“Look, the sky is blue”), awareness of one’s own subjective experience (“Oh, it’s me that’s seeing the blue sky”), and relating one subjective experience to another (“The blue sky reminds me of a blue ocean”). “These subjective states exist all the way down the animal kingdom. As humans we have a sense of self that gives us another depth to that experience, but it’s not necessary for pure sensation,” Berlin said.
Amir Husain, CEO and founder of the Austin-based AI company SparkCognition, took this definition a few steps further. “It’s this self-awareness, this idea that I exist separate from everything else and that I can model myself,” he said. “Human brains have a wonderful simulator. They can propose a course of action virtually, in their minds, and see how things play out. The ability to include yourself as an actor means you’re running a computation on the idea of yourself.” Most of the decisions we make involve envisioning different outcomes, thinking about how each outcome would affect us, and choosing which outcome we’d most prefer. “Complex tasks you want to achieve in the world are tied to your ability to foresee the future, at least based on some mental model,” Husain said. “With that view, I as an AI practitioner don’t see a problem implementing that type of consciousness.”
The ability to predict (or anticipate) foreseeable events is considered important for AC by Igor Aleksander. The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate “draft” to fit the current environment. Anticipation includes prediction of consequences of one’s own proposed actions and prediction of consequences of probable actions by other entities.
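Dennett's multiple-drafts idea, read very loosely as an engineering recipe, amounts to generating several candidate interpretations and selecting the one that best fits current observations. The sketch below is our illustration only: the drafts, observations, and scoring rule are all invented, and no claim is made that this captures Dennett's philosophical position:

```python
def fit(draft, observations):
    """Score a draft by how many current observations it accounts for."""
    return len(set(draft["explains"]) & set(observations))

def select_draft(drafts, observations):
    """Evaluate all drafts and select the most appropriate one."""
    return max(drafts, key=lambda d: fit(d, observations))

drafts = [
    {"name": "it is raining", "explains": ["wet ground", "grey sky"]},
    {"name": "sprinkler ran", "explains": ["wet ground"]},
]
best = select_draft(drafts, ["wet ground", "grey sky"])
print(best["name"])  # "it is raining"
```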
Relationships between real-world states are mirrored in the state structure of a conscious organism, enabling the organism to predict events. An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur, or to take preemptive action to avert anticipated events. The implication is that the machine needs flexible, real-time components that build spatial, dynamic, statistical, functional, and cause-effect models of the real world and of predicted worlds, making it possible to demonstrate that it possesses artificial consciousness in the present and future, not only in the past. To do this, a conscious machine should make coherent predictions and contingency plans, not only in worlds with fixed rules like a chessboard, but also in novel environments that may change, executing them only when appropriate in order to simulate and control the real world.
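The anticipation loop described above (predict the consequence of each candidate action, then act preemptively) can be sketched minimally. The world model, states, actions, and values below are hypothetical stand-ins; a real system would learn its transition model rather than look it up:

```python
def world_model(state, action):
    """Predict the next state. Here a fixed table stands in for a learned model."""
    transitions = {
        ("obstacle ahead", "swerve"):  "clear road",
        ("obstacle ahead", "brake"):   "stopped safely",
        ("obstacle ahead", "proceed"): "collision",
    }
    return transitions.get((state, action), state)

def choose_action(state, actions, value):
    """Evaluate predicted consequences before acting (preemptive planning)."""
    return max(actions, key=lambda a: value(world_model(state, a)))

# Hypothetical preferences over predicted outcomes.
value = {"clear road": 2, "stopped safely": 1, "collision": -10}.get
action = choose_action("obstacle ahead", ["swerve", "brake", "proceed"], value)
print(action)  # "swerve"
```

The contingency-plan idea fits the same shape: the second-best action ("brake") is kept in reserve in case the predicted best outcome fails to materialize.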
AI hailed ‘conscious’ by top scientist in bombshell tech breakthrough
Machine-learning system GPT-3 has drawn plaudits from around the world for its remarkable ability to generate text with minimal human input; one scientist even believes it is showing signs of consciousness. The OpenAI model generates text rapidly with minimal human input. It can recognise and replicate patterns of words before estimating what will come next, thanks to its incredible scale: 175 billion language parameters. And Professor David Chalmers of New York University, an expert on the philosophy of mind, has even suggested GPT-3 is showing signs of consciousness.
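The core mechanic ("recognise patterns of words, then estimate what comes next") can be illustrated with a toy bigram model. GPT-3 itself is vastly more sophisticated; this corpus and these counts are invented solely to show the idea of next-word estimation from observed patterns:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Estimate the most likely next word seen in training."""
    return follows[word].most_common(1)[0][0]

corpus = "the sky is blue the sky is clear the sea is blue"
model = train_bigrams(corpus)
print(predict_next(model, "sky"))  # "is"
print(predict_next(model, "is"))   # "blue" (seen twice, vs "clear" once)
```

Where this toy counts adjacent word pairs in a dozen words, GPT-3 learns statistical patterns over hundreds of billions of parameters, but the output interface is the same: given a context, estimate what comes next.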
He said: “I am open to the idea that a worm with 302 neurons is conscious, so I am open to the idea that GPT-3 with 175 billion parameters is conscious too.” According to the Financial Times, it can process an astonishing 45 billion times the number of words a human will perceive in their lifetime. Chief executive of OpenAI, Sam Altman, told the paper: “There is evidence here of the first precursor to general purpose artificial intelligence, one system that can support many, many different applications and really elevate the kinds of software that we can build.”
References and Resources also include: