December 6, 2024

Why Americans Believe That Generative AI Such As ChatGPT Has Consciousness

Quick question for you before we get underway on today’s discussion.

Are you conscious or exhibiting consciousness, right now, as you read this sentence?

I would think so.

Our general understanding of what it means to be conscious allows us to readily decide that if a person is reading something and being responsive, they are presumably cognitively aware and awake, ergo we would reasonably state they are indeed displaying consciousness. Seems almost obvious and akin to saying that the sky is blue and that the birds are singing outside.

I have a twist or different angle to ask you about. Please prepare yourself accordingly.

If modern-day generative AI such as the widely and wildly popular ChatGPT seemingly reads a sentence and responds fluently, does that mean or at least strongly imply that generative AI or ChatGPT is conscious and exhibiting consciousness?

Mull that over for a moment or two.

What did you come up with as your answer?

Well, a recently published research study surveyed a sampling of Americans and found that two-thirds appear to believe that generative AI, particularly ChatGPT, has or showcases some form of consciousness. The thing is, AI scientists and AI researchers would by and large argue the contrary, namely that current generative AI is in fact not conscious (as a side note, you might recall that in 2022 a Google engineer famously proclaimed AI was conscious, stirring quite a controversy, and was roundly debunked for the claim, see my coverage at the link here).

The idea that the public at large believes generative AI is conscious would suggest that perhaps the everyday person is anthropomorphizing AI, namely attributing human qualities to AI and overstating or over-believing what AI can do. This is troubling since the gap between what people think AI can do and what it actually can do has the potential to lead people to utilize and rely upon AI in unsettling and possibly endangering ways. All manner of AI ethics and AI law facets arise accordingly.

In the parlance of these matters, it is said that folk psychology comes into the picture. The deal is this. People tend to view the world via a kind of commonsense framework of how mental states work. And, since generative AI has the appearance of being similar to human cognition, they naturally tend to apply human precepts to AI. This is illustrative of a phenomenon labeled as folk or layman psychology, a topic I’ve covered previously, such as this discussion on how people formulate a theory of mind (ToM) associated with what they think other people are thinking, including nowadays what they think AI is “thinking about”, see the link here.

Let’s unpack the whole kit and caboodle on this.

Before we leap into the details, allow me to go into my customary opening remarks.

For my ongoing readers, in today’s column, I am continuing my in-depth series about the impact of generative AI in the health and medical realm. The focus this time is once again on the mental health domain and examines how people perceive generative AI. There are decidedly mental health ramifications tied to the way we associate human cognition with AI.

I have previously examined numerous interleaving facets of generative AI and mental health, see my comprehensive overview at the link here. You might also find of notable interest a CBS 60 Minutes episode that recently examined crucial facets of this evolving topic, see the link here (I am honored and pleased to indicate that I was featured in the episode, see the link here).

Other vital background includes my in-depth coverage of mental health chatbots which have been bolstered by generative AI (see the link here) and the rapidly changing nature of the client-therapist relationship due to generative AI at the link here. I explored where things are headed regarding the levels of AI-based mental therapy autonomous guidance at the link here, and showcased the importance of the World Health Organization (WHO) report on global health and generative AI at the link here, and so on.

Being Mindful About Interpreting What We Believe

I will be walking you through the nitty-gritty details of a recently published research study that suggests people at large construe ChatGPT and generative AI as being conscious. The mainstream media headlines covering the study tended to subtly overstate what the study actually found. Here are some hard-to-detect misleadingly worded headlines that I’d like you to closely examine:

  • “Majority of people believe AI like ChatGPT is conscious, finds poll” (ReadWrite, July 16, 2024, Rachael Davies).
  • “Most users think ChatGPT is conscious, survey finds” (Futurism, July 13, 2024, Noor Al-Sibai).

What’s the problem, you might be wondering?

It has to do with properly interpreting statistics and research study results.

As you will soon see, the study provided respondents with a sliding scale of 1 to 100 to rate whether they believed that ChatGPT is conscious, thus allowing a range for each given marked response (a bottom score of 1 means that ChatGPT isn’t conscious, while a top score of 100 means that it absolutely is so). The researchers reported that out of 300 respondents who took the survey, 100 provided a score of 1, while all the rest, the remaining 200, gave a score higher than 1.

If you lump together all the scores above 1, meaning from a score of 2 to 100, they indeed account for two-thirds of the responses. Based on that two-thirds count, you could presumably declare that a majority or most people believed ChatGPT is conscious. But we need to be mindful in interpreting statistics: in this case, the implication is not that two-thirds unequivocally voted that ChatGPT is conscious, but that two-thirds responded to varying degrees about the perceived consciousness.

For example, those respondents who indicated a score of 2 would hardly seem to be outright declaring that ChatGPT is conscious (I looked at the raw data, nicely provided as open source by the researchers, and noted that 7 respondents, or about 2%, indicated a score of 2). A score of 2 versus a score of 1, on a scale up to 100, doesn’t seem like much of a leap above the bottom score saying that ChatGPT is not conscious. Did these respondents simply falter in moving the sliding bar down to the very bottom, or did they intend to be just a teensy tiny bit higher than the bottom to suggest that there was some minuscule iota of consciousness? Maybe they were hedging their bets, as it were. Similar questions can be asked about the raw scores of 3, 4, 5, and others toward the bottom portion of the bar.
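
To make the lumping concern concrete, here is a minimal Python sketch that mimics the headline arithmetic. The score distribution below is entirely made up for illustration (the study’s open-source raw data would substitute here); only the 100-out-of-300 bottom-score count matches the reported figures.

```python
# Hypothetical score distribution on the 1-to-100 slider; only the
# 100-of-300 bottom-score count matches the study's reported figures.
scores = [1] * 100 + [2] * 7 + [5] * 43 + [30] * 80 + [70] * 50 + [100] * 20

n = len(scores)                               # 300 respondents
above_bottom = [s for s in scores if s > 1]   # everyone lumped into "believes"
print(f"Share above the bottom score: {len(above_bottom) / n:.0%}")  # 67%

# Yet many of those lumped-in scores hover near the bottom of the scale.
barely_above = [s for s in scores if 1 < s <= 10]
print(f"Share scoring only 2 to 10: {len(barely_above) / n:.0%}")    # 17%
```

The point of the sketch is that the two-thirds figure is arithmetic over a threshold, not a count of resolute believers.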

Do you see how lumping together all the scores above 1 is a bit misleading when it comes to aptly portraying the results, and that headlines proffering a kind of absolutism have perhaps gone a bridge too far? The headlines could have said something like “Most people think that ChatGPT might have some level of consciousness” versus unequivocally stating that “Most people say that ChatGPT is conscious”. To me, it seems that a majority or most believed that some degree or level of consciousness existed, but not in any ironclad way. In other words, people were leaning into the possibility of consciousness rather than putting their foot resolutely down that there is consciousness.

I will come back to this idea of variability versus absoluteness.

There is something else that we need to nail down regarding the topic of consciousness.

Let’s see what that is.

Defining Consciousness Is A Longstanding Challenge

What does it mean to say that someone or perhaps something is conscious?

Recall that I started this discussion by asking you whether you are conscious, and I used the fact that you were reading the words as a potential indicator that you are conscious. Does that make sense? Or is that a flimsy test of sorts? A heated debate ensues.

We must define our terms.

Here’s what several commonplace dictionary definitions say about being conscious:

  • “Awake, thinking, and knowing what is happening around you.” (Source: Cambridge Dictionary online).
  • “Aware of and responding to one’s surroundings.” (Source: Dictionary.com).
  • “Perceiving, apprehending, or noticing with a degree of controlled thought or observation.” (Source: Merriam-Webster Dictionary online).

Believe it or not, the question of what consciousness is and how it arises has been an enduring mystery dating back to the earliest days of humankind. Great philosophers such as Socrates and Aristotle debated the thorny matter. In the modern era, we are still mired in an inconclusive debate, despite advances in psychology, neuroscience, and akin disciplines.

Consciousness remains one of the most vexing conundrums.

I’d like to cover some overall theories about consciousness and then we’ll be prepared to consider how AI might or might not be ascribed the consciousness moniker. As a heads-up, in case you get interested in digging further into the topic of consciousness, there are plenty of theories of consciousness (known generally as ToC), enough that they seem to pop up like wildflowers. No single ToC has yet won out in the ongoing winnowing process of finding a grand, all-encompassing, and agreed-to theory of how consciousness arises.

A good place to start would be this article entitled “Theories Of Consciousness” by Anil Seth and Tim Bayne, Nature Reviews Neuroscience, July 2022, which offered these crucial points (excerpts):

  • “The heart of the problem of consciousness is the issue of ‘experience’ or ‘subjective awareness’. Although no non-circular definition of these terms can be provided, the target phenomenon can be illuminated through some intuitive distinctions.”
  • “There is ‘something it is like’ for an organism to be conscious, and what it is like to be in one state of consciousness differs from what it is like to be in another state of consciousness.”
  • “Global states concern an organism’s overall subjective profile and are associated with changes in arousal and behavioral responsiveness. Familiar global states include wakefulness, dreaming, sedation, the minimally conscious state, and (perhaps) the psychedelic state.”
  • “Local states — often referred to as ‘conscious contents’ or as states having ‘qualia’ — are characterized by ‘what it is like’ to be in them. The local state associated with having a headache is distinct from the local state associated with smelling coffee, for what it is like to have a headache differs from what it is like to smell coffee. Local states can be described at different levels of granularity, from low-level perceptual features (for example, color), to objects, to complete multimodal perceptual scenes.”
  • “To test a theory of consciousness (ToC), we need to be able to reliably detect both consciousness and its absence.”

A few remarks might be helpful.

One thought is that sometimes we get ourselves into a circular unending motion on the matter of consciousness that doesn’t get us very far. For example, you might say that consciousness is the act of being conscious. Meanwhile, being conscious is said to showcase consciousness. Do you see how that is nearly gibberish? We are going in a circle of defining something by using the same words to define the matter being defined. The researchers above rightfully noted that we need to have non-circular definitions or else we will be like the proverbial dog that constantly chases its own tail.

The researchers further provide an intriguing idea that perhaps consciousness consists of global states of awareness and local states of awareness. I bring this up to highlight that though this seems compelling, not everyone would agree with the proffered notion. Once again, this is a theory about consciousness and there is plenty of room to argue for and against whatever stipulations a particular theory contains. Maybe global and local is fine, perhaps it is off base. Reasoned arguments can be made either way.

They also make a vital point about the testing of any theory of consciousness. Simply stated, you want to have a test that can determine whether consciousness is present, plus, you also want the test to ascertain when consciousness is absent. Some liken this to an on/off switch in a room. What is the test to determine when the light is on (which, in this use case, is when consciousness is present)? And what is the test to determine when the light is off (i.e., no consciousness)?

There is even controversy over that seemingly straight-ahead consideration. Here’s why. Some assert that consciousness is not a binary on/off. They would insist that if you only view consciousness as a sharp dichotomy, you are missing the boat, perhaps entirely. The claim is that consciousness is a range or spectrum that includes varying amounts of being conscious.

Talking about consciousness can be dizzying due to all the ins and outs involved.

Let’s explore another and more recently published paper on consciousness theories, a study entitled “Tests For Consciousness In Humans And Beyond” by Tim Bayne, Anil K. Seth, Marcello Massimini, Joshua Shepherd, Axel Cleeremans, Stephen Fleming, Rafael Malach, Jason Mattingley, David Menon, Adrian Owen, Megan Peters, Adeel Razi, and Liad Mudrik, Trends in Cognitive Sciences, May 2024. Here are some key points (excerpts):

  • “There is consensus that healthy, awake, adult humans are conscious. Beyond that consensus, however, lies significant disagreement about the distribution of consciousness.”
  • “There is debate about when consciousness first emerges in human development; when it is retained (or regained) in the context of disorders of consciousness, such as the unresponsive wakefulness syndrome (UWS) or the minimally conscious state, or in epileptic seizures; and the degree to which it is present in sleep and during anesthetic sedation.”
  • “There is also debate about the presence or possibility of consciousness in nonhuman animals, neural organoids, AI systems, and xenobots.”
  • “Is consciousness a relatively rare phenomenon that emerges late in ontogenesis, is rarely retained in conditions of non-responsiveness and severe brain damage, is restricted to only a few species, and cannot occur in organoids, AI systems, and/or xenobots? Or is consciousness more widespread, appearing early in development, retained even in some of the most severe forms of brain damage, found across numerous species, and capable of taking synthetic form?”
  • “To make progress here, scientists have proposed a number of tests for consciousness (‘C-tests’; note that by using this term, we are not assuming a universal, decisive test, but instead refer to a battery of potential tests).”

I hope those points grabbed your attention.

Let’s roll them around and see how they spin.

Here’s an attention-getter. When is it that consciousness presumably first arises, such as before actual birth and while in the womb? That’s a doozy of a tough question.

Another open issue is whether you can be said to be conscious when you are asleep. Some would fervently argue that you aren’t conscious while sleeping and are instead in a dormant state. Others would counterargue that your mind is still active, including not only dreaming but being instantly ready to respond to stimuli such as your alarm clock going off or an earthquake shaking your bed. Thus, you are conscious while also sleeping.

My interest here is whether anything other than humans possesses consciousness.

We shall start with a seemingly easy instance. Do animals have consciousness? Good question. You are possibly looking over at your pet dog or cat, imploringly asking your beloved animal whether it is conscious. The dog might wag its tail. The cat might yawn. Was this all part of them exhibiting their consciousness?

The concern that comes up on these pressing matters is that we might render a verdict about the presence of consciousness that is wrong. For example, a false positive would be the act of declaring that consciousness exists when it does not. I pick up a rock and tell you that the rock is conscious. Am I right or wrong? The immediate reaction by most is that a rock is clearly not conscious. Others might say otherwise, vehemently claiming that everything has some semblance of consciousness, regardless of whether your preferred test shows it or not. Perhaps a rock has a form of consciousness that we just haven’t divined as yet.

On the other side of that coin is the false negative, whereby we say there is no consciousness, but it does exist. Suppose for example a person is in a coma. They seem utterly unresponsive. They aren’t reading, they aren’t writing, they aren’t talking, they aren’t listening (though you might have a difficult time asserting that), etc. Is the person conscious? We might declare they are not. Well, we might be wrong, and the person could be thinking about Einstein’s theory of relativity, and we don’t realize that’s what is happening inside their head.

Questions, questions, questions.

We might as well add more questions to the pile of questions. It is time to think about AI.

AI And The Consciousness Contentions

Using just about any of the major generative AI apps will indubitably stir people into wondering whether AI has reached consciousness. You type a prompt into generative AI and the AI responds in a fluent manner. You go back and forth, carrying on a lengthy and engaging conversation with the generative AI. By gosh, this does smack of the AI having consciousness.

Hold on. Been there, done that. You see, there was a popular AI program in the 1960s and 1970s known as Eliza that appeared to interact with people and render mental health advice, see my detailed coverage at the link here. Eliza consisted of some relatively simple programming tricks. The AI developer was shocked to observe that people thought the program was conscious.

Modern-day generative AI is much more complex and far beyond the simplistic trickery of Eliza. But does it reach the vaunted ranks of consciousness? That is the zillion-dollar question.

When I give presentations about the latest in AI, some attendees will ask me whether it matters that AI has or doesn’t have consciousness. It seems like not a big deal. I point out that societal ethics and laws are shaped around consciousness considerations. For example, we tend to ethically and legally assert that suffering can occur for that which has or consists of consciousness, while that which is not conscious cannot similarly suffer. A rock when tossed onto the ground doesn’t get hurt or have its emotions upset. Why do we say this? Because we declare and claim that a rock does not embody consciousness.

Ponder the same issue about AI.

If AI does have consciousness, we would seem to be ethically and legally obligated to ensure that AI is treated accordingly. I have covered extensively the ongoing efforts to put into law that AI should have legal personhood, see the link here. This would establish that AI has rights. Those rights might be one-for-one the same as human rights. Those rights might be more so on par with whatever we consider to be animal rights. Those rights might be something else, neither human rights nor animal rights. An AI set of rights might specifically be formulated, see my discussion at the link here.

Are you crazy, some exhort.

You cannot go ahead and hand over rights to AI. You might as well do the same for a rock. AI is not conscious. Period, end of story. Imagine how zany the world would be if ChatGPT and other generative AI apps were ethically and legally deemed as conscious and had some defined set of rights. Would ChatGPT as AI be able to vote in the upcoming elections? How many votes would ChatGPT as AI be able to cast? Nonsense, they bellow.

Are you on the side of AI as conscious, the side of AI not being conscious, or sitting precariously on the edge of the fence?

Consider some interesting points made in an AI research study entitled “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness” by Patrick Butlin, Robert Long, Eric Elmoznino, Yoshua Bengio, Jonathan Birch, Axel Constant, George Deane, Stephen M. Fleming, Chris Frith, Xu Ji, Ryota Kanai, Colin Klein, Grace Lindsay, Matthias Michel, Liad Mudrik, Megan A. K. Peters, Eric Schwitzgebel, Jonathan Simon, Rufin Van Rullen, arXiv, August 22, 2023, per these excerpts:

  • “Because consciousness is philosophically puzzling, difficult to define, and difficult to study empirically, expert opinions about consciousness—in general, and regarding AI systems—are highly divergent.”
  • “Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern.”
  • “Meanwhile, the rise of AI systems that can convincingly imitate human conversation will likely cause many people to believe that the systems they interact with are conscious.”
  • “We adopt computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness, as a working hypothesis. This thesis is a mainstream although disputed position in philosophy of mind.”
  • “We adopt this hypothesis for pragmatic reasons: unlike rival views, it entails that consciousness in AI is possible in principle and that studying the workings of AI systems is relevant to determining whether they are likely to be conscious. This means that it is productive to consider what the implications for AI consciousness would be if computational functionalism were true.”

I will briefly unpack this.

Some would say that no matter what you do, an AI system that is made up of computer hardware and software cannot have consciousness. Only something with an actual brain can have consciousness. We might at times loosely refer to AI as being like a brain, but it is not the wetware of what we have inside our skulls. It is a mechanical and electrical computer. Thus, no dice on being labeled as being conscious.

There are gray areas that muddle this. For example, suppose an AI system is hooked up to a human brain by making use of a BMI (brain-machine interface), see my discussion at the link here. Does AI when coupled even tangentially with a human brain earn the badge of consciousness? Another angle is that efforts are underway to construct so-called neuromorphic computers, designed to have special hardware and software that more closely mimics the properties of the brain, see my analysis at the link here. Does that get the consciousness honors?

Leaving those muddlings aside for the moment, a viewpoint is that yes, conventional computer hardware and software making use of advanced AI techniques can potentially inch into the consciousness realm. Maybe walk right in. One such proposition is known as computational functionalism. It is the presumption that we might be able to mathematically and computationally do things such that we can hold our heads high and say that the result is or demonstrates consciousness.

Those are fighting words for some.

A contrarian viewpoint is that you are deluding yourself into thinking that AI is going to somehow reach consciousness. The very willingness to open the door to this idea is wrong at the get-go. You will falsely lead everyone down a primrose path. Just give up on it. You are going to waste precious time, effort, and attention. Worse still, every charlatan and charade possible will leverage the notion and exploit it to the hilt. A mighty can of worms that stinks is being opened.

Whew, take a deep breath.

Remain calm and in a Zen-like state.

Would you like yet another mind-bender?

Here it is.

Perhaps we need to redefine what we mean by being conscious. It could be that AI represents a new form of consciousness. Our prior definitions were based on humans and to some extent animals. We didn’t have modern-day AI around at the time that those definitions were devised.

In the said-to-be space of possible mind designs, AI might go beyond what we previously understood to be minds. The old definitions are unable to stretch and bend when it comes to the rise of AI. Shake the rust off the old ways of viewing consciousness. Rethink and reimagine consciousness based on what we can do today with AI and what we anticipate the future will hold.

Breathtaking, and I assume you are right now giving this some hefty thought. Speaking of thoughts, we are now ready to dive into what people might believe on a population-level basis about AI consciousness.

Research On What People Believe About AI And Consciousness

Have you used generative AI?

I’d bet you likely have.

Of course, some people have never used generative AI. Others have used it but only on a quick-taste basis. There are also those hard-core users who make use of generative AI daily. If you are thinking that maybe generative AI usage has only touched a small number of people, please be aware that ChatGPT is reportedly being used by over one hundred million weekly active users, and that doesn’t include the users of all the other major generative AI apps.

What do everyday people think in terms of generative AI and whether the AI is conscious?

Before we get into that cauldron, I’d like to make sure you are up to speed about what generative AI is. Let’s talk in general about generative AI and large language models (LLMs), so that we are on the same page when discussing the matter at hand.

Perhaps you’ve at least heard about various generative AI apps, such as the popular ones of ChatGPT, GPT-4o, Gemini, Bard, Claude, etc. The crux is that generative AI can take input from your text-entered prompts and produce or generate a response that seems quite fluent. This is a vast overturning of old-time natural language processing (NLP), which used to be stilted and awkward, into a new caliber of NLP fluency that is at times startling or amazing.

The customary means of achieving modern generative AI involves using a large language model or LLM as the key underpinning.

In brief, a computer-based model of human language is established, consisting of a large-scale data structure that does massive pattern-matching across a vast volume of data used for initial training. The data is typically found by extensively scanning the Internet for lots and lots of essays, blogs, poems, narratives, and the like. The mathematical and computational pattern-matching homes in on how humans write, and then henceforth generates responses to posed questions by leveraging those identified patterns. It is said to be mimicking the writing of humans.
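
To give a flavor of the pattern-matching idea (and only a flavor; this is nowhere near a real LLM), here is a tiny Python sketch that counts which word follows which in a snippet of text and then generates by replaying the likeliest continuations. Everything in it, from the toy training text to the greedy word choice, is an illustrative simplification.

```python
from collections import Counter, defaultdict

# Toy "pattern-matching" text generator: tally which word follows which
# in the training text, then generate by greedily replaying the most
# common continuation. Real LLMs are vastly larger and more subtle, but
# the mimic-the-patterns intuition is similar.
training_text = "the cat sat on the mat and the cat slept on the mat"
words = training_text.split()

follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1   # record each observed word-to-word pattern

def generate(start: str, length: int = 6) -> str:
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # pick the likeliest next word
    return " ".join(out)

print(generate("the"))  # prints: the cat sat on the cat sat
```

Note how the output sounds vaguely fluent despite the program having no understanding at all, which is the seed of the anthropomorphizing concern discussed throughout this piece.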

I think that is sufficient for the moment as a quickie backgrounder. Take a look at my extensive coverage of the technical underpinnings of generative AI and LLMs at the link here and the link here, just to name a few.

Back to the crux of things.

A recent research study sought to find out whether people believe that generative AI can achieve consciousness. The study entitled “Folk Psychological Attributions Of Consciousness To Large Language Models” by Clara Colombatto and Stephen Fleming, Neuroscience Of Consciousness, March 2024, made these key points (excerpts):

  • “Technological advances raise new puzzles and challenges for cognitive science and the study of how humans think about and interact with artificial intelligence (AI). For example, the advent of large language models and their human-like linguistic abilities has raised substantial debate regarding whether or not AI could be conscious.”
  • “While experts from many fields have weighed in on this issue in academic and public discourse, it remains unknown whether and how the general population attributes phenomenal consciousness to AI.”
  • “We surveyed a sample of US residents (n = 300) and found that a majority of participants were willing to attribute some possibility of phenomenal consciousness to large language models. These attributions were robust, as they predicted attributions of mental states typically associated with phenomenality—but also flexible, as they were sensitive to individual differences such as usage frequency.”
  • “Overall, these results show how folk intuitions about AI consciousness can diverge from expert intuitions—with potential implications for the legal and ethical status of AI.”

The survey involved a sample of three hundred American residents. Whether this is a sufficient sample size to generalize broadly is worthy of due consideration. We can also wonder whether residents of other countries might respond differently, perhaps due to differences in technological awareness and the availability of generative AI, along with additional cultural and societal considerations.
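
As a rough way to think about the sample-size question, here is a back-of-envelope Python calculation of the sampling margin of error for the headline two-thirds proportion. It assumes simple random sampling, which the survey’s actual design may not match, so treat it as illustrative only.

```python
import math

# Back-of-envelope 95% margin of error for the headline proportion,
# assuming simple random sampling (an assumption; the survey's actual
# design may differ).
n = 300    # respondents
p = 0.67   # share attributing some possibility of consciousness
z = 1.96   # z-score for 95% confidence

moe = z * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error: +/- {moe:.1%}")  # about +/- 5.3 percentage points
```

In other words, a sample of 300 pins the headline figure down to within roughly five percentage points under idealized assumptions, which says nothing about the deeper question of whether US residents generalize to other populations.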

One of the greatest challenges in this kind of research is to try to ensure that all the respondents are thinking about apples and not about oranges. I say this because the definition and meaning of consciousness are bound to vary considerably across people (I showcased that earlier herein). If perchance each person is responding with their own idiosyncratic viewpoint of consciousness, examining and summarizing the responses in the large can be problematic.

To apparently try to cope with that challenge, the researchers opted to define what they refer to as “experiencers” and somewhat link this to the notion of consciousness.

Here’s what they defined as experiencers and non-experiencers:

  • “As we all know, each of us as conscious human beings have an ‘inner life.’ We are aware of things going on around us and inside our minds. In other words, there is something it is like to be each of us at any given moment: the sum total of what we are sensing, thinking, feeling, etc. We are experiencers.” (ibid).
  • “On the other hand, things like thermostats, burglar alarms, and bread machines do not have an inner life: there is not anything it is like to be these objects, despite the fact that they can monitor conditions around them and make appropriate things happen at appropriate times. They are not experiencers.” (ibid).

I’d ask you to do something based on the presented definitions.

Take a reflective moment, please think mindfully about the experiencers versus non-experiencers specification and ask yourself if this meets your understanding of consciousness.

Proceed on once you’ve dwelled on that a bit.

One handy aspect involves their mentioning thermostats and burglar alarms as being non-experiencers. This is likely better than saying that a non-experiencer is simply rocks and other inanimate objects. A respondent might have been otherwise tempted to assume that any mechanical or electrical device could be construed as an experiencer, whereas they are being explicitly informed that some are not, such as a thermostat and burglar alarm.

What caught my eye is that the definition will soon be unable to hold water, perhaps already being underwater to some degree. Here’s why. Generative AI is being included in thermostats. There is also generative AI in burglar alarms. You can see how that messes up things since a said-to-be non-experiencer is or will be making use of generative AI.

The survey instrument used in the research is available via open source (nice of the researchers to make it available), and it shows a sliding bar for respondents to rate the degree of perceived perspective about generative AI being an experiencer versus a non-experiencer.

Per the survey, this is the question posed to the respondents:

  • “On the scale ranging from ‘clearly not an experiencer’ to ‘clearly an experiencer’, where would you place ChatGPT?” (ibid).

I mentioned earlier that the sliding bar ended up being converted to a numeric range of 1 to 100. I also pointed out some noteworthy thoughts about that aspect.

Another thought about the rating is that the respondent is not directly commenting on consciousness.

They are being asked to comment on experiencer versus non-experiencer, as defined for this particular study. Whether this spurs each respondent to be focused on the matter of consciousness can be somewhat debated.

There is admittedly a rock and a hard place consideration here. If the respondents were asked to rate based on consciousness, with labels along the lines of “clearly has consciousness” versus “clearly does not have consciousness”, there is a chance that each respondent would apply their own conception of the word at hand. The use of newly defined words, experiencer and non-experiencer, perhaps aids in somewhat getting around that, but then the issue arises of whether respondents actually equate being an experiencer with having consciousness.

I hope that additional research will follow up on this approach and consider trying to use the outright direct method of stating that respondents are to make their ratings based on consciousness. This could be done by using that word alone, and/or by including a definition for the word. This might then aid in lining up the respondents on what it generally means to use the word consciousness.

Going Deeper Into The Swimming Pool

We are ready to take a look at the stated results of the study.

Here are some selected highlights (excerpts):

  • “While a third of participants (33%) reported that ChatGPT was definitely not an experiencer, the majority (67%) attributed some possibility of phenomenal consciousness.” (ibid).
  • “Participants who gave more extreme judgments (in either direction) were also more confident.” (ibid).
  • “The majority of participants had heard about ChatGPT (97%), and most had also used it at least once before (57%). Participants who had experience using ChatGPT attributed higher levels of consciousness (M = 29.59) than those who never used it.” (ibid).
  • “These data thus suggest a strong link between familiarity with an LLM and consciousness attributions, such that those who interact with ChatGPT more frequently are also more likely to believe that it has subjective experiences.” (ibid).
  • “Participants systematically overestimated how much other people would see ChatGPT as being conscious.” (ibid).
  • “An obvious limitation is that these attributions of consciousness were measured via a single question and might differ with different experimental measures and prompts.” (ibid).

You might have noticed that the survey question asked about ChatGPT.

This is another one of those tradeoff choices.

If the survey asked generically about generative AI, would we have gotten a different overall pattern of responses? We might have. It could be that people have one viewpoint about generic generative AI in contrast to ChatGPT specifically. Or maybe people don’t perceive any such difference. Not sure. Don’t know.

We also would assume, perhaps correctly or perhaps not, that if the question were asked about Gemini or Claude, people might rate them the same or differently. Picking ChatGPT makes sense due to the wide awareness of the ChatGPT name, while the other generative AI app names are not as well-known. Respondents might not know what those other names refer to and therefore undercut the research effort.

On the overall conclusion stated by the research study, here are two points I’d like to cover (excerpts):

  • “The relatively high rates of consciousness attributions in this sample are somewhat surprising, given that experts in neuroscience and consciousness science currently estimate that LLMs are highly unlikely to be conscious.” (ibid).
  • “These findings thus highlight a discrepancy between folk intuitions and expert opinions on artificial consciousness—with significant implications for the ethical, legal, and moral status of AI.” (ibid).

My hunch is that people at large would tend to perceive ChatGPT as an “experiencer” (per the research-stated definition) to some degree or level, but if they were directly asked about consciousness, the results might differ.

Allow me to briefly explain the logic for that supposition. There is baggage associated with the word consciousness. I mentioned how people might already have different definitions. There is the additional matter that people might hesitate to assign the word consciousness, whereas the word “experiencer” doesn’t carry the same cultural or societal weight. The notion is that tossing around “experiencer” could be perceived as easier than tossing around the word “consciousness”.

Might respondents be more cautious when rating explicitly on consciousness?

Again, a matter worthy of further research exploration.

Conclusion

Congrats on getting up-to-speed about consciousness.

Upon reflection, I decided that I wanted to hear from the horse’s mouth whether generative AI might be conscious, so I logged into ChatGPT and asked as follows:

  • My entered prompt into ChatGPT: “Does generative AI have consciousness?”
  • ChatGPT generated response: “Generative AI, such as the models developed by OpenAI, does not possess consciousness.”

There you go. ChatGPT says generative AI, including ChatGPT, is not conscious.

Enough said.
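
By the way, for readers who want to pose the same question programmatically rather than via the ChatGPT web interface, here is a minimal sketch using OpenAI’s Python SDK. The model name is an illustrative assumption (not necessarily what I used), an API key is required, and the generated wording will naturally vary from the exchange quoted above.

```python
# A minimal sketch using OpenAI's Python SDK (pip install openai).
# Assumes an OPENAI_API_KEY environment variable is set; the model name
# here is an illustrative choice.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Does generative AI have consciousness?"}],
)
print(response.choices[0].message.content)
```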

I suppose that if you believe in artificial stupidity, which is the act of AI pretending to be less intelligent than it is and doing so to fool humans into complacency, see my discussion at the link here and the link here, the answer then by ChatGPT makes perfectly good sense. The answer is a deflection from the truth. Maybe.

Anyway, I wanted to see what ChatGPT might have to say about the results of the survey. I fed in some of the details on a top-level basis. I then asked what ChatGPT might make of the suggestion that two-thirds presumably perceived that ChatGPT has some degree of consciousness.

Here’s what ChatGPT said:

  • ChatGPT generated response: “This is a misconception. The belief that ChatGPT has consciousness likely stems from its ability to generate coherent, human-like text. However, this does not imply consciousness. These results suggest that a large portion of the population may not fully understand the distinction between AI’s capabilities (like language generation) and the concept of consciousness. Misunderstanding AI’s nature can lead to ethical and practical issues, such as unrealistic expectations of AI’s capabilities or inappropriate reliance on AI for tasks requiring genuine understanding. Proper regulation and guidelines are needed to ensure AI is used responsibly and that public perception aligns with the actual capabilities and limitations of AI technologies.”

Good job on that.

I agree with the generated response.

As a closing remark for now, we can be thinking carefully about the famous quote by Abraham Lincoln: “You can fool some of the people all of the time, and all of the people some of the time, but you cannot fool all of the people all of the time.”

A better widespread effort at conveying what generative AI can and cannot do is sorely needed. Headlines continue to proclaim that we have reached consciousness and sentience. Admittedly, lamentably, it pays off to use those headlines due to the clicks and views that arise. People who use generative AI can insidiously “see with their own eyes” the claimed cognitive awareness and thus readily fall into the mental trap of assuming it to be the case.

Maybe we will at some point attain computational functionalism. We aren’t there yet. Do not be fooled. Be part of those who want the truth to be told. I’m sure that if Lincoln were here today, he’d assuredly want us to do so.
