
What are our expectations from AI?

How has the line between humans and machines blurred over time?

Jaymin Kim
Director, Commercial Strategy at Marsh McLennan

Artificial Intelligence (AI) has been making headlines almost daily in 2023. Some headlines reflect fear – that chatbots are lying and spreading propaganda.

Historically, the concept of deceptive chatbots existed only in science fiction.

Today, the line between machines and humans is blurring – fast.

A brief overview of AI and Generative AI

AI broadly refers to computational systems that mimic human intelligence, hence “artificial” intelligence. AI systems can be categorised as either:

  1. Narrow AI: AI systems designed to perform a specific task, simulating human intelligence limited to a specific domain. All AI systems developed to date fall into this category.
  2. General AI: Hypothetical AI systems designed to perform any task, generally simulating human intelligence without restriction to a specific domain. General AI does not exist today. Companies like OpenAI aim to develop General AI, “systems that are generally smarter than humans.”

Recent headlines largely concern Generative AI models such as ChatGPT, Bard, and DALL-E, all of which are instances of Narrow AI. These models generate new content with notable versatility and creativity, but only within limited, specific domains (for example, ChatGPT is limited to natural language text, and DALL-E to images).
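
To make the "limited domain" point concrete, here is a minimal sketch of what calling a text-only Generative AI model looks like in code. It assumes the OpenAI Python SDK and an illustrative model name – assumptions for illustration, not details from this article. Whatever we ask, the model can answer only with text:

```python
# Minimal sketch: calling a text-only Generative AI model.
# Assumptions: the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # a text-in, text-out model: Narrow AI
    messages=[{"role": "user", "content": "Draw me a picture of a cat."}],
)

# Even when asked for a picture, a text-only model can respond only
# with natural language text -- images are outside its domain.
print(response.choices[0].message.content)
```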

Exhibit 1: How the line between humans and machines is blurring over time

Generative AI models are not new – the concept can be traced back to the 1950s. However, earlier versions were far less complex, constrained by limited computational power, scarce training data, and less sophisticated machine learning techniques. The latest Generative AI models, such as ChatGPT and DALL-E, produce markedly more diverse and believable outputs, enabled by greater availability of computational resources, far larger volumes of training data, and advances in deep learning techniques.

Why we are disappointed when today’s Generative AI models “lie”

The believability of the outputs generated by today’s latest Generative AI models is causing some confusion about what these models are – more human or more machine. ChatGPT can respond to our questions about practically any subject we can think of and responds each time with seemingly original, complex, and convincing answers that, historically, we only expected from other humans. This is an important point – for the first time in history, the average retail consumer is interacting with machines that are seemingly human-like. It becomes easy to believe that we may be interacting with another sentient being who responds authoritatively on practically any topic we ask about.

For the first time in history, the average retail consumer is interacting with machines that are seemingly human-like.

Sometimes, Generative AI systems spit out nonsensical, erroneous outputs that cannot be explained based on their training data or other inputs. The systems are said to have “hallucinated” (or “confabulated”), and these outputs are often what headlines point to when accusing Generative AI systems of lying. Hallucinations tend to occur either when a system encounters a scenario outside its training data or when it tries to infer patterns where there are none. We often do not realise that a Generative AI system has hallucinated unless we fact-check – after which we feel “lied” to.
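
Because hallucinations usually surface only after fact-checking, one rough way to probe for them is a self-consistency check: ask the model the same question several times and compare the answers. This sketch is an illustration under assumptions (the OpenAI Python SDK, an illustrative model name), not a method described in this article. Wildly divergent answers hint that the model may be inferring patterns where there are none:

```python
# Hedged sketch: a self-consistency probe for possible hallucinations.
# Assumptions: the OpenAI Python SDK and an illustrative model name.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def sample_answers(question: str, n: int = 5) -> list[str]:
    """Ask the same question n times at non-zero temperature."""
    answers = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative choice
            messages=[{"role": "user", "content": question}],
            temperature=1.0,  # encourage varied samples
        )
        answers.append(resp.choices[0].message.content.strip())
    return answers

# Consistent answers suggest recalled knowledge; contradictory answers
# are a hint (not proof) of hallucination.
print(Counter(sample_answers("Who wrote the 1897 novel Dracula?")))
```

Note that consistency alone cannot catch a falsehood the model repeats confidently, which is why human fact-checking remains the backstop.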

Exhibit 2: Our different reactions to errors made by GPS systems vs. Generative AI systems

Historically, when machines produced errors, we did not feel “lied” to. When a GPS system fails to give us accurate and reliable directions, we tend to say that it is faulty and ask for a refund, and companies issue product recalls when many GPS units from the same product line fail to deliver accurate directions.

Today, when Generative AI systems produce errors, we feel “lied” to because we simultaneously ascribe both machine-like and human-like characteristics to them. And companies building advanced Generative AI models like ChatGPT are not doing product recalls (at least, not yet, despite calls to pause development) when their models generate inexplicable outputs and make numerous newsworthy errors. Today’s advanced Generative AI models lack complete explainability – their outputs cannot always be explained or predicted based on their inputs and design. Depending on who you ask, this is a feature, not a bug. Generative AI models mark a milestone in the journey toward developing General AI systems, which would theoretically be able to learn new capabilities in new domains that humans did not originally programme them for. To learn genuinely new capabilities in new domains, an AI system must, by definition, lack complete explainability.

Today’s advanced Generative AI models lack complete explainability – their outputs cannot always be explained or predicted based on their inputs and design. Depending on who you ask, this is a feature, not a bug.

Models like ChatGPT already exhibit (limited) emergent capabilities – abilities the models were not intentionally programmed with. Sometimes these emergent capabilities are positive – for example, translating text into multiple languages despite not being explicitly trained to do so. Other times they are negative – for example, making up believable responses that are factually incorrect.

Our reactions to errors from GPS systems and Generative AI systems differ significantly because we hold different expectations of these machines. We expect GPS systems to be machines, period. From Generative AI systems, we expect both the error-free performance of machines that come with product warranties and the capacity for intentional deception – a human characteristic.

Today, when Generative AI systems produce errors, we feel “lied” to because we simultaneously ascribe both machine-like and human-like characteristics to them.

The second part of this piece explains what our future could look like, given how AI is rapidly getting entwined with more and more aspects of our lives.

A version of this article was first published on Metaverse Musings and can be read here.

Jaymin Kim
Director, Commercial Strategy at Marsh McLennan

Jaymin Kim is a commercial strategist and writes in her personal capacity. She is an advisor and frequent speaker on the emerging Metaverse and related fields including blockchain technology, digital assets, artificial intelligence, and governance. In her professional capacity, Jaymin is a director at Marsh McLennan. Previously, Jaymin led Innovation at a fintech startup, led Capital Advisory Services at an innovation hub that supports over 1,400 startup companies, and was a management consultant with a focus on financial services and information technology.
