
Where could AI take us?

Humans vs. AI: What does the future hold, and where will it lead?

Jaymin Kim
Director, Commercial Strategy at Marsh McLennan

Today’s ChatGPT has no human-like face or body, despite its human-like, believable responses. We interact with it via websites and apps on our two-dimensional laptop or mobile screens.

Tomorrow’s ChatGPT may take on a human-like form. The convergence of various existing and new technologies, including holograms, Mixed Reality (MR), and Generative and/or General AI, will further blur the line between machines and humans. We have already seen the rise of photorealistic avatars – three-dimensional digital versions of our physical selves that appear on two-dimensional screens – and hologram technology – three-dimensional digital projections that appear in our physical world. Mixed Reality refers to a blending of physical and digital realms, where digital experiences may be superimposed onto our physical world, and physical experiences may be superimposed onto our digital world. Some refer to work, life, and play in MR broadly as the “Metaverse” – an emerging reality where our “physical” and “digital” experiences become seamlessly merged – “phygital.”


In the coming decade or two, we may be putting on MR glasses to interact with holograms of our human colleagues and friends, as well as holograms of AI bots with three-dimensional faces and bodies. Today, many people are already in relationships with AI chatbots in their two-dimensional, on-screen form.

The emergence of AI-powered holograms will force humans to question what it means to be human and will raise new challenges for numerous fields, from ethics to cybersecurity. For example, what does it mean to be in a relationship – should we treat relationships with humans differently from relationships with AI? How do we prove whether we are human or AI? Identity fraud has always been, and will continue to be, an issue among humans, but in the coming decades, we may be confronting identity fraud between humans and AI bots. Imagine humans pretending to be AI bots, AI bots pretending to be humans, and everything in between.

Already, the line between AI and human agency is blurring. For example, in ongoing copyright infringement lawsuits related to Generative AI, a key point of contention is whether Generative AI-derived outputs can be considered human creations (as a result of human prompts) or are instead machine creations.

The human responsibility to verify AI outputs

We should pause and reflect on what Generative AI systems are. Generative AI systems are designed and built by humans to mimic specific aspects of human intelligence. For example, the deep neural networks underlying today’s latest Generative AI systems were inspired by the complex, interconnected networks of neurons in human brains.
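
The brain analogy is easiest to see at the level of a single artificial neuron. The sketch below is a hypothetical illustration in plain Python with NumPy, not drawn from any specific system discussed here: it shows the basic unit that deep neural networks stack by the billions – a weighted sum of inputs passed through a simple nonlinearity, loosely echoing how a biological neuron fires once incoming signals cross a threshold.

```python
import numpy as np

# Minimal sketch of an artificial "neuron": a weighted sum of inputs
# passed through a nonlinearity. Illustrative only - production
# Generative AI systems compose billions of such units in many layers.

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    activation = float(np.dot(inputs, weights)) + bias  # weighted sum
    return max(0.0, activation)                         # ReLU "firing"

# A tiny two-layer "network": outputs of one layer feed the next.
rng = np.random.default_rng(0)
x = rng.normal(size=4)                                   # 4 input signals
hidden = [neuron(x, rng.normal(size=4), 0.1) for _ in range(3)]
output = neuron(np.array(hidden), rng.normal(size=3), 0.1)
print(output)
```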

Today’s Generative AI systems are inherently prone to human error in two ways:

  1. Fallible humans are designing and developing these systems

When a user prompts ChatGPT with a question and ChatGPT returns a factually incorrect answer, the onus is on the human user to verify the validity of ChatGPT’s answers before believing them word-for-word. The need to verify what machines recommend is not a new concept. When a GPS system erroneously tells us to drive off what appears to be a cliff, most people implicitly know to override the recommendation. We should not be fooled by the authoritative, human-like responses of Generative AI models – they are ultimately machines made by humans to seem like humans.

  2. Human intelligence itself, which these systems mimic, is fallible

To me, AI hallucinations seem very human – people do not always say “I don’t know” when they factually do not know, and people sometimes perceive patterns where there are none (seeing shapes in clouds is known as pareidolia; overfitting is a rough statistical analogue). Generative AI systems’ emergent capabilities – their unpredictability and lack of complete explainability – are part of what makes them seem human. Do we blindly trust humans to tell the truth and be error-free? If our answer is no, then how could we blindly trust ChatGPT, which mimics specific aspects of humans, to always tell the truth and be free from error?

Prerequisites to human verification of AI outputs

At a minimum, we need to know when we are interacting with humans versus AI, and when we are reviewing outputs created with partial or complete AI assistance. As technologies like AI, holograms, and MR converge, it will become increasingly challenging to tell who we are interacting with – humans or AI – and what we are looking at – human or AI creations.

Today, we have the luxury of clarity: when we interact with ChatGPT, we know we are interacting with an AI chatbot behind a two-dimensional screen. This luxury will not last. It is already becoming difficult to tell human creations from AI creations, with AI-generated images winning photography awards. Laws mandating disclosure of identity and creation methods may be a prerequisite to ensuring that humans can discern between themselves and machines in the coming decades.


If you don’t blindly trust humans, don’t blindly trust AI

As we make further advancements in the field of AI, we should internalize the need to verify before believing. The concept of ChatGPT “lying” to us should not be a surprise. If AI systems seem more machine-like, we should verify what they say. If they seem more human-like, we should still verify what they say. Ironically, as AI chatbots take on three-dimensional hologram forms and it becomes increasingly difficult to verify who is human versus AI, we may become less trusting of each other and of AI.

A version of this article was first published on Metaverse Musings.

Jaymin Kim
Director, Commercial Strategy at Marsh McLennan

Jaymin Kim is a commercial strategist and writes in her personal capacity. She is an advisor and frequent speaker on the emerging Metaverse and related fields including blockchain technology, digital assets, artificial intelligence, and governance. In her professional capacity, Jaymin is a director at Marsh McLennan. Previously, Jaymin led Innovation at a fintech startup, led Capital Advisory Services at an innovation hub that supports over 1,400 startup companies, and was a management consultant with a focus on financial services and information technology.
