In a Nutshell…
What do you call the uneasy feeling often experienced when watching a ventriloquist show or a CGI-rendered movie in which the characters appear human yet evoke anything but warmth? This unsettling effect marks the moment when a viewer's sense of familiarity plunges into the uncanny valley: the point where an object or simulation feels "not quite human" even as it strives to appear "perfectly human."
In 1970, Japanese roboticist Masahiro Mori coined the term "bukimi no tani," which translates to "uncanny valley" in English. Mori hypothesized a relationship between an object's degree of resemblance to a human being and the negative emotions it can provoke as it slides into uncanny territory.
In his graph charting viewer familiarity against an object's human likeness, Mori theorizes how a person's response can shift from empathy to antipathy when an object sits in the depths of the uncanny valley. Something that appears partly familiar yet fundamentally alien triggers a negative psychological response, including feelings of repulsion, eeriness, and coldness. Mori attributes these feelings to a "self-preservation instinct."
Karl F. MacDorman, an associate professor in the School of Informatics and Computing at Indiana University, says these instinctual emotions protect us not from inanimate objects that are obviously non-human but from things that are "exceedingly similar, such as corpses and related species." It's also known that objects in the uncanny valley provoke an amplified negative emotional response when movement is involved, e.g., viewing a bunraku puppet in motion.
While Mori initially coined the term to describe the negative emotions he experienced at the sight of unsettling wax figures, it finds new relevance today with the arrival of increasingly life-like user avatars and AI-driven virtual agents and influencers.
In the metaverse, we'll be able to traverse vast 3D open worlds, interact with virtual humans, and use personalized avatars that represent our physical form in virtual reality. As of now, the avatars that users adopt on platforms such as Meta's Horizon Worlds, Microsoft Teams, and Apple's iOS allow us to create stylized depictions of ourselves used to attend work meetings, socialize with friends, and play games.
These animated avatars are akin to the cartoonish graphical depictions most of us are comfortable with and have grown familiar with over the years. Low-fidelity graphic characters have permeated TV, console, and online platforms for decades, from Second Life to Nintendo's Mii characters to Grand Theft Auto Online.
The technology currently used to create avatars, virtual agents, and influencers is still some way from rendering convincingly authentic virtual entities. We can expect to bridge that gap sooner than we think, however, with the average user able to create photorealistic 3D avatars, virtual agents, and influencers within the next ten years.
With technology enabling us to strive ever closer toward the widespread creation of photorealistic avatars and AI-driven virtual agents, what happens when these avatars and agents enter the realm of the uncanny valley? Will life-like virtual agents and avatars hinder user experience in the metaverse? Or could the valley instead act as a useful failsafe that ultimately serves to protect users?
In the context of the metaverse, avatars and virtual agents that appear as low-fidelity, clearly fictitious characters do not invoke feelings of eeriness or uncertainty in the user; they fall on the near end of the uncanny valley spectrum. On the far end of the spectrum sit virtual agents that are superbly rendered, photorealistic depictions of virtual life, and thanks to their life-like realism, they also fail to evoke fear or uncertainty. The issue arises when virtual entities straddle the middle of the spectrum: the uncanny valley itself.
With virtual reality (VR) and augmented reality (AR) wearables intensifying our immersive experience in the metaverse, we can expect our interactions with avatars and virtually rendered agents to become more amplified and intimate. Should avatars and virtual agents persist in the uncanny valley, they may affect the user in more profound ways than virtual entities portrayed through 2D mediums.
In an interview with Kyodo News, Masahiro Mori explains how the environment in which a subject exists can determine the varying ways the uncanny valley phenomenon may affect them. "If the person understands that the space they are in is imaginary, I do not think this presents a problem, even if it is creepy."
Mori also suggests that creators of virtual entities should remain on the near side of the uncanny valley graph and not "risk getting closer to the other side," where robots and newly emerging AI-, VR-, and AR-rendered virtual entities appear perfectly human. Researchers believe that as virtual agents emerge from the uncanny valley and their likeness to humans is perfected, our trust in these agents may be exploited by platform providers to manipulate users in more intensive and intrusive ways.
We anticipate that virtual entities lingering in the uncanny realm will unsettle a percentage of users and perhaps interrupt the immersive metaverse experience. Yet there is an upside: as Mori noted, a person's "self-preservation" instinct could limit the opportunity for unethical practices that misuse increasingly photorealistic virtual entities.
Should virtual entities emerge on the other side of the uncanny valley, the eeriness and uncertainty they evoke could be neutralized by their near-perfect visual proximity to real humans. If this is achieved, it may facilitate seamless interactions between users and virtual agents, restoring feelings of safety and certainty. It should be noted, however, that those same feelings of safety and certainty may also be exploited.
Louis Rosenberg, CEO of Unanimous AI and pioneer in the field of AR and VR, shared his thoughts on the dangers of AI-driven agents and their potential ability to manipulate users, stating, "Personally, I believe the greatest danger of the metaverse is the prospect that agenda-driven artificial agents controlled by AI algorithms will engage us in 'conversational manipulation' without us realizing that the 'person' we are interacting with is not real."
This opens the door to unethical practices by corporate meta-platforms, which might deploy increasingly sophisticated AI-driven virtual agents to market products and services on behalf of third parties. If users are unable to distinguish interactions with human entities from those with artificial ones, perceptions become skewed, creating the opportunity to manipulate and convince users to make purchases based on their perceived trust in the agent.
The Proceedings of the National Academy of Sciences (PNAS) recently published a report revealing the ways "Artificial intelligence (AI)–synthesized text, audio, image, and video are being weaponized for the purposes of non-consensual intimate imagery, financial fraud, and disinformation campaigns." The report explains that "synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable—and more trustworthy—than real faces."
Rosenberg reiterates concern over the exploitative potential of virtual agents and avatars and the ways intimate personal information can be used to influence users in unprecedented ways, stating, "the AI agent that is trying to influence us could have access to a vast database about our interests and beliefs, purchasing habits, temperament, etc. So how do we protect against this? Regulation."
"It could be that they are all required to dress a certain way, indicating they are not real, or have some other visual cue. But an even more effective method would be to ensure that they actually don't look quite human as compared to other users," he explains. If, however, the zeal for creating photorealistic virtual entities is something developers are unwilling to compromise on, informing users when they are interacting with an AI-generated virtual agent could be an alternative solution.
Rachel McDonnell, associate professor in Creative Technologies at the School of Computer Science and Statistics at Trinity College Dublin, also shares her thoughts on the potential for photorealistic entities to be used in unscrupulous ways. She states that the danger may arise in "AI-driven video avatars or deep fake videos, where convincing videos can be created of one human, driven by the motion and speech of another."
Founder and CEO of SpaceX, Elon Musk, became the latest victim of deep fake technology when a video of him recently went viral urging people to invest in a phony cryptocurrency scheme. Released by the bogus trading platform BitVex, the deep fake footage showed the AI-driven entity claiming investment would yield "30% dividends every day for the rest of their life." With the likenesses of Donald Trump, Barack Obama, and now Elon Musk so easily used to mislead susceptible audiences, the long-held notion that "seeing is believing" is undermined.
To mitigate a threat actor's exploitation of potentially harmful AI-driven entities, McDonnell suggests using transparent identifiers such as watermarks that allow viewers to discern between authentic content and deep fakes. "Transparency around how avatars and videos are created will help overcome some of the ethical challenges around privacy and misrepresentation," she says. The same rules can apply to virtual influencers and virtual humans, whom we expect to become more prominent in metaverse spaces as we move forward.
For example, the CGI-rendered virtual influencer Lil Miquela has amassed quite the audience on Instagram, with her prevalence in fashion and music gaining her 3.1 million followers as of this year. So far, Miquela has collaborated with Prada, Calvin Klein, and Dazed Magazine. She even scored a role as an art curator for the virtual commerce platform Complexland. While these collaborations have the potential to influence real-world people and real-world trends, Lil Miquela's fictitious AI-driven origins weren't declared to the public for years.
Brands entering the virtual influencer market, hoping to sell their products and services in this uncharted vertical, should explicitly disclose when they're associating with a personality or influencer that is not a real person. As we move forward, we expect that the regulatory framework surrounding the use and disclosure of virtual agents will be defined and tightened to avoid opportunities for skewed user perception, deception, and manipulation.
While the creation of photorealistic avatars and virtual agents in the metaverse cannot, and should not, be avoided if we all wish to enjoy a fully fledged metaverse, we recommend that platform developers, CGI artists, and digital agencies creating virtual agents and avatars keep them within the uncanny valley when appropriate.
Rosenberg describes it best: "This is the most effective path because the response within us is visceral and subconscious, which would protect us most effectively from being fooled." He reiterates how regulatory parameters could be set up to make it easy to tell whether a user is interacting with an artificial entity or a real person: "In the metaverse, the simplest thing — like how a virtual persona's eyes move, or hair moves, or even just the speed of their motion (do they take longer to move than an actual human?) is enough to make them seem deeply unreal."
This way, a descent into the uncanny valley may act as a subconscious safeguard, triggering a protective barrier that instinctively alerts users that the entity they're interacting with may not be a genuine person. As we advance, brands and metaverse developers should guard against virtual entities becoming available without adequate parameters that adhere to the existing regulatory framework. Additionally, for virtual agents driven by artificial intelligence, stringent parameters will need to be established, given AI's prevalence in biometric technology and its ability to use and misuse deeply personal data.
A PEGA survey that asked over 6,000 people across six countries about their thoughts on artificial intelligence revealed that only 33% of respondents were aware they were engaging with AI-driven technology, while 77% had in fact interacted with it. These concerning stats show that most people don't know when they are interacting with AI. Should a life-like AI-driven virtual agent access one's data to sell a product or service, it will be difficult for the user to identify such unscrupulous manipulation. If these issues aren't addressed, the incognito use of virtual agents could create legal problems that edge into unethical territory.
To sum up, the question remains whether virtual agents, avatars, and influencers should stay within the uncanny valley or graduate into photorealism. We imagine the context in which photorealistic avatars, agents, and influencers exist will help answer that question. For virtual eLearning, therapeutic, and training experiences, photorealism can go a long way toward helping the user feel supported and comforted.
In e-commerce, interpersonal activities, and scenarios involving the transfer or exchange of sensitive personal data, virtual agents that remain in the uncanny valley could usefully keep users conscious of their behavior, the potential for exploitation, and the trust they place in those they interact with.
Sharing his thoughts on the future of the metaverse, Masahiro Mori calls for an optimistic approach: "I hope (those) involved in creating it will make something healthy for the happiness of humanity."
While a descent into the uncanny valley may serve to protect users from unscrupulous entities in the metaverse, the potential for similarly menacing meta-experiences in the form of liminal spaces may work to inhibit the user experience. Check out our article 'What Does Liminality Mean For The Future Of The Metaverse?' to find out how developers may unintentionally create liminal spaces, and how familiarity and presence are crucial components in avoiding such outcomes.