V.I.K.I. from I, Robot, GLaDOS from Portal or HAL 9000 from 2001: A Space Odyssey: evil robots that develop a will of their own and turn against their creators have been a staple of stories for decades. At the same time, artificial intelligence has become an integral part of our everyday lives in recent years. We unlock our smartphones with facial recognition, let Netflix suggest movies and ask Google Maps for directions.
Skepticism arose when AI suddenly began to speak: The release of digital voice assistants such as Amazon’s Alexa or Apple’s Siri, as well as the chatbot ChatGPT from the company OpenAI, was accompanied by critical voices. For some, the whole thing reminded them a little too much of the robots from books, films and video games. Thanks to clever storytelling, AI companies have overcome this mistrust and their products have long since become a natural part of our lives. How did they manage that?
To understand how companies like Apple and Amazon have been able to allay people’s fears of AI through storytelling, we first need to understand where our reservations come from in the first place. The fascination with man-made consciousness is by no means new. What storytelling later resolved, storytelling also began: artificially created life has been a recurring horror element at least since Mary Shelley’s 1818 novel Frankenstein. In German-speaking countries, artificial beings appeared even earlier, in E.T.A. Hoffmann’s 1816 story The Sandman.
It would take until 1966 for the first “real” artificial intelligence to see the light of day: ELIZA was the name of a chatbot programmed by computer scientist Joseph Weizenbaum and modeled on a psychotherapist; it recognized keywords in the user’s input and reflected them back as questions. ELIZA gave rise to the so-called ELIZA effect: participants in the study in which ELIZA was tested treated her like a human conversation partner. Many of them found it difficult to acknowledge that ELIZA had no consciousness – some refused to accept this fact altogether.
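The pattern-matching approach behind ELIZA can be sketched in a few lines. This is an illustrative toy, not Weizenbaum’s original code: the rules, reflections and example phrases here are invented for the demonstration, but the mechanism – match a keyword pattern, reflect the user’s own words back as a question – is the one the 1966 program used.

```python
import re

# Keyword patterns and response templates, checked in order.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]

# Simple pronoun swaps so echoed phrases read naturally.
REFLECTIONS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(phrase: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def respond(sentence: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    # Fallback when no keyword matches, another ELIZA hallmark.
    return "Please tell me more."

print(respond("I need a holiday"))  # → Why do you need a holiday?
```

There is no understanding anywhere in this loop – only string matching – which is exactly why the ELIZA effect was so striking: even this mechanism was enough to make people treat the program as a conversation partner.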
ELIZA was thus the first program to (almost) pass the Turing test: its responses were so close to human that some participants could no longer tell the two apart. Even then, this raised uncomfortable questions: if AI can do everything we can do, and do it more intelligently and efficiently, what will keep us from being replaced by it? And if even we can no longer distinguish between AI and humans, what makes us human at all?
In scientific circles, the unpleasant feeling triggered by something that is recognizably non-human yet behaves as if it were human is known as the uncanny valley effect, or acceptance gap. Our brain can clearly categorize robots, and it can clearly categorize people – everything in between triggers discomfort or even fear. When a new AI is introduced, it is therefore particularly important to counteract the uncanny valley effect.
For this very reason, companies are increasingly moving away from the idea of designing AIs that are as humanoid as possible – especially visually. Pioneers in the industry such as Google, Apple, OpenAI etc. have recognized that it is much more important to emphasize at least one of two key factors in their communication: the human-machine interface or the technical aspects underlying the program.
Apple’s Siri was the first voice assistant to establish itself on the market. And not only that: Siri was the only one of its kind at the time. No previous program was able to both receive spoken commands and issue responses in the same way. When the assistant was first released for the iPhone 4S in 2011, Apple’s communications department had a lot to do to introduce Siri to the world. Hardly anyone was initially keen to use an application reminiscent of Samantha from Her, let alone integrate it into their everyday life.
Probably the most common way to make people less skeptical about something is to emotionalize it. For Apple, the emphasis on the human-machine interface was by far the most important component of communication when Siri was introduced. It wasn’t meant to be a replacement, it wasn’t meant to make anyone superfluous – as an assistant, it was meant to support iPhone owners in their everyday lives.
It was therefore clear that emotions had to be brought to the fore for Siri’s world debut. Accompanied by soft, hopeful music, the first Siri commercial shows, in quick cuts, people in various situations: a mother with a burst tire who needs help getting the children waiting expectantly in her car to the ballet performance on time; a young man who has to dress for an important event but has forgotten how to tie a bow tie; a child who wants to know what a weasel looks like…
Remarkably, each shot needs only one sentence to tell its story. What these sentences have in common: they are all voice commands to Siri. Every little story has a happy ending thanks to AI support. And so Siri is suddenly no longer a threatening, emotionless robot, but an approachable assistant who is immediately on hand in a wide variety of situations.
In recent years – not least thanks to voice-controlled digital assistants such as Siri – the acceptance and knowledge of artificial intelligence, voice generation and AI-supported environments has increased enormously. So when OpenAI released its chatbot ChatGPT in 2022, the company was lucky enough to be able to build on what Apple started with Siri in 2011.
However, the program is neither voice-controlled, nor is it designed to concisely prepare information from the web for its users. Instead, ChatGPT and its successor GPT-4 are able to conduct conversations, generate texts and write code at an unprecedented level – all on their own, not on the basis of a search engine.
The application’s interface and user experience feel remarkably sleek – elegant and simple, yet state of the art. None of the underlying complexity shows in the UI design. And complex it is: ChatGPT is a generative language model trained via deep learning, on a neural network with 175 billion parameters, to communicate as naturally as possible.
To put it simply: OpenAI has recreated a computer brain and taught this brain to speak by showing it, through a vast number of examples, how language works. The concept can be broken down far enough to be understood by the general public – and what is understood is no longer feared. OpenAI’s communication therefore has a clear focus: to present the technology behind its AIs in such a way that even people who don’t know what a Proximal Policy Optimization algorithm or reinforcement learning is can understand how they work.
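The core idea of “learning language from examples” can be illustrated with a deliberately tiny sketch: count which word tends to follow which in a training text, then generate new text by repeatedly predicting the next word. The corpus, function names and deterministic word choice below are invented for the demonstration – models like ChatGPT use neural networks with billions of parameters and sample from probability distributions instead – but the principle of predicting the next word from seen examples is the same.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for every word, how often each other word follows it."""
    words = corpus.lower().split()
    counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def generate(counts: dict, start: str, length: int = 5) -> str:
    """Generate text by repeatedly predicting the next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in counts:
            break
        # Deterministic for the demo: always take the most frequent
        # follower. Real models sample from a learned distribution.
        word = counts[word].most_common(1)[0][0]
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)
print(generate(model, "the", length=3))  # → the cat sat on
```

Scaling this idea up – from counting word pairs to a deep neural network that weighs entire contexts – is, in essence, the leap that OpenAI’s communication manages to explain to a lay audience.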
The field of artificial intelligence is booming. OpenAI’s ChatGPT was followed by Google’s Bard, GitHub’s Copilot and many more. It is difficult to predict where this will all lead – especially as it is still unclear whether and which EU requirements regarding AI will come into force in the coming years. But one thing is clear: these programs have become an integral part of our everyday lives, and the next AI phenomenon is sure to come. We can’t wait to see what stories it brings us.