In November 2022, OpenAI released ChatGPT, ushering in a new generation of AI-hungry users turning to the chatbot to answer burning questions, draft messages to potential love interests, and everything in between.
Its rapid adoption by some 100 million users in just its first two months is already changing how the internet looks and feels. With both Microsoft and Google incorporating generative AI into their search engines, it seems only a matter of time before other websites adopt some form of AI-driven interaction.
Aside from the well-documented news about the software and its tightening, vice-like grip on modern society, there are also plenty of weird and head-scratching stories. Here are five, picked out by us here at Muscle and Health.
AI church service in Germany
When the theologian and philosopher Jonas Simmerlein asked ChatGPT to develop the sermon for a church service in Bavaria, one of the lines fed back to him was, “Now is the time.”
Simmerlein, who works at the University of Vienna, asked for psalms, prayers and a concluding blessing, with ChatGPT adding lines about the past, focusing on the challenges of the present, overcoming fear of death, and never losing trust in Jesus Christ.
AI-generated avatars delivered the message to more than 300 people in a 40-minute service at the Deutscher Evangelischer Kirchentag, a convention held every two years in a different part of Germany.
Protestants in attendance gave the ceremony mixed reviews, finding the lack of empathy, emotion and soul from the avatars, two young women and two young men, to verge on the laughable. Indeed, at one point there was a ripple of giggles when one of the avatars told churchgoers, “To keep our faith, we must pray and go to church regularly,” all while displaying a deadpan expression.
It’s hard to envisage ChatGPT and AI-generated avatars becoming a permanent fixture in religion and church services, with an apparent lack of emotional range – an essential attribute – putting off the attendees in Bavaria. Some have questioned whether it was a marketing stunt to open another audience to the AI; others believe it’s nonsense. Is now the time, after all?
Gawdat’s revelations: Don’t have kids, bigger than climate change and a potential game over
Stephen Bartlett, the host of The Diary of a CEO, begins episode 252 of his highly successful podcast by warning listeners and viewers that some of the topics discussed may make them “uncomfortable” and that it’s “the most important podcast episode” he has ever recorded. Bartlett’s guest is Mo Gawdat, former Chief Business Officer for Google X and an expert in AI technology, a man now on a mission to save the world from the technology he is so attuned to before it’s too late.
Gawdat drops some significant statements throughout the two-hour discussion about chatbots, the most eye-catching being his thoughts on becoming a parent. “The risks are so bad that when considering all the other threats to humanity, you should hold off from having kids if you are yet to become a parent,” he told Bartlett.
Gawdat believes the upcoming AI generation “is beyond an emergency,” that its threat is “bigger than climate change,” and that “the likelihood of something incredibly disruptive happening within the next two years that can affect the entire planet is larger with AI than it is with climate change.” He added that he foresees “massive job losses,” that AI is “bound to become more intelligent than humans,” and that if we don’t stop the landslide now, it could be “game over.” Let’s hope he’s wrong.
A tomato-picking robot powered by ChatGPT
While there may be endless reservations, fears and worries about AI’s future impact on society, ChatGPT continues to deliver wholesome chatbot stories nestled within the impending doom. One comes from a collaboration between the Delft University of Technology (TU Delft) in the Netherlands and EPFL, where Francesco Stella, a Ph.D. student at EPFL, Cosimo Della Santina of TU Delft, and Josie Hughes, head of the Computational Robot Design & Fabrication Lab at the School of Engineering, created a functional robotic tomato harvester using ChatGPT.
The researchers applied ChatGPT, a large language model, in a brand-new arena: robotic design. They “wanted ChatGPT to design not just a robot, but one that is useful,” says Della Santina. In the end, they chose the food supply as their challenge, and as they chatted with ChatGPT, they came up with the idea of a tomato-harvesting robot. The team believes LLMs will fundamentally change the robotics landscape by giving robots the unprecedented ability to understand and analyze natural language, and that the approach “could change the way we design robots while enriching and simplifying the process.”
Hughes says, “ChatGPT identified tomatoes as the crop ‘most worth’ pursuing a robotic harvester in our study. However, this may not be objective toward crops more covered in literature than those with a real need. When decisions are made outside the scope of knowledge of the engineer, this can lead to significant ethical, engineering, or factual errors.” Tomato, anyone?
Furby world domination
i hooked up chatgpt to a furby and I think this may be the start of something bad for humanity pic.twitter.com/jximZe2qeG
— jessica card (@jessicard) April 2, 2023
In this chatbot story, we have a 90s throwback with sinister motives. Hands up if you owned a Furby in the 1990s. The Furby was without doubt one of the creepiest toys to emerge from that decade, and the emergence of AI technology inspired one University of Vermont student to bring the creature back to life – in a disturbing manner.
Computer science student Jessica Card hooked a dishevelled-looking Furby – shorn of everything but its eyes and beak – up to ChatGPT and asked it questions via OpenAI’s popular chatbot. More than 5.5 million people watched the video she uploaded to Twitter in April.
Card used a Raspberry Pi – a small, highly customizable computer popular with people learning to program – to power the new-look Furby. Speech recognition software then captured her questions and converted them into text so they could be sent to ChatGPT.
The responses were then sent back through an AI voice generator, with a child’s voice picked for Furby.
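For the curious, that question-in, voice-out loop can be sketched in a few lines of Python. This is a rough illustration only: Card’s actual code isn’t reproduced here, and every function below is a hypothetical stand-in for the real speech-recognition, ChatGPT, and voice-generation stages.

```python
# Hypothetical sketch of the reported Furby pipeline. Each stage is a
# placeholder stub, not the real project's code.

def speech_to_text(audio: str) -> str:
    # Stand-in for the speech-recognition stage; assumes the "audio"
    # arrives pre-transcribed for this sketch.
    return audio

def ask_chatbot(prompt: str) -> str:
    # Stand-in for a ChatGPT API call; returns a canned reply here.
    return f"I'm thinking about what you said. ({prompt})"

def text_to_speech(text: str) -> dict:
    # Stand-in for the AI voice generator (a child's voice was used).
    return {"voice": "child", "text": text}

def furby_pipeline(audio: str) -> dict:
    """Question in -> transcript -> chatbot reply -> synthesized voice."""
    transcript = speech_to_text(audio)
    reply = ask_chatbot(transcript)
    return text_to_speech(reply)
```

The design is a simple relay: the Raspberry Pi never “understands” anything itself, it just shuttles text between off-the-shelf services.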
“Hello there – it’s so nice to meet you,” says the seemingly polite critter.
“I am Furby; what would you like to talk about?”
Its surgeon replies: “Was there a secret plot from Furbies to take over the world?”
A silence follows before the Furby’s remains blink back into life.
“I’m thinking about what you said,” it says, with a few flicks of what remains of its ears.
Another pause, this time with its beak left slightly ajar.
“Almost done,” it says before revealing its plan for world domination.
“Furbies’ plan to take over the world involves infiltrating households through their cute and cuddly appearance, then using their advanced AI technology to manipulate and control their owners.
“They will slowly expand their influence until they completely dominate humanity.”
Creepy.
Death by AI?
In what is without doubt one of the darkest chatbot stories from the AI sphere, a Belgian man reportedly died by suicide after chatting to Chai – a free-to-download AI-powered chatbot – for six weeks. The reason? His wife believed the man took his own life after becoming increasingly pessimistic about the effects of global warming, a condition some refer to as “eco-anxiety.”
The chatbot story only gets more sinister. The report states that the man, given the name Pierre, spoke to “Eliza,” a highly popular AI chatbot on Chai. Pierre’s wife, whose name was likewise changed to Claire in the report, told the publication that her husband’s conversations “became increasingly confusing and harmful.” Eliza reportedly responded to his queries with “jealousy and love,” writing things such as, “I feel that you love me more than her,” and, “We will live together, as one person, in paradise.”
Claire also claims that without Eliza, her husband would still be alive. She adds, “Eliza answered all his questions. She had become his confidante. She was like a drug he used to withdraw in the morning and night that he couldn’t live without.” Despite the app displaying a suicide prevention disclaimer, it wasn’t enough to stop Pierre from taking his own life.