After taking time off, I returned this week to find my inbox flooded with news about AI tools, issues, missteps and adventures. And the thing that stood out was how much investment there is in having AI chatbots pretend to be someone else.
In the case of Meta, CEO Mark Zuckerberg expanded the cast of AI characters the tech giant's more than 3 billion users can interact with on popular Meta platforms like Facebook, Instagram, Messenger and WhatsApp. Those characters are based on real-life celebrities, athletes and artists, including musician Snoop Dogg, famous person Kylie Jenner, ex-quarterback Tom Brady, tennis star Naomi Osaka, other famous person Paris Hilton and celebrated English novelist Jane Austen.
"The characters are a way for people to have fun, learn things, talk recipes or just pass the time — all within the context of connecting with friends and family," company executives told The New York Times about all these pretend friends you can now converse with.
Said Zuckerberg, "People aren't going to want to interact with one single super intelligent AI — people will want to interact with a bunch of different ones."
But let's not pretend that pretend buddies are just about helping you connect with family and friends. As we know, it's all about the money, and right now tech companies are in a land grab pitting Meta against other AI juggernauts, including OpenAI's ChatGPT, Microsoft's Bing and Google's Bard. It's a point the Times noted as well: "For Meta, widespread acceptance of its new AI products could significantly increase engagement across its many apps, most of which rely on advertising to make money. More time spent in Meta's apps means more ads shown to its users."
To be sure, Meta wasn't the first to come up with the idea of creating personalities or characters to put a human face on conversational AI chatbots (see ELIZA, which was born in the mid-'60s). And it's an approach that seems to be paying off.
Two-year-old Character.ai, which lets you interact with chatbots based on famous people like Taylor Swift and Albert Einstein and fictional characters such as Nintendo's Super Mario, is one of the most visited AI sites and is reportedly seeking funding that would put the startup's valuation at $5 billion to $6 billion, according to Bloomberg. This week Character.ai, which also lets you create your own personality-driven chatbots, introduced a new feature for subscribers, called Character Group Chat, that lets you and your friends chat with multiple AI characters at the same time. (Now's your chance to add Swift and Mario to your group chats.)
But using famous people to hawk AI is only fun if those people are in on it — and by that I mean get paid for their AI avatars. Earlier this month, actor Tom Hanks warned people about a dental ad that used his likeness without his approval. "Beware!!" Hanks told his 9.5 million Instagram followers. "There's a video out there promoting some dental plan with an AI version of me. I have nothing to do with it."
Hanks in an April podcast predicted the perils posed by AI. "Right now if I wanted to, I could get together and pitch a series of seven movies that would star me in them in which I would be 32 years old from now until kingdom come. Anybody can now re-create themselves at any age they are by way of AI or deepfake technology ... I can tell you that there [are] discussions going on in all of the guilds, all of the agencies, and all of the legal firms to come up with the legal ramifications of my face and my voice — and everybody else's — being our intellectual property."
Of course, he was right about all those discussions. The Writers Guild of America just ended the writers strike with Hollywood after agreeing to terms on the use of AI in film and TV. But actors, represented by SAG-AFTRA, are still battling it out, with one of the sticking points being the use of "digital replicas."
OpenAI is rolling out new voice and image capabilities in ChatGPT that let you "have a voice conversation or show ChatGPT what you're talking about." The new capabilities are available to people who pay to use the chatbot (ChatGPT Plus costs $20 per month).
Says the company, "Snap a picture of a landmark while traveling and have a live conversation about what's interesting about it. When you're home, snap pictures of your fridge and pantry to figure out what's for dinner (and ask follow up questions for a step by step recipe). After dinner, help your child with a math problem by taking a photo, circling the problem set, and having it share hints with both of you."
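The consumer app aside, the same kind of multimodal request can be expressed through OpenAI's chat API once you have access to a vision-capable model. Here's a minimal sketch, assuming the openai Python package (v1.x) and an API key in your environment; the model name and image URL below are placeholders, not anything OpenAI prescribes for this feature:

```python
# Minimal sketch: asking a vision-capable OpenAI chat model about a photo.
# Assumes the `openai` Python package (v1.x) and OPENAI_API_KEY set in the environment.
# The model name and image URL are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # placeholder: any vision-capable chat model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's interesting about this landmark?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/landmark.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)

print(response.choices[0].message.content)
```

The structure is the point: the question and the image reference travel side by side in a single user message, and the model answers about both.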
So what's it like to talk to ChatGPT? Wall Street Journal reviewer Joanna Stern describes it as similar to the movie Her, in which Joaquin Phoenix falls in love with an AI operating system named Samantha, voiced by Scarlett Johansson.
"The natural voice, the conversational tone and the eloquent answers are almost indistinguishable from a human at times," Stern writes. "But you're definitely still talking to a machine. The response time ... can be extremely slow, and the connection can fail — restarting the app helps. A few times it abruptly cut off the conversation (I thought only rude humans did that!)"
A rude AI? Maybe the chatbots are getting more human after all.
Speaking of more humanlike AIs, a company called Fantasy is creating "synthetic humans" for clients including Ford, Google, LG and Spotify to help them "learn about audiences, think through product concepts and even generate new ideas," reported Wired.
"Fantasy uses the kind of machine learning technology that powers chatbots like OpenAI's ChatGPT and Google's Bard to create its synthetic humans," according to Wired. "The company gives each agent dozens of characteristics drawn from ethnographic research on real people, feeding them into commercial large language models like OpenAI's GPT and Anthropic's Claude. Its agents can also be set up to have knowledge of existing product lines or businesses, so they can converse about a client's offerings."
Humans aren't cut out of the loop completely. Fantasy told Wired that for oil and gas company BP, it's created focus groups made up of both real people and synthetic humans and asked them to discuss a topic or product idea. The result? "Whereas a human may get tired of answering questions or not want to answer that many ways, a synthetic human can keep going," Roger Rohatgi, BP's global head of design, told the publication.
So, the end goal may be to just have the bots talking among themselves. But there's a hitch: Training AI characters is no easy feat. Wired spoke with Michael Bernstein, an associate professor at Stanford University who helped create a community of chatbots called Smallville, and it paraphrased him thus:
"Anyone hoping to use AI to model real humans, Bernstein says, should remember to question how faithfully language models actually mirror real behavior. Characters generated this way are not as complex or intelligent as real people and may tend to be more stereotypical and less varied than information sampled from real populations. How to make the models reflect reality more faithfully is 'still an open research question,' he says."
Deloitte updated its report on the "State of Ethics and Trust in Technology" for 2023, and the 53-page report is available for download on its site. It's worth reading, if only as a reminder that the way AI tools and systems are developed, deployed and used is entirely up to us humans.
Deloitte's TL;DR? Organizations should "develop trustworthy and ethical principles for emerging technologies" and work collaboratively with "other businesses, government agencies, and industry leaders to create uniform, ethically robust regulations for emerging technologies."
And if they don't? Deloitte lists the damage from ethical missteps, including reputational harm, human damage and regulatory penalties. The firm also found that financial damage and employee dissatisfaction go hand in hand. "Unethical behavior or lack of visible attention to ethics can decrease a company's ability to attract and keep talent. One study found employees of companies involved in ethical breaches lost an average of 50% in cumulative earnings over the subsequent decade compared to workers in other companies."
Deloitte also found that 56% of professionals are unsure whether their companies have ethical guidelines for AI use, according to a summary of the findings by CNET sister site ZDNET.
One of the challenges in removing brain tumors is for surgeons to determine how much around the margins of the tumor they need to remove to ensure they've excised all the bad stuff. It's tricky business, to say the least, because they need to strike a "delicate balance between maximizing the extent of resection and minimizing risk of neurological damage," according to a new study.
That report, published in Nature this week, offers news about a fascinating advance in tumor detection, thanks to an AI neural network. Scientists in the Netherlands developed a deep learning system called Sturgeon that aims to assist surgeons in finding that delicate balance by helping to get a detailed profile of the tumor during surgery.
You can read the Nature report, but I'll share the plain English summary provided by New York Times science writer Benjamin Mueller: "The method involves a computer scanning segments of a tumor's DNA and alighting on certain chemical modifications that can yield a detailed diagnosis of the type and even subtype of the brain tumor. That diagnosis, generated during the early stages of an hours-long surgery, can help surgeons decide how aggressively to operate."
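Purely as a toy illustration of that general shape — not the published Sturgeon architecture — a classifier over sparse methylation calls might look like the sketch below. The layer sizes, feature encoding and class count are made up:

```python
# Toy illustration of a methylation-profile classifier, loosely in the spirit of
# Sturgeon. The layer sizes, features and data are invented; this is not the published model.
import torch
import torch.nn as nn

NUM_CPG_SITES = 10_000   # one input feature per CpG site (placeholder count)
NUM_TUMOR_CLASSES = 40   # placeholder number of tumor types and subtypes

class MethylationClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_CPG_SITES, 256),
            nn.ReLU(),
            nn.Linear(256, NUM_TUMOR_CLASSES),
        )

    def forward(self, x):
        # x encodes methylation calls: +1 (methylated), -1 (unmethylated), 0 (not yet read)
        return self.net(x)

model = MethylationClassifier()

# A mostly empty profile, as it would look early in a surgery: only the first
# few hundred sites have been read so far.
partial_profile = torch.zeros(1, NUM_CPG_SITES)
partial_profile[0, :500] = torch.randint(0, 2, (500,)).float() * 2 - 1

probs = torch.softmax(model(partial_profile), dim=-1)
print("Most likely class:", probs.argmax().item(), "confidence:", probs.max().item())
```

The zero entries stand in for sites that haven't been read yet, which is most of them early in an operation; the real system's challenge is making a confident call from exactly that kind of mostly empty profile.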
In tests on frozen tumor samples from prior brain cancer operations, Sturgeon accurately diagnosed 45 of 50 cases within 40 minutes of starting DNA sequencing, the Times said. It was then tested during 25 live brain surgeries, most of them on children, and delivered 18 correct diagnoses.
The Times noted that some brain tumors are difficult to diagnose, and that not all cancers can be diagnosed by way of the chemical modifications the new AI method analyzes. Still, it's encouraging to see what could be possible with new AI technologies as the research continues.
Given all the talk above about how AIs are being used to create pretend versions of real people (Super Mario aside), the word I'd pick for the week would be "anthropomorphism," which is about ascribing humanlike qualities to nonhuman things. But I covered that in the Aug. 19 edition of AI and You.
So instead, I offer up the Council of Europe's definition of "artificial intelligence":
A set of sciences, theories and techniques whose purpose is to reproduce by a machine the cognitive abilities of a human being. Current developments aim to be able to entrust a machine with complex tasks previously delegated to a human.
However, the term artificial intelligence is criticized by experts who distinguish between "strong" AI (able to contextualize very different specialized problems completely independently) and "weak" or "moderate" AI (which performs extremely well in its field of training). According to some experts, "strong" AI would require advances in basic research to be able to model the world as a whole, not just improvements in the performance of existing systems.
For comparison, here's the US State Department quoting the National Artificial Intelligence Act of 2020:
The term "artificial intelligence" means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.