The screenshots indicate that users would be able to select the gender and age of the chatbot, then choose its ethnicity and personality. For instance, the AI friend could be “reserved,” “enthusiastic,” “creative,” “witty,” “pragmatic” or “empowering.”
To further customize your AI friend, you can choose their interests, which will “inform its personality and the nature of its conversations,” according to the screenshots. The options include “DIY,” “animals,” “career,” “education,” “entertainment,” “music,” “nature” and more.
Once you have made your selections, you would be able to select an avatar and a name for your AI friend. You would then be taken to a chat window, where you could click a button to start conversing with the AI.
Instagram declined to comment on the matter. And of course, unreleased features may or may not eventually launch to the public, or the feature may be further changed during the development process.
The social network’s decision to develop, and possibly release, an AI chatbot marketed as a “friend” to millions of users carries risks. Julia Stoyanovich, the director of NYU’s Center for Responsible AI and an associate professor of computer science and engineering at the university, told TechCrunch that generative AI can trick users into thinking they are interacting with a real person.
“One of the biggest — if not the biggest — problems with the way we are using generative AI today is that we are fooled into thinking that we are interacting with another human,” Stoyanovich said. “We are fooled into thinking that the thing on the other end of the line is connecting with us. That it has empathy. We open up to it and leave ourselves vulnerable to being manipulated or disappointed. This is one of the distinct dangers of the anthropomorphization of AI, as we call it.”
When asked about the types of safeguards that should be put in place to protect users from risks, Stoyanovich said that “whenever people interact with AI, they have to know that it’s an AI they are interacting with, not another human. This is the most basic kind of transparency that we should demand.”
The development of the “AI friend” feature comes as controversies around AI chatbots have emerged over the past year. Over the summer, a U.K. court heard a case in which a man claimed that an AI chatbot had encouraged him to attempt to kill the late Queen Elizabeth II days before he broke into the grounds of Windsor Castle. In March, the widow of a Belgian man who died by suicide claimed that an AI chatbot had convinced him to kill himself.
Other social platforms have launched AI chatbots with mixed results. For instance, Snapchat launched its “My AI” chatbot in February and faced controversy for doing so without appropriate age-gating features, as the chatbot was found to be chatting with minors about topics like covering up the smell of weed and setting the mood for sex.
It’s not clear which AI tools Instagram would use to power the “AI friend,” but as generative AI booms, the social network’s parent company Meta has already begun incorporating the technology into its family of apps. Last month, Meta launched 28 AI chatbots that users can message across Instagram, Messenger and WhatsApp. Some of the chatbots are played by notable names like Kendall Jenner, Snoop Dogg, Tom Brady and Naomi Osaka. It’s worth noting that the launch of the AI personas wasn’t a surprise, given that Paluzzi revealed back in June that the social network was working on AI chatbots.
Unlike the “AI friend” chatbot, which can chat about a variety of topics, these interactive AI personas are each designed for a specific kind of interaction. For instance, the AI chatbot played by Kendall Jenner, called Billie, is designed to be an older sister figure who can give young users life advice.
The new “AI friend” chatbot that Instagram appears to be developing, by contrast, is geared toward more open-ended conversations.