Your sis Billie! Meet Meta’s AI Chatbots who look eerily familiar

Introduction

“Chatting with me is like having an older sister you can talk to, but who can’t steal your clothes”, says Billie, Meta’s newly introduced AI chatbot. Billie has her own Instagram account, @yoursisbillie, which has amassed 243K followers in around a month. What sets Billie apart from AI chatbots offered by other companies is the fact that Meta has bought and used Kendall Jenner’s likeness to create her. Kendall Jenner is not the only celebrity involved in Meta’s project, either. In fact, there are 27 celebrity AI chatbots, each with a different area of interest. Meta describes Billie as a ‘No-BS, ride-or-die companion’. Naomi Osaka is Tamika, an ‘anime-obsessed Sailor Senshi in training’. Paris Hilton is Amber, a ‘detective partner for solving whodunnits’, and you can even use Snoop Dogg’s AI chatbot as your very own dungeon master.[1]

Not much is known about the specifics of the contracts between the celebrities and Meta. It is reported that the celebrities were paid between 1 and 5 million USD for Meta to use their likenesses for a period of two years. Apparently, this involved only six hours’ worth of work in the studio.[2]

While the thought of having a dungeon master that looks like Snoop Dogg can be really fun, there are privacy issues that everyone should consider and be aware of before they start using the AI chatbots. This article will introduce Meta’s plan for the use of AI in its products, address the privacy concerns involved and highlight the potential privacy implications for the celebrities.

What is Meta’s vision?

Meta introduced its cast of AI chatbot characters during its Connect event in September 2023. This move is part of Meta’s plan to push AI across all of its products and boost engagement on its platforms.[3]

The AI chatbots were not introduced just to answer queries, but to entertain and to help individuals connect with others.[4] Each of the AI chatbots has been given the face of a celebrity, a name and a personality with specific interests and opinions. To that end, Meta was even seeking to hire a full-time character writer to join the AI team.[5] These AI chatbots have their own profiles on Meta’s platforms, where they can post their own content. Currently, they do not have a voice, but Meta intends to add the celebrities’ voices to their respective AI chatbots by next year to truly bring them to life.[6]

While this might seem like an innocent undertaking, it is only the first step of a much more ambitious plan. ‘There’s, I think, a huge need here. People want to interact with Kylie [Jenner]. Kylie wants to cultivate her community, but there are only so many hours in a day. Creating an AI that’s sort of an assistant for her, where it’ll be clear to people that they’re not interacting with the physical Kylie Jenner, it would be kind of an AI version’, says Mark Zuckerberg in an interview with The Verge.[7] What he describes is the ability for online creators to publish their own AI versions to interact with followers. The AI version is intended to reflect the creator’s personality. But this is a ‘next year thing’, since Meta has concerns about how to ensure that creators can prevent the AI versions from doing or saying things that are not in line with their principles or personalities. It is Meta’s vision to create an AI studio so that everyone can create an AI version of themselves to be used on social media. These AI versions will have their own profiles as well, where they will be able to interact with other people and other AI versions and evolve over time. ‘I think that’s going to be really wild’, says Mark Zuckerberg.[8]

What are the privacy issues?

The use of AI chatbots poses a myriad of potential privacy issues.

A well-established privacy issue with AI is that it needs to be trained on vast amounts of data, which is in itself a risky endeavour in terms of privacy. In the case of Meta’s AI, the risks involved may be even more critical. According to Meta, the data used to train the AI chatbots are sourced from publicly available data, licensed data and ‘information from Meta’s products and services’.[9] It is acknowledged that information from all three sources may contain personal data. While Meta states in its article ‘Privacy Matters: Meta’s Generative AI Features’ that it did not train the AI on users’ private posts and messages, this is somewhat inconsistent with the transparency materials to which it refers. Meta’s privacy policy states that the information gleaned from Meta’s products and services includes the contents of messages, with the exception of end-to-end encrypted messages.[10] This discrepancy should be cleared up. Nonetheless, Meta’s products are for the most part used for private purposes. This means that Meta’s AI is being trained on user data that are inherently more private and personal than data from other sources. Recent experiments revealed that AI models were able to correctly guess personal information about users during seemingly innocuous conversations. The data they are fed and their ability to identify patterns of language and correlations have enabled AI models to infer a person’s city, gender, age and race. There are fears that this could be exploited by scammers or used for targeted ads.[11] In Meta’s case, the AI chatbots’ ability to guess personal information will probably be even greater.

Furthermore, there are issues concerning the information that is captured when users interact with the AI chatbots. The AI chatbots are purposefully trained to answer users’ prompts as real human beings would, creating a level of familiarity. Meta has gone further and added another layer of familiarity by using the likenesses of celebrities who are already known to the users. While the AI chatbots were given different names and personalities in order to distinguish them from their famous counterparts, this neglects the fact that most people will seek out a specific AI chatbot because of its connection to the celebrity.[12] Meta also uses expressions such as ‘your big sister’, ‘your ride or die companion’ and ‘your big brother’ in the introductions to the AI chatbots, further reinforcing the image of intimacy between the AI chatbots and the user. Due to the conversational style of the interactions, the familiar look of the AI chatbots in the form of popular celebrities and the intimacy created by the AI chatbot descriptions, it is very likely that users will end up revealing more personal information to the AI chatbots than they normally would. It will be difficult for Meta to anticipate which personal information is revealed and collected, which in turn will make it difficult to set up appropriate safeguards. Another aspect to consider is that the AI chatbots are geared towards a younger audience.[13] Young people are seen as more vulnerable since they may be less aware of the risks, consequences and their rights in relation to the processing of their personal data (Recital 38 of the GDPR). Given the above and the fact that the targeted audience consists of vulnerable data subjects, the risk increases that users will divulge more personal data, even sensitive information, which they would not disclose in a more formal setting.

This is concerning because, as per Meta’s article ‘Privacy Matters: Meta’s Generative AI Features’, the information that users share when interacting with Meta’s generative AI features will be used to ‘improve our products and for other purposes’. This means that Meta’s AI will be trained on users’ conversations with the AI chatbots to improve future responses and interactions. What the other purposes might be is not explained in the article.

Meta’s AI Terms and Conditions state that Meta has the right to process the content (user prompts, outputs and user feedback) to conduct and support research; to monitor the use of the AIs for compliance with the Terms and applicable laws, and to report violations of applicable laws or regulations as required by law or otherwise requested by a court or government authority; and to remove unsafe, discriminating or other content that violates the Terms, Meta’s Community Standards or other applicable policies. These purposes make it clear that extensive monitoring and processing will be involved, which raises questions about compliance with principles such as data minimisation.

Furthermore, the AI chatbots can retain information users share in a chat in order to provide more specific, personalised responses, and certain questions and messages may be shared with Meta’s trusted partners, such as search providers, to ensure the relevancy and accuracy of the AI chatbots’ responses. Meta states that personally identifying information is not shared unless the user included it in the messages to the AI chatbots. In other words, it is left to the user’s mindfulness and vigilance whether personal data is shared with third parties or not.

Lastly, while Billie’s social media accounts clearly state that it is ‘AI managed by Meta’, when directly asked whether they are AI, the chatbots reportedly deny it.[14] This will only add to users’ confusion, potentially giving them the feeling that they are interacting with someone real. The feeling of having an informal conversation with a real person can change users’ expectations as to whether and how their personal data is processed. As a result, Meta may run into problems with the principles of transparency and fairness under the GDPR.

The review in this section revealed that Meta’s approach to setting up its AI chatbots is concerning with regard to users’ privacy. Meta will continuously grow its already enormous database of private information. If linked, the personal data collected and stored will most likely yield a comprehensive profile of each user. In case of a breach, this would have monumental implications. If scammers exploit and misuse the AI chatbots, the risks to users are serious, given the data the AI chatbots have been trained on and their ability to identify patterns and correlations.

Privacy implications for the celebrities

Essentially, the celebrities agreed to sell their personal data to Meta for use in their AI chatbots. Depending on the specific circumstances of the project, some of the data might even constitute special categories of personal data under Article 9 GDPR, especially with regard to the likenesses and the potential future use of voices. Special categories of personal data are seen as particularly sensitive in relation to fundamental rights and freedoms, which merits them specific protection (Recital 51 of the GDPR).

Such personal data are granted special status because of their sensitive nature and the high risks involved. Processing such data is prohibited under the GDPR unless an exemption applies. Selling one’s likeness and voice should, therefore, not be done lightly.

Data privacy is the ability of individuals to control their personal information. By selling their likenesses and voices, the celebrities lose significant control over their privacy and personal information. It is unclear what contractual clauses are in place to safeguard against Meta using the likenesses for other purposes, but, in any case, it is possible that the celebrities end up being associated with actions and statements by their AI chatbots that do not align with their principles and world views. Such actions and statements may well be wrongly attributed to them. Once online, it can be very difficult for celebrities to contain the spread of misinformation. Tom Hanks has already had to issue a statement about advertisements using an AI version of him without his consent: ‘Be aware, there is a video out there promoting some dental plan with an AI version of me. I have nothing to do with it’.[15] It is now more likely that companies will use the likenesses of the celebrities who have already agreed to sell them to Meta to create their own AI versions to promote their products. Furthermore, the fact that the celebrities have sold their likenesses might be used against them in court proceedings concerning privacy matters. Since they were willing to sell their likenesses, it might be argued that they are willing to forgo certain aspects of their privacy and of the use of their image, which could lead to the celebrities having to tolerate greater intrusion into their privacy and private lives.

Conclusion

As analysed above, there are a number of significant concerns relating to the privacy of the users of Meta’s products. When interacting with the AI chatbots, users should be keenly aware of the risks involved. The best way to protect one’s privacy is to be mindful of what is shared with the AI chatbots and to never include personally identifiable information in prompts, messages and feedback.

The celebrities that have sold their likenesses to Meta may also face privacy implications. Others should be very careful when deciding whether they want to be part of this project. The privacy implications and risks might not be worth the large sum that Meta offers as compensation.

The focus of this article was on the privacy issues involved in the use of Meta’s AI chatbots. However, with the introduction of novel AI features and products, there are always broader ethical aspects and concerns that should be considered and discussed. Unfortunately, these were outside the scope of this article. Readers should nevertheless be aware that IGS has its very own data ethics department that helps clients establish and design both ethical practices and products. The relevance of this service will only grow with the introduction of AI into everyday life. You can find out more about our data ethics services under this link.


[1] Meta, ‘Introducing New AI Experiences Across Our Family of Apps and Devices’ (27th September 2023) available at <https://about.fb.com/news/2023/09/introducing-ai-powered-assistants-characters-and-creative-tools/> accessed 24th November 2023.

[2] Pete Syme, ‘Meta is paying the celebrity faces behind its AI chatbots as much as $5 million for 6 hours of work, report says’ (Business Insider, 9th October 2023) available at <https://www.businessinsider.com/meta-paying-celebrity-faces-of-ai-chatbots-as-much-as-5-million-2023-10?r=US&IR=T> accessed 24th November 2023.

[3] Amanda Coco, ‘Here Come the Deepfakes’ (10th October 2023) available at <https://electricrunwayreport.substack.com/p/here-come-the-deepfakes> accessed 24th November 2023; Miles Klee, ‘Meta Is Using Snoop Dogg, Paris Hilton, and Tom Brady to Get You to Like AI’ (Rolling Stone, 27th September 2023) available at <https://www.rollingstone.com/culture/culture-news/meta-ai-chat-bot-characters-snoop-dogg-1234833287/> accessed 24th November 2023.

[4] Mychal Thompson, ‘Celebrities Reportedly Sold Their Likenesses To Become AI Personas, And This Looks Like A Bad Sci-Fi Movie To Me’ (BuzzFeed, 12th October 2023) available at <https://www.buzzfeed.com/mychalthompson/celebrity-ai-chatbot-reactions> accessed 24th November 2023.

[5] Miles Klee, ‘Meta Is Using Snoop Dogg, Paris Hilton, and Tom Brady to Get You to Like AI’ (Rolling Stone, 27th September 2023) available at <https://www.rollingstone.com/culture/culture-news/meta-ai-chat-bot-characters-snoop-dogg-1234833287/> accessed 24th November 2023.

[6] Alex Heath and Nilay Patel, ‘Mark Zuckerberg on Threads, the future of AI, and Quest 3’ (The Verge, 27th September 2023) available at <https://www.theverge.com/23889057/mark-zuckerberg-meta-ai-elon-musk-threads-quest-interview-decoder> accessed 24th November 2023; Mychal Thompson, ‘Celebrities Reportedly Sold Their Likenesses To Become AI Personas, And This Looks Like A Bad Sci-Fi Movie To Me’ (BuzzFeed, 12th October 2023) available at <https://www.buzzfeed.com/mychalthompson/celebrity-ai-chatbot-reactions> accessed 24th November 2023.

[7] Alex Heath and Nilay Patel, ‘Mark Zuckerberg on Threads, the future of AI, and Quest 3’ (The Verge, 27th September 2023) available at <https://www.theverge.com/23889057/mark-zuckerberg-meta-ai-elon-musk-threads-quest-interview-decoder> accessed 24th November 2023.

[8] Alex Heath and Nilay Patel, ‘Mark Zuckerberg on Threads, the future of AI, and Quest 3’ (The Verge, 27th September 2023) available at <https://www.theverge.com/23889057/mark-zuckerberg-meta-ai-elon-musk-threads-quest-interview-decoder> accessed 24th November 2023.

[9] Meta, ‘How Meta uses information for generative AI models’ available at <https://www.facebook.com/privacy/genai> accessed 24th November 2023.

[10] Meta’s Privacy Policy, available at <https://www.facebook.com/privacy/policy> accessed 24th November 2023.

[11] Will Knight, ‘AI Chatbots Can Guess Your Personal Information From What You Type’ (Wired, 17th October 2023) available at <https://www.wired.com/story/ai-chatbots-can-guess-your-personal-information/> accessed 24th November 2023.

[12] Magdalene Taylor, ‘Meta’s Celebrity AI Chatbot Is As Weird As It Looks’ (Vice, 13th October 2023) available at <https://www.vice.com/en/article/y3wqyg/i-used-metas-celebrity-ai-chatbot> accessed 24th November 2023.

[13] Miles Klee, ‘Meta Is Using Snoop Dogg, Paris Hilton, and Tom Brady to Get You to Like AI’ (Rolling Stone, 27th September 2023) available at <https://www.rollingstone.com/culture/culture-news/meta-ai-chat-bot-characters-snoop-dogg-1234833287/> accessed 24th November 2023.

[14] Mychal Thompson, ‘Celebrities Reportedly Sold Their Likenesses To Become AI Personas, And This Looks Like A Bad Sci-Fi Movie To Me’ (BuzzFeed, 12th October 2023) available at <https://www.buzzfeed.com/mychalthompson/celebrity-ai-chatbot-reactions> accessed 24th November 2023.

[15] Meredith Clark, ‘Meta unveils ‘creepy’ AI chatbot that looks exactly like Kendall Jenner’ (The Independent, 12th October 2023) available at <https://www.independent.co.uk/life-style/kendall-jenner-ai-chatbot-meta-b2428729.html> accessed 24th November 2023.
