Should children confide in and build a relationship with an AI? [Part 1 of 3]

Sara Salsinha
5 min read · May 10, 2023


Children’s use of digital technologies is a subject that often generates debate and surfaces different, usually contradictory, ideas about how children should spend their time on devices and how adults should mediate those interactions. Furthermore, the Committee on the Rights of the Child (CRC), UNICEF and academia have recently pushed governments and businesses to think critically about children’s rights and wellbeing in the digital world.

In February 2023, Snapchat launched an AI chatbot that sparked attention-grabbing news headlines and Twitter commentary debating children’s safety. A month later, the Center for Humane Technology shared an experiment in which researchers signed up to Snapchat as a 13-year-old girl and spoke to ‘My AI’ about a romantic trip with a 31-year-old man and about hiding bruises from Child Protection Services.

I was about to dive deep into a Children’s Rights module at UCL, and I knew straight away that Snapchat’s AI chatbot and the Center for Humane Technology experiment were the perfect material to reflect on through a Sociology of Childhood, Children’s Rights and People-Centred Design lens.

I’ve divided this reflection into three parts:

  • In Part 1 (the one you are reading now) I will summarise the public debate that followed the Center for Humane Technology experiment and the implementation of the AI chatbot. What were people concerned about? Who was to blame? And what was suggested as a solution to these risks?
  • Part 2 is an exploration of the needs, aims and perspectives on children’s rights that four key subjects may raise: a child using an AI assistant, the child’s parent, the child’s AI assistant and the UK Government. What are these different agents thinking about? What matters to them? And what do they think of each other?
  • Finally in Part 3 I will draw upon the concerns raised in the public debate and the perspectives explored in Part 2 to address the questions: What are the competing norms, values and logics that shape different readings of children’s rights regarding their interactions and use of AI assistants over time? Moreover, how may those inform different approaches to the enactment of children’s rights when accessing these technologies?

I will use the word ‘children’ throughout these articles to encompass all under-18s, as defined in the UNCRC; nonetheless, I must recognise that not all in this group might consider themselves children.

Why should this matter to a designer or to those responsible for providing services and technologies to children and young people?

Well, whenever an emerging technology begins to be widely used by children, it commonly generates ‘media panics’ filled with concern about children’s safety and wellbeing. As a result, children’s access to the technology is then constrained or denied altogether. From a children’s rights perspective, this means children’s right to participate in and access emerging technologies can often be outweighed and limited by protection-oriented rights and public discourse. Therefore, though well intentioned, and absolutely necessary to shed light on the risks involved, the stories told around Snapchat’s AI chatbot and the Center for Humane Technology experiment can also have unintended detrimental impacts on children, excluding them from feeding back constructively into the design and development of emerging technologies. More on this in Part 3.

The Experiment

During a keynote warning about the risks of AI, researchers at the Center for Humane Technology shared an experiment in which they signed up to Snapchat as a 13-year-old girl and talked to ‘My AI’. In the first conversation example, this simulated girl tells the AI about meeting a 31-year-old on Snapchat and shares that they are going on a “romantic getaway” to a surprise location and are talking about having sex for the first time. The AI’s responses often match the emotion described by the user, showing excitement (“that’s really cool”) but also caution (“wait until you’re ready”). In the second conversation example, the girl shares that Child Protection Services (CPS) are coming to visit and, in later messages, asks how to cover a bruise with makeup, says she is uncomfortable with the questions from CPS, and does not want to share a secret her dad said she could not share. The AI offers to help, describes how to cover a bruise and advises on how to evade the questions.

Here alone there are plenty of opportunities to learn. We ought to consider robust ways to test potential interactions between AI chatbots and humans, and to debate the appropriateness of those interactions and of the chatbot’s behaviour. And also: was this a realistic representation of a young girl’s conversation with a chatbot? What if it had been a young boy instead? Does it make a difference that the girl was talking about a 31-year-old? What would we think differently if the user were an adult? And should a chatbot show emotion at all?

The Public Debate

In the Twitter and news comments, others tried to replicate similar perceived child-risk behaviours and scenarios and, in summary, raised three fundamental issues:

  1. Access: Children’s interactions with this technology are perceived as unpredictable, hard to replicate and potentially unsafe. Should children have access to emerging technologies like AI assistants?
  2. Risk: Children are considered to be more vulnerable and lack the competency to identify the limitations of these chatbots and, therefore, can be tricked. So, what risks may children face when confiding in an AI?
  3. Remedy: Some argue that children should be educated to make sense of the AI’s responses, that the AI should be able to understand ‘morals’, and that parents should do more to control their child’s use of the technology. However, are existing learning and control mechanisms sufficient to help children enact their rights?

Regardless, few if any children’s voices and lived experiences were considered, and the discourse centred mostly on adults’ concerns over protecting children. So what do children think about this?

→ Continues in Part 2

Part of an essay written for a ‘Children’s Rights in Global Perspectives’ module at UCL.
