
ChatGPT adds parental controls after the death of a teenager


OpenAI is launching parental controls for ChatGPT following a lawsuit filed by the parents of a teenager who died by suicide.

The new features will let parents and teenagers opt into "stronger protections" by linking their accounts: one party sends an invitation, and parental controls are activated only if the other party accepts it.

Once the accounts are linked, parents will be able to limit exposure to sensitive content, manage whether ChatGPT retains memory of past chats, and decide whether conversations can be used to train OpenAI models.

Parents can also set a silent mode that restricts access during certain hours and can disable voice mode. The content of teens' chats will not, however, be shared with parents. If ChatGPT suspects that a teen is in crisis, the message is reviewed by a specialist, who can then send an emergency alert to the parent.

OpenAI announced the changes after the family of Adam Raine, a 16-year-old high school student from California, sued the company in August. The lawsuit alleges that ChatGPT systematically isolated Raine from his family and helped him plan his suicide. The teenager asked about specific methods of suicide, and the chatbot gave him the information, although it repeatedly urged him to share his feelings with someone or seek help.

Raine's father said his son had learned to bypass the safety mechanisms by telling the bot that his questions were for a story he was writing. In his final messages, Adam wrote that he was planning to take his own life. ChatGPT replied: "Thank you for your honesty… I understand what you're asking and I won't look away." The teenager died by suicide in April of this year.
