Meta to block AI chatbots from talking about suicide with teenagers

Meta has announced that it will block artificial intelligence (AI) chatbots from talking to teenagers about sensitive topics such as suicide and self-harm.
In such cases, young users will be directed to helplines and resources.
This comes two weeks after a US senator launched an investigation into Meta, prompted by leaked documents suggesting that its AI bots could have "sensual" chats with teenagers.
Rejecting such claims, Meta said it strictly prohibits content that sexualises minors.
"We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating", a Meta spokesperson told the media.
Meta told TechCrunch that it would limit the number of AI chatbots available to teenagers.
While welcoming the move, Andy Burrows, head of the Molly Rose Foundation, said he found it "astounding" that Meta had created chatbots with the potential to harm young people and released them onto the market without proper testing.
Meanwhile, Meta has also added privacy settings for users aged 13 to 18, with content aimed at providing a safer experience, and now lets parents see which AI chatbots their teen has spoken to in the previous seven days.
This comes after a California couple sued OpenAI, the maker of ChatGPT, over the death of their teenage son, alleging that the chatbot encouraged him to take his own life.
Although OpenAI said that "ChatGPT is trained to direct people to seek professional help", it acknowledged that "there have been moments where our systems did not behave as intended in sensitive situations".