Photo: KIRILL KUDRYAVTSEV / AFP / Getty Images
OpenAI has announced new safety measures for its AI chatbot, ChatGPT, aimed at protecting teenage users. The company will roll out an age-prediction system and, in certain countries, ID-based age verification to steer users toward an age-appropriate version of the software. The move is part of OpenAI's effort to balance user freedom with safety.
OpenAI CEO Sam Altman discussed the challenge of striking that balance in a recent blog post, writing that the company is building a system to sort users into two groups: adolescents aged 13 to 17 and adults aged 18 and older. The announcement came just hours before a Senate Judiciary Committee hearing on the potential risks of AI chatbots was set to begin.
The new safety features reflect OpenAI's ongoing push to strengthen protections for younger users as scrutiny of AI chatbots' impact on minors grows.