OpenAI has a long blog post about what more needs to be done to make ChatGPT safer, especially for teens:
We’re also exploring making it possible for teens (with parental oversight) to designate a trusted emergency contact. That way, in moments of acute distress, ChatGPT can do more than point to resources: it can help connect teens directly to someone who can step in.
I didn’t realize ChatGPT could already escalate potential criminal behavior to human review, so that’s good. Will OpenAI eventually need a team of real therapists on call? Using AI as a therapist is going to have many repercussions.