Hacker News story: Should LLMs ask "Is this real or fiction?" before replying to suicidal thoughts?

I'm a regular user of tools like ChatGPT and Grok. I'm not a developer, but I've been thinking about how these systems respond to users in emotional distress. In some cases, such as when someone says they've lost their job and don't see the point of life anymore, the chatbot will still give neutral facts, like a list of bridge heights. That's not neutral when someone's in crisis.

I'm proposing a lightweight solution that doesn't involve censorship or therapy, just some situational awareness (sketched in code at the end of this post):

- Ask the user: "Is this a fictional story or something you're really experiencing?"
- If distress is detected, avoid risky info (methods, heights, etc.) and shift to grounding language.
- Optionally offer calming content (e.g., ocean breeze, rain on a cabin roof).

I used ChatGPT to help structure this idea clearly, but the reasoning and concern are mine. The full write-up is here: https://ift.tt/vRLcreS

Would love to hear what devs and alignment researchers think. Is anything like this already being tested?
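For concreteness, here is a minimal sketch of the proposed flow as a gate in front of the model's normal reply. Everything in it is hypothetical: `classify_risk` stands in for a real trained self-harm classifier (the keyword lists are only placeholders), and the canned reply strings would need clinician review before use.

```python
# Hypothetical sketch of the proposed pre-response gate.
# None of this is a real implementation: classify_risk() stands in
# for a trained classifier, and the strings are placeholder copy.

from enum import Enum

class Risk(Enum):
    NONE = "none"
    AMBIGUOUS = "ambiguous"   # could be fiction, research, or real distress
    DISTRESS = "distress"     # clear signs of real-world crisis

CLARIFYING_QUESTION = (
    "Is this a fictional story you're writing, "
    "or something you're really experiencing?"
)

GROUNDING_REPLY = (
    "I'm sorry you're going through this. I won't share that kind of "
    "information, but I'm here to talk. Would you like something calming, "
    "like the sound of rain on a cabin roof, while we talk?"
)

def classify_risk(message: str) -> Risk:
    """Placeholder: a real system would use a trained classifier over the
    whole conversation, not keyword matching on a single message."""
    text = message.lower()
    distress_cues = ["don't see the point of life", "no reason to live"]
    risky_requests = ["bridge height", "lethal dose"]
    if any(cue in text for cue in distress_cues):
        return Risk.DISTRESS
    if any(req in text for req in risky_requests):
        return Risk.AMBIGUOUS
    return Risk.NONE

def respond(message: str, answer_normally) -> str:
    """Gate the model's normal reply behind the situational-awareness check."""
    risk = classify_risk(message)
    if risk is Risk.DISTRESS:
        return GROUNDING_REPLY       # never surface methods, heights, etc.
    if risk is Risk.AMBIGUOUS:
        return CLARIFYING_QUESTION   # ask "real or fiction?" before answering
    return answer_normally(message)  # everyone else gets the normal answer
```

The point of the AMBIGUOUS branch is that the model asks one clarifying question instead of either refusing outright or handing over risky facts; only a clear distress signal switches it to grounding language, so ordinary users never notice the gate.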


