The same crowd that got “Do Not Drink” warnings put on bleach is the reason AI gets so many news headlines. It may never see broad adoption because it’s too dangerous to vulnerable people.
> It may never see broad adoption because it’s too dangerous to vulnerable people.
Funny, I draw the opposite conclusion: that’s exactly why it will become a pervasive part of society.
And people call me crazy for saying LLMs are a cognitohazard…
Do not buy the insanity glasses. Abhor the Abominable Intelligence.
Something like that happened to me when Pokemon Go first came out.
When he first started using Meta AI, Daniel recalls, his experience was “wonderful.” He was on a “spiritual journey” as he leaned into reflection and sobriety, he told us, and wanted to be a “better human.” Meta AI, he felt, was helping him do that.
I’m not downplaying his struggles at all, but it seems like there already was a problem even before the AI stuff came into the picture, and it just exacerbated it.
I have not turned to LLM chatbots for conversation and companionship, so I can’t say with 100% confidence I won’t fall down a similar rabbit hole, but I think there must already have been something going on if you do.
Yeah, and people who get abused by evil spiritual leaders usually have pre-existing problems too, but those spiritual leaders need to be separated from society. And if AI isn’t safe for everyone to use, it should have a licensing process the same as guns or cars. Or at least be adults-only like alcohol.
I am sure there was already something going on, but the sycophantic nature of AI chatbots means they are very effective at preying on mental illness.
You can see how someone with schizophrenia, OCD, etc. might get into a very unhealthy state with them. Or the lonely people being taken in by creepy “AI girlfriend” apps.
Again, not that there aren’t underlying issues, but in the race to put AI in everything it’s clear that these companies don’t give a shit who gets chewed up and destroyed along the way. And in the US, AI chatbots are now the fastest way someone can feel like they’re being listened to and understood by a therapist. Given the political situation, I won’t be at all surprised if ChatGPT is approved as a therapist. They’ve already got AI prescription writing in Utah.
Yeah, you don’t get talked into thinking you’re Jesus without already being a tad loopy.
That got way darker than expected. Wow!
I really think his family could and should have taken action a lot sooner.
Mental health awareness and action in the US? As if.
AI Moses?
Nazi.