• ChicoSuave@lemmy.world
    13 points · 13 hours ago

    The same crowd that got “Do Not Drink” on bleach are the reason AI gets so many new headlines. It may never see broad adoption because it’s too dangerous to vulnerable people.

    • synae[he/him]@lemmy.dbzer0.com
      8 points · 8 hours ago

      It may never see broad adoption because it’s too dangerous to vulnerable people.

      Funny, I draw the opposite conclusion: that’s exactly why it will become a pervasive part of society.

  • Dogiedog64@lemmy.world
    11 points · 14 hours ago

    And people call me crazy for saying LLMs are a cognitohazard…

    Do not buy the insanity glasses. Abhor the Abominable Intelligence.

  • scytale@piefed.zip
    33 points · 19 hours ago

    When he first started using Meta AI, Daniel recalls, his experience was “wonderful.” He was on a “spiritual journey” as he leaned into reflection and sobriety, he told us, and wanted to be a “better human.” Meta AI, he felt, was helping him do that.

    I’m not downplaying his struggles at all, but it seems like there was already a problem even before the AI stuff came into the picture, and it just exacerbated things.

    I haven’t turned to LLM chatbots for conversation and companionship, so I can’t say with 100% confidence that I wouldn’t fall down a similar rabbit hole, but I think there must already be something going on if you do.

    • Grail@multiverse.soulism.net
      6 points · 9 hours ago

      Yeah, and people who get abused by evil spiritual leaders usually have pre-existing problems too, but those spiritual leaders still need to be separated from society. And if AI isn’t safe for everyone to use, it should have a licensing process the same as guns or cars. Or at least be adults-only, like alcohol.

    • GreenCrunch@piefed.blahaj.zone
      24 points, 1 down · 18 hours ago

      I am sure there was already something going on, but the sycophantic nature of AI chatbots means they are very effective at preying on mental illness.

      You can see how someone with schizophrenia, OCD, etc. might get into a very unhealthy state with them. Or the lonely people being taken in by creepy “AI girlfriend” apps.

      Again, not that there aren’t underlying issues, but in the race for more AI everything, it’s clear these companies don’t give a shit who gets chewed up and destroyed along the way. And in the US, AI chatbots are now the fastest way someone can feel like they’re being listened to and understood by a therapist. Given the political situation, I won’t be at all surprised if ChatGPT is approved as a therapist. They’ve already got AI prescription writing in Utah.