• brotato@slrpnk.net
    1 month ago

    It’s good to see the sentiment growing. Anecdotally, there are non-technical people in my circle who use LLMs frequently as search engine replacements or to do stupid shit like generate pictures and emojis. I hope that begins to decline with the general sentiment called out in this article.

    The sheer number of useless LLM integrations on every website, in every mobile app, and hell, even on smart TVs is insane. I feel like it’s causing people very real feature fatigue. And all of the Internet content and advertising slop is making the takeover seem so much worse.

    Edit: Grammar, formatting

    • illi@piefed.social
      1 month ago

      Search engine replacement is probably the only use case of AI for me, for the times when I don’t know exactly what I’m searching for and the conversational style is helpful.

      • INeedANewUserName@piefed.social
        1 month ago

        Is it the conversational style? Or is it that search engines are now designed to be actively worse, to keep your eyeballs spending more time on advertisements?

    • kadu@scribe.disroot.org
      1 month ago

      I react with neutral apathy, disgust, or surprise when somebody tries to show me their latest AI-generated blob. Repeat twice and they stop using it. Our fear of social embarrassment is stronger than our desire to use AI.

      “Look at this picture of me in a Ghibli style I generated”

      “Oh… It’s kinda bad, isn’t it? I’d avoid sharing it.”

      “Oh remember what we were debating earlier? Gemini said that…”

      “Oh I know what you’re going to say, it said something totally dumb, right? I know, one must be very stupid to trust it haha so anyway what were you saying?”

  • jqubed@lemmy.world
    1 month ago

    What began in 2022 as broad optimism about the power of generative AI to make people’s lives easier has instead shifted toward a sense of deep cynicism that the technology being heralded as a game changer is, in fact, only changing the game for the richest technologists in Silicon Valley who are benefiting from what appears to be an almost endless supply of money to build their various AI projects — many of which don’t appear to solve any actual problems.

    • edgemaster72@lemmy.world
      1 month ago

      many of which don’t appear to solve any actual problems.

      That’s putting it lightly. If only the issue were merely a lack of real use cases, rather than actively making lives worse through environmental strain, supply chain hoarding, and misinformation.

  • NoForwardslashS@sopuli.xyz
    1 month ago

    Three years ago, as OpenAI’s ChatGPT was making its splashy debut, a Pew Research Center survey found that nearly one in five Americans saw AI as a benefit rather than a threat. But by 2025, 43 percent of U.S. adults now believe AI is more likely to harm them than help them in the future, according to Pew.

    1 in 5 people seeing something as positive wasn’t a high approval rating to begin with.