• Yeller_king@reddthat.com · +2 · 3 hours ago

    It might mean you’ve asked a trivial or routine question you could easily have answered yourself, in the same way someone might have just sent you a Google result before ChatGPT existed.

    • raspberriesareyummy@lemmy.worldOP · +1 · 3 hours ago

      That’s the polite variant, but it still involves the use of an LLM, and the assumption that machine learning is AI (it’s not, despite what the tech bros tell you). People using LLMs should be treated like people who pick their nose and eat their boogers at the dinner table. :p

  • Sunsofold@lemmings.world · +13 −4 · 15 hours ago

    No love for LLMs from me but, flatly, no. Asking a question is soliciting a response. Their response is not the one you wanted, but it is solicited. It would be like asking someone whose penis you were interested in seeing for a dick pic, and them responding with a generated image from one of the unfiltered image generators.
    The intellectual equivalent to an unsolicited dick pic is probably spam advertising. A piece of media is being sent to someone who did not request it, by someone who does not care if the recipient does not want to receive it.

    • raspberriesareyummy@lemmy.worldOP · +1 −5 · 5 hours ago

      We’ve gone into this in detail in the other threads. If you send someone LLM output, you’re a shitty friend/colleague/whatever.

      • dream_weasel@sh.itjust.works · +3 −1 · 3 hours ago

        And yet still in no way equivalent to a dick pic. The equivalence here is “raspberriesareyummy doesn’t like that,” which doesn’t exactly pass muster, even for a shower thought.

    • Mohamed@lemmy.ca · +3 · 12 hours ago

      Totally agree. It’s nowhere near the level of a dick pic; a dick pic is sexual harassment.

  • owenfromcanada@lemmy.ca · +7 −2 · 22 hours ago

    I don’t quite get the equivalence there. I’d say an LLM response is more on par with responding with a link to lmgtfy.com or something.

    The intellectual equivalent of sending someone a dick pic would be a cold contact with LLM-generated text promoting or pushing something that you didn’t otherwise show interest in. Or like, that friend from highschool who messages you out of the blue and you realize after a few messages that they’re trying to sell you their MLM garbage.

    • Pyr@lemmy.ca · +2 · 16 hours ago

      Or just sending the link to chatgpt.

      “Don’t ask me, just ask chatgpt! What am I, your boss or something?!”

    • raspberriesareyummy@lemmy.worldOP · +3 −5 · 21 hours ago

      I don’t quite get the equivalence there.

      It’s garbage insulting your intellect and your personal relationship with the sender, whereas an unsolicited dick pic is garbage insulting your eyes and your personal relationship with the sender.

      • owenfromcanada@lemmy.ca · +5 −1 · 21 hours ago

        They’re both garbage, sure, but I wouldn’t call them equivalent. Especially in severity: one is insulting, the other is sexual harassment.

        The key word is “unsolicited.” An LLM response to a question you ask is garbage, but it’s solicited garbage. Like asking someone in Home Depot where the hammers are, and having them take 10 minutes to look it up on their phone. It’s a stupid response, but it was solicited. It’s at least a lazy attempt to respond relevantly, however insulting.

  • mushroommunk@lemmy.today · +108 · 2 days ago

    I recently read something in an article that struck me as the heart of it, and it fits here.

    “Generative AI sabotages the proof-of-work function by introducing a category of texts that take more effort to read than they did to write. This dynamic creates an imbalance that’s common to bad etiquette: It asks other people to work harder so one person can work—or think, or care—less. My friend who tutors high-school students sends weekly progress updates to their parents; one parent replied with a 3,000-word email that included section headings, bolded his son’s name each time it appeared, and otherwise bore the hallmarks of ChatGPT. It almost certainly took seconds to generate but minutes to read.” - Dan Brooks

    • Štěpán@lemmy.cafe · English · +49 −1 · 2 days ago

      That’s something I’ve attempted to say more than once but never formulated this well.

      Every time I search for something tech-related, I have to spend a considerable amount of energy just trying to figure out whether I’m looking at a well-written technical document or crap resembling one. It’s especially hard when I’m very new to the topic.

      Paradoxically, AI slop has actually made me read the official documentation much more, since that’s now easier than doing this AI-checking. And also personal blogs, where it’s usually clearly visible that they are someone’s beloved little digital garden.

    • raspberriesareyummy@lemmy.worldOP · +18 · 2 days ago

      I had this “shower” thought when chatting with a friend and getting an obviously LLM-generated answer to a grammar question I had (needless to say, the LLM answer misunderstood the nuance of my question just as much as the friend did before). Thank you for linking the article; I will share it with my friend to explain my strong reaction (“please never ever do that again”).

    • Yaky@slrpnk.net · +6 · 1 day ago

      The question I ask is “How do you justify saving your own time at the expense of others’ time?”

      Haven’t heard a good answer, just mumbling “it can be set to be less verbose…”

    • fizzle@quokk.au · English · +10 · 2 days ago

      The most annoying part: the recipient’s email client probably offered to summarise it with an LLM. My bot makes slop for your bot to interpret.

      It’s the most inefficient form of communication ever devised. Please decompress my prompt 1000x so the recipient can compress it back to my prompt.

      I will say though, even a chatgpt email tells you a lot about the sender.

    • jjpamsterdam@feddit.org · +3 · 1 day ago

      Thank you for this great answer! It’s something I intuitively felt but couldn’t put my finger on with the same surgical precision you just did.

    • raspberriesareyummy@lemmy.worldOP · +2 · edited · 2 days ago

      Question: why does the community you linked, “theatlantic@ibbit.at”, show up here on lemmy.world (https://lemmy.world/c/theatlantic@ibbit.at), yet zero posts are visible in it? I mean, since you commented from lemmy.today, we are clearly federated? I am confused: I wanted to comment on the article you linked with a question, but I can’t find it via lemmy.world :(

      Edit: Mhh… it seems I could send a federation request specifically for that community. I have done that, I hope someone will respond to it.

        • raspberriesareyummy@lemmy.worldOP · +2 · 2 days ago

          Yeah, it’s working now :) This was the first time I’ve experienced having to subscribe to be able to see posts from a community. Still weird, but am I correct in assuming this works like Usenet: now that the community is federated properly, if I unsubscribe again, the posts should remain visible to everyone @lemmy.world?

  • CombatWombatEsq@lemmy.world · +22 −1 · 2 days ago

    To me, it is exactly the same as people who linked lmgtfy.com or responded RTFM. If you send me an LLM summary, I’m assuming you’re claiming that I’m the asshole for bothering you. If I am being lazy, I’ll take the hint. But if I’m struggling to find a way to do the research myself, either because I’m not sure how to properly research it, or because LLMs have made the internet nigh-unusable, I’m gonna clock you as a tremendous asshole.

    • raspberriesareyummy@lemmy.worldOP · +15 −1 · 2 days ago

      I think there’s an important nuance to lmgtfy or RTFM. Those two were clearly identifiable as the kind of (sometimes snarky) minimum-effort response, and they were sometimes absolutely justified (e.g. if I googled OP’s question and the very first result correctly answers it, which I have made the effort of checking myself).

      For the slop responses, however, the receiver sometimes has to invest considerable time into reading & processing them to even recognize that they might be pure slop. And when in doubt, as readers we are left with the moral dilemma of potentially offending the writer by asking “Did you just send me LLM output?”

      It is both harder to identify and it drives a wedge into online (and personal) relationships because it adds a layer of doubt or distrust. This slop shit is poison for internet friendships. Those tech bros all need to fuck off and use their money for a permanent coke trip straight until they become irrelevant. :/

      • mech@feddit.org · +1 · 5 hours ago

        Them: Read The Fucking Manual!

        The Manual:

            The unset builtin treats attempts to unset array subscripts @ and *
            differently depending on whether the array is indexed or
            associative, and differently than in previous versions.
            •  Arithmetic commands ( ((...)) ) and the expressions in an
               arithmetic for statement can be expanded more than once.
            •  Expressions used as arguments to arithmetic operators in the [[
               conditional command can be expanded more than once.
            •  The expressions in substring parameter brace expansion can be
               expanded more than once.
            •  The expressions in the $((...)) word expansion can be expanded
               more than once.
            •  Arithmetic expressions used as indexed array subscripts can be
               expanded more than once.
            •  test -v, when given an argument of A[@], where A is an existing
               associative array, will return true if the array has any set
               elements.  Bash-5.2 will look for and report on a key named @.
            •  The ${parameter[:]=value} word expansion will return value,
               before any variable-specific transformations have been performed
               (e.g., converting to lowercase).  Bash-5.2 will return the final
               value assigned to the variable.
            •  Parsing command substitutions will behave as if extended globbing
               (see the description of the shopt builtin above) is enabled, so
               that parsing a command substitution containing an extglob pattern
               (say, as part of a shell function) will not fail.  This assumes
               the intent is to enable extglob before the command is executed
               and word expansions are performed.  It will fail at word
               expansion time if extglob hasn't been enabled by the time the
               command is executed.
      • Klear@quokk.au · English · +4 · 2 days ago

        It’s not meant as an actual manual. What you’re really supposed to do is comb through ad-ridden google results until you find that one 10-year-old reddit thread where someone thanks a deleted comment for solving the issue you have.

        • raspberriesareyummy@lemmy.worldOP · +3 · 2 days ago

          until you find that one 10-year-old reddit thread where someone thanks a deleted comment for solving the issue you have.

          I wasn’t gonna upvote you, but that one made me chuckle. Also because I have posted many of those “deleted comments” and wiped my reddit profile as clean as I could before leaving years ago.

      • BassTurd@lemmy.world · +3 · 2 days ago

        The only time it’s been even kind of relevant in my dealings is the Arch wiki, because it really is a solid resource. However, sometimes my issue is a specific one and I need more than general information on a process. RTFM ruins communities when someone is looking for support. It’s just an entitled response to someone asking for help.

  • morto@piefed.social · English · +10 · 2 days ago

    Somehow, people don’t get that if we ask them something, it’s because we want their personal interpretation of it; otherwise, we would just use the internet ourselves.

    • raspberriesareyummy@lemmy.worldOP · +3 · 2 days ago

      Specifically this. In terms of learning a language, understanding some nuances absolutely requires an explanation by a native speaker who has a really good grasp of their language AND a talent for explaining, both of which are criteria diametrically opposed to the average slop training data.

  • CallMeAnAI@lemmy.world · English · +10 −2 · 2 days ago

    I mean on one hand, it’s a shower thought. On the other, this is a really dumb shower thought.

    • Apytele@sh.itjust.works · +3 −1 · 1 day ago

      I often use AI to break up my ADHD mono-sentence paragraphs. I’ll stream-of-consciousness my reply, then tell it not to change my wording but to break up the excessively long sentences, and to reorder and split things into paragraphs that flow well. I’m still doing the writing, but having an advanced spell check is actually super useful.

    • Drusas@fedia.io · +1 −1 · 2 days ago

      I needed that reminder. It doesn’t matter how stupid a showerthought is.