To go deeper: some animals act curiously, others with fear, but only a few of them understand what the mirror does and use it to inspect themselves.

  • certified_expert@lemmy.world (OP) · 2 days ago

    I disagree about the dichotomy. I think you can (1) understand what LLMs actually are. (2) See the value of such technology.

    In both cases, the point is to be factual (not be deceived) and not malicious (not attempt to deceive others).

    I think a reasonable use of these tools is as a “sidekick” (you being the main character). Some tasks can be delegated to it to save time, but the thinking, and the actual mental model of what is being done, should always remain your responsibility.

    For example, LLMs are good as an interface for quickly looking things up in manuals and books, clarifying specific concepts, or finding the proper terms for a vague idea (so that you can research the topic using the appropriate terminology).

    Of course, this is just an opinion. 100% open to discussion.

    • BanMe@lemmy.world · 1 day ago

      I think of it like a nonhuman character, like a character in a book I’m reading. Is it real? No. Is it compelling? Yes. Do I know exactly what it’ll do next? No. Is it serving a purpose in my life? Yes.

      It effectively attends to my requests and even my feelings, but I don’t reciprocate that. I’ve got decades of sci-fi leading me up to this point: the idea of interacting with humanoid robots or AI has been around since my childhood, but it has never involved attending to the machine’s feelings or needs.

      We need to sort out the boundaries here: some people are delusionally having “relationships” with AI, getting a social or emotional fix from it. But that doesn’t mean we have to categorize everyone who uses it as moronic. It’s a tool.