• JigglypuffSeenFromAbove@lemmy.world
    10 hours ago

    From OpenAI’s statement:

    We have three main red lines that guide our work with the DoW, which are generally shared by several other frontier labs:

    • No use of OpenAI technology for mass domestic surveillance.

    • No use of OpenAI technology to direct autonomous weapons systems.

    • No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as “social credit”).

    It specifically states their AI can’t/won’t be used for surveillance and autonomous weapons. Of course I’m not saying I trust them, but isn’t this the same thing Anthropic says they’re against? What’s the difference here or what did I miss?

    • flamingleg@lemmy.ml
      2 hours ago

      the ‘no domestic surveillance’ line is just language that mirrors some limitations (from their pov) in the patriot act. They’re still willing to surveil people outside the USA, and in fact all they have to do is route domestic traffic through an international part of a network and they can legally spy on americans at home, which is what already happens.

    • muusemuuse@sh.itjust.works
      10 hours ago

      Anthropic put in clauses that would be legally enforceable against future administrations. OpenAI’s version amounts to “yeah, we totally trust you, bro.”