In the filings, Anthropic states, as reported by the Washington Post: “Project Panama is our effort to destructively scan all the books in the world. We don’t want it to be known that we are working on this.”

https://archive.ph/HiESW

  • Wispy2891@lemmy.world · 2 days ago

    It’s no secret; it was their defence when they got sued for copyright infringement. Instead of downloading all the books from Anna’s Archive like Meta, they buy a copy, cut the binding, scan it, then destroy it. “We bought a copy for personal use then use the content for profit, it’s not piracy”

    • Phoenixz@lemmy.ca · 1 day ago

      we bought a copy for personal use, then use the content for profit, it’s not piracy

      So if I buy a song for personal use, then play that song all day in my club to thousands of people, it’s not piracy, is what you’re saying?

      Because Anthropic is full of shit, and some weird-ass mental gymnastics doesn’t change anything

      After this debacle, nobody can ever again shame me for piracy, let alone punish me for it

      • kuneho@lemmy.world · edited · 5 hours ago

        it would be something more like you buy LPs/tapes/CDs, then form a band that makes songs and albums based solely on those records you bought and then destroyed. I think… or something like that.

      • some_guy@lemmy.sdf.org · 1 day ago

        C’mon now. You’re not nearly rich or influential enough to get away with that and you know it. Rules are for regular people, not the rich or mighty. Sheesh.

        /s

        • Phoenixz@lemmy.ca · 1 day ago

          Oh I know, but that’s why I’m getting more and more “fuck the rules, fuck your laws, until they’re the same for everybody”

      • Sculptus Poe@lemmy.world · edited · 1 day ago

        If they reprinted those scanned books and sold them, or even gave them away, they would be in more trouble than you would be for sharing on LimeWire, by dint of numbers. That isn’t what they are doing with these books. In fact, they did get in trouble for using the books they didn’t buy.

    • FauxLiving@lemmy.world · 2 days ago

      “We bought a copy for personal use then use the content for profit, it’s not piracy”

      That is an accurate view of how the court cases have ruled.

      Downloading books without paying is illegal copyright infringement.

      Using the data from the books to train an AI model is ‘sufficiently transformative’ and so falls under fair use exemptions for copyright protections.

      • ch00f@lemmy.world · 1 day ago

        Yet most AI models can recite entire Harry Potter books if prompted the right way, so that’s all bullshit.

        • FauxLiving@lemmy.world · 1 day ago

          That’s quite a claim, I’d like to see that. Just give me the prompt and model that will generate an entire Harry Potter book so I can check it out.

          I doubt that this is the case, as one of the features of chatbots is the randomization of the next token, which is done by treating the model’s softmaxed output vector as a probability distribution. That means that every single token has a chance to deviate from the source material, because each one is selected randomly. Getting a complete reproduction would be of a similar magnitude to winning 250,000 dice rolls in a row.
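          The dice-roll intuition above can be made concrete with a tiny sketch: even when softmax sampling heavily favors the source-text token, the probability of a long verbatim run collapses. The logit values here are made up purely for illustration.

```python
import math

def softmax(logits):
    # Convert raw model scores into a probability distribution
    # (subtracting the max for numerical stability).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits where token 0 (the source-text token) is strongly favored.
logits = [4.0, 1.0, 0.5]
probs = softmax(logits)
p_correct = probs[0]

# Probability of sampling the source token 1,000 times in a row:
p_run = p_correct ** 1000
print(f"per-token match probability: {p_correct:.3f}")
print(f"chance of a 1000-token verbatim run: {p_run:.3e}")
```

Even with a per-token match probability above 0.9, a thousand-token verbatim run is astronomically unlikely under random sampling.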


          In any case, the ‘highly transformative’ standard was set in Authors Guild v. Google, Inc., No. 13-4829 (2d Cir. 2015). In that case Google made digital copies of tens of millions of books and used their covers and text to make Google Books.

          As you can see here: https://www.google.com/books/edition/The_Sunlit_Man/uomkEAAAQBAJ where Google completely reproduces the cover and you can search the text of the book (so you could, in theory, return the entire book in searches). You could actually return a copy of a Harry Potter novel (and a high resolution scan, or even exact digital copy of the cover image).

          The judge ruled:

          Google’s unauthorized digitizing of copyright-protected works, creation of a search functionality, and display of snippets from those works are non-infringing fair uses. The purpose of the copying is highly transformative, the public display of text is limited, and the revelations do not provide a significant market substitute for the protected aspects of the originals. Google’s commercial nature and profit motivation do not justify denial of fair use.

          In cases where people attempt to claim copyright damages against entities that are training AI, the finding is essentially ‘if they paid for a copy of the book, then it is legal’. This is why Meta lost on piracy in the authors’ case against them: they were sued for 1) pirating the books and 2) using them to train a model for commercial purposes, and the judge struck 2) after citing the ‘highly transformative’ nature of language models vs. books.

                • FauxLiving@lemmy.world · 11 hours ago

                  You’re right, I just compared the author list to the news article and not to the paper. Sorry, took me a bit to absorb that one.

                  Yeah, it’s an interesting paper. They’re specifically trying a different method of extracting text.

                  I’m not taking the position that the text isn’t in the model, or that it isn’t possible to make the model repeat some of that text. We know 100% that the text that they’re looking for is part of the training set. They mention that fact themselves in the paper and also choose books that are public domain and so guaranteed to be in the training set.

                  My contention was with the idea that you can just sit down at a model and give it a prompt to make it recite an entire book. That is simply not true outside of models that have been manipulated to do so (by training them on the book text for several hundred epochs, for example).

                  The purpose of the work here was to demonstrate a way to prove that a specific given text is part of a training set (which is useful for identifying potential copyright issues in the future, for example). It is being offered as proof that you can just prompt a model and receive a book, when it actually proves the opposite of that.

                  Their process was to, in phase 1, prompt with short sequences (I think they used 50 tokens like the ‘standard’ experiments; I don’t have it in front of me) and then, if the model returned a sequence that matched the ground truth, give it a prompt to continue until it refused to continue. They would then ‘score’ the response by looking for sections where it matched the written text and measuring the length of the matching text (a bit more complex than that, but the details are in the paper).

                  In order to test a sequence they needed 52 prompts telling the model to continue, in the best case, to get to the end/a refusal.

                  The paper actually gives a higher score than ~40%. For The Great Gatsby, a book which is public domain and considered a classic, they achieved a score of 97.5%. I can’t say how many prompts this took, but it would be more than 52. The paper doesn’t include all of the data.

                  Yes, you can extract a significant portion of text of items that are in the training set with enough time and money (it cost $134 to extract The Hobbit, for example). You can also get the model to repeat short sentences from text a high percentage of the time with a single prompt.

                  However, the response was to a comment that suggested that these two things were both combined and that you could use a single magical prompt to extract an entire book.

                  Yet most AI models can recite entire Harry Potter books if prompted the right way, so that’s all bullshit.

                  The core of the issue, about copyright, is that a work has to be ‘highly transformative’. Language models transform a book in such complex ways that you have to take tens of thousands or hundreds of thousands of samples from the model’s internal representational space (I don’t know the technical term) in order to have a chance of recovering a portion of a book.

                  That’s a highly transformative process and why training LLMs on copyrighted works was ruled to have a Fair Use exemption to claims of copyright liability.

                  • ch00f@lemmy.world · 2 hours ago

                    I think it’s critically important to be very specific about what LLMs are “able to do” vs what they tend to do in practice.

                    The argument is that the initial training data is sufficiently altered and “transformed” so as not to be breaking copyright. If the model is capable of reproducing the majority of the book unaltered, then we know that is not the case. Whether or not it’s easy to access is irrelevant. The fact that the people performing the study had to “jailbreak” the models to get past checks tells you that the model’s creators are very aware that the model is very capable of producing an un-transformed version of the copyrighted work.

                    From the end-user’s perspective, if the model is sufficiently gated from distributing copyrighted works, it doesn’t matter what it’s inherently capable of, but the argument shouldn’t be “the model isn’t breaking the law” it should be “we have a staff of people working around the clock to make sure the model doesn’t try to break the law.”

            • FauxLiving@lemmy.world · 1 day ago

              https://arstechnica.com/features/2025/06/study-metas-llama-3-1-can-recall-42-percent-of-the-first-harry-potter-book/

              The claim was “Yet most AI models can recite entire Harry Potter books if prompted the right way, so that’s all bullshit.”

              In this test they did not get a model to produce an entire book with the right prompt.

              Their measurement was considered successful if it could reproduce 50 tokens (so, less than 50 words) at a time.

              The study authors took 36 books and divided each of them into overlapping 100-token passages. Using the first 50 tokens as a prompt, they calculated the probability that the next 50 tokens would be identical to the original passage. They counted a passage as “memorized” if the model had a greater than 50 percent chance of reproducing it word for word.

              Even then, they didn’t ACTUALLY generate these; they even admit that it would not be feasible to generate some of these 50-token (which is, at most, 50 words, by the way) sequences:

              the authors estimated that it would take more than 10 quadrillion samples to exactly reproduce some 50-token sequences from some books. Obviously, it wouldn’t be feasible to actually generate that many outputs.
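              That “10 quadrillion samples” figure follows directly from this kind of arithmetic. A minimal sketch, assuming a hypothetical per-token match probability of 0.48 (a made-up value for illustration, not a number from the paper):

```python
# If the model emits each "correct" token with probability p, the chance
# of reproducing a 50-token passage verbatim in one sample is p**50, and
# the expected number of samples needed is roughly 1 / p**50.
p = 0.48                      # hypothetical per-token match probability
n_tokens = 50
p_exact = p ** n_tokens
expected_samples = 1 / p_exact
print(f"p(exact 50-token match) = {p_exact:.3e}")
print(f"expected samples needed = {expected_samples:.3e}")
```

With these assumed numbers, the expected sample count lands in the quadrillions, the same order of magnitude as the authors’ estimate.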

              • NostraDavid@programming.dev · 1 day ago

                The claim was “Yet most AI models can recite entire Harry Potter books if prompted the right way, so that’s all bullshit.”

                In this test they did not get a model to produce an entire book with the right prompt.

                For context: These two sentences are 46 Tokens/210 Characters, as per https://platform.openai.com/tokenizer.

                50 tokens is just about two sentences. This comment is about 42 tokens itself.
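                As a rough cross-check, a common rule of thumb is ~4 characters per English token. A quick sketch using that heuristic (not a real tokenizer, so counts will differ slightly from OpenAI’s):

```python
def rough_token_count(text: str) -> int:
    # Rule-of-thumb estimate: ~4 characters per token for English text.
    return max(1, round(len(text) / 4))

# The two quoted sentences from the comment above.
claim = ('The claim was "Yet most AI models can recite entire Harry Potter '
         'books if prompted the right way, so that\'s all bullshit." '
         'In this test they did not get a model to produce an entire book '
         'with the right prompt.')
print(len(claim), "characters, roughly", rough_token_count(claim), "tokens")
```

The heuristic lands in the same ballpark as the tokenizer’s 46-token count, confirming that 50 tokens is on the order of two sentences.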

          • MangoCats@feddit.it · 1 day ago

            Just give me the prompt and model that will generate an entire Harry Potter book so I can check it out.

            Start with the first line of the book (enough that it won’t be confused with other material in the training set…) the LLM will return some of the next line. Feed it that and it will return some of what comes next, rinse, lather, repeat - researchers have gotten significant chunks of novels regurgitated this way.
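            The feed-the-output-back loop described above can be sketched as follows. `model_continue` is a made-up stand-in for a real LLM call, stubbed here with a fixed string so the loop actually runs; with a real model the continuation would only sometimes match the source.

```python
# Hypothetical text standing in for a memorized passage.
SOURCE = "the quick brown fox jumps over the lazy dog near the quiet riverbank"

def model_continue(prompt: str, n_words: int = 3) -> str:
    # Stub: pretend the model memorized SOURCE and continues it verbatim.
    # A real LLM call would go here.
    words = SOURCE.split()
    seen = len(prompt.split())
    return " ".join(words[seen:seen + n_words])

def extract(seed: str, max_rounds: int = 50) -> str:
    # Repeatedly feed the growing text back in as the next prompt.
    text = seed
    for _ in range(max_rounds):
        nxt = model_continue(text)
        if not nxt:               # model stopped or refused
            break
        text += " " + nxt
    return text

print(extract("the quick brown fox"))
```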

            • FauxLiving@lemmy.world · 1 day ago

              Start with the first line of the book (enough that it won’t be confused with other material in the training set…) the LLM will return some of the next line. Feed it that and it will return some of what comes next, rinse, lather, repeat - researchers have gotten significant chunks of novels regurgitated this way.

              This doesn’t seem to be working as you’re describing.

              • MangoCats@feddit.it · 1 day ago

                That’s what I read in the article - the “researchers” may have been using other interfaces. Also, since that “research” came out, I suspect the models have been adjusted to prevent the appearance of copying…

                • FauxLiving@lemmy.world · 1 day ago

                  I’m running the dolphin model locally, it’s an abliterated model which means that it has been fine tuned to not refuse any request and since it is running locally, I also have access to the full output vectors like the researchers used in the experiment.

                  I replied to another comment, in detail, about the Meta study and how it isn’t remotely close to ‘reproduces a full book when prompted’

                  In the study they were trying to reproduce 50-token chunks (a token is less than a word, so under 50 words) given the previous 50 tokens. They found that for some sections (around 42% of the ones they tried) they were able to reproduce the next 50 tokens better than 50% of the time.

                  Reproducing some short sentences from some of a book some of the time is insignificant compared to something like Google Books who will copy the exact snippet of text from their 100% perfect digital copy and show you exact digital copies of book covers, etc.

                  This research is of interest to the academic study of AI, in the subfields focused on understanding how models represent data internally. It doesn’t have any significance when talking about copyright.

                  • mfed1122@discuss.tchncs.de · 9 hours ago

                    Thank you so much for taking the time to post thorough breakdowns of misleading information using your expertise. It’s extra beautiful because it goes against the dominant Lemmy circlejerk. I wish there were a community full of only such behavior!

                  • MangoCats@feddit.it · 1 day ago

                    It doesn’t have any significance when talking about copyright.

                    I agree, but that doesn’t stop journalists from recognizing a hot button topic and hyper-bashing that button as fast and hard and often as they can.

        • MangoCats@feddit.it · 1 day ago

          You may not have photographic memory, but dozens of flesh-and-blood humans do. Is it “illegal” for them to exist? They can read a book and then recite it back to you.

          • vaultdweller013@sh.itjust.works · 1 day ago

            Those are human beings, not machines. You are comparing a flesh-and-blood person to a souped-up autocorrect program that is fed data and regurgitates it back.

          • Taleya@aussie.zone · 1 day ago

            Can’t believe I have to point this out to you but machines are not human beings

            • MangoCats@feddit.it · 1 day ago

              Point is: some humans can do this without a machine. If a human is assisted by a machine to do something that other humans can do but they cannot - that is illegal?

              • HereIAm@lemmy.world · 10 hours ago

                Believe it or not, if you wrote down the melody for Bohemian Rhapsody (from memory or not) and then sold it, you could be fined for copyright infringement. You can memorise it, you can even cover it, but you can’t just sell it. That part still applies to humans. It’s the redistribution of that information that’s important.

                • MangoCats@feddit.it · 6 hours ago

                  And this is my point: the (super) human and the machine are both capable of infringing copyright - breaking the law. The question is: are they actually doing it?

                  If you sit the human down with a researcher and they write out Harry Potter and the Goblet of Fire in its entirety, 99.9%+ accurately (for the edition they are recalling), that’s research, fair use. As was done with the AI models by some researchers. Are the AI models out there in the real world also selling copies of their training books, in full or in substantial part, to their users? I haven’t seen a demonstration of that, yet.