OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling’s Harry Potter series: A new research paper laid out ways in which AI developers should try to avoid showing that LLMs have been trained on copyrighted material.

  • Blapoo@lemmy.ml
    2 years ago

    We have to distinguish between LLMs

    • Trained on copyrighted material and
    • Outputting copyrighted material

    They are not one and the same

      • scv@discuss.online
        2 years ago

        Legally the output of the training could be considered a derived work. We treat brains differently here, that’s all.

        I think the current intellectual property system makes no sense and AI is revealing that fact.

  • Sentau@lemmy.one
    2 years ago

    I think a lot of people are not getting it. AI/LLMs can train on whatever they want, but when these LLMs are then used commercially to make money, an argument can be made that the copyrighted material has been used in a money-making endeavour. It’s similar to how using copyrighted clips in a monetized video can earn a strike against your channel, while YouTube is far less likely to take action if the video is not monetized.

    Edit - If this were an open source model available for use by the general public at no cost, I would be far less bothered by claims of copyright infringement against the model.

    • Tyler_Zoro@ttrpg.network
      2 years ago

      AI/LLMs can train on whatever they want, but when these LLMs are then used commercially to make money, an argument can be made that the copyrighted material has been used in a money-making endeavour.

      And does this apply equally to all artists who have seen any of my work? Can I start charging all artists born after 1990 for training their neural networks on my work?

      Learning is not and has never been considered a financial transaction.

      • zbyte64@lemmy.blahaj.zone
        2 years ago

        Ehh, “learning” is doing a lot of lifting here. These models “learn” in a way that is foreign to most artists. And that’s ignoring the fact that humans are not capital. When we learn, we aren’t building a form of capital; when models learn, they are only building a form of capital.

        • Tyler_Zoro@ttrpg.network
          2 years ago

          Artists, construction workers, administrative clerks, police and video game developers all develop their neural networks in the same way, a method simulated by ANNs.

          This is not “foreign to most artists”; it’s just that most artists have no idea what the mechanism of learning is.

          The method by which you provide input to the network for training isn’t the same thing as learning.

      • maynarkh@feddit.nl
        2 years ago

        Actually, it has. The whole concept of copyright is relatively new, and corporations absolutely tried to prevent people who had learned proprietary, copyrighted information from using it elsewhere.

        It’s just that labor movements got such non-compete agreements thrown out of our society, or at least severely restricted, on humanitarian grounds. The argument is that a human being has the right to seek happiness by learning, and by using the proprietary information they learned to better their station. It took a lot of violent convincing, by the way, for us to win that right.

        So yes, knowledge and information learned is absolutely within the scope of copyright as it stands; it’s only that the fundamental rights humans have override copyright. LLMs (and companies, for that matter) do not have such fundamental rights.

        Copyright, by the way, is stupid in its current implementation, but OpenAI and ChatGPT do not get to opt out of it, IMO, just because it’s “learning”. We humans ourselves only get out of copyright because of our special legal status.

  • Uriel238 [all pronouns]@lemmy.blahaj.zone
    2 years ago

    Training AI on copyrighted material is no more illegal or unethical than training human beings on copyrighted material (from library books or borrowed books, no less!). And trying to challenge the legitimacy of generative AI systems on the notion that they were trained on copyrighted material only raises the spectre that IP law has lost its validity as a public good.

    The only valid concern about generative AI is that it could displace human workers (or swap out skilled jobs for menial ones) which is a problem because our society recognizes the value of human beings only in their capacity to provide a compensation-worthy service to people with money.

    The problem is that this is a shitty, unethical way to determine who gets to survive and who doesn’t. All the current controversy about generative AI does is kick this can down the road a bit. But we’re going to have to address, soon, that our monied elites will be glad to dispose of the rest of us as soon as they can.

    Also, amateur creators are as good as professionals, given the same resources. Maybe we should look at creating content by means other than for-profit companies.

  • 0x2d@lemmy.ml
    2 years ago

    If it’s infringing on JK Rowling’s work, then it’s fine

  • Technoguyfication@lemmy.ml
    2 years ago

    People are acting like ChatGPT is storing the entire Harry Potter series in its neural net somewhere. It’s not storing or reproducing text in a 1:1 manner from the original material. Certain material, like very popular books, has likely been ingested tens of thousands of times, because of how many times it was reposted online (and therefore how many times it appeared in the training data).

    Just because it can recite certain passages almost perfectly doesn’t mean it’s redistributing copyrighted books. How many quotes do you know perfectly from books you’ve read before? I would guess quite a few. LLMs are doing the same thing, but on mega steroids with a nearly limitless capacity for information retention.

    • Teritz@feddit.de
      2 years ago

      Using copyrighted work, art for example, still influences the AI that they make profit from.

      If they use my works, then they need to pay. That’s it.