Cross-posted from: https://programming.dev/post/27078650

China has released a set of guidelines on labeling internet content that is generated or composed by artificial intelligence (AI) technology, which are set to take effect on Sept. 1.

  • zlatiah@lemmy.world · 2 hours ago

    In response, the guidelines regulate the labeling of AI-generated online content throughout its production and dissemination processes, requiring providers to add visible marks to their content in appropriate locations.

    My understanding is that this is meant more as a set of legal guidelines… I’m not a legal scholar, but since China has a history of enforcing certain information-related laws, I’d assume they can “legally” enforce it.

    On the technical side… there is a subfield of LLM research that focuses on “watermarking”, i.e. making sure that LLM-generated outputs can be clearly identified, so I guess in theory it might be enforceable (a rough sketch of the idea is at the end of this comment)

    In practice, whether it will actually be enforced… who knows (facepalm)
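
    To make that concrete, below is a rough, hypothetical sketch of the “green list” idea from the watermarking literature (e.g. Kirchenbauer et al. 2023): a keyed hash of the previous token decides whether each next token counts as “green”, generation is biased toward green tokens, and a detector checks whether a text contains statistically too many of them. The key, threshold, and function names here are made up for illustration; real systems work on the model’s tokenizer and logits, not whitespace-split words.

    ```python
    # Toy sketch of "green list" watermark detection (hypothetical, simplified).
    # Real schemes (e.g. Kirchenbauer et al. 2023) operate on model token IDs
    # and bias the logits at generation time; here words stand in for tokens
    # and "secret" stands in for the provider's watermark key.
    import hashlib
    import math

    GAMMA = 0.5  # fraction of the vocabulary treated as "green" at each step

    def is_green(prev_token: str, token: str, key: str = "secret") -> bool:
        """Pseudo-randomly assign `token` to the green list, seeded by the
        previous token and the secret key."""
        digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
        return digest[0] / 255.0 < GAMMA

    def detect(tokens: list[str], key: str = "secret") -> float:
        """z-score of the green-token count: text sampled with a bias toward
        green tokens scores high, ordinary text hovers near 0."""
        n = len(tokens) - 1
        hits = sum(is_green(tokens[i], tokens[i + 1], key) for i in range(n))
        std = math.sqrt(n * GAMMA * (1 - GAMMA))
        return (hits - GAMMA * n) / std if std else 0.0

    print(detect("this is a short unwatermarked example sentence".split()))
    ```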

    • TeamAssimilation@infosec.pub · 3 hours ago

      There are telltale signs of AI generation in images, much in the same way image editors leave fingerprints of their use (see the sketch below). As image generation improves, so will its detection.

      It’s not much different from the arms race around spam or malware; tools like wasitai.com have a pretty good detection rate right now, for example.
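
      As a toy illustration of those fingerprints, the cheapest telltale sign is often metadata: many editors write a Software tag into the file, and some generators embed their settings too. This is only a sketch under the assumption that the metadata hasn’t been stripped (the file name is hypothetical); it is nothing like a statistical detector such as wasitai.com.

      ```python
      # Toy sketch of the metadata "fingerprint" idea (hypothetical file name).
      # Many editors write a Software tag into EXIF, and some local image
      # generators store their settings in PNG text chunks; stripping the
      # metadata defeats this, so it only catches cooperative tools.
      from PIL import ExifTags, Image

      def software_fingerprint(path: str) -> str | None:
          """Return a software/creator hint from the image metadata, if any."""
          img = Image.open(path)
          for tag_id, value in img.getexif().items():
              if ExifTags.TAGS.get(tag_id) == "Software":
                  return str(value)
          # PNG files carry text chunks in img.info instead of EXIF;
          # "parameters" is the chunk some Stable Diffusion frontends use.
          return img.info.get("Software") or img.info.get("parameters")

      print(software_fingerprint("example.jpg"))  # e.g. "Adobe Photoshop" or None
      ```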

    • spaffel@spaffel.social (OP) · 12 hours ago

      Okay, I was thinking the same; I just thought some new technology had come along that could reliably detect AI-generated content.