• AutoTL;DR
    11 years ago

    This is the best summary I could come up with:


    While it takes humans a lot of training to learn and adapt, OpenAI argues that large language models could implement new moderation policies instantly.

    Third, OpenAI mentions the well-being of the workers who are continually exposed to harmful content, such as videos of child abuse or torture.

    Mark Zuckerberg’s vision of a perfect automated system hasn’t quite panned out yet, but Meta uses algorithms to moderate the vast majority of harmful and illegal content.

    Both humans and machines make mistakes, and even if the error rate is low, millions of harmful posts still slip through, and just as many pieces of harmless content are hidden or deleted.

    In particular, the gray area of misleading, wrong, and aggressive content that isn’t necessarily illegal poses a great challenge for automated systems.

    Generative AI such as ChatGPT or the company’s image creator, DALL-E, makes it much easier to create misinformation at scale and spread it on social media.


    I’m a bot and I’m open source!