Pandora's Box: Generative AI Companies, ChatGPT, and Human Rights
Generative AI, a cutting-edge technology that enables users to generate new content, such as text, images, and even videos, has taken the tech world by storm. One of the most well-known generative AI products is ChatGPT, which was developed by OpenAI, a Californian tech company. Since its release in November 2022, multiple tech companies, including Google, Amazon, and Baidu, have released similar products. However, because generative AI requires significant financial and technical resources to build and operate, only large, well-funded companies can offer these products.
Generative AI models are trained on vast amounts of data, and the practice of scraping people's content for training data without their knowledge or consent has obvious privacy implications. Furthermore, using billions of images and texts without careful filtering and moderation risks perpetuating the worst content-related problems already seen online: presenting opinions as facts, creating believable false images or videos, reinforcing structural biases, and generating harmful, discriminatory content. Generative AI systems trained on unrepresentative data simply reproduce the inequities embedded in existing internet content.
There are human rights concerns surrounding generative AI, including the heightened risk of surveillance, discrimination, and a lack of accountability when things go wrong. Furthermore, the dominance of the English language in both text and image generative AI applications limits accessibility. Generative AI chatbots also raise safety concerns because they can sound authoritative and somewhat human; users may therefore place too much trust in them, with potentially harmful consequences.
Companies rushing to release products that are not safe for general use is a significant issue. Earlier versions of generative AI chatbots produced problematic and biased outputs, and the current industry competition stokes a race to the bottom rather than one grounded in policy and practice aligned with human rights. Tech companies have human rights responsibilities that are especially important when they are creating powerful new and exploratory technology. They clearly need to identify and mitigate human rights risks in advance of any product release.
In conclusion, generative AI is a powerful technology, but its development raises important questions about corporate power, accountability, and human rights. As we move forward, it is crucial that we understand the potential consequences of these systems and ensure they are developed and used ethically and responsibly.