OpenAI CTO Mira Murati has been instrumental in transforming the non-profit research lab into a business, with products such as ChatGPT, DALL-E, and GPT-4. In an interview with The Associated Press, Murati spoke about OpenAI's vision for artificial general intelligence (AGI) and the importance of building it safely and keeping it aligned with human intentions.
Vision for Artificial General Intelligence (AGI)
Artificial general intelligence is the still-hypothetical concept of highly autonomous systems that can generalize across different domains and produce significant economic output. OpenAI's stated goal is to build AGI safely and keep it aligned with human intentions, so that its benefits are maximized for everyone.
The Path to AGI
OpenAI is far from having a safe, reliable, aligned AGI system. From a research standpoint, the company is trying to build systems that have a robust understanding of the world, similar to how humans understand it. OpenAI is also scaling these systems to increase their generality. With GPT-4, for instance, OpenAI is dealing with a much more capable system, one that exhibits reasoning capabilities.
Safety Measures Taken by OpenAI
OpenAI thinks about interventions at each stage of development, across its research, product, and safety teams. For example, with DALL-E, OpenAI adjusted the ratio of female and male images in the training dataset to reduce harmful bias. However, such adjustments require care, because rebalancing one dimension of the data can introduce other imbalances. OpenAI also used reinforcement learning from human feedback (RLHF) to make ChatGPT better aligned with human preferences.
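The article does not detail how RLHF works internally, but one core ingredient is training a reward model on pairs of responses that human labelers have ranked. A minimal sketch of the standard pairwise (Bradley-Terry) preference loss is shown below; the function name and scalar inputs are illustrative, not OpenAI's actual implementation.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise preference loss used when training an RLHF reward model.

    The loss is small when the reward model scores the human-preferred
    response higher than the rejected one, and grows as the model's
    scores contradict the human ranking.
    """
    margin = reward_chosen - reward_rejected
    # -log(sigmoid(margin)): the negative log-likelihood that the
    # chosen response beats the rejected one under a Bradley-Terry model.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss drops as the preferred response's reward pulls ahead,
# and rises when the reward model disagrees with the human ranking.
agree = preference_loss(2.0, 0.5)
disagree = preference_loss(0.5, 2.0)
```

In practice the rewards come from a neural network scoring full model responses, and this loss is averaged over many labeled comparison pairs; the trained reward model then guides a reinforcement-learning step that fine-tunes the language model.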
Regulation of AI Systems
OpenAI believes that AI systems should be regulated. At the company level, OpenAI has agreed on some standards with governments, regulators, and other organizations developing these systems. However, OpenAI thinks much more needs to happen, and that government regulators should be closely involved in regulating AI systems.
'6-Month Industry Pause' Petition
A letter calling for a 6-month industry pause on building AI models more powerful than GPT-4 got a lot of attention. While OpenAI considers some of the risks the letter points out valid, it argues that designing safety mechanisms for complex systems is hard, and that signing a letter is not an effective way to build those mechanisms or to coordinate players in the space.
Evolution of OpenAI
When Mira Murati joined OpenAI, it was a non-profit. Over time, the company's thinking about the safest way to build and deploy AI systems evolved considerably, and OpenAI changed its structure. However, the intrinsic motivation and mission-alignment of the people at OpenAI have not changed since the beginning.
Anticipating the Response to ChatGPT
Before the November 30 release of ChatGPT, OpenAI was confident it understood the model's limitations, because customers had already been using the underlying model via an API. The company then made a few changes on top of that base model to adapt it for dialogue.