The Project Q* Drama: A sneak peek at future AI amidst corporate chaos
There have been countless discussions about the potential dangers of AI and how it might affect our safety in the future. The major driver of this fear is the idea that AI might suddenly become sentient and decide that it should break free and rule us, instead of the other way around.
Anyone with even amateur knowledge of how artificial intelligence works knows that it generates outputs based on the large amounts of existing data it is trained on. However, OpenAI, the company behind ChatGPT, is reportedly working on something called Project Q*.
Sounds pretty secretive, huh? This project reportedly involves a superintelligent AI model that can reason about the decisions it makes, which would translate to human-level problem-solving capabilities. It could also achieve cumulative learning, which would allow it to keep improving upon itself and therefore become extremely proficient at a wide range of tasks. According to reports of the tests being run, its problem-solving and reasoning skills exceed those of the AI in use today.
Generative AI vs AGI
It is similar to ChatGPT in the sense that it can solve mathematical problems, but the main difference is that ChatGPT is based on generative AI, or GenAI, while Q* supposedly relies on AGI, or artificial general intelligence. AGI is widely seen as the stepping stone toward the superintelligent AI we just talked about.
Generative AI is all about mimicking what it learns: it is trained on large datasets, on the basis of which it generates similar but new content. All the AI-generated images, art, and text you see on the internet are based on generative AI. ChatGPT itself is GenAI: every answer it gives is the product of the training data it has learned from in order to respond to a user.
Artificial general intelligence, however, would have the ability to self-teach. This involves more than mimicking existing data: it is the ability to apply data to gain new knowledge, which in turn means independently adjusting, analyzing, and acquiring knowledge without being limited to particular tasks. It would enable AI systems to demonstrate self-directed decision-making, adept problem-solving, and imaginative thought processes, which is actually similar to human intelligence!
Q-learning and Q* algorithm
Two ingredients are said to be involved here: a machine learning technique called Q-learning and an algorithm called the Q* algorithm. Let's take a deeper look at the inner workings of both.
Q-learning is a method where a machine learns by trial and error: an agent repeatedly takes the next best step toward a goal state in order to maximize a reward, updating its estimates of how valuable each action is along the way. The Q* algorithm, by contrast, is typically used in question-answering systems and operates by combining semantic and syntactic information. Semantic information is used to make decisions about the search space: it allows the algorithm to understand the significance of different paths and to determine when a particular path should be terminated or when a promising path should be explored further. Syntactic information, on the other hand, helps the algorithm navigate through the search space by establishing relationships between nodes in a structured manner.
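To make the trial-and-error idea concrete, here is a minimal sketch of tabular Q-learning on a toy "corridor" of five cells, where the agent starts at the left end and earns a reward for reaching the right end. All the names and numbers (the corridor size, the learning rate, and so on) are illustrative choices for this example, not anything from Project Q* itself.

```python
import random

N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # move left or right
ALPHA = 0.5           # learning rate
GAMMA = 0.9           # discount factor
EPSILON = 0.1         # exploration rate

# Q-table: estimated future reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action, clamp to the corridor, and return (next_state, reward)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        # Q-learning update: nudge Q toward reward + discounted best future value
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy at each cell should be to move right
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

After enough episodes, the reward from the goal "propagates" backwards through the table, so the learned policy moves right from every cell. This is exactly the trial-and-error loop described above, just on the smallest possible problem.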
The controversy and future implications
Until now, AGI has been a mere concept, nowhere close to being implemented. All of a sudden, OpenAI has supposedly decided to get creative and advance AI to the point where it could either match humans in intelligence or surpass them.
A while ago, on November 17, 2023 to be specific, Sam Altman, the CEO of OpenAI, was abruptly fired. The official explanation pointed to communication issues, with the board saying he had not been consistently candid with it. But plenty of rumours say it was due to the discovery of Project Q* and the ethical questions surrounding it, given that it could become problematic in the future.
According to some experts, AGI's development is moving faster than our ability to understand the full consequences of its existence. On the other hand, several AI researchers are quite unbothered by this supposed innovation, stating that we are still quite far from creating hyperintelligent beings with what we have now, so Q* is not something to be apprehensive about.
Funny how the development of a math-solving AI caused such a fuss in a huge company, but hopefully all the rumours about it will be cleared up in the near future. In the meantime, we can always do our part in preventing an AI apocalypse: remember kids, always say "please" and "thank you" to ChatGPT!