Manus AI: A Challenge to OpenAI and Google?
The emergence of Manus, a cutting-edge artificial intelligence (AI) agent capable of executing complex real-world tasks autonomously, has sparked excitement within the industry. Currently available only through an invite-based preview, this independent AI system was developed by a relatively obscure startup backed by investors in China.
What sets Manus apart from traditional AI models?
Unlike conventional AI tools that primarily offer suggestions or respond to queries, Manus is designed to operate independently and deliver fully functional results. This AI agent aims not only to provide answers but also to accomplish tasks proactively. Its creators tout its ability to handle diverse real-world applications, such as developing interactive educational content, analyzing stock market trends, comparing financial products, facilitating B2B supplier sourcing, and crafting detailed travel itineraries.
AI agents like Manus represent a significant breakthrough in AI technology, as they can interact with their environment, gather information, and execute tasks autonomously to achieve preset goals. Newsweek's report highlights Manus's autonomous operation, in contrast to many AI models that rely on detailed human guidance through text or voice inputs.

Why the hype around Manus?
Despite the limited information available about its team, structure, and underlying AI technology, Manus has captured the attention of the AI community. A video demonstration posted on X (formerly Twitter) showcases Manus navigating websites automatically, using various functions, and displaying its workflow in real time.
According to its developers, Manus has outperformed OpenAI's AI models in evaluations on the GAIA benchmark, a widely used assessment for AI assistants and generative AI tools. Manus surpassed previous state-of-the-art (SOTA) systems in benchmark tests measuring the ability to tackle real-world challenges.

The comparison with OpenAI's AI models reveals the following:
- Level 1: Manus (86.5%), OpenAI (74.3%), previous SOTA (67.9%)
- Level 2: Manus (70.1%), OpenAI (69.1%), previous SOTA (67.4%)
- Level 3: Manus (57.7%), OpenAI (47.3%), previous SOTA (42.3%)
Reports suggest that Manus can outperform some of the most advanced AI models currently on the market.