10 Exciting Updates from OpenAI and Hugging Face

Published On Thu Apr 24 2025

OpenAI's Five New Models, Hugging Face's Open Robot, U.S. ...

OpenAI refreshed its roster of models and scheduled the largest, most costly one for removal. What's new: OpenAI introduced five new models that accept text and image inputs and generate text output. Their parameter counts, architectures, training datasets, and training methods are undisclosed. The general-purpose GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano are available via API only. The reasoning models o3 and o4-mini are available via API to qualified developers as well as users of ChatGPT Plus, Pro, and Team, and soon ChatGPT Enterprise and ChatGPT Education. In July, OpenAI will retire GPT-4.5, which it introduced as a research preview in late February.

GPT-4.1 family:

In an odd turn of version numbers, the GPT-4.1 models are intended to be cost-effective equivalents to GPT-4.5 and updates to GPT-4o. They accept inputs of up to 1 million tokens (compared to GPT-4.5’s and GPT-4o’s 128,000 tokens).

o3 and o4-mini:

These models update o1 and o3-mini, respectively. They have input limits of 200,000 tokens and can be set to low-, medium-, or high-effort modes to process varying numbers of reasoning tokens, which are hidden from users. Unlike their predecessors, they were fine-tuned to decide when and how to use tools, including web search, code generation and execution, and image editing.

Behind the news:

Late last year, OpenAI introduced o1, the first commercial model trained via reinforcement learning to generate chains of thought. Within a few months, DeepSeek, Google, and Anthropic launched their respective reasoning models DeepSeek-R1, Gemini 2.5 Pro, and Claude 3.7 Sonnet. OpenAI has promised to integrate its general-purpose GPT-series models and o-series reasoning models, but they remain separate for the time being.

Why it matters:

GPT-4.5 was an exercise in scale, and it showed that continuing to increase parameter counts and training data yields ongoing performance gains. But it wasn't widely practical on a cost-per-token basis. The new models, including those that use chains of thought and tools, deliver high performance at lower prices.

We're thinking: Anthropic is one of OpenAI's key competitors, and a large fraction of the tokens it generates via API are for writing code, a skill at which its models are particularly strong. OpenAI's emphasis on models that are good at coding could intensify competition in this area!

Hugging Face's Open Robot

Hugging Face has made a name for itself by providing open AI models. Now it's providing an open robot. What's new: Hugging Face acquired the French company Pollen Robotics for an undisclosed price. It plans to offer Pollen's Reachy 2, a robot that runs on code that's freely available under an Apache 2.0 license, for $70,000.

How it works:

Reachy 2 has two arms, gripper hands, and a wheeled base (optional). It’s designed primarily for education and research in human-robot interaction in real-world settings.

Behind the news:

Last year, Remi Cadene, who worked on Tesla's Optimus, joined Hugging Face to lead robotics projects. In May, he and his team rolled out LeRobot, an open source robotics code library that provides pretrained models, datasets, and simulators for reinforcement learning and imitation learning. In November, Nvidia announced a collaboration with Hugging Face to accelerate LeRobot's data collection, training, and verification.

Why it matters:

Hugging Face’s acquisition of Pollen reflects an industry-wide trend.