Google Gemini Is Entering the Advent of Code Challenge
If 2024 taught us anything in the realm of Generative AI, it is that coding is one of the most promising applications for large language models (LLMs).
In this blog post, I will describe how I am using one of the most advanced LLMs, Gemini Experimental 1121, which currently leads the LMArena Leaderboard, to tackle the Advent of Code challenge.

I will outline my approach and share my open-source code repository so that readers can explore it further and replicate the results.
What is the Advent of Code Challenge?
For those not familiar with the Advent of Code challenge: it is an annual event that runs from December 1st to December 25th, offering daily programming puzzles in the style of an advent calendar. Each day, a new two-part puzzle is released, letting participants test their coding and problem-solving skills. It’s a fun way for developers of all levels to practice.
Both parts of the daily challenge revolve around a similar problem and use the same input data. The idea is to write a Python program that will process the input data and produce a solution (typically a number). The competition runs for 25 days and allows users to collect a maximum of 50 stars (2 per day).
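To make this concrete, here is a minimal sketch of the shape a typical daily solution takes: parse the raw input, solve each part, and print the answers. The puzzle logic below (summing numbers, then summing their squares) is a made-up stand-in, not an actual 2024 puzzle.

```python
def parse(text: str) -> list[int]:
    """Turn the raw puzzle input into a convenient data structure."""
    return [int(line) for line in text.splitlines() if line.strip()]

def part1(data: list[int]) -> int:
    # Part 1 of the (hypothetical) puzzle: sum the numbers.
    return sum(data)

def part2(data: list[int]) -> int:
    # Part 2 typically builds on part 1 with a twist: here, sum of squares.
    return sum(n * n for n in data)

if __name__ == "__main__":
    sample = "1\n2\n3\n"  # in practice, read from the puzzle's input file
    data = parse(sample)
    print("Part 1:", part1(data))  # → 6
    print("Part 2:", part2(data))  # → 14
```

Both parts share the same parsed input, mirroring how the real puzzles reuse one input file across the day's two parts.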
Using LLMs for the Challenge
As mentioned above, the Advent of Code challenge is a great testbed for LLMs like Gemini Experimental 1121. Given the problem statement as a prompt, the model can generate a candidate solution, which can then be run against the puzzle input to check its answer.
For this project, I use Google's Gemini Experimental 1121, which has significantly improved coding and reasoning capabilities. It is available through Google’s AI Studio.
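The prompting loop can be sketched with the google-generativeai SDK (the library behind Google AI Studio). The model name "gemini-exp-1121" and the prompt wording below are my assumptions for illustration, not something prescribed by the challenge:

```python
import os

def build_prompt(problem_statement: str) -> str:
    # Wrap the raw puzzle text with instructions for the model.
    return (
        "Solve the following Advent of Code puzzle in Python. "
        "Read the input from a file named input.txt and print the answer.\n\n"
        + problem_statement
    )

def ask_gemini(problem_statement: str) -> str:
    # Requires: pip install google-generativeai, and an API key from AI Studio.
    import google.generativeai as genai
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-exp-1121")
    response = model.generate_content(build_prompt(problem_statement))
    return response.text  # the generated Python solution, ready to save and run

if __name__ == "__main__":
    print(build_prompt("--- Day 1: Example Puzzle ---"))
```

The returned code is saved to a file and executed locally against the real puzzle input; the printed answer is then submitted on the Advent of Code site.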
Each day’s challenge is organized in its own directory, containing the problem statement, the code produced by Gemini, and links to the conversations for transparency.
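The layout looks roughly like this (file names are illustrative, not the exact ones in the repository):

```
advent-of-code-2024/
├── day01/
│   ├── problem.md       # the puzzle statement
│   ├── solution.py      # code produced by Gemini
│   └── conversation.md  # link to the chat, for transparency
├── day02/
│   └── ...
└── ...
```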

Exploring LLM Capabilities
With this project, the aim is to explore how well state-of-the-art LLMs handle coding challenges. The hypothesis is that models like Gemini have advanced enough to solve most of these puzzles. However, success here does not imply that they are ready for more complex, real-world software engineering work.
I hope this project sheds light on the potential of LLMs in coding and provides insight into the future of LLMs + Coding. Follow me on Medium and LinkedIn for more on Generative AI, Machine Learning, and Natural Language Processing.
If you're interested in data science and AI, join our NLP London Meetups.