6 Key Insights into AI Trends in the Technology Sector


Fundswire article | Trustnet


Publication date: 23 December 2024

Published by: Liontrust Asset Management

AI Trends in the Technology Sector

We are fresh off the plane from Silicon Valley, having spent a week meeting with around 30 companies spanning software, hardware, B2C platforms, fintech, healthcare, and energy infrastructure. Commensurate with the diffusion of AI across the economy, we are finding opportunities broadening out from within the technology sector. As usual, half of the companies we met with we hold, while the other half we are considering for our watchlist. Our trip yielded six key insights:

1. AI scaling laws

The emergence of post-training and inference scaling laws is driving additional compute requirements.

2. Software 1.0 vs. Software 2.0

Traditional software is being completely rearchitected.

3. Vertical large language model agents

Agents are here, poised to disrupt software-as-a-service business models.


4. The importance of data portability

Value now resides within the data layer of the new technology stack.

5. B2C platforms outpace B2B software

Competition is intensifying and switching cost advantages are eroding.

6. New technology breakthroughs fuel growth opportunities

Synthetic data is unlocking advances in robotics.

The major insight of the last seven years of developing new AI systems is that the scale of compute matters – as we scale both compute and data, we train better models, which broadens the end-use cases of AI and enables revenue opportunities across the economy. This trend has held true since 2017 and we believe it has at least another five years to play out, a viewpoint shared by Kevin Scott (CTO of Microsoft), Sam Altman (co-founder and CEO of OpenAI), and Jensen Huang (CEO of Nvidia), despite becoming the subject of investor debate over the past month.

After sitting down with companies across the ecosystem, including Nvidia, we are of the opinion that there is no imminent change in pre-training scaling laws – compute requirements are increasing c.10x every two years with each new vintage of models.
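The trajectory this implies can be illustrated with some back-of-the-envelope arithmetic. This is a sketch only: the baseline FLOP count and the timeline are hypothetical assumptions, not figures from Nvidia or any model developer.

```python
# Hypothetical sketch of the pre-training scaling trend described above:
# compute requirements growing roughly 10x with each two-year model vintage.
# The baseline of 1e25 FLOPs is an illustrative assumption.

def training_compute(years_from_now: float, baseline_flops: float = 1e25) -> float:
    """Project training compute assuming ~10x growth every two years."""
    return baseline_flops * 10 ** (years_from_now / 2)

for years in (0, 2, 4, 6):
    print(f"year +{years}: {training_compute(years):.1e} FLOPs")
```

Compounded over the "at least another five years" the managers expect, this is roughly a 300x increase in per-model training compute.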

There are, however, two new scaling laws emerging. The first is in post-training. This essentially makes the model a specialist by training it on domain-specific data, historically achieved through reinforcement learning from human feedback (RLHF). More recently, however, AI-generated feedback has become a second reinforcement avenue – the surprise for us was hearing that the quality of this training is already on par with human feedback. We believe that compute requirements for post-training will soon outstrip those for pre-training.

The second is inference-time reasoning, and this has only just begun, unlocked by OpenAI's newest model, o1. The longer the context window and the longer the model thinks, the more compute is required. We are only in the very early stages of inference deployment across the economy, but we think that inference will ultimately dwarf training as an end market for AI compute.

Source: Nvidia, 2024

The Evolution of AI Models

On 12th September 2024 we saw – to little fanfare – the second most defining moment of today's AI revolution (second only to the launch of ChatGPT, AI's own lightbulb moment, in November 2022). OpenAI released the preview version of a new set of models, o1, capable of both system 1 and system 2 thinking. In simple terms, these models can now reason, not simply predict – this is 'system 2 thinking'. We have now reached the stage where AI surpasses PhD-level humans in terms of IQ, and this has given birth to a whole new generation of software.

Model intelligence – IQ test. Source: TrackingAI (2024)

Software Revolution

Software 1.0 vs Software 2.0 – traditional software is being completely rearchitected. We have written at length this year about the threat AI-infused software poses to the business models of enterprise software companies. This threat is now becoming an actuality: the transition from software 1.0 to software 2.0 is underway. The ramifications for both software and hardware companies, we believe, are significant.

Software 1.0 is what we have all grown up with and live in every day – thousands of lines of code essentially telling computers how to act in a given situation. This is deterministic software, and it runs on CPUs (from Intel or AMD), with the most entrenched operating system being Microsoft Windows. Software 2.0 is neural networks – an AI model is embedded within the software, and it decides on the best course of action by itself. The equivalent operating system in this world is Nvidia's software stack, and the underlying unit of compute is the GPU, not the CPU.
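The distinction can be sketched in a few lines of code. This is a toy illustration with hypothetical function and class names; the "model" here is a stub standing in for a trained neural network.

```python
# Software 1.0: explicit, deterministic rules written by a programmer.
def route_ticket_v1(text: str) -> str:
    if "refund" in text.lower():
        return "billing"
    if "password" in text.lower():
        return "security"
    return "general"

# Software 2.0: an embedded model decides the best course of action itself.
def route_ticket_v2(text: str, model) -> str:
    return model.predict(text)

class ToyModel:
    # Stand-in for a trained neural network running on accelerated compute.
    def predict(self, text: str) -> str:
        return "billing" if "money" in text.lower() else "general"

print(route_ticket_v1("I need a refund"))
print(route_ticket_v2("Where is my money?", ToyModel()))
```

The first version only handles the cases its programmer anticipated; the second defers the decision to a learned model, which is why it runs best on GPU-class infrastructure.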

It is early days, but we think that in ten years' time this is how software will be run: on accelerated compute infrastructure. The key unlock has been AI's ability to automate 50% of coding, which we see rising to 90% before this time next year.

What does this mean for traditional software? Incumbents have got to pivot, and they have to pivot quickly. We do not think the moats that software-as-a-service business models have enjoyed over the past decade hold anymore – think of Salesforce, Intuit, Adobe, Workday, even Microsoft. These are the poster children of the deterministic era.


The companies with a first-principles advantage in building software 2.0 (machine learning) are those that have built on cognitive architectures from the outset, rather than retrofitting existing software to work within a new framework.

Think of today’s software in terms of the house you live in – it depends upon the foundations on which it was built. The agentic software coming to market requires these foundations to be reconfigured. Incumbents at least have the deep pockets required to re-architect, and re-architect they must.

What about hardware? We think the role of the CPU – and therefore the relevance of dominant CPU providers like Intel – will continue to diminish. As we switch to software 2.0, we need to see the world’s $1 trillion installed base of data center infrastructure modernised to accommodate accelerated compute. This is already happening.

Agents Revolutionizing Software

Vertical large language model agents – agents are here, poised to disrupt Software-as-a-Service business models. It was not long ago that Satya Nadella, CEO of Microsoft, heralded ‘The Age of Copilots’. AI-powered Copilots were proclaimed to transform the way we work, assisting us in tasks from software development to writing speeches. In some instances we have seen dramatic ensuing productivity gains, but these have for the most part been confined to a few killer use cases: software development and customer service, where Copilots have delivered productivity gains of c.50%, have proved the exception rather than the rule.


The Age of Copilots, best thought of as assistive AI software, has turned out to be short-lived. This is because something better has come along. AI is now capable of acting on our behalf: given a task, AI can go away and complete it with minimal human intervention. This new breed of software – AI agents – has seen an explosion since September. We have lost count of the number of software companies that have brought AI agents to market since, and we have already reached the point where domain-specific agents are hitting 100% accuracy.

Agentic software is part of software 2.0, built on a new cognitive architecture centered around an LLM (large language model) that decides the control flow of an application in real time. The LLM essentially reasons its way through a software task, much as we would reason over what to have for breakfast. It bears no resemblance to the software we all know and use.
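A minimal sketch of such a cognitive architecture is below. Everything here is a hypothetical stand-in: `fake_llm` is a stub in place of a real model API, and the single `search` tool is illustrative. The point is the structure – the model, not hand-written code, chooses each next step.

```python
# Minimal agent loop: at every step the LLM decides the control flow,
# either invoking a tool or declaring the task finished.

def fake_llm(task: str, history: list) -> dict:
    # A real agent would call a hosted LLM here; this stub plans one
    # lookup and then finishes, to keep the example self-contained.
    if not history:
        return {"action": "search", "input": task}
    return {"action": "finish", "input": "answer based on " + history[-1]}

TOOLS = {"search": lambda query: f"results for '{query}'"}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = []
    for _ in range(max_steps):
        decision = fake_llm(task, history)  # the LLM picks the next step
        if decision["action"] == "finish":
            return decision["input"]
        history.append(TOOLS[decision["action"]](decision["input"]))
    return "step budget exhausted"

print(run_agent("Summarise Q3 churn"))
```

In a deterministic (software 1.0) application, the sequence of steps is fixed at write time; here it is decided at run time by the model, which is what makes the control flow "cognitive".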

We see today’s dominant enterprise software companies as jacks-of-all-trades, masters of few – they cover too many bases to deliver the highest-quality user experience, compounded by the fact that they are purchased by C-suite executives, not end users. Specialised B2B companies catering to a particular vertical within enterprise software are proving capable of delivering a 10x better experience, with vertical LLM agents emerging as the unlock that gets them into enterprise budgets. This is based on the early productivity gains we are witnessing, which are staggering.


In customer service, AI agents are now achieving deflection rates of 50-90% (meaning that, with the most effective AI agents, calls are passed through to a human only 10% of the time). Agency, a new company backed by Sequoia and HubSpot Ventures, is deploying agentic software to streamline B2B workflows such as CRM and ERP, boosting productivity 10x for its B2B customers, while Anthropic has released an agent that can now even control your computer via the mouse and keyboard. If we take a step back and think about what this could mean for global productivity, Nvidia frames the total addressable market for AI productivity gains as a 10% efficiency uplift on the $100 trillion global economy. Agents are a key part of this.

Can the Salesforces of this world compete with their own AI agents? They are certainly trying, and time will tell, but we see a painful recalibration of pricing models ahead. The pricing structure of this new AI software is based on consumption (work done) as opposed to charging per seat. As such, this new AI software not only offers a leap forward in productivity but is also vastly more cost-effective. The sticking point, however, is that software giants like Salesforce and Adobe rely on seat-based business models and will have to self-disrupt their own high-margin business lines to compete effectively.
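The gap between the two pricing models can be made concrete with some back-of-the-envelope arithmetic. All prices and volumes below are made-up assumptions for illustration, not figures from any vendor.

```python
# Illustrative comparison of seat-based vs consumption-based pricing.
# All numbers are hypothetical assumptions.

def per_seat_cost(seats: int, price_per_seat_month: float, months: int = 12) -> float:
    """Traditional SaaS: pay per licensed user, regardless of usage."""
    return seats * price_per_seat_month * months

def consumption_cost(tasks_completed: int, price_per_task: float) -> float:
    """Agentic pricing: pay for work actually done, not headcount."""
    return tasks_completed * price_per_task

seat_bill = per_seat_cost(seats=200, price_per_seat_month=75.0)
usage_bill = consumption_cost(tasks_completed=60_000, price_per_task=0.10)
print(f"per-seat: ${seat_bill:,.0f} / year, consumption: ${usage_bill:,.0f} / year")
```

Under these assumed numbers the consumption model is an order of magnitude cheaper for the buyer, which is precisely why it forces incumbents with seat-based revenue to self-disrupt.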

The Significance of Data Portability

As every company adopts an AI strategy, you will have heard them all talk about their respective data advantages. This is particularly true for system-of-record companies – providers of software that serves as an authoritative source for critical data elements within an enterprise. This includes CRM systems like Salesforce, ERP systems like SAP or NetSuite, and HR systems.