The software sector is in an interesting place right now, with a lot of cross-currents. Growth has slowed and there is little estimate or narrative momentum. This is largely due to tight budgets, a lack of headcount growth at most customers, and high levels of market penetration. But it is being extrapolated into something bigger in the form of AI disruption.
Ironically, the negative sentiment on software and AI is largely due to AI not being reliable enough for most business use cases. The current generation of AI models hallucinate too much. And the cognitive architecture necessary to manage this lack of reliability is still being built out. So early product offerings from software vendors have mostly sucked.
This is going to meaningfully shift over the next 18 months. First, it is likely that a new generation of models will come to market by the end of this year. These models will be more reliable and more useful across a broader array of tasks. Second, the infrastructure around these models is improving at a rapid pace. Developers are learning how to build applications that allow LLMs to operate with more reliability. The tooling will only improve through the course of this year. And finally, the heavy involvement of open source in the development of AI means that knowledge is rapidly diffused and iterated upon.
SaaS of Theseus
You might be thinking, aha, so software will end faster than I thought! But even with improved models and tooling, LLMs are unlikely to become fully reliable. Hallucinations will still occur. Humans will need to be in the loop, deterministic scaffolding (data, workflow/process, alerting) will still be very necessary, and specialization will be critical to the performance and efficiency of these new AI systems. And most of the tasks being addressed by AI are tasks humans do today, which makes AI complementary to existing systems.
The river of AI is likely to flow around the massive existing rock formation that is deterministic software. Gradually eroding it away versus sweeping it away. Software bears are leaping to a fully eroded end state (fully autonomous agents). But that end state is likely well over a decade away.
In the interim period, most existing software vendors have a right to win and to gradually rebuild themselves from workflow wrapped around a database, to workflow with some AI infused wrapped around a database, to workflow and AI agents wrapped around a database, to AI agents wrapped around a database. This is going to happen iteratively with heavy involvement of human end users who today work within existing systems. This gives incumbents an advantage, if they act aggressively to use it.
The Task TAM vs Software TAM
What about seats? How will they price? I don’t think that matters much if this is a transition versus a major discontinuity. Historically, value in software has been easy to capture if created via a differentiated offering. And if software vendors are successful in the transition outlined above, the “task TAM” is likely an order of magnitude larger than the existing software TAM.
Take Sales Cloud from Salesforce as an example. It is an ~$8B annual business today. It serves ~10 million users. Those users are likely paid $2-3 trillion in wages and benefits. It isn’t unreasonable to expect AI to automate 15% of tasks from this base of users ($300B) of which 10% ($30B) is captured by the vendor driving that increase.
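The Sales Cloud arithmetic above can be sketched as a simple back-of-the-envelope model. All of the inputs are rough estimates from the text (using the low end of the $2-3T wage base), not reported figures:

```python
# Back-of-the-envelope task-TAM sizing for a software vendor.
# All inputs are rough estimates, not reported data.

wage_base = 2e12         # low end of ~$2-3T in wages/benefits for ~10M users
automation_share = 0.15  # fraction of task value AI plausibly automates
capture_rate = 0.10      # fraction of automated-task value the vendor captures
current_revenue = 8e9    # Sales Cloud annual revenue today (~$8B)

task_value_automated = wage_base * automation_share   # ~$300B
vendor_capture = task_value_automated * capture_rate  # ~$30B

print(f"Task value automated: ${task_value_automated / 1e9:.0f}B")
print(f"Vendor capture:       ${vendor_capture / 1e9:.0f}B")
print(f"Multiple of today's revenue: {vendor_capture / current_revenue:.1f}x")
```

Even with these conservative assumptions, the captured opportunity (~$30B) is several times the existing ~$8B business, which is the sense in which the task TAM dwarfs the software TAM.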
In some end markets, the size of the task TAM is unbounded, meaning that a substantial increase in low-cost human hour equivalents would drive a corresponding increase in business value. For example, software development has an unbounded task TAM. There is value to a business in creating a lot more software, assuming it is useful software. Another example of an unbounded domain is design. There is value to a business in having a wider funnel of new ideas to test and iterate upon.
This isn’t true for every end market, of course. Increasing the number of tasks in the financial close process would likely not create much business value. But the point is that the AI opportunity for software is likely a lot larger than is currently being contemplated. And essentially none of that optionality is priced into the sector today.
Velocity is an Important Input
I think about the software market in two broad buckets. The first is low velocity categories where not much will change. For example, payroll processing is not likely to be impacted much by AI. Payroll is a solved problem and there is a large amount of risk inherent in changing what is already working (for minimal upside). This is true for much of HCM, financials, and many regulated workflows. There is likely some TAM expansion opportunity and some share shift will occur between AI laggards and leaders, but it will happen relatively slowly and likely be captured by existing vendors.
The second bucket is where there is likely to be a lot of change. Creative, sales, marketing, customer support, workflow automation, RPA, and software development are examples of these markets. The task TAM is large and in some cases unbounded. And while there is risk associated with change, there is more upside. This will create substantially more category velocity and many more new entrants. There is also significant TAM expansion that is likely to occur. The existing competitive advantages of vendors provide them a right to win, but they must act on it and invest heavily in AI-first capabilities.
Below is how I am thinking about the latter bucket:
Exposure to both high velocity and low velocity end markets is desirable - Using Salesforce as an example, sales engagement is likely going to be infused with AI quickly. This is a high velocity category where customers will be willing to experiment with different vendors to drive productivity. Conversely, salesforce automation is a lower velocity category. CRMs are expensive to implement and maintain, and AI isn’t useful as a database replacement. It is beneficial to have exposure to both, with one providing durability and the other starting the journey of iterating with AI.
There will be returns to scale due to the cost of building out AI capabilities - Central AI teams will be built to infuse products with generative AI capabilities and eventually agentic capabilities. Larger vendors will be able to scale these teams across more products and associated task TAMs. Eventually this reality will drive market consolidation.
An unbounded task TAM creates a better risk/reward - For example, I view the opportunity in front of Adobe and Canva as significantly more attractive than that of customer support vendors. Both can benefit from AI, but the former has more upside potential.
Proven product velocity is required - Companies that can’t ship will see their advantage erode more quickly.
Last Thought: Margin Expansion Has Probably Peaked
Software is a business that requires forward investment. Product idea to first meaningful revenue is usually a 3-year process. For the last few years, returns on forward investment have been minimal. Correctly, software vendors have become much more disciplined and dialed back investment. But that is likely going to change. Investors should expect R&D intensity to increase for both offensive and defensive reasons.
This creates some obvious and not-so-obvious implications. It creates some risk of near-term multiple compression (though some of this might be priced in) before growth follows through. Vendors may also have to invest through a down macro cycle, which will be challenging. But, particularly for vendors in the higher velocity categories, it raises questions about who the right owner is. How do you underwrite a high cost of capital leveraged buyout when there is a major technology transition? Are there advantages to aggressively consolidating vendors to free up investment for AI capabilities? Probably worth a deeper dive in another post.