Before the SaaSpocalypse, there was Excel
Build vs buy has always been a contextual decision. AI doesn’t change that, it just moves where the lines fall.
Markets are spooked. The narrative goes something like: the cost of building software is approaching zero, therefore nobody will buy software anymore, therefore SaaS is finished. You can feel the anxiety in the discourse right now. Investors are nervous. Founders are nervous.
I think the anxiety is pointing at something real but has landed in the wrong place.
There’s a version of this that’s true
Let me not be dismissive about it. If you’re a solo operator or a small team paying $20 a month for a tool that does one discrete, well-defined thing, a simple automation, a basic tracker, a form that sends emails, that subscription is genuinely worth revisiting. You can describe your exact workflow to an AI, get something purpose-built rather than something generic, and probably never look back. That part of the market is under real pressure and pretending otherwise is wishful thinking.
There’s a framing I keep seeing called the barbell economy, which I think is directionally correct. The thin middle of the software market, think products that are moderately complex, don’t have deep network effects, aren’t compliance-heavy, that’s the danger zone. Tools that competed on convenience rather than depth are the most exposed.
But that’s far from being the whole market.
The SaaSpocalypse confuses the cost of code with the cost of software
As with the “whither software engineers” debate, coding and making software were never the same thing.
When a regulated business buys enterprise software, they’re not just buying features. They’re buying accountability. They’re transferring risk. They’re getting SOC2 certifications, audit trails, contractual warranties, and a phone number to call when something goes wrong at 2am. “The AI built it” is not a defence in a compliance review. “We have an enterprise agreement with a vendor who is contractually on the hook” is a very different conversation.
The consequences of getting this wrong are concrete and documented. In November 2024, the FDA issued a Warning Letter to Applied Therapeutics after a routine inspection found that electronic clinical trial data, captured in a third-party system, had been deleted, including the audit trails, for all 47 subjects in a study of a rare disease drug for children. The FDA couldn’t verify the data. The drug approval was blocked. Within three weeks, two shareholder lawsuits had been filed in federal court. The cascade from non-compliant data management to blocked approval to litigation took less than a month. The stock price dropped 80%. The software wasn’t the product, the audit trail was. And under 21 CFR Part 11, the FDA’s electronic records regulation, it has to be certified before any of it means anything.
Financial services has the same structure with its own enforcement record. MiFID II requires every transaction to be reported and records retained in an auditable format. This isn’t advisory, the FCA has fined thirteen firms for transaction reporting failures, ranging from a £99,200 penalty against Infinox Capital to a £34.3 million fine against Goldman Sachs International. The size of the firm didn’t change the nature of the obligation.
These are just two of the clearest examples. The same logic holds in aviation, where FAA airworthiness certification requirements mean the maintenance record and the certificate of airworthiness are effectively the same thing: an inadequate or non-compliant system doesn’t just create operational risk, it invalidates the aircraft’s legal status. In nuclear power, the NRC requires that safety-critical software go through verification and validation processes it will formally accept before the system can operate. In legal services, trust account management is subject to bar association oversight in most jurisdictions, with software that handles client funds needing to meet specific compliance standards. In construction, safety incident reporting systems need audit trails that can survive a legal dispute or regulatory investigation years after the fact.
Across all of these, the same structural reality holds: the regulatory certification of the system is a legal prerequisite, not a feature. The data won’t be accepted, the certificate won’t hold, the trade won’t count without it. No internal AI build absorbs that obligation. The cost of these tools was never really in the code.
But the decision was never uniform across categories
Build vs buy has always varied by what the software is actually doing and why, and in my experience organisations typically sit on a spectrum, often reflecting individual tastes as much as formal policy or regulatory requirements.
I want to sketch a rough taxonomy here, not a precise decision framework, more a way of thinking about it.
Accountability-anchored software is where the primary value isn’t the features, it’s the fact that a vendor is contractually responsible for what the software does. General ledger, payroll, regulatory submissions, clinical trial data, building compliance documentation, rights management. The argument for buying here is structural and durable regardless of what AI can now produce. If anything, buying becomes more important as building becomes more tempting, because the temptation to accidentally absorb undisclosed liability is now much easier to act on.
Network-value dependent software is where the value comes substantially from the fact that many organisations use it. Fraud detection models in finance get better because they’re seeing transaction patterns across millions of accounts, not just yours. Adverse event signal detection in pharma depends on population-level data. Materials pricing benchmarks in construction, programmatic advertising infrastructure in media, you cannot replicate these internally because the value is extrinsic to any single organisation. This category stays buy for reasons that have nothing to do with build cost.
Contextually specific workflow tooling is where things get more interesting. These are tools that vendors have built for a generalised version of a process, but where your process is specific enough that the generic version is always a partial fit. Internal deal pipeline tracking in finance. Lab notebook tooling in pharma. Project-specific cost tracking on a construction site. Editorial workflow and commission tracking in media. These exist because bespoke was previously too expensive, not because vendors were genuinely solving the problem well. In a lot of cases, organisations bent their workflows to fit the tool rather than the other way around.
The distinction I find most useful within this category is process maturity. Where the underlying process is well-understood and stable, a vendor has usually done the hard work of generalising it well and you’re probably better off buying. Where the process is still evolving, either because your domain is specific or because the capability itself is genuinely new, investing in models, tooling, and people who can build contextually is likely to produce better outcomes than buying something that’s 70% right and living with the gap.
If you’re building software products rather than buying them, this taxonomy is worth reading from the other side of the table. Which category you’re actually in, as opposed to which category your sales deck implies, is probably the most important strategic question you can ask right now. And if you’re in the third, where does your R&D create network-like effects that bring additional value to your customers?
Excel already told us this
Excel has probably been the most successful enterprise software of the last thirty years. Not because it’s the best spreadsheet (full disclosure, it’s my favourite), but because it’s been the best contextual fit software available to non-engineers. It let domain experts, whether it’s a finance analyst, clinical operations manager, or site quantity surveyor, build exactly what they needed without going through IT. No vendor could match that fit, because no vendor can possibly understand their specific process and data as well as they did.
Every COTS implementation that quietly failed did so partly because someone was still running the real process in a spreadsheet alongside it. SAP go-lives where the finance team was still reconciling in Excel six months later. CRM deployments where the sales team kept their actual pipeline in a shared Google Sheet. The purchased system handled compliance and reporting. The spreadsheet handled reality.
AI is the second time this has happened. Excel democratised contextual tooling for domain experts within limits, the ceiling was what a determined non-engineer could do with formulas and eventually VBA. AI raises that ceiling dramatically and replaces cell references with natural language. The instinct is identical. The tools are considerably more powerful.
What this means is that the shift toward building contextually specific tools isn’t new behaviour. It’s the Excel instinct with a higher ceiling. Organisations have always wanted tools that fit their exact mental model of the work. They’ve always found workarounds when vendors couldn’t provide them. AI makes the workaround significantly more capable.
The same freedom that made Excel work also made Excel sprawl
But there’s a twist.
The reason Excel worked as a shadow IT engine is precisely because people had the freedom to just do it. No procurement process, no IT sign-off, no change management programme. Someone needed a thing, built a thing, and if it worked it survived. The businesses with the most successful AI adoption seem to follow exactly the same pattern: not the organisations that stood up a Centre of Excellence and issued Copilot licences, but the ones where a curious person in finance, or a clinical ops manager, or a site engineer had enough latitude to experiment, built something that fit their actual context, and showed someone else.
But Excel is also exhibit A for what happens without any structural thinking about what you’re building and why. Seventeen versions of a critical model living in different people’s OneDrive folders, no single source of truth, the person who built it has left and nobody quite understands it anymore. Shadow processes that became load-bearing infrastructure without anyone consciously deciding that. The freedom that produced the breakthrough also produced the sprawl.
AI-assisted building exhibits the same dynamic at a higher velocity and a higher ceiling. The tools are more powerful and can handle a wider variety of data, which means both the value and the fragility can accumulate faster.
Main quest vs side quest
There’s one more risk worth naming, which is subtler than sprawl.
Every organisation has a main quest: the thing that creates real value for customers, that differentiates you in the market, that deserves your best people’s attention. Everything else is a side quest. Sometimes necessary, sometimes genuinely useful, but never the point.
The games industry got tangled up in this around 15 years ago. Studios became obsessed with building their own engines, their own middleware, their own live service infrastructure. They had the engineers, so they used them. A lot of them lost the thread of what actually made great games in the process, or even stopped releasing games at all. The studios that thrived stayed focused on the player experience and let someone else own the engine.
The same trap is opening up now. The freedom to build contextually specific tools is genuinely valuable, I endorse the Excel instinct. But a side quest experiment that works can quietly become load-bearing infrastructure that nobody consciously chose to invest in. And if your domain experts are spending their energy maintaining bespoke tooling rather than doing the work that actually wins, you’ve traded one kind of vendor lock-in for another.
Yes, theoretically, you could build your own ERP. The question is whether that’s how you win.
So where does this leave everyone?
I don’t think SaaS dies. I think it bifurcates more sharply than it already has, and the middle has a harder time justifying itself. Accountability-anchored and network-value-dependent software stays buy, for structural reasons. Contextually specific workflow tooling shifts meaningfully toward build-with-AI, particularly where vendor fit has always been partial and process maturity is low. The commoditised convenience tier, tools that existed purely because generic was good enough and bespoke was out of reach, could well get absorbed by the hyper-scalers or frontier model companies (if they can keep their funding going long enough).
For buyers, the harder question is the one I can’t answer for anyone else: how do you hold the tension between giving people the freedom to experiment, which is where the real value comes from, and maintaining enough intentionality that you don’t end up with Excel sprawl at higher velocity, or a team whose main quest has been quietly colonised by side quests nobody consciously chose?
The organisations navigating it well seem to know clearly what their main quest is and use that as the filter. Not “can we build this?” — that answer is increasingly yes. Not even “should we buy or build this?” But: does this serve the thing we’re actually here to do, and are we the right people to own it?
For software vendors, the territory has become genuinely more treacherous, and I think the risk is less about AI eating the market than about vendors misreading which category they’re actually in. If your value is accountability and trust infrastructure, double down on that, it’s more defensible than ever and you should be making it more visible, not less.
If your value is network effects and shared data, the moat is real but it needs to be actively maintained and understood. If you’ve been selling contextual fit to buyers who bent their workflows to accommodate you (and you know who you are), that’s the position that needs honest examination. The buyers now have another option, and it fits better by definition.
The SaaSpocalypse framing implies a single wave that destroys everything in its path. As ever, reality is going to be messier than that. Some categories get stronger. Some get hollowed out. New categories are going to get created. The ones in the middle are going to have to get a lot clearer about what they’re actually selling and to whom.
The Excel instinct was always right. The sprawl was real too. The AI-amplified version of both is already underway. Those paying attention are going to be best placed to navigate this and come out the other side of the so-called SaaSpocalypse.