2nd Order Thinkers
Private Equity's AI Locked In

Is this a smart move, or just potentially a wedding in Vegas?

On May 4th, Anthropic announced it had launched an Enterprise AI services firm. The new company is a combination of Anthropic and several of the largest private equity (PE) firms on the planet — Blackstone, Hellman & Friedman, and Goldman Sachs.

Pretty much the same day, a different roster of PE firms did the same with OpenAI.

But what drove them to make the same decision?

Between them, the PE firms and the two biggest AI labs committed $11 billion to the same problem: getting mid-market companies to use THEIR AI.

Because Anthropic and OpenAI managed to convince the people overseeing the money to take on the risk on behalf of LPs (I'll explain those soon) and portfolio companies, in exchange for a high-margin services business that outlives the investment.

The press release sells this as democratizing AI for the mid-market.

I read the deal docs. Turns out, this isn’t an AI story; it’s a story about money, distribution, and how the people who carry the risk aren’t at the deal table.

To see who actually benefits and who’d suffer, we need to start with what forward-deployed engineers actually do, why PE is the channel of choice, and the devil in the details.


Forward-Deployed Engineers: An Old Dog and an Even Older Trick

No, the idea of “forward-deployed engineers” isn’t exactly groundbreaking.

It’s pretty much a rebrand of Solutions Architects or Sales Engineers; it’s customer-facing technical support on deeply technical products, often focusing more on deployment and post-sale customization.

A solutions architect at AWS helps their assigned customers resolve technical issues and, as the name implies, upsells additional AWS services.

Same as IBM service delivery or Salesforce CSAs (Customer Success Architects).

The label got popular because Palantir wears it, and Palantir reads as more AI than the rest.

If you have a product that is complicated to onboard, requires a lot of detail and time to implement well, and is highly specialized, it’s a common approach to have a “service-led growth” team in the organization.

And as far as the role of the “forward-deployed engineer” goes, it’s part technical expert, part sales, part customer feedback capture.

So the more interesting question isn’t about the forward-deployed engineers at all; it’s why both OpenAI and Anthropic announced the news simultaneously, in collaboration with private equity firms.


Why Private Equity (PE)? A Leg In the Door

The first, and obvious, answer is that PE firms are great sales channels.

Rather than having to target every mid-market company directly, they can rely on PE firms (which own tons of high-growth portfolio companies) to introduce them.

You need to remember that both Anthropic and OpenAI are new kids on the block.

Given the author’s love of mafia movies, imagine this:

The tech giants are like dealers with their own territories…

Microsoft owns the enterprise corridors. Excel first came out in 1985, and gradually, Microsoft came to hold the enterprise and old-style industry by the balls. Then Gmail arrives, and a few years later, Google has the heart of startups, and so on.

These guys don’t need to knock on doors anymore; the customers are so hooked (with no better alternative) that they all crawl back to one of them.

Then Anthropic and OpenAI arrive. They invented weed+, a drug we all feel the adrenaline when using, and long for when the service is down.

So to infiltrate and claim territory, of course, they can’t expect help from the old-timers; their only safe bet is to work with someone who isn’t threatened by their presence, or even welcomes it.

So they made a deal with the users’ financiers: the PE firms.

Imagine that! For a portfolio company, instead of borrowing money to buy their weed+, they get it straight up whenever they collateralize a part of their baby.

That’s the role PE plays in this complex setup: they typically have significant sway over how companies operate. It’s a pretty powerful relationship.


The Joint Venture (JV)

At the very top of the enterprise bracket, the implementation players are the typical big consulting firms selling transformation services; it used to be cloud, now it’s AI.

This new joint venture (JV), however, is explicitly targeting the mid-market. Here’s a part of the announcement:

The company (the joint venture) will serve as an accelerant in bringing AI solutions to mid-size companies… of both portfolio companies of the investment firms and independent companies that can benefit from the platform.

Basically, the targets are businesses that can’t necessarily afford McKinsey and Accenture-level fees, and don’t have in-house capability to spare. That’s the sweet spot to land forward-deployed engineers.

Or as the press release so nicely puts it, “democratizing access to forward-deployed engineers.” Democratizing used to be the favorite go-to term for blockchain; now duly recycled for AI.

The vendor (Anthropic and OpenAI) could build a partnership via traditional structures, of course, but the JV has a few advantages.

To start with, PE firms are committing up to $300m each as a statement of intent to the market. This kind of financial backing makes it look and feel more like a durable business than a typical Go-to-Market partnership, which tends to be looser.

On top of that, Anthropic stays involved in the deployment, rather than relying on a third-party implementation partner.

And because of that, they keep the feedback loop: what works, what doesn’t, where clients struggle. Even if it doesn’t feed directly into training, it certainly shapes the direction of the products; call it deployment intelligence.

And with an ever-evolving product like AI, the upside of staying involved longer is obvious.

It’s no longer a typical transformation engagement with a clear end; it’s an ongoing relationship. Here’s the press release:

Claude’s capabilities change on a monthly or even weekly basis, which creates a different kind of engineering challenge than traditional software deployment.

The systems that companies build with AI need to evolve as the models underneath them improve.

So what this new venture really amounts to is a company that will have a long-running, recurring relationship with its clients.

Great as this all sounds, there are serious risks that the media coverage misses. To see them clearly, you need to know how PE actually works under the hood.


A Quick Detour in Private Equity

Skip to the next section if you've sat in those rooms.

So to start with, PE firms are run by fund managers whose job is to use investor capital to buy companies, aiming to sell them 4-7 years later at a return. Fund managers are commonly called General Partners, or GPs for short.

The capital they use is provided by Limited Partners (LP).

In practice, these LPs are typically sovereign wealth funds, large family offices, high-net-worth individuals... The LPs pay the GP a management fee, and if there’s a profit when selling, the GP gets a portion of that too (the carried interest, or “carry”).

At a high level, that’s the structure of every PE fund; the principle is the same for a one-man band or a Goldman Sachs-owned vehicle.
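The fee-and-carry mechanics can be sketched in a few lines. The “2 and 20” rates below are generic industry conventions assumed purely for illustration; the actual terms of any given fund, including the ones in this JV, differ:

```python
# A minimal sketch of GP/LP fund economics under an illustrative
# "2 and 20" structure (2% annual management fee, 20% carried interest).
# These rates and the simplified waterfall are generic assumptions,
# not terms from this deal.

def split_proceeds(committed: float, exit_value: float, years: int,
                   mgmt_fee: float = 0.02, carry: float = 0.20):
    """Roughly split total proceeds between the GP and the LPs."""
    fees = committed * mgmt_fee * years        # LPs pay this, win or lose
    profit = max(exit_value - committed, 0.0)  # fund-level gain, if any
    gp_take = fees + carry * profit            # GP: fees plus share of profit
    lp_take = exit_value - gp_take             # LPs keep the rest
    return gp_take, lp_take

# $100m committed, portfolio sold for $200m after 5 years:
gp, lp = split_proceeds(100.0, 200.0, 5)
# gp -> 30.0 ($10m in fees + $20m carry), lp -> 170.0
```

The point of the sketch: the GP earns fees regardless of outcome, and only shares in the upside, which is exactly why the incentives discussed later matter.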

You might notice that when someone from PE talks, value creation is all they talk about. The GP tends to put operational teams into a company and push the business to optimize aggressively.

The keywords are: relentless focus on a small set of changes, strong financial discipline, limited patience, and ruthless efficiency.

Done well, the company comes out worth more. Done poorly... Google "PE horror story", you'll be busy for hours.

Obviously, these PE firms don’t buy one company at a time. Quite often, they run a portfolio of 5-10 of these companies and buy add-ons to integrate with them.

Now, considering that the companies involved in this structure with Anthropic are amongst the largest on the planet, their portfolio is huge. This is exactly what Anthropic needs. A rotating cast of companies, where they have direct access to the people pulling the strings.

With the structure in mind, the cracks in this JV are much easier to see.

There are two categories of problems.

The first is operational: things that go wrong when forward-deployed engineers hit the reality of a mid-market company. The second is structural: conflicts built into this JV.


What Will Go Sideways in Practice (Operational)?

Operational Risk #1:
“Improving the business” that you have zero clue about.

As mentioned, PE’s standard play is to land an operational team in a business it has just purchased. Those teams already have playbooks. Think of appointing a CEO or CTO they’ve worked with in the past, with the express brief: “do your magic and get us x return on this business”.

I know because I’m one of those.

The principle behind these playbooks relies on pattern matching.

If you’ve done one of these in the same industry, you’ve learned from your mistakes and your successes. In the process, you have built up two very strong muscles: judgment and domain understanding.

However, forward-deployed engineers typically aren’t domain or industry experts.

Instead, they are AI implementation experts. They know how to get the best out of Anthropic’s products.

But they don’t necessarily know how to adjust a pattern for the reality of a procurement team in a small Fast-Moving Consumer Goods business, for example. Or how to roll up the shared IT needs of a group of vet clinics.

The standard answer of every transformation project is “that knowledge comes from the domain experts in the company.”

In reality, we all know that: a) the knowledge won’t be documented, b) it’s likely not easy to explain, and c) John, who knows everything, actually left 2 years ago.

It’s a classic transformation consulting problem, but now with AI.

And AI, with its innate lack of judgment and inability to learn and contextualize within the organization, is likely to make this worse rather than better.

Operational Risk #2:
Data mess layers up like a mushed wedding cake

One of the defining qualities of mid-market companies, particularly at the lower end, is underdeveloped internal data.

Siloed and in need of interpretation. At best, a few dashboards and spreadsheets. At worst, the 'CTO' is still running queries straight on production. All of it a symptom of poor data practices (no governance ensuring data consistency, correctness, and so on).

In most cases, decision-making is based on intuition and experience, then justified with selected data rather than driven by data; i.e., the data always agrees with the boss.

All of this, plus AI (non-deterministic, for one), is a recipe for a perfect project disaster.

My guess is that the JV will do a lot of data groundwork while they grab a few low-hanging fruits (a classic way to show momentum with quick wins).

I’d be happy to be proven wrong, but this could easily become “just another tool”; much like we all needed an ERP, a CRM, an HR system, a data warehouse, ... We’ll now just also need agentic-ness.

And that's before we even get to the structural problem, which exists whether the AI delivers or not.


Built for the House to Win (Structural Risks)

Misalignment #1: Whose interest does the GP actually serve?

Yes, the GP has the power to deploy the funds, but the funds aren’t actually theirs. The LP writes the cheque. The GP makes the calls.

What’s always held this together is fiduciary duty: the concept that the GP delivers impartial advice, in the best interest of making the company more valuable.

Now hold that thought against this new structure.

What’s the GP going to optimize for? The portfolio company’s value, or the JV’s revenue? Because now they also own part of the JV doing the implementation. When the JV recommends more services, more spend, and deeper engagement, the GP would benefit twice:

  • Once from the portfolio company’s improved performance,

  • and once from the JV’s revenue.
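The double payoff can be sketched with made-up numbers. Every figure below (carry, exit multiple, JV margin, GP stake in the JV) is an illustrative assumption, not a deal term:

```python
# Illustrative sketch of the GP's double payoff when the JV sells more
# services into a portfolio company. All numbers are assumptions.

carry = 0.20          # GP's assumed share of fund profit
exit_multiple = 8.0   # assumed EBITDA multiple at exit
gp_jv_stake = 0.50    # GP's assumed ownership of the JV
jv_margin = 0.40      # JV's assumed margin on services

ebitda_gain = 2.0     # $m of EBITDA the AI work adds to the company
jv_fees = 1.0         # $m of fees the JV bills that same company

# Payoff 1: the company is worth more at exit, and the GP keeps the carry.
carry_gain = ebitda_gain * exit_multiple * carry   # ~3.2 ($m)

# Payoff 2: the GP's stake in the JV earns margin on the same engagement.
jv_gain = jv_fees * jv_margin * gp_jv_stake        # ~0.2 ($m)

total_gp_gain = carry_gain + jv_gain               # ~3.4 ($m)
```

Even when the service fees are small relative to the exit upside, the GP is paid on both legs of the same recommendation, which is the structural conflict.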

That said, this isn’t a new concern; standard safeguards exist, but they don’t change the fact that, structurally, there is a serious conflict of interest.

Misalignment #2: Implement once, a lifetime commitment?

If you remember, PE holds companies for 4-7 years before selling them. So what exactly happens to this deeply embedded AI when the company sells?

In the old days, it was clean. Whatever systems they put in place stayed with the business. The new owner inherited the upside (or downside) and took it from there.

This model is different in two ways.

For the portfolio company:

The JV stays embedded after the portfolio company changes hands.

The AI sticks around because, after deeply embedding Claude everywhere, you’re not going to suddenly rip it out for ChatGPT.

Which means whatever the new owner does, e.g., operational changes or new product roadmaps, feeds back into the ex-owner’s knowledge base, gifted for free.

Mark my words here, the language around how this usage and data feed back, and who gets to use the insights, is going to become a new element in due diligence.

For the acquirer:

Before the sale, the GP is running both sides of the table. They can keep the AI costs artificially low: the portfolio company looks lean, the margins look clean, the exit multiple looks earned.

The new buyer inherits the real bill without compensation if they don’t do careful due diligence.
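A rough sketch with illustrative numbers (none of these figures come from the deal) shows how a subsidized AI bill compounds through the exit multiple:

```python
# Sketch of how a subsidized AI bill inflates the exit price.
# All figures are illustrative assumptions, not deal numbers.

subsidized_ai_cost = 0.5   # $m/yr the portfolio company is charged pre-sale
real_ai_cost = 2.0         # $m/yr the acquirer will actually pay post-sale
reported_ebitda = 10.0     # $m, with the subsidized bill baked in
exit_multiple = 8.0        # assumed EBITDA multiple at exit

price_paid = reported_ebitda * exit_multiple                         # 80.0
true_ebitda = reported_ebitda - (real_ai_cost - subsidized_ai_cost)  # 8.5
fair_price = true_ebitda * exit_multiple                             # 68.0

overpayment = price_paid - fair_price                                # 12.0
```

A $1.5m/yr subsidy becomes a $12m overpayment at an 8x multiple, which is exactly why this line item belongs in due diligence.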

And this isn’t even the worst part.

Misalignment #3: A bad apple spoils the whole bunch.

One bad play not only ruins a company but also the entire portfolio.

So far, we’ve only fucked up the LP, the portfolio company, and the future owner of the portfolio company.

What about the PE firms?

As mentioned, they’re used to pattern-matching across their portfolio. That’s a strength, until it becomes a single point of failure.

In the old model, a PE firm sent an operational team, or a CEO, into one business at a time. If it went wrong, it was one out of a few hundred.

This JV works differently. The whole point is a standardized AI playbook rolled out across the portfolio, ie, the same forward-deployed engineer SOP.

Say a PE firm owns ten healthcare businesses, and one flawed AI agent handles clinical documentation across all of them; all ten go down together.

And because the PE owns the JV, nobody inside the portfolio gets to ask whether this vendor is the right call.

Not to mention, their whole portfolio stack now relies on Claude being live.

Imagine you hired a contractor to work in your office every day. Over the last 90 days, they failed to show up, or showed up broken, more than 100 times. What’s worse is that your entire team was built around that contractor, so for each no-show, your whole operation freezes.

That’s exactly the instability Anthropic has shown over the last 90 days.

Or, given how fierce the model competition is, if Anthropic falls behind GPT-7 or Gemini-X in capability, or if pricing power shifts, it’d cost the PEs in the JV both arms and legs to get out.


So, benefits?

Yes, there are benefits.

After all, these are the shrewdest bunch on earth.

For Anthropic/OpenAI

A captive user pool. If it shows signs of life quickly, that’s an IPO valuation driver.

Anthropic also keeps a tight feedback loop from implementation since there are no independent implementation partners in between. That should help them improve their product.

For the PEs

The GP has just set up a high-margin service business that will stay with their portfolio companies after they’re gone. And the companies they control are the JV's clients.

Even with some fee offset early on, the important part is that this revenue stream likely outlives the investment. Uncorrelated returns that don’t need to be shared with LPs.

On the other hand, the LP gets possibly better exit valuations, but by and large bears most of the risk of misalignment, implementation failure, and otherwise misallocated investment decisions.

The portfolio companies, in the end, are going through the new wave of transformation.

Whether this AI transformation succeeds depends entirely on whether the participants read the small print and can manage the risk accordingly.

Not to mention, the moment AI is involved, risk management becomes a different game entirely.
