
I Saved You $1,000 and a Week in Conference Halls

I spoke at Data & AI Europe and stayed for every session. Key takeaways for decision-makers + a mini game to assess your AI projects.

Let’s Save You the Trouble.

I spent time at IRM UK's Data & AI Conference so you don't have to. A ticket started at $1,000 for a full day of slides, some panel clichés, and polite applause. You can skip all that; here's everything a decision-maker actually needs to know in under 20 minutes.

(Joking and sarcasm aside, I do have great takeaways and met some interesting people who will likely become long-term connections, so keep reading.)

Everything below is yours, for less than the price of a sandwich, and you’ll be smarter 20 minutes from now.

So…

This summary and the video recording are worth so much more than $5 per month. But guess what? $5 is all I ask for ;)

You’ve likely proudly told someone, “We’re rolling out an AI strategy,” or plugged ChatGPT into a process to tick a digitalization box.

However, you'll soon run into unmeasured hours, modest impact, and the hidden risk that you're building a tech stack on sand. Every unchecked assumption and every "basic" governance shortcut is compounding.

Anyone can now talk the talk and produce a beautiful, PowerPoint-deep AI strategy with ChatGPT.

Yes, the speakers didn't avoid the fancy terms and overhyped buzzwords (I'll be making fun of those, yes), but they also revealed some really interesting concepts.

I won't be giving you a TL;DR today; this takeaway is just short enough. By the way, I created a mini-game, linked at the end of this post, to help you figure out what you have and what's missing in your AI project.

Shall we?


Humans x AI Co-evolution.

I was invited as a speaker. Watch my full talk from the conference below, where I walk through eight research studies on how AI changes human behavior and vice versa.


Governance, Name the Owner or Lose the System

Your AI isn’t making decisions in a vacuum.

Someone—or something—is already choosing when the system acts alone, when it recommends, and when a human must sign off.

The problem is that most CEOs can't answer this question: who decided when and how a human intervenes in an AI process?

The simplest governance framework I saw all week came from Donald Farmer’s “Colleagues and Copilots” deck.

I can't overstate how much I love this framing of three collaboration models by Donald Farmer (https://www.linkedin.com/in/donalddotfarmer/):
  • Human-in-the-loop: AI recommends, human decides. High-stakes, final call with a person.

  • Human-over-the-loop: AI operates independently, human monitors, and can intervene.

  • Human-out-of-the-loop: AI runs fully autonomously. Low-risk, high-volume only.

JPMorgan’s fraud detection, which Donald mentioned, is a clean example.

AI scans 200 million transactions daily and flags patterns; human investigators apply contextual judgment, cutting false positives by 60%. The system knows which loop it’s in, and so does every stakeholder (which is not easy).​

You can't govern or explain an AI application until you know which loop it's in. Once you've mapped that, you can design the right explanation for each audience. Some examples:

  • Developers need technical audits and traceability.

  • Executives need business rationale and risk sign-off.

  • End users need simple summaries of what the system did and why.

If you can't map your current AI applications to one of these three models, you don't have governance; you have hope.
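To make "mapping" concrete, here's a minimal sketch of what an AI governance registry could look like. This is my illustration, not Donald's framework as code: the application names and owners are invented, but the point stands that every system gets exactly one loop model and a named accountable human.

```python
from dataclasses import dataclass
from enum import Enum

class LoopModel(Enum):
    HUMAN_IN_THE_LOOP = "AI recommends, human decides"
    HUMAN_OVER_THE_LOOP = "AI acts, human monitors and can intervene"
    HUMAN_OUT_OF_THE_LOOP = "AI runs autonomously (low-risk, high-volume only)"

@dataclass
class AIApplication:
    name: str
    owner: str        # the named human accountable for the system
    loop: LoopModel   # which collaboration model governs it

# Hypothetical registry -- names are illustrative only.
registry = [
    AIApplication("fraud-triage", owner="Head of Fraud Ops", loop=LoopModel.HUMAN_OVER_THE_LOOP),
    AIApplication("credit-approval", owner="Chief Risk Officer", loop=LoopModel.HUMAN_IN_THE_LOOP),
    AIApplication("email-routing", owner="Support Ops Lead", loop=LoopModel.HUMAN_OUT_OF_THE_LOOP),
]

def unmapped(registry, apps_in_production):
    """Anything running in production but missing from the registry is hope, not governance."""
    registered = {app.name for app in registry}
    return [name for name in apps_in_production if name not in registered]

print(unmapped(registry, ["fraud-triage", "credit-approval", "marketing-copy-bot"]))
# -> ['marketing-copy-bot']  <- this is where the wreckage starts
```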

How messy would your wreckage look if something breaks?


Measurement, Stop Counting Seats, Start Tracking Outcomes

Here’s the basic product management idea everyone forgot when “AI” showed up: adoption metrics and impact metrics are not the same thing.

Jan Henderyckx’s deck broke this into two categories you can steal:​

Adoption measures (the vanity metrics):

  • Number of AI app users

  • Completed trainings

  • Employee satisfaction scores

Impact measures (the ones that actually matter):

  • Lead-time reductions in specific workflows

  • Percentage of touchless processes (no human intervention required, nothing to double-check or fix)

  • Cost per transaction before/after AI

  • Change in customer support request volume before/after AI

  • Process exception rates

Most organizations are measuring adoption because it’s easy and makes a good-looking dashboard. Impact requires you to instrument workflows, define baselines, and admit when something didn’t move the number.
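As a sketch of what "instrumenting workflows" means in practice, here's a minimal Python example that turns before/after measurements into impact metrics instead of user counts. The baseline numbers are invented for illustration; the point is that you need a measured "before" to claim an "after."

```python
# Hypothetical before/after measurements for one workflow -- numbers are made up.
baseline = {"lead_time_hours": 18.0, "cost_per_transaction": 4.20, "touchless_cases": 120, "total_cases": 1000}
with_ai  = {"lead_time_hours": 11.5, "cost_per_transaction": 3.10, "touchless_cases": 430, "total_cases": 1000}

def pct_change(before, after):
    """Percentage change from the pre-AI baseline."""
    return (after - before) / before * 100

lead_time_reduction = -pct_change(baseline["lead_time_hours"], with_ai["lead_time_hours"])
cost_reduction = -pct_change(baseline["cost_per_transaction"], with_ai["cost_per_transaction"])
touchless_rate = with_ai["touchless_cases"] / with_ai["total_cases"] * 100

print(f"Lead time reduced by {lead_time_reduction:.1f}%")        # impact, not adoption
print(f"Cost per transaction reduced by {cost_reduction:.1f}%")  # impact, not adoption
print(f"Touchless processes: {touchless_rate:.1f}% of cases")    # impact, not adoption
```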

My last article explained exactly why measuring adoption is the wrong approach:

Why Everyone Misread MIT’s 95% AI “Failure” Statistic? (Oct 18)

You’ve seen this MIT study. Or at least you heard people talk about how 95% of organizations are getting zero return. Your LinkedIn feed is full of people sharing it with knowing nods, saying “I told you so” about the AI hype bubble.

Let's say Goldman Sachs deployed AI copilots to 1,000 employees. If they only reported "1,000 users," that's a press release. If they reported "trader decision latency dropped 1x% and error rate fell 2x%," that's proof the thing works.

Of course, setting these kinds of metrics means facing the reality that latency and error rates might actually go up; whether that makes you and your team shy away from the hard decisions is a completely different topic.


AI Literacy. Make It Measurable or Admit You’re Guessing

“Train your people on AI” is like saying “drink water.”

Obviously necessary, completely useless as a strategy.

Article 4 of the EU AI Act does require providers and deployers of AI systems to take measures to ensure their staff have a sufficient level of “AI literacy”.

I found this table particularly interesting; it can be useful depending on your AI implementation phase.

  1. Personal AI Maturity Assessment: Each employee completes an assessment that maps them based on role, exposure level, and current capability.

  2. Role-based learning paths: Different occupations get different training. For instance, executives need critical evaluation skills, analysts need prompt engineering (and a grasp of stats 101), and compliance teams need explainability frameworks.

  3. Measure literacy separately from adoption: Track competence levels (not the test in the AI literacy course, we all know how to cheat on that), not just course completions.

The interesting part is that, in theory, you can then answer “How AI-literate is our procurement team?” with a number and context relevant to their day-to-day work, instead of a guess or a pointless blanket term.
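Here's a minimal sketch of what "a number with context" could look like. The roles, competencies, and score thresholds are my own illustrative assumptions, not the framework from the conference deck; the mechanism is simply: define the bar per role, assess people against it, and report the gaps.

```python
# Hypothetical role-based AI literacy tracking -- roles, competencies, and bars are illustrative.
required_competencies = {
    "executive":  {"critical_evaluation": 3, "risk_signoff": 3},
    "analyst":    {"prompt_engineering": 3, "basic_statistics": 2},
    "compliance": {"explainability": 3, "regulatory_awareness": 3},
}

# Assessed levels (0-4) from a practical assessment, not from the course-completion quiz.
team_scores = {
    "procurement analyst A": ("analyst", {"prompt_engineering": 2, "basic_statistics": 3}),
    "procurement analyst B": ("analyst", {"prompt_engineering": 3, "basic_statistics": 1}),
}

def team_literacy_gaps(team):
    """For each person, list competencies that fall below the bar for their role: (current, required)."""
    gaps = {}
    for person, (role, scores) in team.items():
        bar = required_competencies[role]
        gaps[person] = {c: (scores.get(c, 0), level) for c, level in bar.items() if scores.get(c, 0) < level}
    return gaps

print(team_literacy_gaps(team_scores))
# -> {'procurement analyst A': {'prompt_engineering': (2, 3)},
#     'procurement analyst B': {'basic_statistics': (1, 2)}}
```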

This seems complicated, and it is. It's best suited to a larger organization, not a startup (if you have fewer than 100 people, keep things clean and concise).

This is one of the metrics that shows you where the weak links are before a bad decision scales.


AI Doesn’t Need Its Own Religion (About Product)

No separate AI product teams, leads, or metrics.

Unless you’re building foundational models, your AI product is just a product like any other tech product.

You don’t need a Chief AI Officer, a separate strategy deck, or a department that exists outside your normal technology and product team.

This comes from Jörg Ziemann's talk.

Departments that call themselves “Data Strategy” or “AI Strategy” have short lifespans unless they’re embedded in enterprise architecture management, because most AI strategy is just digitalization strategy, which is really just enterprise strategy.

The organizations that treat it like a novelty silo are doomed to still be talking about “AI transformation” in three years, with nothing shipped.

But the real value in his talk is using it to treat AI just like any other product, with a lifecycle, governance, and measurable outcomes. No, it's not special.

So you should embed AI work in your product stack. Measure it with the same metrics. Kill it with the same criteria whenever the time comes.

You don’t need a CAIO unless you’re training your own model; otherwise, you need product managers who understand where humans stay in the loop and where they don’t.

Fewer Agents, Tighter Loops

More tools, more agents = more complexity, less control.​

Current AI capability isn’t ready for “human-out-of-the-loop” at scale. The best move is resilient simplicity, i.e., fewer agents, bounded loops, tight feedback.

Don’t stretch AI to tasks it can’t reliably complete just because you can technically deploy it. Before you scale agents, your data architecture needs:

  • Data contracts between producers and consumers

  • A semantic layer for interoperability

  • Data observability and quality engines at runtime

  • Active metadata reflecting real-time state

Translation: you need plumbing before you can do pretty tricks.
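To make the plumbing tangible, here's a minimal sketch of the first item, a data contract between a producer and a consumer, checked at runtime. The dataset, field names, and thresholds are invented for illustration; real setups typically use dedicated schema/contract tooling rather than hand-rolled checks.

```python
# A toy data contract: the producer promises a schema and freshness; the consumer verifies
# incoming records against it at runtime. Field names and thresholds are illustrative only.
contract = {
    "dataset": "orders_daily",
    "owner": "order-platform-team",
    "schema": {"order_id": str, "amount_eur": float, "created_at": str},
    "max_null_rate": 0.01,
    "freshness_hours": 24,
}

def validate_record(record: dict, schema: dict) -> list[str]:
    """Return a list of violations for one record against the contract's schema."""
    violations = []
    for field, expected_type in schema.items():
        if field not in record:
            violations.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            violations.append(f"{field} is {type(record[field]).__name__}, expected {expected_type.__name__}")
    return violations

print(validate_record({"order_id": "A-123", "amount_eur": "42"}, contract["schema"]))
# -> ['amount_eur is str, expected float', 'missing field: created_at']
```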

If your architecture can’t handle distributed intelligence (multiple agents coordinating in real time), then adding more agents just compounds failure modes.


The link here is to a Responsibility–Literacy–Measurement Bingo you can play in a boring, “I’ve heard this a dozen times” AI meeting (wink), or use as a reminder of where your organization actually stands.

It’s a 5×5 card covering governance, measurement, literacy, architecture, and culture: everything on it is observable, not a vibe.

Enjoy!

Stay curious and stay human.
