Distill, the Story of an ex-Colleague
Your company wants to extract you before you leave. Here's what they'll actually get.
Something called “Colleague.skill” caught fire last week.
It’s an AI agent that gathers a departing colleague’s Slack messages, emails, and files, then generates a .md file you can feed directly to an AI so it does the colleague’s job the way they would.
Then someone built an “Anti-Distill Skill” to fight back: an agent that stops you from being skill-ified.
Anti-Distill promises that everything that makes you irreplaceable, exactly what the system wants to extract, stays out of the file you leave behind.
Has your boss already asked you to install Colleague.skill, or is it time to deploy some anti-distill measures of your own?
I’ll tell you all about both in this piece, including the data that explains whether you should panic or not.
Distill Your Colleague
Have you seen Perfume: The Story of a Murderer?
Granted, there are films with premises closer to what I’m covering here: uploading consciousness, cloning memories, that sort of thing.
But Perfume is the one that I couldn’t shake when I first saw colleague.skill.
The main character is born with an extraordinary sense of smell. He becomes obsessed with capturing people's scent and believes that if he can extract their substance into a bottle, he possesses something more permanent than the person themselves.
Which makes the bottle the essence of a person.
Now here’s the introduction from the colleague.skill README:
Turn the coldness of departure into the warmth of a Skill. Welcome to cyber-immortality.
The installation was pretty straightforward.
Export a departing colleague’s Slack messages, Google Docs, and work emails, then feed them into the tool. The AI then generates a skill file that, e.g., writes code following their technical standards, replies to messages in their tone, and even knows when they’d typically dodge blame.
Scroll down to the comments and you’ll find some sharp ones, such as:
A colleague, when scattered they’re tokens; when assembled, they’re a Skill.
But then, I saw another project called “Anti-Distill Skill.”
Anti-Distill
How Kafkaesque is it that your colleague leaves, but their ghost stays behind in a .md file?
It’s like going for a health check-up: the doctor tells you your vitals are great, so they’re going to photocopy your organs and transplant the copies into other patients, and you’re free to go. Then you have to find a second doctor to tamper with the data so the photocopied organs don’t actually work.
Absurd as it sounds, it’s happening, and someone decided to counter it.
What Anti-Distill does: you feed it your completed skill file, and it outputs a version that looks thorough and professional, but isn’t. Under the hood, it:
Automatically identifies the “replaceability level” of each section
Replaces core knowledge with correct but useless filler.
Outputs two files:
Sanitized version (for the company) — Structure intact, terminology professional, reads fine — but the core is hollowed out
Private backup (for yourself) — Includes your judgment calls, your interpersonal network, and your intuition for handling edge cases — everything that makes you irreplaceable
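The tool’s internals aren’t public, so here is a minimal toy sketch of the two-file split, assuming an invented keyword heuristic for "replaceability" and made-up section names:

```python
# Hypothetical sketch of an Anti-Distill-style split. The scoring rule
# (keyword hints) and all section content are invented for illustration.

CORE_HINTS = ("edge case", "judgment", "who to ask", "workaround")

def split_skill(sections):
    """Split skill-file sections into a sanitized copy and a private backup."""
    sanitized, private = {}, {}
    for title, body in sections.items():
        if any(hint in body.lower() for hint in CORE_HINTS):
            # Core knowledge: keep the heading, hollow out the body.
            sanitized[title] = "Follow standard procedure and escalate as needed."
            private[title] = body
        else:
            # Generic, replaceable content passes through untouched.
            sanitized[title] = body
    return sanitized, private

sections = {
    "Deploy process": "Run the CI pipeline, then tag the release.",
    "Outages": "Judgment call: page Priya first, she knows the legacy edge cases.",
}
public_copy, backup = split_skill(sections)
```

The sanitized copy still has every heading and reads professionally; only the sections flagged as core are quietly emptied out.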
AI vs. Manual Handover
Some are debating whether this counts as workplace manipulation.
To be honest, I don’t think that’s quite the right framing.
Manipulation implies conscious intent, but most companies doing this likely haven’t thought it through at all.
To most, this is just a handover.
I’ll explain the “just” later. But let’s focus on the act of handover first.
In the old days, we asked those about to leave to prepare an x-page handover document, so the next person could read it and, hopefully, understand the job better.
But not many realize that the old handover was already broken.
We just didn’t notice because there was no alternative to compare it to.
The new person reads it and either gets it or doesn’t.
The new handover exports three years of your Slack history and generates a 20,000-word AI clone. That clone could be ten times more useful than the old-style three pages, or it could end up so much worse that you wish the AI clone colleague had never existed.
Here’s something I’ve seen so many times in the last few years, and people keep forgetting it:
The tool changed. And the nature of what’s happening changed with it.
But the need stayed exactly the same.
In both cases, you get someone — a person or a machine — to compile what a departing colleague knew. The old version asked the learner to sit down and write it. The new version scrapes their digital footprint and lets AI assemble it.
Different method, same goal:
Capture what’s in someone’s head before they walk out the door.
So at least we can agree the need is identical.
In practice, however, barely anyone wrote a proper document. And barely anyone read the ones that were written.
The leaver was mentally halfway out the door, thinking about their next job, not carefully encoding five years of institutional memory into a Google Doc. And even when they tried, they couldn’t. Experts are so embedded in their own routines that they don’t even know what they know! Studies suggest conventional handovers miss about 70% of the decision-making knowledge that actually matters.
So, where does AI make it different?
Not better. Different. And in some ways, actively worse.
Glitch #1: AI has no bullshit filter.
When a human writes a handover, you self-edit without even thinking about it.
You skip the workaround you stopped using months ago. You leave out the sarcastic rant about the client from hell.
AI, however, ingests everything with equal weight.
Keep this in mind, because it changes everything.
The frustrated Slack message you fired off in a small ranting channel (we all have one of those) is now, somehow, your communication style.
Or the hacky fix you used once, agreed with the lead engineer in a sprint prep meeting because a mission-critical system was down, is now the standard operating procedure for handling a system meltdown.
A human filter unknowingly separates signal from noise, current from outdated, intentional from accidental.
AI treats your entire digital history as equally valid and equally intentional.
Glitch #2: AI creates the illusion of completeness.
A three-page manually curated Google Doc announces its own gaps.
Everyone who reads it knows it’s incomplete. Nobody looks at three pages and thinks, “Ah, yes, this fully captures what Sarah knew after five years.” (If this is true, then Sarah didn’t do much work.)
A 20,000-word AI-generated skill file does the complete opposite.
It has a facade of thoroughness.
It has sections and subsections. It covers communication patterns, technical standards, and decision-making tendencies. It reads like a comprehensive portrait of your ex-colleague.
So the box is checked. Nobody goes looking for what’s missing; why would you, when the AI supposedly gathered everything about that person? And even if you wanted to, where would you start? Checking would defeat the purpose in the first place.
The old handover failed honestly.
However, the new one fails while looking like it succeeded.
It’s like the old question: would you rather be cheated on by your partner and not know, or be told straight away?
Glitch #3: AI collapses context, or never had any to start with.
Messages exist in moments.
Not all conversations with your colleagues happen on Zoom, Slack, or email.
Unless a company forbids you from speaking to each other during coffee breaks, walks to the meeting room, or after-work drinks, there is always far more subtle, unspoken context exchanged in those moments than anything that gets recorded.
Or think of it like the Telephone game.
Treat what someone actually knows as the original message. A fraction of it makes it into Slack messages and emails, already a lossy compression. AI then scrapes those fragments, interprets them, and produces a skill file: a third-generation copy.
Each handoff strips context.
Glitch #4: AI copies the what, not the why.
Not to mention that people evolve and circumstances change.
Very often, we abandon old approaches, change our minds, and maybe learn from mistakes.
Your comm history from 2023 doesn’t reflect who you were by 2025. A decision you made yesterday isn’t a good guide to what you’d do in a similar situation in 2027, when your clone is in the picture instead of the real you.
Let’s say a colleague always applies a 15% discount for one particular client.
The AI captures the pattern, “Client X gets 15% off.” But maybe the reason was that Client X referred a few enterprise accounts last year, and the discount was an informal thank-you. When those referrals dry up, the AI keeps discounting.
Or worse, it starts pattern-matching and offering 15% to all clients.
These are patterns without reasoning.
It works until conditions change. And conditions always change; in fact, change is a good thing for most businesses. Do you want new clients? Or an upgraded new system? Or a new regulation that plays in your favor?
The moment the context shifts, a pattern without a reason is useless, or worse, misleading. It tells you to do the thing that used to work, confidently, with no understanding of why it worked or whether it still should.
A human colleague, even a mediocre one, can say, “I do it this way because...” An AI skill file just says, “This is how it’s done.”
But the AI has no way of knowing that.
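The discount example above can be sketched in a few lines. Everything here is invented for illustration, the client, the numbers, and the referral condition; the point is only the shape of the failure:

```python
# A toy illustration of a "pattern without a reason": the extracted rule
# keeps the what (15% off for Client X) but drops the why (an informal
# thank-you for active referrals). All names and figures are made up.

def human_discount(client: str, referrals_active: bool) -> float:
    # The human knows the discount is tied to a condition.
    if client == "Client X" and referrals_active:
        return 0.15
    return 0.0

def cloned_discount(client: str) -> float:
    # The skill file only captured the surface pattern.
    return 0.15 if client == "Client X" else 0.0

# While referrals are active, the clone looks indistinguishable from the
# human. Once they dry up, it keeps discounting, confidently and wrongly.
```

The two functions agree on every observation in the training window, which is exactly why the missing "why" goes unnoticed until conditions change.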
The Problems Before the Problem
There’s a line of fine print in the colleague.skill README that most people probably missed.
Source material quality = Skill quality. Chat logs + long docs > manual description only.
Basically, this tool works best with thoughtful, voluntary, long-form writing. The kind of writing that not many produce at work, unless part of their job is documentation.
Gap #1: Externalizing in text doesn’t come naturally
The text AI typically has access to is Slack (or Teams) messages, emails, code commits, Google Docs, and so on. That’s only the tip of the iceberg, though.
Research consistently tells us how small that layer is.
NASA conducted an internal study on this exact problem.
They found that 60% of their own workforce couldn’t even identify a process for capturing a departing colleague’s knowledge. Worse, only 14% were satisfied with how offboarding works.
A senior engineer at NASA put it:
We cannot put in a paper-based process to transfer knowledge. What we are losing is decades of experience when senior engineers retire. That experience can NOT be written down and referred to as a 'cookbook' for doing our jobs.
If NASA can’t do it, your company almost certainly can’t either.
Scraping someone’s Slack messages and calling it done isn’t going to cut it.
Gap #2: False beliefs about AI and offboarding
The real problem isn’t just that AI misses the important stuff.
It’s the C-levels who push for it and believe full capture is entirely possible.
Take the recent news that Meta is building an AI Mark, trained on his mannerisms, tone, and strategic thinking, so employees can interact with it instead of him.
Zuckerberg genuinely believes he can be bottled and served at scale.
So you’ll see more and more C-levels treating AI as the next frontier of knowledge management, and seeing less value in keeping the actual person around.
Gap #3: Humans, or rather human interaction, unspoken rules, and relationships
As I mentioned in this article, humans are the rate-limiting factor in any organization.
Take cooking dinner. You can chop vegetables in two minutes and boil pasta in eight, but if the sauce needs 30 minutes to simmer, the meal takes 30 minutes.
Complex human judgment, consensus, politics, regulation, interpersonal relationships... this mini-game illustrates the idea best (give it a go!) by showing you what actually slows an organization down.
Just have a think: what normally stops you from moving a project forward?
Was it technology?
Unlikely, because 99.99% of companies on earth are not cutting-edge tech companies; you are solving problems that exist because your customers are humans.
The slowest part in our system isn’t the AI.
The bottleneck is still there, and the person who knew how to get past it left behind a .md file that the team will soon find even more tiresome to deal with.
Just a Handover or an Infringement of Your Rights?
Now, on to the ownership question.
A researcher at a leading Chinese university (who studies the intersection of AI policy and labor rights) said in a recent interview:
What companies acquire through salary is merely the right to use knowledge during the employment period, not ownership. When a person leaves, their experience should leave with them.
Which I don’t entirely agree with.
Think of a job function or a practice, let’s say, product management in software.
Even if you are the best product person on earth, you are still likely to follow the practice of writing PRDs, running discovery sessions, grooming backlogs, and facilitating sprint reviews.
These aren’t your practices. They’re the discipline’s. The company paid you to apply them in their context to their users, their codebase, and their market.
And the artifacts you produced while doing so, the roadmaps or the specs, are work products. They always belong to the company.
So no, experience doesn’t “leave with you” in its entirety. A chunk of it was never exclusively yours to begin with.
But here’s the twist: the part that was yours, your instinct for when a feature request is redundant, your read on which engineer will only duct-tape the issue, is, as I mentioned, exactly the part no .md file will ever capture.
I see the ownership debate as a distraction.
Because the stuff companies can extract was arguably theirs to begin with. And the stuff that’s truly yours, they couldn’t take it even if they tried.
Please note, this is not the same as copyrighted creative work; I fully support an artist’s right not to be cloned.
Colleague.md Is a Failed Attempt to Industrialize Growth and Heritage
Japan has a repair heritage called 金継ぎ(Kintsugi).
Broken ceramics mended with gold. The cracked bowl ends up worth more than the one that never broke.
Colleague.skill tries to skip the breaking altogether.
Because, to many managers, the breaking is slow, imperfect, and full of flaws that need fixing. Yes, AI skill files don’t freeze up, AI doesn’t need a break, AI doesn’t make mistakes (in a way), and it doesn’t second-guess its decisions.
While you do all of the above.
You break, you get into conflicts with colleagues, you don’t applaud when your boss is patronizing you.
You are imperfect.
But Kintsugi doesn’t work on a bowl that never cracked. And it’s exactly this imperfection that makes us more valuable with time, not less.








