This series has covered 24 specific AI employee roles — across marketing, sales, customer success, operations, finance, recruiting, legal-ops, product, engineering management, HR, comms, and founder ops. Each role has its own design. All of them share the same operational question: how do you turn the prototype into something permanent?
The answer is a hand-off contract. Four questions, asked once, answered honestly, and revisited quarterly. Most AI employees that never become permanent fail because none of the four questions had a real answer.
The four-question contract
Question 1 — Who is the human owner?
Every AI employee needs a named human manager. Not "the team owns it." Not "the COO and the engineering lead share it." A named individual. The owner:
- Reviews the employee's outputs at a defined cadence.
- Adjusts prompts, tools, and access in response to performance.
- Decides scope expansion or contraction.
- Owns the kill-switch.
- Reports on the employee in their own performance review.
If you can't name the owner, the role doesn't make it. There's no shortage of organisational gravity pulling roles back to ad-hoc work; the named owner is what resists the pull.
Question 2 — What KPIs does this role move?
Vague KPIs kill more AI employees than any technical issue does. "Improves productivity" is not a KPI. "Reduces brief-turnaround time by 50%" is. "Saves the CFO time" is not. "Drops monthly-close commentary cycle from 3 days to 1 day" is.
The KPIs should be:
- Specific. With numbers.
- Measurable. With a baseline already established.
- Owned. By the human owner.
- Reported. In the same place all the company's other KPIs are.
If the KPIs aren't tracked alongside human roles' KPIs, the AI employee will eventually be invisible in the org's planning conversations. Invisible roles get cut.
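To make specific, measurable, owned, and reported concrete, here is a minimal sketch of a KPI record that could live alongside the role. The shape and field names are our illustration, not a prescribed schema:

```typescript
// Hypothetical KPI record for an AI employee. Every field is required:
// a KPI with no baseline or no named owner doesn't count.
interface Kpi {
  name: string;       // e.g. "Brief-turnaround time"
  unit: string;       // e.g. "hours"
  baseline: number;   // measured before the role went live
  target: number;     // the number the role is accountable for
  owner: string;      // the named human owner, not a team
  reportedIn: string; // the same place as the company's other KPIs
}

// Example: the 50% brief-turnaround reduction from above.
const briefTurnaround: Kpi = {
  name: "Brief-turnaround time",
  unit: "hours",
  baseline: 48,
  target: 24,
  owner: "jane.doe",
  reportedIn: "company-kpi-dashboard",
};
```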
Question 3 — What's the failure mode?
Every AI employee can fail. The question is how. Some patterns:
- Wrong outputs not caught by review (the agent's outputs are subtly biased and the reviewer doesn't notice).
- Cost overrun (the agent's inference cost grows faster than its productivity gain).
- Drift (the model's behaviour changes after a provider update, and the eval doesn't catch it).
- Misuse (employees use the agent for tasks outside its scope, generating bad outputs that look authoritative).
- Compliance event (the agent's outputs run afoul of a regulation that wasn't considered at design time).
The owner names the failure modes for this specific role. For each: how would you detect it? What's the recovery? Without these answers, the role fails quietly until something forces a hard look.
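One way to force those answers is to give every failure mode a detection signal and a recovery step in the same register. A sketch with illustrative field names, using the drift pattern from the list above as the example:

```typescript
// Hypothetical failure-mode register entry. An entry without a
// detection signal and a recovery step is not a real answer.
interface FailureMode {
  name: string;
  detection: string; // how the owner would notice it happening
  recovery: string;  // what happens when it fires
}

const drift: FailureMode = {
  name: "Drift after a provider model update",
  detection: "Weekly eval scores fall below the agreed floor",
  recovery: "Pin the previous model version; re-run evals before re-enabling",
};
```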
Question 4 — How will it retire?
AI employees should retire. Some reasons:
- The workflow they support has changed and the role no longer fits.
- The cost-to-value ratio crossed an unfavourable threshold.
- A more capable role can absorb the work.
- The compliance posture shifted and the role can't be retained.
- The eval cohort shows the role isn't actually moving the KPIs.
The retirement plan includes:
- A knowledge-capture step (what worked, what didn't, lessons for the next attempt).
- A stakeholder-comms step (who needs to know the role is no longer covered).
- A backstop plan (humans pick up the work, or a different role takes over).
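The retirement plan is easiest to keep honest as a structured record too. A sketch, under the same caveat that the fields are illustrative:

```typescript
// Hypothetical retirement-plan record for the hand-off contract.
interface RetirementPlan {
  triggers: string[];        // e.g. "cost-to-value ratio crosses threshold"
  knowledgeCapture: string;  // where the what-worked write-up will live
  stakeholderComms: string;  // who is told the role is no longer covered
  backstop: string;          // the humans or role that absorb the work
}
```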
Roles without a retirement plan don't retire. They drift, decay, accumulate technical and organisational debt, and eventually generate problems out of proportion to their value.
How the contract works in practice
The contract isn't a paper exercise. It's a one-page document that lives with the role. The owner reviews it quarterly:
- Owner still right? (sometimes the role transfers to a new manager).
- KPIs still moving? (with the data attached).
- Failure modes — any new ones since last quarter? Any of the originals tightened or relaxed?
- Retirement plan still appropriate? (sometimes the role's mission changes; the retirement plan changes with it).
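Pulling the four answers together, the one-page document can itself be a structured record that the quarterly review diffs against. A sketch composing the shapes from earlier; the module path and field names are hypothetical:

```typescript
// Hypothetical module holding the earlier sketches.
import type { Kpi, FailureMode, RetirementPlan } from "./handoff-shapes";

// One hand-off contract per AI employee, reviewed quarterly.
interface HandoffContract {
  role: string;
  owner: string;               // Question 1: a named individual
  kpis: Kpi[];                 // Question 2: baselined and reported
  failureModes: FailureMode[]; // Question 3: each with detection and recovery
  retirement: RetirementPlan;  // Question 4: how the role ends
  lastReviewed: string;        // ISO date of the last quarterly review
}
```

The quarterly review then becomes a diff against this record rather than a conversation from scratch: the owner field, the KPI numbers, the failure-mode list, and the retirement triggers either still hold or they don't.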
Roles that pass the quarterly review continue. Roles that don't pass get adjusted, downsized, or retired. The owner makes that call.
What this prevents
Most AI strategies we see in 2026 are populated by prototype-stage tools that nobody owns and nobody is accountable for. The pattern is consistent:
- Year 1: a few prototype agents shipped. Excitement.
- Year 2: more prototypes. Dashboards. Slide decks.
- Year 3: leadership notices nothing has actually moved. Quiet de-funding.
The hand-off contract changes the shape. It says: each AI role is a real role. It has a manager, a target, a failure plan, and an end. The org treats AI employees the way it treats human ones — with the discipline that lets them succeed or fail visibly.
The discipline scales
A company with five AI employees, each on a hand-off contract, looks operationally like a company with five additional human reports, except that the capacity scales differently and the cost structure is different. Both are knowable, plannable, and budgetable. That's what makes the AI strategy real.
Close
This series has walked through 24 specific AI employee roles and one framing piece: the bot-vs-teammate reframe. The hand-off contract is what makes any of them work. Four questions, asked once, revisited quarterly. The roles that survive are the ones whose owners take the contract seriously.
If you're putting AI to work in your business, start with the contract. The role design follows. The technology is the easy part. The discipline is the project.
Related reading
- An AI employee isn't a bot — it's a teammate with a desk — the reframe this whole series rests on.
- The agent maturity curve — where AI employees sit on the curve.
- Retiring an agent — the discipline of the end.
We build AI-enabled software and help businesses put AI to work. If you're operationalising AI employees, we'd love to hear about it. Get in touch.