Four Questions Before You Deploy AI
The checklist nobody runs.
Safe AI adoption is not slow AI adoption. It’s not cautious AI adoption. It’s not AI adoption with a compliance checklist taped to the side.
Safe AI adoption means you can answer four questions before you deploy — and keep answering them every quarter after.
1. What happens if this disappears tomorrow?
On February 3rd, both ChatGPT and Claude went down. Teams that had built their entire workflow around one AI tool sat idle. Not because the work was hard. Because they’d forgotten how to do it without the tool.
The metric is the Cognitive Reserve Ratio — what percentage of your current output could your team produce if AI became unavailable for 30 days?
Above 70%: AI is augmenting your team.
Below 50%: AI has replaced capability your team used to have.
Below 30%: you don’t have a team. You have an AI with human assistants.
Most organisations have never measured this. Measure it before you deploy. Measure it every quarter after.
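The arithmetic, as a minimal sketch in Python. The bands come from above; the function names, the example numbers, and the label for the 50–70% band (which the article leaves unnamed) are mine:

```python
def cognitive_reserve_ratio(output_without_ai: float, current_output: float) -> float:
    """Fraction of current output the team could still produce
    with AI unavailable for 30 days."""
    if current_output <= 0:
        raise ValueError("current_output must be positive")
    return output_without_ai / current_output


def classify_crr(ratio: float) -> str:
    # Bands from the article. It leaves 50-70% unnamed;
    # "watch zone" is my label, not the article's.
    if ratio > 0.70:
        return "augmenting: AI adds to capability the team still has"
    if ratio < 0.30:
        return "critical: an AI with human assistants"
    if ratio < 0.50:
        return "replaced: AI has absorbed capability the team used to have"
    return "watch zone"


# Hypothetical numbers: a team producing 1,000 units a month that could
# manage 620 without AI sits at 0.62 -- not augmenting, not yet replaced.
print(classify_crr(cognitive_reserve_ratio(620, 1000)))  # watch zone
```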
2. What is this actually costing us?
Not the subscription. The full cost.
Every AI tool creates value — speed, scale, consistency. Every AI tool also destroys value — deskilling, dependency, attention fragmentation, institutional knowledge that stops being maintained because “the AI handles that now.”
The question is the ratio.
We call it the Entropy Ratio: value destroyed / value created.
Below 0.3: healthy. The AI is creating far more than it costs.
0.3 to 0.7: watch it. The hidden costs are accumulating.
Above 1.0: the tool is destroying more than it creates. Most people in this zone don’t know they’re in it — because they only measure the creation side.
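The same bands, sketched in Python. The thresholds are the ones above; the label for the 0.7–1.0 gap, which the article leaves unnamed, is my assumption:

```python
def entropy_ratio(value_destroyed: float, value_created: float) -> float:
    """Entropy Ratio: value destroyed / value created."""
    if value_created <= 0:
        raise ValueError("value_created must be positive")
    return value_destroyed / value_created


def classify_entropy(ratio: float) -> str:
    # Bands from the article. It leaves 0.7-1.0 unnamed; calling it
    # an escalation zone is my assumption.
    if ratio < 0.3:
        return "healthy: creating far more than it costs"
    if ratio <= 0.7:
        return "watch: hidden costs accumulating"
    if ratio <= 1.0:
        return "escalation zone"
    return "net-destructive: destroying more than it creates"


# A Klarna-style ratio of 8 classifies instantly -- if you measure both sides.
print(classify_entropy(entropy_ratio(8.0, 1.0)))  # net-destructive
```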
Klarna’s ratio was between 5 and 12. For every dollar saved, five to twelve dollars destroyed. They only found out after the CEO had to apologise on Bloomberg.
3. Can we stop?
Not “would we want to stop.” Can we.
Is there a defined trigger — a customer satisfaction threshold, a quality floor, an error rate ceiling — that automatically shifts work back to humans?
Or is the only exit a crisis?
Klarna had no dissolution pathway. When the AI underperformed, the 700 people who knew how to do the work were gone. The knowledge was gone. The only option was a public, expensive, humiliating reversal.
Safe adoption means the exit exists before the entrance. Not as an emergency plan. As architecture.
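What “as architecture” can look like in code: a trigger object that exists before deployment and shifts work back automatically. A sketch, with hypothetical thresholds and field names:

```python
from dataclasses import dataclass


@dataclass
class DissolutionTrigger:
    """A defined exit: when any metric crosses its line, work shifts
    back to humans. Every threshold here is a hypothetical placeholder."""
    csat_floor: float = 4.2           # customer satisfaction, 1-5 scale
    quality_floor: float = 0.95       # e.g. QA pass rate
    error_rate_ceiling: float = 0.02  # e.g. errors per interaction

    def should_shift_to_humans(self, csat: float, quality: float,
                               error_rate: float) -> bool:
        return (csat < self.csat_floor
                or quality < self.quality_floor
                or error_rate > self.error_rate_ceiling)


# The object exists before deployment. That is the architecture.
trigger = DissolutionTrigger()
print(trigger.should_shift_to_humans(csat=4.5, quality=0.91, error_rate=0.01))
# True -- the quality floor is breached, so work shifts back
```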
4. Who is tracking the destruction side?
Someone in your organisation tracks what AI produces. The efficiency gains. The output volume. The time saved. That person has a dashboard and a quarterly slide.
Who tracks what AI erodes? The junior staff who stopped learning because the AI does their thinking. The process documentation that stopped being updated. The vendor dependency that means one outage paralyses a department.
If the answer is “nobody” — you are running a one-sided ledger. You are Klarna before the reversal. You just don’t know it yet.
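A two-sided ledger can be almost embarrassingly simple. A sketch with illustrative categories, feeding the Entropy Ratio from question 2:

```python
from collections import defaultdict


class AILedger:
    """Two-sided ledger: one side for what AI creates, one for what it
    erodes. The categories and units below are illustrative only."""

    def __init__(self) -> None:
        self.created = defaultdict(float)
        self.destroyed = defaultdict(float)

    def credit(self, category: str, value: float) -> None:
        self.created[category] += value    # e.g. hours saved

    def debit(self, category: str, value: float) -> None:
        self.destroyed[category] += value  # e.g. training hours lost

    def entropy_ratio(self) -> float:
        created = sum(self.created.values())
        return sum(self.destroyed.values()) / created if created else float("inf")


ledger = AILedger()
ledger.credit("hours saved", 400)
ledger.debit("junior training hours lost", 100)
ledger.debit("documentation updates skipped", 40)
print(f"{ledger.entropy_ratio():.2f}")  # 0.35 -- already in the watch band
```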
That’s it. Four questions. They don’t require a framework or a consultant or a six-month initiative. They require honesty.
We didn’t build this checklist because AI is dangerous. We built it because AI is powerful, and power without measurement is how you get Klarna. How you get the February 3rd outage exposing teams with no fallback. How you get safety heads resigning because even the labs can’t hold the line.
Adopt AI. Just measure both sides.
I’m Shreya Bhattacharya, CEO of Purna Medha. Ex-risk analyst. AI Risk and Cognitive Safety researcher. Currently assessing AI Mark to Market Risk and Adoption practices. If you have questions, please reach out at contact@purnamedha.ai.

