
Assurance as Infrastructure: What's Still Missing
2026-02-18
AI ethics has matured. The field moved past speculative anxieties about superintelligence, got serious about practical problems like bias and explainability, and recognised that high-level principles don't implement themselves. Each wave of work has been valuable, but it is clear to many who work in the field that something is still missing.
I believe a significant problem is that the discourse remains largely diagnostic. That is, we're good at identifying what's wrong and increasingly good at articulating what should be true. What we haven't built is the infrastructure that helps people actually do the work. That's what the Trustworthy and Ethical Assurance ecosystem is for.
From compliance to capability
A common objection to assurance is that it impedes innovation. More governance, more bureaucracy, more forms to fill in before you can do the real work.
I can sympathise with this objection, but it mistakes the symptom for the disease. When assurance feels like bureaucracy, it's probably because documentation has been retrofitted or left too late. Decisions get made, then someone scrambles to justify them after the fact. That is slow and does get in the way of innovation, but only because you're reconstructing reasoning rather than capturing it as it happens.
But assurance done well is different. It's a process of disciplined thinking, which is both intrinsically rewarding and also has significant practical benefits. You make claims explicit, identify what evidence would support them, and notice where the argument is thin. The goal isn't to satisfy external regulatory requirements or compliance checklists. It's to achieve your own goals more reliably.
This reframes what assurance is for, and moves us away from mere compliance towards building capabilities.
Why this accelerates innovation
If teams build deliberation into how they work, documentation becomes a byproduct rather than a separate task. The assurance case emerges as a trace of the reasoning process: still useful to regulators and auditors, but produced in the course of doing the work rather than bolted on afterwards.
This is how assurance, when done well, speeds things up. You're not accumulating debt that has to be paid down later when someone asks awkward questions. The thinking and the accountability are aligned from the start. Teams that treat assurance as an afterthought end up doing the work twice: once to build, once to justify. Teams that deliberate as they go do it once, and the justification comes for free.
The TEA ecosystem
Bridging principles and practice requires more than good intentions. You need to know what options exist, how to evaluate them, and how to structure the reasoning that connects choices to goals.
The TEA ecosystem provides this scaffolding through three integrated components:
- The TEA Platform is where assurance cases are built. It provides the conceptual foundation and the tools to construct cases. What does assurance mean? How do argument structures work? How do claims connect to evidence? This is the hub where deliberation happens and artefacts are produced.
- TEA Techniques is a curated library of methods for achieving different assurance goals, from explainability to fairness to security to robustness. Each entry explains what a technique does, when it's appropriate, and where to find implementations. You can't choose well if you don't know what's available.
- TEA Evidence makes a subset of these techniques executable. You can run experiments on your own models and datasets, comparing SHAP, LIME, and partial dependence plots side by side, and feed the results back into the Platform as dynamic evidence within your assurance case.
The first two of these components are already live, and the third is in development. Together, they support the full arc from orientation through evaluation to structured reasoning: scaffolding for disciplined science and innovation.
The TEA Platform in action
The TEA Platform provides a structured workspace for building assurance cases. Here you can create claims, link evidence, and visualise the argument structure that connects your project goals to the practices that support them.
New features are being developed that will enhance the user experience, making it easier to create, edit, and share assurance cases.
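The structure the Platform works with can be sketched as a claim-evidence graph. The following is an illustrative data model, not the TEA Platform's actual schema: claims decompose into subclaims, leaf claims link to evidence, and a simple traversal surfaces where the argument is thin.

```python
# Illustrative sketch of an assurance case as a claim-evidence graph.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    evidence: list = field(default_factory=list)   # references to evidence artefacts
    subclaims: list = field(default_factory=list)  # finer-grained supporting claims

def unsupported(claim):
    """Recursively collect leaf claims that have no linked evidence."""
    if not claim.subclaims:
        return [] if claim.evidence else [claim.text]
    gaps = []
    for sub in claim.subclaims:
        gaps.extend(unsupported(sub))
    return gaps

case = Claim("The model is sufficiently explainable for its context of use", subclaims=[
    Claim("Feature attributions are faithful", evidence=["shap-report-v2"]),
    Claim("Explanations are intelligible to end users"),  # no evidence linked yet
])

print(unsupported(case))  # flags the thin part of the argument
```

Making the gap visible is the point: an unsupported leaf claim is exactly the place where the team's reasoning needs either more evidence or a revised claim.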
Browsing TEA Techniques
TEA Techniques provides a searchable library of methods, filterable by assurance goal. Each technique entry explains what it does, when to use it, and links to implementations and further reading.
Our repository of ML/AI assurance techniques is improving and growing, allowing users and teams to more readily find the right technique to help them evidence a claim about their model or system.
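Filtering by assurance goal can be pictured with a small sketch. The entry fields and example implementations below are hypothetical placeholders, not TEA Techniques' actual data model.

```python
# Illustrative catalogue of technique entries, filterable by assurance goal.
techniques = [
    {"name": "SHAP", "goals": ["explainability"], "implementation": "shap (PyPI)"},
    {"name": "LIME", "goals": ["explainability"], "implementation": "lime (PyPI)"},
    {"name": "Demographic parity check", "goals": ["fairness"],
     "implementation": "fairlearn (PyPI)"},
]

def by_goal(catalogue, goal):
    """Return the names of techniques that serve the given assurance goal."""
    return [t["name"] for t in catalogue if goal in t["goals"]]

print(by_goal(techniques, "explainability"))  # → ['SHAP', 'LIME']
```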
Roadmap
The TEA ecosystem is under active development. Here's where we're headed.
Version 1.0 — TEA Platform
Bringing the TEA Platform out of research preview. This release will include a set of UX improvements, including more focused collaboration tools, a cleaner UI, and a plugin ecosystem that will allow the community to extend the platform to suit their own use cases.
Q2 2026 onwards
Techniques integration
Integrating the TEA Techniques dataset directly into the TEA Platform, allowing users to find a technique to help them evidence a claim from within the assurance case builder.
Argument pattern library
Building out a repository of argument patterns for goals like explainability, fairness, security, and safety. This will help teams build skills and knowledge around best practices, using templates that have been designed by AI assurance experts.
Community case studies
Growing a collection of community case studies, helping users showcase how they have assured their own models or systems.
Building what's missing
AI ethics has done important work. But frameworks for critique are not the same as infrastructure for practice.
What's still missing is the capability layer. We need systems and tools that help researchers and practitioners develop the judgement to build well, not just the vocabulary to evaluate afterwards.
That's what we're building.