Blog
Thoughts on AI ethics, trustworthy systems, and responsible innovation.

If agentic digital twins are virtual scientists, they need the equivalent of research governance. This post grounds assurance cases in the safety case tradition and Toulmin's model of argument, and situates them within the broader governance landscape.

Agentic digital twins are not mirrors but laboratories with scientists in them. A taxonomy of coupling, agency, and model evolution reveals the architectural requirements for systems that can explore, hypothesise, test, and learn in critical national infrastructure.

World models can explore and test but not invent explanatory concepts. LLMs can generate hypotheses but lack environmental grounding. Drawing on Susan Carey's distinction between enrichment and radical conceptual change, this post argues these are complementary capabilities, not competing paradigms.

Galileo's route from inclined plane experiments to a universal law of falling bodies reveals a three-phase structure of scientific reasoning: exploration, abduction, and testing. This structure explains why genuine scientific generalisation is more than sophisticated curve-fitting.

AI ethics has matured, but something is still missing. The discourse remains largely diagnostic: it names problems without equipping people to solve them. What we haven't built is the infrastructure that helps people actually do the work. That's what the TEA ecosystem is for.

Exploring the impact of digital twins on digital well-being, revisiting concepts like control, autonomy, privacy, trust, and self-determination in the context of emerging AI-powered personal assistants. Based on a talk given at the Future of Digital Well-Being Workshop 2024.