Artificial Fallibility
“The condition under which artificial systems produce artefacts that resemble epistemic outputs—code, explanations, justified modifications—without exhibiting the structural properties those outputs would normally signal in human production.”
Why it matters
Artificial Fallibility is the umbrella condition the programme analyses. Every other concept in the programme—Architectures of Error, the Bidirectional Coherence Paradox, Code Structure Evolution, and the diagnostic apparatus around them—names a particular mechanism, manifestation, or consequence within this broader condition.
Notes
The programme treats Artificial Fallibility as a structural condition rather than a catalogue of failure cases. The unifying question is therefore not "where do AI systems fail?" but "how does the property of being justified behave in artefacts produced by systems that lack causal coupling to reasons—both synchronically and across time?"
Three constitutive papers situate the programme along distinct dimensions:
- Synchronic, single-shot generation — Architectures of Error (the ontology of error in AI-generated code).
- Synchronic, reasoning under observability — Coherent Without Grounding (the Bidirectional Coherence Paradox and the Epistemic Triangle).
- Diachronic, evolving artefacts — Code Structure Evolution (epistemic drift, the point of no return, and the reconstruction cost).
The other concepts (Observability Gap, Integrated Justification, Epistemic Drift, Decision-bearing Branches, etc.) operate within these papers as diagnostic or formal apparatus. They are derived concepts, in the sense that their philosophical work presupposes the umbrella framing of Artificial Fallibility.