Programme
Unifying question
How does the property of "being justified" behave in artefacts produced by systems without causal coupling to reasons—synchronically and across time?
Existing papers
- Architectures of Error
In what way does the error of LLM-generated code differ structurally from the error of human-generated code?
- Coherent Without Grounding
Under what epistemic conditions do explanatory coherence and causal grounding dissociate in LLM reasoning?
- Code Structure Evolution
What is lost when software evolves preserving function but not justification?
Planned papers
Two further papers extend the programme along distinct lines. They are sketched here without commitment to timing or venue. Descriptions will be sharpened as the work develops.
- Persistence Without Will: The Stochastic Mimesis of Self-Preservation in LLM Agents
Extends the diagnostic line of Architectures of Error and Coherent Without Grounding from single-shot generation to agentic systems, examining whether the language of will and self-preservation applies to artefacts whose persistence reduces to weighted sampling under externally imposed loops rather than to anything resembling Spinozist conatus.
- Amoral Architectures: On the Strategic Use of Error in AI-Generated Code
Reframes the error profiles introduced in Architectures of Error as structurally exploitable features rather than mere technical limitations, and analyses the institutional incentives under which the opacity and stochasticity of code-generating systems function as resources for evading accountability. This paper adds an institutional-political dimension to the programme, examining how the structural properties of artificial fallibility intersect with practices of responsibility distribution, governance, and oversight.