2026 · Philosophy & Technology
Central question: How do the errors of LLM-generated code differ structurally from those of human-generated code?
As Large Language Models become active co-authors of software, this paper articulates two distinct "Architectures of Error" to ground an epistemic distinction between human and artificial code generation. Examined through their shared vulnerability to error, the two modes reveal fundamentally different causal origins (human-cognitive versus artificial-stochastic) even when their functional outputs coincide. Drawing on Dennett's mechanistic functionalism and Rescher's methodological pragmatism, the analysis shows how this systematic differentiation reframes questions of semantic coherence, security robustness, epistemic limits, and control in human–AI software development.
doi:10.1007/s13347-026-01056-x
Concepts: Architectures of Error