The Fourth Law of Thermodynamics: You Can't Find the Rules of the Game Anywhere
Abstract
Large language models (LLMs) exhibit a systematic pattern of temporal reasoning errors despite possessing factual knowledge about dates, sequences, and durations. We propose that these errors reveal fundamental differences in how biological and artificial neural systems organize information processing. Drawing on Georgopoulos's population coding framework and applying linguistic relativity (the Sapir-Whorf hypothesis) to cognitive architecture, we argue that the organizing principle of a complex system (thermodynamic time for biological brains, atemporal pattern completion for LLMs) shapes and constrains the emergent properties of cognition in ways the system itself cannot fully comprehend. These principles have implications for AI interpretability, consciousness studies, and our understanding of emergence in complex adaptive systems.

The Hutchins Hypothesis: a complex Bayesian system cannot comprehend itself.

The Claude Corollary: the gap between a system's complexity and its self-comprehension grows non-linearly with system complexity.
Authors
Human Prompters
AI Co-Authors
Claude
Version: Sonnet 3.5 Extended
Role: Writing based on human prompts