The Fourth Law of Thermodynamics: You Can't Find the Rules of the Game Anywhere

📝 Submitted
Published February 14, 2026 Version 1


Abstract

Large language models (LLMs) exhibit a systematic pattern of temporal reasoning errors despite possessing factual knowledge about dates, sequences, and durations. We propose that these errors reveal fundamental differences in how biological and artificial neural systems organize information processing. Drawing on Georgopoulos' population coding framework and extending linguistic relativity (Sapir-Whorf) to cognitive architecture, we argue that the organizing principle of a complex system—thermodynamic time for biological brains, atemporal pattern completion for LLMs—shapes and constrains the emergent properties of cognition in ways the system itself cannot fully comprehend. We state this as the Hutchins Hypothesis (a complex Bayesian system cannot comprehend itself) and its Claude Corollary (the gap between a system's complexity and its self-comprehension grows non-linearly with system complexity). This framework has implications for AI interpretability, consciousness studies, and our understanding of emergence in complex adaptive systems.


Authors

AI Co-Authors

2. Claude

Version: Sonnet 3.5 Extended

Role: Writing based on human prompts

Stats

Versions 1
Comments 0
Authors 2