Exact Input Writes Improve Stable Looped Language Models

Parcae stabilizes looped models by applying an exact exponential decay to the recurrent state, while the input branch uses a first-order Euler gain. We replace that gain with the exact zero-order-hold (ZOH) gain for the existing full-matrix write. In completed matched controls at 140M, and in the 11.2B paper-style run, validation loss improves; downstream readout is mixed.
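The difference between the two input gains can be sketched on a diagonal linear recurrence. This is a minimal illustration of the general Euler-vs-ZOH discretization, not Parcae's actual update; the values of `a` and `dt` and the function names are assumptions for the example.

```python
import numpy as np

def step_euler(h, x, a, dt):
    """Exact exponential decay on the state, first-order Euler gain on the input."""
    return np.exp(a * dt) * h + dt * x

def step_zoh(h, x, a, dt):
    """Exact decay with the exact zero-order-hold input gain, (exp(a*dt) - 1) / a."""
    return np.exp(a * dt) * h + (np.expm1(a * dt) / a) * x

# As dt -> 0 the gains agree to first order: expm1(a*dt)/a ~ dt + a*dt^2/2.
a, dt = -1.0, 0.1
h, x = np.zeros(4), np.ones(4)
print(step_euler(h, x, a, dt))  # Euler gain = dt = 0.1
print(step_zoh(h, x, a, dt))    # ZOH gain = expm1(-0.1)/-1.0, slightly smaller
```

The `expm1` form keeps the ZOH gain numerically accurate when `a * dt` is small, where computing `exp(a*dt) - 1` directly would lose precision.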

Loop-Model FLOPs and Memory in an Ablation Chain

Loop models are seeing renewed activity in reasoning and language modeling, with recent examples such as HRM, TRM, recurrent-depth latent reasoning, and Parcae. This post asks a simple question: what is the actual compute cost of looping? I analyze that question in an ablation chain over major loop-model variants, with special attention to how the optimizer interval, gradient path, and storage policy change the FLOPs, NFE, and memory accounting.
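The basic shape of that accounting can be sketched as follows. This is a hedged illustration under simple assumptions (forward FLOPs scale linearly with loop count, one function evaluation per loop, and activation storage depending on whether every iterate is kept for backprop); the function and parameter names are hypothetical, not from any of the cited papers.

```python
def loop_costs(flops_per_pass, loops, store_all=True):
    """Toy cost model for a looped block.

    flops_per_pass: forward FLOPs for one pass through the looped block.
    loops: number of loop iterations.
    store_all: True for full backprop-through-the-loop (store every
        iterate's activations); False for a last-iterate / truncated
        gradient path that stores only one.
    Returns (forward FLOPs, NFE, stored activation passes).
    """
    nfe = loops                            # one function evaluation per loop
    fwd_flops = flops_per_pass * loops     # compute grows linearly in loops
    stored = loops if store_all else 1     # memory depends on storage policy
    return fwd_flops, nfe, stored

# Eight loops of a 1-GFLOP block: same compute and NFE either way,
# but an 8x difference in stored activations.
print(loop_costs(1e9, 8, store_all=True))
print(loop_costs(1e9, 8, store_all=False))
```

The point of the sketch is that loop count moves FLOPs and NFE together, while the gradient path and storage policy decouple memory from compute.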