Modeling Next-Token Prediction as Left-Nested Intuitionistic Implication
arXiv:2601.19915v1 Announce Type: new
Abstract: We introduce the \emph{Arrow Language Model}, a neural architecture derived from an intuitionistic-logic interpretation of next-token prediction. Instead of representing tokens as additive embeddings mixed by attention, we encode a prefix as a \emph{left-nested implication chain} whose structure preserves order through non-commutative composition. Next-token prediction corresponds to \emph{modus ponens}, and sequence processing becomes constructive proof extension under the Curry–Howard correspondence. Our Prolog-based specialized theorem provers validate fundamental properties of the neural models, among […]
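To make the abstract's central encoding concrete, here is a minimal illustrative sketch (not the paper's implementation, whose details are not given here): a prefix of tokens is folded into a left-nested implication chain, so that order is preserved by the nesting, and the next token is extracted by a single modus ponens step. All names (`Atom`, `Imp`, `encode_prefix`, `modus_ponens`) are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical toy model of the abstract's idea; not the authors' code.

@dataclass(frozen=True)
class Atom:
    """A token treated as an atomic proposition."""
    name: str

@dataclass(frozen=True)
class Imp:
    """Intuitionistic implication: antecedent -> consequent."""
    antecedent: object  # Atom | Imp
    consequent: object  # Atom | Imp

def encode_prefix(tokens):
    """Fold tokens into a left-nested chain: ((t1 -> t2) -> t3) -> ...
    Nesting on the left makes composition non-commutative, so token
    order is structurally preserved."""
    chain = Imp  # placeholder type hint only; real value set below
    chain = Atom(tokens[0])
    for t in tokens[1:]:
        chain = Imp(chain, Atom(t))
    return chain

def modus_ponens(chain, evidence):
    """If chain = (evidence -> nxt) and we hold a proof of evidence,
    extract nxt -- the 'next token' in this toy reading."""
    if isinstance(chain, Imp) and chain.antecedent == evidence:
        return chain.consequent
    raise ValueError("antecedent does not match the given evidence")

prefix = encode_prefix(["the", "cat"])   # (the -> cat)
extended = Imp(prefix, Atom("sat"))      # ((the -> cat) -> sat)
print(modus_ponens(extended, prefix))    # Atom(name='sat')
```

Extending the sequence corresponds to wrapping the current chain in one more implication, which matches the abstract's framing of sequence processing as constructive proof extension.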