Three Dogmas of Reinforcement Learning (Abel et al., 2024)
Watch David Abel present “Three Dogmas of RL”, joint work with Mark Ho and Anna Harutyunyan. He begins by arguing that RL still lacks a first-principles definition of an agent, and then lays out three “dogmas” in modern RL:

1. We model environments rigorously, but leave agents as afterthoughts.
2. We treat learning as “finding a solution” rather than continual adaptation.
3. The “reward hypothesis” has implicit conditions most people never examine.

Read the summary post here: https://sensorimotorai.github.io/2026/03/05/threedogmasrl/

I like this […]