Moving Beyond the Slow, Hierarchical Organization
A conversation with author Jana Werner about how companies must adapt their processes to survive continuous transformation.
What comes after Transformers? Google Research is proposing a new way to give sequence models usable long-term memory with Titans and MIRAS, while keeping training parallel and inference close to linear. Titans is a concrete architecture that adds a deep neural memory to a Transformer-style backbone. MIRAS is a general framework that views most modern sequence models as instances of online optimization over an associative memory. Why Titans and MIRAS? Standard Transformers use attention over a […]
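To make the "online optimization over an associative memory" framing concrete, here is a minimal sketch, not the paper's code: a tiny MLP memory is updated at test time by one gradient step per token on an associative recall loss, with momentum playing the role of "surprise" and weight decay the role of "forgetting" as in Titans. The MLP shape and the hyperparameters are illustrative assumptions.

```python
# A minimal sketch (not the Titans/MIRAS implementation) of memory as
# online optimization: memorize each (key, value) pair by a gradient
# step on the recall loss ||M(k_t) - v_t||^2.
import torch

d = 64
memory = torch.nn.Sequential(  # a small "deep neural memory" M_theta
    torch.nn.Linear(d, d), torch.nn.SiLU(), torch.nn.Linear(d, d)
)
momentum = [torch.zeros_like(p) for p in memory.parameters()]
lr, beta, decay = 0.1, 0.9, 0.01  # illustrative hyperparameters

def memory_step(k_t, v_t):
    """One online update: memorize the association k_t -> v_t."""
    loss = (memory(k_t) - v_t).pow(2).sum()
    grads = torch.autograd.grad(loss, list(memory.parameters()))
    with torch.no_grad():
        for p, m, g in zip(memory.parameters(), momentum, grads):
            m.mul_(beta).add_(g)            # "surprise" accumulated via momentum
            p.mul_(1 - decay).sub_(lr * m)  # "forgetting" via weight decay

# At inference, update on each incoming pair, then read with memory(query).
k, v = torch.randn(d), torch.randn(d)
memory_step(k, v)
retrieved = memory(k)
```

Under this view, different retention and update rules recover different sequence models, which is the generalization MIRAS makes explicit.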
Chelsea Follett: The debut of the robot butler NEO has drawn widespread ridicule. Unable to perform many chores without a remote human operator, the machine has become a target of social media backlash. Videos circulating online show the robot struggling with basic tasks, such as closing a dishwasher. But don't underestimate the potential of robotic housekeepers just yet. The technology is dawning at an opportune time. Consider the growing concerns about plummeting birth rates. Last year saw […]
In March of 2020, I published an essay warning both the public and our policymakers against overreacting to the COVID threat. We overreact, I argued, in times of "epistemic uncertainty," when we do not know enough about a threat we face and are unclear about our best response. […]
An HBR Executive exclusive Q&A with Zak Brown, CEO of McLaren Racing.
Today, we are celebrating the extraordinary impact of Nobel Prize-winner Geoffrey Hinton by investing in the future of the field he helped build. Google is proud to supp…
A robot searching for workers trapped in a partially collapsed mine shaft must rapidly generate a map of the scene and identify its location within that scene as it navigates the treacherous terrain. Researchers have recently started building powerful machine-learning models to perform this complex task using only images from the robot’s onboard cameras, but even the best models can only process a few images at a time. In a real-world disaster where every second counts, a search-and-rescue […]
To make large language models (LLMs) more accurate when answering harder questions, researchers can let the model spend more time thinking about potential solutions. But common approaches that give LLMs this capability set a fixed computational budget for every problem, regardless of how complex it is. This means the LLM might waste computational resources on simpler questions or be unable to tackle intricate problems that require more reasoning. To address this, MIT researchers developed a smarter way to allocate […]
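One simple way to see the contrast with a fixed budget, sketched below; this is a generic adaptive-allocation scheme, not necessarily MIT's specific method, and `generate_answer` is a hypothetical stand-in for one sampled reasoning attempt from an LLM.

```python
# A minimal sketch of adaptive test-time compute: sample answers one at a
# time and stop early once they agree, so easy questions use few samples
# and hard questions get the full budget.
from collections import Counter

def adaptive_solve(question, generate_answer, min_votes=3, max_budget=16):
    votes = Counter()
    for n in range(1, max_budget + 1):
        votes[generate_answer(question)] += 1
        answer, count = votes.most_common(1)[0]
        # Stop as soon as one answer has a clear majority.
        if count >= min_votes and count > n / 2:
            return answer, n  # answer plus the compute actually spent
    return votes.most_common(1)[0][0], max_budget
```

A fixed-budget baseline would always draw `max_budget` samples, wasting compute on questions that converge after three.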
Can a 3B model deliver 30B-class reasoning by fixing the training recipe instead of scaling parameters? Nanbeige LLM Lab at Boss Zhipin has released Nanbeige4-3B, a 3B-parameter small language model family trained with an unusually heavy emphasis on data quality, curriculum scheduling, distillation, and reinforcement learning. The research team ships two primary checkpoints, Nanbeige4-3B-Base and Nanbeige4-3B-Thinking, and evaluates the reasoning-tuned model against Qwen3 checkpoints from 4B up to 32B parameters. https://arxiv.org/pdf/2512.06266 Benchmark results: On AIME […]
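For readers unfamiliar with one ingredient the release highlights, here is a minimal sketch of token-level knowledge distillation in a generic formulation; this is not Nanbeige's recipe, and the temperature and mixing weight are illustrative assumptions.

```python
# A minimal sketch of token-level knowledge distillation: the student
# matches the teacher's softened next-token distribution via KL divergence,
# mixed with the ordinary cross-entropy loss on the labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """student_logits, teacher_logits: (batch, seq, vocab); labels: (batch, seq)."""
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale to keep gradient magnitudes comparable across T
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1)
    )
    return alpha * kd + (1 - alpha) * ce
```

Distilling from a strong teacher is one way a 3B student can inherit reasoning behavior that would otherwise require far more parameters to learn from raw data alone.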