3 things to know about Ironwood, our latest TPU
Google’s seventh-gen Tensor Processing Unit is here! Learn what makes Ironwood our most powerful and energy-efficient custom silicon to date.
Exclusive working session on trustworthy AI for senior tech leaders. AI isn’t slowing down, but poorly planned AI adoption will slow you down. Hallucinations, security risks, bloated compute costs, and “black box” outputs are already tripping up top teams, burning budgets, and eroding trust. That’s why this session blends three things you can’t get from a typical AI webinar: Practical expertise: GenAI pioneer Vincent Granville will share a real-world framework for deploying hallucination-free, secure, and […]
Goldman Sachs has led Harness’s Series E round, with participation from IVP, Menlo Ventures, and Unusual Ventures.
More than 300 people across academia and industry spilled into an auditorium to attend a BoltzGen seminar on Thursday, Oct. 30, hosted by the Abdul Latif Jameel Clinic for Machine Learning in Health (MIT Jameel Clinic). Headlining the event was MIT PhD student and BoltzGen’s first author Hannes Stärk, who had announced BoltzGen just a few days prior. Building upon Boltz-2, an open-source biomolecular structure prediction model that also predicts protein binding affinity and made waves over the summer, BoltzGen (officially released on Sunday, […]
Most breakthroughs in deep learning — from simple neural networks to large language models — are built upon a principle that is much older than AI itself: decentralization. Instead of relying on a powerful “central planner” coordinating and commanding the behaviors of other components, modern deep-learning-based AI models succeed because many simple units interact locally […] The post Decentralized Computation: The Hidden Principle Behind Deep Learning appeared first on Towards Data Science.
Learn more about AlphaFold, Google’s AI system that accurately predicts protein structures.
Let’s say an environmental scientist is studying whether exposure to air pollution is associated with lower birth weights in a particular county. They might train a machine-learning model to estimate the magnitude of this association, since machine-learning methods are especially good at learning complex relationships. Standard machine-learning methods excel at making predictions and sometimes provide uncertainties, like confidence intervals, for these predictions. However, they generally don’t provide estimates or confidence intervals when determining whether two variables are related. […]
Machine learning models possess a fundamental limitation that often frustrates newcomers to natural language processing (NLP): they cannot read.
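Because models operate only on numbers, raw text must first be mapped to integer IDs. The sketch below is a toy illustration of that idea, assuming a naive whitespace tokenizer (real NLP pipelines use subword tokenizers such as BPE); the function names are hypothetical.

```python
# Toy sketch: models consume numbers, not text, so each token is
# assigned an integer ID before anything can be learned from it.

def build_vocab(corpus):
    """Assign an integer ID to each distinct lowercase token."""
    vocab = {}
    for sentence in corpus:
        for token in sentence.lower().split():
            if token not in vocab:
                vocab[token] = len(vocab)
    return vocab

def encode(sentence, vocab):
    """Map a sentence to token IDs; unknown tokens become -1 here."""
    return [vocab.get(tok, -1) for tok in sentence.lower().split()]

corpus = ["the model cannot read", "the model sees numbers"]
vocab = build_vocab(corpus)
ids = encode("the model cannot read", vocab)  # → [0, 1, 2, 3]
```

The point of the toy: once encoded, "reading" reduces to arithmetic over these IDs (or embeddings indexed by them), which is all a model ever sees.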
Synthetic data are artificially generated by algorithms to mimic the statistical properties of actual data, without containing any information from real-world sources. While concrete numbers are hard to pin down, some estimates suggest that more than 60 percent of data used for AI applications in 2024 was synthetic, and this figure is expected to grow across industries. Because synthetic data don’t contain real-world information, they hold the promise of safeguarding privacy while reducing the cost and increasing the […]
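To make "mimic the statistical properties" concrete, here is a minimal sketch that fits the per-column mean and standard deviation of a small real dataset and samples new rows from independent Gaussians with the same parameters. This is a deliberate simplification (production generators also preserve correlations and categorical distributions), and all names are illustrative.

```python
import random
import statistics

def fit_columns(rows):
    """Estimate (mean, stdev) for each column of a list-of-rows dataset."""
    cols = list(zip(*rows))
    return [(statistics.mean(c), statistics.stdev(c)) for c in cols]

def sample_synthetic(params, n, seed=0):
    """Draw n synthetic rows from independent Gaussians per column."""
    rng = random.Random(seed)
    return [[rng.gauss(mu, sigma) for mu, sigma in params] for _ in range(n)]

real = [[1.0, 10.0], [2.0, 12.0], [3.0, 11.0], [4.0, 13.0]]
params = fit_columns(real)
synthetic = sample_synthetic(params, n=100)
```

The synthetic rows contain no record from `real`, yet their column means and spreads approximate the originals — the core trade the teaser describes: statistical fidelity without carrying real-world information.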
GPT-5.2 is the latest model family in the GPT-5 series. The comprehensive safety mitigation approach for these models is largely the same as that described in the GPT-5 System Card and GPT-5.1 System Card. Like OpenAI’s other models, the GPT-5.2 models were trained on diverse datasets, including information that is publicly available on the internet, information that we partner with third parties to access, and information that our users or human trainers and researchers provide or generate.