Prompt Engineering for Outlier Detection
Learn how to detect outliers by doing a real-life data project and improve the process with AI.
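Before bringing AI into the loop, it helps to have a classic baseline to compare against. Below is a minimal sketch of the Tukey IQR rule, one of the standard starting points for outlier detection in a data project; the function name, the sample data, and the `k=1.5` fence multiplier are illustrative choices, not part of the original article.

```python
def iqr_outliers(values, k=1.5):
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences)."""
    xs = sorted(values)
    n = len(xs)

    def quantile(q):
        # Linear interpolation between the two closest ranks.
        pos = q * (n - 1)
        lo, hi = int(pos), min(int(pos) + 1, n - 1)
        frac = pos - lo
        return xs[lo] * (1 - frac) + xs[hi] * frac

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < low or v > high]

data = [10, 12, 11, 13, 12, 95, 11, 10]
print(iqr_outliers(data))  # -> [95]
```

A rule like this makes a useful sanity check: once an LLM is asked to explain or triage anomalies, its flags can be compared against the deterministic baseline.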
When OpenAI introduced ChatGPT to the world in 2022, it brought generative artificial intelligence into the mainstream and started a snowball effect that led to its rapid integration into industry, scientific research, health care, and the everyday lives of people who use the technology. What comes next for this powerful but imperfect tool? With that question in mind, hundreds of researchers, business leaders, educators, and students gathered at MIT’s Kresge Auditorium for the inaugural MIT Generative AI Impact […]
How to upgrade and optimize legacy AI/ML models. The post On the Challenge of Converting TensorFlow Models to PyTorch appeared first on Towards Data Science.
Large language models (LLMs) like ChatGPT can write an essay or plan a menu almost instantly. But until recently, it was also easy to stump them. The models, which rely on language patterns to respond to users’ queries, often failed at math problems and were not good at complex reasoning. Suddenly, however, they’ve gotten a lot better at these things. A new generation of LLMs known as reasoning models are being trained to solve complex problems. Like humans, […]
More than 300 people across academia and industry spilled into an auditorium to attend a BoltzGen seminar on Thursday, Oct. 30, hosted by the Abdul Latif Jameel Clinic for Machine Learning in Health (MIT Jameel Clinic). Headlining the event was MIT PhD student and BoltzGen first author Hannes Stärk, who had announced BoltzGen just a few days prior. Building upon Boltz-2, an open-source biomolecular structure prediction model that also predicts protein binding affinity and made waves over the summer, BoltzGen (officially released on Sunday, […]
OpenAI researchers are testing “confessions,” a method that trains models to admit when they make mistakes or act undesirably, helping improve AI honesty, transparency, and trust in model outputs.
Every year, global health experts are faced with a high-stakes decision: Which influenza strains should go into the next seasonal vaccine? The choice must be made months in advance, long before flu season even begins, and it can often feel like a race against the clock. If the selected strains match those that circulate, the vaccine will likely be highly effective. But if the prediction is off, protection can drop significantly, leading to (potentially preventable) illness and strain […]
Ernest Opoku knew he wanted to become a scientist when he was a little boy. But his school in Dadease, a small town in Ghana, offered no elective science courses — so Opoku created one for himself. Even though they had neither a dedicated science classroom nor a lab, Opoku convinced his principal to bring in someone to teach him and five other friends he had convinced to join him. With just a chalkboard and some imagination, they […]
Machine-learning models can speed up the discovery of new materials by making predictions and suggesting experiments. But most models today only consider a few specific types of data or variables. Compare that with human scientists, who work in a collaborative environment and consider experimental results, the broader scientific literature, imaging and structural analysis, personal experience or intuition, and input from colleagues and peer reviewers. Now, MIT researchers have developed a method for optimizing materials recipes and planning experiments […]
Testing that your AI agent is performing as expected is not easy. Here are a few strategies we learned the hard way. The post How We Are Testing Our Agents in Dev appeared first on Towards Data Science.