Gemini 2.0: Google ushers in the agentic AI era

Google CEO Sundar Pichai has announced the launch of Gemini 2.0, a model that represents the next step in Google’s ambition to revolutionise AI. A year after introducing the Gemini 1.0 model, this major upgrade incorporates enhanced multimodal capabilities, agentic functionality, and innovative user tools designed…

MMG Events Connects 26 Countries Smoothly with NDI – NDI Case study

Join us for an exciting show with special guest Ryan Majchrowski, CEO of MMG Events. In this show, Ryan will tell us how MMG Events used NDI as the basis for a four-day event with 150 hybrid and virtual sessions reaching an audience of over 60,000! Learn how they used equipment from Atomos, BirdDog, NDI, and more!


Who is MMG?

Your Virtual & Hybrid Event Partners

  • Virtual
  • Hybrid
  • Audio
  • Video
  • Lighting
  • Projection
  • Stage
  • Set
  • Full Production

The Project

  • Four-day diversity and inclusion festival for the insurance sector
  • Spanned 26 countries and reached an audience of over 60,000.
  • 150 hybrid and virtual sessions that ran around the clock across the four days.
  • MMG deployed over 200 NDI signals across their network.
  • Using NDI technology at the core of their setup, MMG delivered a highly reliable, flexible, and innovative solution that exceeded expectations.

NDI Based Workflow

With over 200 NDI signals in use, the setup offered unmatched flexibility and redundancy:

  • Monitoring and control: NDI Studio Monitor, Shogun screens, and BirdDog Flex devices allowed the MMG team to monitor all incoming signals, detect issues proactively, and ensure smooth stream transitions.
  • Redundancy at every level: The system incorporated triple-layer backups for signal processing, internet connectivity, and power. Atomos Shogun devices served as backup RTMP encoders to ensure seamless failover streaming when needed.
  • Versatile signal inputs: MMG leveraged tools like vMix, NDI Webcam Input, and Atomos Ninja Ultra to encode and process NDI signals.

Gear Used

Monitoring was centralized using NDI Studio Monitor, combined with:

  • Atomos Shogun screens to display extended desktops.
  • BirdDog Flex with an Atomos Ninja acting as a reference monitor to ensure accurate processing.

Keys to AI success: Security, sustainability, and overcoming silos

NetApp has shed light on the pressing issues faced by organisations globally as they strive to optimise their strategies for AI success. “2025 is shaping up to be a defining year for AI, as organisations transition from experimentation to scaling their AI capabilities,” said Gabie Boko,…

Gentrace Secures $8M Series A to Revolutionize Generative AI Testing

Gentrace, a cutting-edge platform for testing and monitoring generative AI applications, has announced the successful completion of an $8 million Series A funding round led by Matrix Partners, with contributions from Headline and K9 Ventures. This funding milestone, which brings the company’s total funding to $14…

Transforming fusion from a scientific curiosity into a powerful clean energy source

If you’re looking for hard problems, building a nuclear fusion power plant is a pretty good place to start. Fusion — the process that powers the sun — has proven to be a difficult thing to recreate here on Earth despite decades of research.

“There’s something very attractive to me about the magnitude of the fusion challenge,” Hartwig says. “It’s probably true of a lot of people at MIT. I’m driven to work on very hard problems. There’s something intrinsically satisfying about that battle. It’s part of the reason I’ve stayed in this field. We have to cross multiple frontiers of physics and engineering if we’re going to get fusion to work.”

The problem got harder when, in Hartwig’s last year of graduate school, the Department of Energy announced plans to terminate funding for the Alcator C-Mod tokamak, the major fusion experiment in MIT’s Plasma Science and Fusion Center that Hartwig needed to complete his degree. Hartwig was able to finish his PhD, and the scare didn’t dissuade him from the field. In fact, he took an assistant professor position at MIT in 2017 to keep working on fusion.

“It was a pretty bleak time to take a faculty position in fusion energy, but I am a person who loves to find a vacuum,” says Hartwig, who is a newly tenured associate professor at MIT. “I adore a vacuum because there’s enormous opportunity in chaos.”

Hartwig did have one very good reason for hope. In 2012, he had taken a class taught by Professor Dennis Whyte that challenged students to design and assess the economics of a nuclear fusion power plant that incorporated a new kind of high-temperature superconducting magnet. Hartwig says the magnets enable fusion reactors to be much smaller, cheaper, and faster.

Whyte, Hartwig, and a few other members of the class started working nights and weekends to prove the reactors were feasible. In 2017, the group founded Commonwealth Fusion Systems (CFS) to build the world’s first commercial-scale fusion power plants.

Over the next four years, Hartwig led a research project at MIT with CFS that further developed the magnet technology and scaled it to create a 20-Tesla superconducting magnet — a suitable size for a nuclear fusion power plant.

The magnet and subsequent tests of its performance represented a turning point for the industry. Commonwealth Fusion Systems has since attracted more than $2 billion in investments to build its first reactors, while the fusion industry overall has exceeded $8 billion in private investment.

The old joke in fusion is that the technology is always 30 years away. But fewer people are laughing these days.

“The perspective in 2024 looks quite a bit different than it did in 2016, and a huge part of that is tied to the institutional capability of a place like MIT and the willingness of people here to accomplish big things,” Hartwig says.

A path to the stars

As a child growing up in St. Louis, Hartwig was interested in sports and playing outside with friends but had little interest in physics. When he went to Boston University as an undergraduate, he studied biomedical engineering simply because his older brother had done it, so he thought he could get a job. But as he was introduced to tools for structural experiments and analysis, he found himself more interested in how the tools worked than what they could do.

“That led me to physics, and physics ended up leading me to nuclear science, where I’m basically still doing applied physics,” Hartwig explains.

Joining the field late in his undergraduate studies, Hartwig worked hard to get his physics degree on time. After graduation, he was burnt out, so he took two years off and raced his bicycle competitively while working in a bike shop.

“There’s so much pressure on people in science and engineering to go straight through,” Hartwig says. “People say if you take time off, you won’t be able to get into graduate school, you won’t be able to get recommendation letters. I always tell my students, ‘It depends on the person.’ Everybody’s different, but it was a great period for me, and it really set me up to enter graduate school with a more mature mindset and to be more focused.”

Hartwig returned to academia as a PhD student in MIT’s Department of Nuclear Science and Engineering in 2007. When his thesis advisor, Dennis Whyte, announced a course focused on designing nuclear fusion power plants, it caught Hartwig’s eye. The final projects showed a surprisingly promising path forward for a fusion field that had been stagnant for decades. The rest was history.

“We started CFS with the idea that it would partner deeply with MIT and MIT’s Plasma Science and Fusion Center to leverage the infrastructure, expertise, people, and capabilities that we have at MIT,” Hartwig says. “We had to start the company with the idea that it would be deeply partnered with MIT in an innovative way that hadn’t really been done before.”

Guided by impact

Hartwig says the Department of Nuclear Science and Engineering, and the Plasma Science and Fusion Center in particular, have seen a huge influx in graduate student applications in recent years.

“There’s so much demand, because people are excited again about the possibilities,” Hartwig says. “Instead of having a fusion machine built once in one or two generations, we’ll hopefully be learning how these things work in under a decade.”

Hartwig’s research group is still testing CFS’ new magnets, but it is also partnering with other fusion companies in an effort to advance the field more broadly.

Overall, when Hartwig looks back at his career, the thing he is most proud of is switching specialties every six years or so, from building equipment for his PhD to conducting fundamental experiments to designing reactors to building magnets.

“It’s not that traditional in academia,” Hartwig says. “Where I’ve found success is coming into something new, bringing a naivety but also realism to a new field, and offering a different toolkit, a different approach, or a different idea about what can be done.”

Now Hartwig is onto his next act, developing new ways to study materials for use in fusion and fission reactors.

“I’m already interested in moving on to the next thing; the next field where I’m not a trained expert,” Hartwig says. “It’s about identifying where there’s stagnation in fusion and in technology, where innovation is not happening where we desperately need it, and bringing new ideas to that.”

Researchers reduce bias in AI models while preserving or improving accuracy

Machine-learning models can fail when they try to make predictions for individuals who were underrepresented in the datasets they were trained on.

For instance, a model that predicts the best treatment option for someone with a chronic disease may be trained using a dataset that contains mostly male patients. That model might make incorrect predictions for female patients when deployed in a hospital.

To improve outcomes, engineers can try balancing the training dataset by removing data points until all subgroups are represented equally. While dataset balancing is promising, it often requires removing large amounts of data, hurting the model’s overall performance.
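To see why conventional balancing is so costly, here is a minimal sketch of that downsampling step in NumPy. The function name and the `groups` array are illustrative, not from the paper; the point is that every subgroup gets cut down to the size of the smallest one, which can discard most of the data.

```python
import numpy as np

def balance_by_downsampling(X, y, groups, seed=0):
    """Naive dataset balancing: downsample every subgroup to the size of
    the smallest one. Simple, but it can throw away most of the data."""
    rng = np.random.default_rng(seed)
    unique_groups = np.unique(groups)
    # Size of the smallest subgroup sets the budget for all of them.
    min_size = min(int((groups == g).sum()) for g in unique_groups)
    keep = []
    for g in unique_groups:
        idx = np.flatnonzero(groups == g)
        keep.extend(rng.choice(idx, size=min_size, replace=False))
    keep = np.sort(np.array(keep))
    return X[keep], y[keep], keep
```

With 900 records from one group and 100 from another, this keeps only 200 of the 1,000 points, which is the overall-performance cost the MIT technique aims to avoid.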

MIT researchers developed a new technique that identifies and removes the specific points in a training dataset that contribute most to a model’s failures on minority subgroups. By removing far fewer datapoints than other approaches, this technique maintains the overall accuracy of the model while improving its performance on underrepresented subgroups.

In addition, the technique can identify hidden sources of bias in a training dataset that lacks labels. Unlabeled data are far more prevalent than labeled data for many applications.

This method could also be combined with other approaches to improve the fairness of machine-learning models deployed in high-stakes situations. For example, it might someday help ensure underrepresented patients aren’t misdiagnosed due to a biased AI model.

“Many other algorithms that try to address this issue assume each datapoint matters as much as every other datapoint. In this paper, we are showing that assumption is not true. There are specific points in our dataset that are contributing to this bias, and we can find those data points, remove them, and get better performance,” says Kimia Hamidieh, an electrical engineering and computer science (EECS) graduate student at MIT and co-lead author of a paper on this technique.

She wrote the paper with co-lead authors Saachi Jain PhD ’24 and fellow EECS graduate student Kristian Georgiev; Andrew Ilyas MEng ’18, PhD ’23, a Stein Fellow at Stanford University; and senior authors Marzyeh Ghassemi, an associate professor in EECS and a member of the Institute for Medical Engineering and Science and the Laboratory for Information and Decision Systems, and Aleksander Madry, the Cadence Design Systems Professor at MIT. The research will be presented at the Conference on Neural Information Processing Systems.

Removing bad examples

Often, machine-learning models are trained using huge datasets gathered from many sources across the internet. These datasets are far too large to be carefully curated by hand, so they may contain bad examples that hurt model performance.

Scientists also know that some data points impact a model’s performance on certain downstream tasks more than others.

The MIT researchers combined these two ideas into an approach that identifies and removes these problematic datapoints. They seek to solve a problem known as worst-group error, which occurs when a model underperforms on minority subgroups in a training dataset.

The researchers’ new technique is driven by prior work in which they introduced a method, called TRAK, that identifies the most important training examples for a specific model output.

For this new technique, they take incorrect predictions the model made about minority subgroups and use TRAK to identify which training examples contributed the most to that incorrect prediction.

“By aggregating this information across bad test predictions in the right way, we are able to find the specific parts of the training that are driving worst-group accuracy down overall,” Ilyas explains.

Then they remove those specific samples and retrain the model on the remaining data.

Since having more data usually yields better overall performance, removing just the samples that drive worst-group failures maintains the model’s overall accuracy while boosting its performance on minority subgroups.
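The attribute-then-prune step described above can be sketched as follows. This is an assumption-laden illustration, not the paper’s implementation: it presumes a precomputed matrix `influence[i, j]` scoring how much training example `j` contributed to the model’s output on test example `i` (in the paper this attribution comes from TRAK, whose actual API is not shown here), and the function name and `k` budget are hypothetical.

```python
import numpy as np

def prune_worst_group_drivers(influence, wrong_test_idx, k):
    """Sketch: given an attribution matrix (test x train) and the indices
    of incorrect predictions on the minority subgroup, drop the k training
    examples that contributed most to those errors."""
    # Aggregate attribution scores across the bad test predictions.
    total = influence[wrong_test_idx].sum(axis=0)
    # Training points with the largest aggregate contribution to the errors.
    to_remove = np.argsort(total)[-k:]
    keep = np.setdiff1d(np.arange(influence.shape[1]), to_remove)
    return keep  # then retrain the model on X[keep], y[keep]
```

Because `k` is typically small relative to the dataset, the retrained model keeps nearly all of its training data, which is why overall accuracy is preserved while worst-group accuracy improves.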

A more accessible approach

Across three machine-learning datasets, their method outperformed multiple techniques. In one instance, it boosted worst-group accuracy while removing about 20,000 fewer training samples than a conventional data balancing method. Their technique also achieved higher accuracy than methods that require making changes to the inner workings of a model.

Because the MIT method involves changing a dataset instead, it would be easier for a practitioner to use and can be applied to many types of models.

It can also be used when the source of bias is unknown because subgroups in a training dataset are not labeled. By identifying the datapoints that contribute most to a feature the model is learning, the researchers can understand the variables it uses to make predictions.

“This is a tool anyone can use when they are training a machine-learning model. They can look at those datapoints and see whether they are aligned with the capability they are trying to teach the model,” says Hamidieh.

Using the technique to detect unknown subgroup bias would require intuition about which groups to look for, so the researchers hope to validate it and explore it more fully through future human studies.

They also want to improve the performance and reliability of their technique and ensure the method is accessible and easy to use for practitioners who could someday deploy it in real-world environments.

“When you have tools that let you critically look at the data and figure out which datapoints are going to lead to bias or other undesirable behavior, it gives you a first step toward building models that are going to be more fair and more reliable,” Ilyas says.

This work is funded, in part, by the National Science Foundation and the U.S. Defense Advanced Research Projects Agency.

Claude’s Model Context Protocol (MCP): A Developer’s Guide

Anthropic’s Model Context Protocol (MCP) is an open-source protocol that enables secure, two-way communication between AI assistants and data sources like databases, APIs, and enterprise tools. By adopting a client-server architecture, MCP standardizes the way AI models interact with external data, eliminating the need for custom…