A new catalyst can turn methane into something useful
Although it is less abundant than carbon dioxide, methane contributes disproportionately to global warming because its molecular structure traps heat far more effectively than carbon dioxide does.
MIT chemical engineers have now designed a new catalyst that can convert methane into useful polymers, which could help reduce greenhouse gas emissions.
“What to do with methane has been a longstanding problem,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT and the senior author of the study. “It’s a source of carbon, and we want to keep it out of the atmosphere but also turn it into something useful.”
The new catalyst works at room temperature and atmospheric pressure, which could make it easier and more economical to deploy at sites of methane production, such as power plants and cattle barns.
Daniel Lundberg PhD ’24 and MIT postdoc Jimin Kim are the lead authors of the study, which appears today in Nature Catalysis. Former postdoc Yu-Ming Tu and postdoc Cody Ritt are also authors of the paper.
Capturing methane
Methane is produced by bacteria known as methanogens, which are often highly concentrated in landfills, swamps, and other sites of decaying biomass. Agriculture is a major source of methane, and methane gas is also generated as a byproduct of transporting, storing, and burning natural gas. Overall, it is believed to account for about 15 percent of global temperature increases.
At the molecular level, methane is made of a single carbon atom bound to four hydrogen atoms. In theory, this molecule should be a good building block for making useful products such as polymers. However, converting methane to other compounds has proven difficult because getting it to react with other molecules usually requires high temperatures and pressures.
To achieve methane conversion without that input of energy, the MIT team designed a hybrid catalyst with two components: a zeolite and a naturally occurring enzyme. Zeolites are abundant, inexpensive clay-like minerals, and previous work has found that they can be used to catalyze the conversion of methane to carbon dioxide.
In this study, the researchers used a zeolite called iron-modified aluminum silicate, paired with an enzyme called alcohol oxidase. Bacteria, fungi, and plants use this enzyme to oxidize alcohols.
This hybrid catalyst performs a two-step reaction: the zeolite converts methane to methanol, and the enzyme then converts methanol to formaldehyde. The second reaction also generates hydrogen peroxide, which is fed back to the zeolite as a source of oxygen for the conversion of methane to methanol.
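The stoichiometry implied by that description can be sketched with textbook equations (these are standard reactions for each step, not equations quoted from the paper):

\[
\begin{aligned}
\mathrm{CH_4} + \mathrm{H_2O_2} &\xrightarrow{\ \text{Fe-zeolite}\ } \mathrm{CH_3OH} + \mathrm{H_2O}\\
\mathrm{CH_3OH} + \mathrm{O_2} &\xrightarrow{\ \text{alcohol oxidase}\ } \mathrm{CH_2O} + \mathrm{H_2O_2}\\
\text{net:}\quad \mathrm{CH_4} + \mathrm{O_2} &\longrightarrow \mathrm{CH_2O} + \mathrm{H_2O}
\end{aligned}
\]

The hydrogen peroxide produced in the second step is exactly the oxidant consumed in the first, which is why the cycle runs on nothing more than methane and ambient oxygen.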
This series of reactions can occur at room temperature and doesn’t require high pressure. The catalyst particles are suspended in water, which can absorb methane from the surrounding air. For future applications, the researchers envision that it could be painted onto surfaces.
“Other systems operate at high temperature and high pressure, and they use hydrogen peroxide, which is an expensive chemical, to drive the methane oxidation. But our enzyme produces hydrogen peroxide from oxygen, so I think our system could be very cost-effective and scalable,” Kim says.
Creating a system that incorporates both enzymes and artificial catalysts is a “smart strategy,” says Damien Debecker, a professor at the Institute of Condensed Matter and Nanosciences at the University of Louvain, Belgium.
“Combining these two families of catalysts is challenging, as they tend to operate in rather distinct operation conditions. By unlocking this constraint and mastering the art of chemo-enzymatic cooperation, hybrid catalysis becomes key-enabling: It opens new perspectives to run complex reaction systems in an intensified way,” says Debecker, who was not involved in the research.
Building polymers
Once formaldehyde is produced, the researchers showed they could use that molecule to generate polymers by adding urea, a nitrogen-containing molecule found in urine. This resin-like polymer, known as urea-formaldehyde, is now used in particle board, textiles, and other products.
The researchers envision that this catalyst could be incorporated into pipes used to transport natural gas. Within those pipes, the catalyst could generate a polymer that could act as a sealant to heal cracks in the pipes, which are a common source of methane leakage. The catalyst could also be applied as a film to coat surfaces that are exposed to methane gas, producing polymers that could be collected for use in manufacturing, the researchers say.
Strano’s lab is now working on catalysts that could be used to remove carbon dioxide from the atmosphere and combine it with nitrate to produce urea. That urea could then be mixed with the formaldehyde produced by the zeolite-enzyme catalyst to produce urea-formaldehyde.
The research was funded by the U.S. Department of Energy.
3 Questions: Community policing in the Global South
The concept of community policing gained wide acclaim in the U.S. when crime dropped drastically during the 1990s. In Chicago, Boston, and elsewhere, police departments established programs to build more local relationships and enhance community security. But how well does community policing work in other places? A new multicountry experiment co-led by MIT political scientist Fotini Christia found, perhaps surprisingly, that the policy had no impact in several countries across the Global South, from Africa to South America and Asia.
The results are detailed in a new edited volume, “Crime, Insecurity, and Community Policing: Experiments on Building Trust,” published this week by Cambridge University Press. The editors are Christia, the Ford International Professor of the Social Sciences in MIT’s Department of Political Science, director of the MIT Institute for Data, Systems, and Society, and director of the MIT Sociotechnical Systems Research Center; Graeme Blair of the University of California at Los Angeles; and Jeremy M. Weinstein of Stanford University. MIT News talked to Christia about the project.
Q: What is community policing, and how and where did you study it?
A: The general idea is that community policing, actually connecting the police and the community they are serving in direct ways, is very effective. Many of us have celebrated community policing, and we typically think of the 1990s Chicago and Boston experiences, where community policing was implemented and seen as wildly successful in reducing crime rates, gang violence, and homicide. This model has been broadly exported across the world, even though we don’t have much evidence that it works in contexts that have different resource capacities and institutional footprints.
Our study aims to understand if the hype around community policing is justified by measuring the effects of such policies globally, through field experiments, in six different settings in the Global South. In the same way that MIT’s J-PAL develops field experiments about an array of development interventions, we created programs, in cooperation with local governments, about policing. We studied if it works and how, across very diverse settings, including Uganda and Liberia in Africa, Colombia and Brazil in Latin America, and the Philippines and Pakistan in Asia.
The study, and book, is the result of collaborations with many police agencies. We also highlight how one can work with the police to understand and refine police practices and think very intentionally about all the ethical considerations around such collaborations. The researchers designed the interventions alongside six teams of academics who conducted the experiments, so the book also reflects an interesting experiment in how to put together a collaboration like this.
Q: What did you find?
A: What was fascinating was that we found that locally designed community policing interventions did not generate greater trust or cooperation between citizens and the police, and did not reduce crime in the six regions of the Global South where we carried out our research.
We looked at an array of measures to evaluate the impact, including changes in crime victimization, perceptions of the police, and crime reporting, and did not see any reductions in crime, whether measured in administrative data or in victimization surveys.
The null effects were not driven by police noncompliance with the intervention, by crime displacement, or by heterogeneity in effects across sites, including individual experiences with the police.
Sometimes there is a bias against publishing so-called null results. But because we could show that it wasn’t due to methodological concerns, and because we were able to explain how such changes in resource-constrained environments would have to be preceded by structural reforms, the finding has been received as particularly compelling.
Q: Why did community policing not have an impact in these countries?
A: We felt that it was important to analyze why it doesn’t work. In the book, we highlight three challenges. One involves capacity issues: This is the developing world, and there are low-resource issues to begin with, in terms of the programs police can implement.
The second challenge is the principal-agent problem: the incentives of those within the police may not be aligned with the reform. For example, a station commander and supervisors may not appreciate the importance of adopting community policing, and line officers might not comply. Agency problems within the police are complex when it comes to mechanisms of accountability, and this can undermine the effectiveness of community policing.
A third challenge we highlight is the fact that, to the communities they serve, the police might not seem separate from the actual government. So, it may not be clear if police are seen as independent institutions acting in the best interest of the citizens.
We faced a lot of pushback when we were first presenting our results. The promise of community policing is a story that resonates with many of us; it’s a narrative suggesting that connecting the police to a community has a significant and substantively positive effect. But the outcome didn’t come as a surprise to people from the Global South. They felt the lack of resources, and the potential problems around autonomy and nonalignment, were real.
A new way to create realistic 3D shapes using generative AI
Creating realistic 3D models for applications like virtual reality, filmmaking, and engineering design can be a cumbersome process requiring lots of manual trial and error.
While generative artificial intelligence models for images can streamline artistic processes by enabling creators to produce lifelike 2D images from text prompts, these models are not designed to generate 3D shapes. To bridge the gap, a recently developed technique called Score Distillation leverages 2D image generation models to create 3D shapes, but its output often ends up blurry or cartoonish.
MIT researchers explored the relationships and differences between the algorithms used to generate 2D images and 3D shapes, identifying the root cause of lower-quality 3D models. From there, they crafted a simple fix to Score Distillation, which enables the generation of sharp, high-quality 3D shapes that are closer in quality to the best model-generated 2D images.
Some other methods try to fix this problem by retraining or fine-tuning the generative AI model, which can be expensive and time-consuming.
By contrast, the MIT researchers’ technique achieves 3D shape quality on par with or better than these approaches without additional training or complex postprocessing.
Moreover, by identifying the cause of the problem, the researchers have improved mathematical understanding of Score Distillation and related techniques, enabling future work to further improve performance.
“Now we know where we should be heading, which allows us to find more efficient solutions that are faster and higher-quality,” says Artem Lukoianov, an electrical engineering and computer science (EECS) graduate student who is lead author of a paper on this technique. “In the long run, our work can help facilitate the process to be a co-pilot for designers, making it easier to create more realistic 3D shapes.”
Lukoianov’s co-authors are Haitz Sáez de Ocáriz Borde, a graduate student at Oxford University; Kristjan Greenewald, a research scientist in the MIT-IBM Watson AI Lab; Vitor Campagnolo Guizilini, a scientist at the Toyota Research Institute; Timur Bagautdinov, a research scientist at Meta; and senior authors Vincent Sitzmann, an assistant professor of EECS at MIT who leads the Scene Representation Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Justin Solomon, an associate professor of EECS and leader of the CSAIL Geometric Data Processing Group. The research will be presented at the Conference on Neural Information Processing Systems.
From 2D images to 3D shapes
Diffusion models, such as DALL-E, are a type of generative AI model that can produce lifelike images from random noise. To train these models, researchers add noise to images and then teach the model to reverse the process and remove the noise. The models use this learned “denoising” process to create images based on a user’s text prompts.
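In the standard diffusion setup (a generic sketch, not the specifics of any one model), the forward process mixes a clean image $x_0$ with Gaussian noise, and the network $\epsilon_\phi$ is trained to recover that noise:

\[
x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon, \qquad \epsilon \sim \mathcal{N}(0, I),
\qquad
\mathcal{L}(\phi) = \mathbb{E}_{x_0,\,t,\,\epsilon}\big[\lVert \epsilon - \epsilon_\phi(x_t, t) \rVert^2\big],
\]

where $\bar\alpha_t$ is a noise schedule that shrinks toward zero as the timestep $t$ grows, so larger $t$ means a noisier image.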
But diffusion models underperform at directly generating realistic 3D shapes because there are not enough 3D data to train them. To get around this problem, researchers developed a technique called Score Distillation Sampling (SDS) in 2022 that uses a pretrained diffusion model to combine 2D images into a 3D representation.
The technique involves starting with a random 3D representation, rendering a 2D view of a desired object from a random camera angle, adding noise to that image, denoising it with a diffusion model, then optimizing the random 3D representation so it matches the denoised image. These steps are repeated until the desired 3D object is generated.
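A minimal PyTorch-style sketch of one pass through that loop follows. The names render and predict_noise are hypothetical stand-ins for the differentiable renderer and the frozen, prompt-conditioned 2D diffusion model, and the usual timestep weighting is omitted:

```python
import torch

def sds_update(params, render, predict_noise, alphas_bar, optimizer):
    """One Score Distillation Sampling step (hypothetical helper names).

    params        -- tensor parameters of the 3D representation (e.g., a NeRF)
    render        -- differentiable callable: params -> 2D image from a random camera
    predict_noise -- frozen, text-conditioned 2D diffusion model:
                     (noisy_image, t) -> predicted noise
    alphas_bar    -- 1D tensor of cumulative noise-schedule products
    """
    t = torch.randint(0, len(alphas_bar), ())        # random diffusion timestep
    x = render(params)                               # 2D view of the current 3D scene
    eps = torch.randn_like(x)                        # freshly sampled noise
    a = alphas_bar[t]
    x_noisy = a.sqrt() * x + (1.0 - a).sqrt() * eps  # forward noising of the render
    with torch.no_grad():                            # diffusion model stays frozen
        eps_hat = predict_noise(x_noisy, t)
    # SDS treats (eps_hat - eps) as the gradient of a loss with respect to the
    # rendered image and backpropagates it through the renderer only
    # (timestep weighting w(t) dropped for brevity).
    optimizer.zero_grad()
    x.backward(gradient=eps_hat - eps)
    optimizer.step()
```

Everything that follows turns on how eps in this loop is chosen.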
However, 3D shapes produced this way tend to look blurry or oversaturated.
“This has been a bottleneck for a while. We know the underlying model is capable of doing better, but people didn’t know why this is happening with 3D shapes,” Lukoianov says.
The MIT researchers explored the steps of SDS and identified a mismatch between a formula that forms a key part of the process and its counterpart in 2D diffusion models. The formula tells the model how to update the random representation by adding and removing noise, one step at a time, to make it look more like the desired image.
Since part of this formula involves an equation that is too complex to be solved efficiently, SDS replaces it with randomly sampled noise at each step. The MIT researchers found that this noise leads to blurry or cartoonish 3D shapes.
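Written in the notation commonly used for SDS (a standard statement of the 2022 formulation, not an equation quoted from the new paper), the update is

\[
\nabla_\theta \mathcal{L}_{\mathrm{SDS}} = \mathbb{E}_{t,\,\epsilon}\!\left[ w(t)\,\big(\epsilon_\phi(x_t;\, y,\, t) - \epsilon\big)\,\frac{\partial x}{\partial \theta} \right],
\]

where $x = g(\theta)$ is the image rendered from the 3D representation $\theta$, $y$ is the text prompt, $w(t)$ is a timestep weighting, and $\epsilon$ is the freshly drawn random noise, the stand-in for the term that is too complex to compute exactly.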
An approximate answer
Instead of trying to solve this cumbersome formula precisely, the researchers tested approximation techniques until they identified the best one. Rather than randomly sampling the noise term, their approximation technique infers the missing term from the current 3D shape rendering.
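The article does not spell out the construction, but the forward-noising identity suggests its shape: given the current render $x$ and the noisy image $x_t$, the implied noise is

\[
\hat\epsilon(x_t, x) = \frac{x_t - \sqrt{\bar\alpha_t}\,x}{\sqrt{1-\bar\alpha_t}},
\]

a data-dependent estimate that stays consistent from one optimization step to the next, in place of a fresh random draw (a hedged reading, not the paper’s exact formula).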
“By doing this, as the analysis in the paper predicts, it generates 3D shapes that look sharp and realistic,” he says.
In addition, the researchers increased the resolution of the image rendering and adjusted some model parameters to further boost 3D shape quality.
In the end, they were able to use an off-the-shelf, pretrained image diffusion model to create smooth, realistic-looking 3D shapes without the need for costly retraining. The 3D objects are similarly sharp to those produced using other methods that rely on ad hoc solutions.
“Trying to blindly experiment with different parameters, sometimes it works and sometimes it doesn’t, but you don’t know why. We know this is the equation we need to solve. Now, this allows us to think of more efficient ways to solve it,” he says.
Because their method relies on a pretrained diffusion model, it inherits the biases and shortcomings of that model, making it prone to hallucinations and other failures. Improving the underlying diffusion model would enhance their process.
In addition to studying the formula to see how they could solve it more effectively, the researchers are interested in exploring how these insights could improve image editing techniques.
This work is funded, in part, by the Toyota Research Institute, the U.S. National Science Foundation, the Singapore Defense Science and Technology Agency, the U.S. Intelligence Advanced Research Projects Activity, the Amazon Science Hub, IBM, the U.S. Army Research Office, the CSAIL Future of Data program, the Wistron Corporation, and the MIT-IBM Watson AI Laboratory.
From refugee to MIT graduate student
Mlen-Too Wesley has faded memories of his early childhood in Liberia, but the sharpest one has shaped his life.
Wesley was 4 years old when he and his family boarded a military airplane to flee the West African nation. At the time, the country was embroiled in a 14-year civil war that killed approximately 200,000 people, displaced about 750,000, and starved countless more. When Wesley’s grandmother told him he would enjoy a meal during his flight, Wesley knew his fortune had changed. Yet, his first instinct was to offer his food to the people he left behind.
“I made a decision right then to come back,” Wesley says. “Even as I grew older and spent more time in the United States, I knew I wanted to contribute to Liberia’s future.”
Today, the 38-year-old is committed to empowering Liberians through economic growth. Wesley looked to the MITx MicroMasters program in Data, Economics, and Design of Policy (DEDP) to achieve that goal. He examined issues such as micro-lending, state capture, and investment in health care in courses such as Foundations of Development Policy, Good Economics for Hard Times, and The Challenges of Global Poverty. Through case studies and research, Wesley discovered that economic incentives can encourage desired behaviors, curb corruption, and empower people.
“I couldn’t connect the dots”
Liberia is marred by corruption. On Transparency International’s Corruption Perceptions Index for 2023, Liberia scored 25 out of 100, with zero signifying the highest level of corruption. Yet Wesley grew tired of textbooks and undergraduate professors saying that the status of Liberia and other African nations could be blamed entirely on corruption. Even worse, these sources gave Wesley the impression that nothing could be done to improve his native country. The sentiment frustrated him, he says.
“It struck me as flippant to attribute the challenges faced by billions of people to backward behaviors,” says Wesley. “There are several forces, internal and external, that have contributed to Liberia’s condition. If we really examine them, explore why things happened, and define the change we want, we can plot a way forward to a more prosperous future.”
Driven to examine the economic, political, and social dynamics shaping his homeland and to fulfill his childhood promise, Wesley moved back to Africa in 2013. Over the next 10 years, he merged his interests in entrepreneurship, software development, and economics to better Liberia. He designed a forestry management platform that preserves Liberia’s natural resources, built an online queue for government hospitals to triage patients more effectively, and engineered data visualization tools to support renewable energy initiatives. Yet, to create the impact Wesley wanted, he needed to do more than collect data. He had to analyze and act on it in meaningful ways.
“I couldn’t connect the dots on why things are the way they are,” Wesley says.
“It wasn’t just an academic experience for me”
Wesley knew he needed to dive deeper into data science, and looked to the MicroMasters in DEDP program to help him connect the dots. Established in 2017 by the Abdul Latif Jameel Poverty Action Lab (J-PAL) and MIT Open Learning, the MicroMasters in DEDP program is based on the Nobel Prize-winning work of MIT faculty members Esther Duflo, the Abdul Latif Jameel Professor of Poverty Alleviation and Development Economics, and Abhijit Banerjee, the Ford Foundation International Professor of Economics. Duflo and Banerjee’s research provided an entirely new approach to designing, implementing, and evaluating antipoverty initiatives throughout the world.
The MicroMasters in DEDP program provided the framework Wesley had sought nearly 20 years ago as an undergraduate student. He learned about novel economic incentives that stymied corruption and promoted education.
“It wasn’t just an academic experience for me,” Wesley says. “The classes gave me the tools and the frameworks to analyze my own personal experiences.”
Wesley initially stumbled with the quantitative coursework. Having a demanding career, taking extension courses at another university, and being several years removed from college calculus courses took a toll on him. He had to retake some classes, especially Data Analysis for Social Scientists, several times before he could pass the proctored exam. His persistence paid off. Wesley earned his MicroMasters in DEDP credential in June 2023 and was also admitted into the MIT DEDP master’s program.
“The class twisted my brain in so many different ways,” Wesley says. “The fourth time taking Data Analysis, I began to understand it. I appreciate that MIT did not care that I did poorly on my first try. They cared that over time I understood the material.”
The program’s rigorous mathematics and statistics classes sparked in Wesley a passion for artificial intelligence, especially machine learning and natural language processing. Both provide more powerful ways to extract and interpret data, and Wesley has a special interest in mining qualitative sources for information. He plans to use these tools to compare national development plans over time and among different countries to determine if policymakers are recycling the same words and goals.
Once Wesley earns his master’s degree, he plans to return to Liberia and focus on international development. In the future, he hopes to lead a data-focused organization committed to improving the lives of people in Liberia and the United States.
“Thanks to MIT, I have the knowledge and tools to tackle real-world challenges that traditional economic models often overlook,” Wesley says.