20+ Best Coupon & Voucher Print Templates – Speckyboy

It’s no secret that people love to save money. That explains the continued popularity of coupons and vouchers. They’re still used everywhere, from local grocery stores to big-box retailers.

A well-designed coupon entices customers to visit your store or website. They’re an excellent vehicle for getting leads as well. They spark interest and get people thinking about your brand.

It sounds great, right? If you want to create coupons or vouchers, we have you covered. We’ve rounded up a collection of attractive and easy-to-customize print templates.

The templates below work for multiple use cases. They also work with popular editing applications like Photoshop, InDesign, Illustrator, and Figma. There are great options no matter which app you prefer using.

So, check out these outstanding templates and find one to match your needs. Customers will rush to your location (or website) before you know it.

Coupon & Voucher Templates for Photoshop

Vacay Voucher & Gift Template

This template will make customers think of sunny days in paradise. It includes a tropical flair with beautiful type and graphic effects. It’s a great choice for resorts, hotels, and travel agents.

Vacay Voucher Gift Voucher Photoshop Template

Voucher Photoshop Template

Here’s a template that adds color and fun to the mix. The included files are print-ready and layered for easier editing. The look is professional, friendly, and sure to attract new customers.

Voucher Photoshop Template

Gift Voucher PSD Template

You’ll find a classic retro theme with this Photoshop voucher template. The layout is top-notch and features gorgeous vector shapes. Use it for restaurants and bars, or customize it to match your brand.

Gift Voucher Photoshop Template

Movie Gift Voucher Photoshop Template

Get your popcorn ready and give the gift of a movie. This template features an instantly recognizable look reminiscent of a movie ticket. It’s a unique way to say thank you to customers.

Movie Voucher Gift Voucher Photoshop Template

Elegant Gift Voucher Template

This colorful voucher card template will put a smile on customers’ faces. It evokes a hand-drawn artistic style that’s fun and functional. It suits any business that values a bit of flair in its promotions.

Gift Voucher Photoshop Template

Fun Voucher Gift Voucher Template

Here’s a cheery option for giving customers a discount. The font and color schemes are bold, but you can easily customize them. There’s also room for an eye-catching photo on each side of the document.

Fun Voucher Gift Voucher Photoshop Template

Digital Voucher Template for Photoshop

A high-contrast color scheme is the star of the show in this Photoshop template. Customers can’t help but take notice of the fun look and layout. Nightclubs, salons, and boutiques are perfect choices for this one.

Digital Voucher Design Photoshop Template

Resto Voucher PSD Template

You can use this yummy voucher template to whet your customers’ appetites. Add some delicious photos, customize the text, and keep your kitchen humming. Everyone loves a good deal on good food. Is anyone else hungry?

Resto Voucher Photoshop Template

Coupon & Voucher Templates for InDesign

InDesign Gift Voucher Template

This gift voucher template for InDesign is clean and minimal. It’s perfect for businesses that eschew gimmicks and over-the-top design. Instead, you’ll have something simple and functional. The no-nonsense approach will be a winner with customers.

Gift Voucher InDesign Template

Clean & Modern InDesign Gift Voucher Template

Here’s a great way to add a touch of class to your gift voucher. Spas, salons, and any place that pampers guests will want to check out this template. The simple pleasures are on full display here.

Gift Voucher InDesign Template

Minimal Gift Voucher Template for InDesign

Are you looking for a versatile template with a classic style? This package includes vouchers in both horizontal and vertical layouts. There are plenty of possibilities to make this one your own.

Gift Voucher InDesign Template

Discount & Gift InDesign Voucher Template

You’ll find bold typography and space for attention-grabbing images on this InDesign template. The design is modern and cleverly uses rectangular shapes. The result is a professional document that aims to impress.

Gift Voucher Template for InDesign

Stylish InDesign Gift Voucher Template

Here’s a different take on the previous template. It features triangular shapes and beautiful earth tones. Of course, you can also customize these elements to your heart’s content. It’s a high-end look that goes well with fashion-forward shops.

Gift Voucher Template for InDesign

Coupon & Voucher Templates for Illustrator

Modern Gift Voucher Illustrator Template

A colorful contrast makes this gift certificate a winner. The dark background will make your can’t-miss deal stand out from the competition. Add your logo and a custom image to complete the look.

Voucher Illustrator Template

Minimal Voucher Template for Illustrator

Give your promotions a simple and modern style with this Illustrator template. It’s clean, easy to read, and includes space for all the details. Customize, print, and watch as happy customers come to your door.

Voucher Illustrator Template

Special Offer & Gift Voucher Template

There’s a fun retro vibe happening with this template. The text-based layout lends itself to emphasizing your promotion. There’s no fuss or fancy effects – just a highly versatile document to boost business.

Special Voucher Gift Voucher Illustrator Template

Professional Gift Voucher Illustrator Template

Grab this template, open Adobe Illustrator, and create an attractive coupon in minutes. There is space for your promotional details, contact information, and social media links. And don’t forget your logo and a custom image or two.

Voucher Illustrator Template

Line Art Voucher Template for Illustrator

This template provides an unforgettable artsy feel to your coupons and vouchers. The swirling line art border gives way to a pastel background for text and images. It’s a cool option for any business seeking a modern way to promote its services.

Line Art Voucher Voucher Design Illustrator Template

The Bakry Gift Card & Voucher Template

Give the gift of a tasty treat with this bakery-inspired Illustrator template. The lighthearted design is a fun way to promote a café, coffee shop, or restaurant. Your customers will hurry on over once they have this coupon in hand.

The Bakry Voucher Illustrator Template

Coupon & Voucher Figma Templates

Autumn Fall Voucher Template for Figma

Use this template to celebrate the fall season. It features an autumn color scheme – but you can change it up for use any time of the year. Figma makes customization easy, after all.

Autumn Fall Voucher Template for Figma

Black Friday Voucher Figma Template

Prepare for the biggest day of the year with this Black Friday template for Figma. Offer your best deal, and this standout document will do the rest. Its modern look is perfect for just about any use case.

Black Friday Voucher Figma Template

Templates That Bring Style to Your Brand

Distributing coupons or vouchers is still an effective marketing strategy. It’s a great way to reward loyal customers and attract new ones. And it’s well worth the effort.

The templates above provide you with a great head start. They already feature attention-getting designs. Customize them and get them into the hands of customers. It’s a win-win scenario.

We hope you found this collection useful. Now that you have outstanding templates within reach, it’s time to get creative!


Related Topics

So you want to build a solar or wind farm? Here’s how to decide where.

Deciding where to build new solar or wind installations is often left up to individual developers or utilities, with limited overall coordination. But a new study shows that regional-level planning using fine-grained weather data, information about energy use, and energy system modeling can make a big difference in the design of such renewable power installations. This also leads to more efficient and economically viable operations.

The findings show the benefits of coordinating the siting of solar farms, wind farms, and storage systems, taking into account local and temporal variations in wind, sunlight, and energy demand to maximize the utilization of renewable resources. This approach can reduce the need for sizable investments in storage, and thus the total system cost, while maximizing availability of clean power when it’s needed, the researchers found.

The study, appearing today in the journal Cell Reports Sustainability, was co-authored by Liying Qiu and Rahman Khorramfar, postdocs in MIT’s Department of Civil and Environmental Engineering, and professors Saurabh Amin and Michael Howland.

Qiu, the lead author, says that with the team’s new approach, “we can harness the resource complementarity, which means that renewable resources of different types, such as wind and solar, or different locations can compensate for each other in time and space. This potential for spatial complementarity to improve system design has not been emphasized and quantified in existing large-scale planning.”

Such complementarity will become ever more important as variable renewable energy sources account for a greater proportion of power entering the grid, she says. By coordinating the peaks and valleys of production and demand more smoothly, she says, “we are actually trying to use the natural variability itself to address the variability.”

Typically, in planning large-scale renewable energy installations, Qiu says, “some work on a country level, for example saying that 30 percent of energy should be wind and 20 percent solar. That’s very general.” For this study, the team looked at both weather data and energy system planning modeling at a resolution finer than 10 kilometers (about 6 miles). “It’s a way of determining where should we, exactly, build each renewable energy plant, rather than just saying this city should have this many wind or solar farms,” she explains.

To compile their data and enable high-resolution planning, the researchers relied on a variety of sources that had not previously been integrated. They used high-resolution meteorological data from the National Renewable Energy Laboratory, which is publicly available at 2-kilometer resolution but rarely used in a planning model at such a fine scale. These data were combined with an energy system model they developed to optimize siting at a sub-10-kilometer resolution. To get a sense of how the fine-scale data and model made a difference in different regions, they focused on three U.S. regions — New England, Texas, and California — analyzing up to 138,271 possible siting locations simultaneously for a single region.

By comparing the results of siting based on a typical method vs. their high-resolution approach, the team showed that “resource complementarity really helps us reduce the system cost by aligning renewable power generation with demand,” which should translate directly to real-world decision-making, Qiu says. “If an individual developer wants to build a wind or solar farm and just goes to where there is the most wind or solar resource on average, it may not necessarily guarantee the best fit into a decarbonized energy system.”

That’s because of the complex interactions between production and demand for electricity, as both vary hour by hour, and month by month as seasons change. “What we are trying to do is minimize the difference between the energy supply and demand rather than simply supplying as much renewable energy as possible,” Qiu says. “Sometimes your generation cannot be utilized by the system, while at other times, you don’t have enough to match the demand.”
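Qiu’s supply-demand framing can be sketched in a few lines of code. The example below is a toy illustration with invented hourly numbers, not the study’s model or data: it scores two hypothetical wind sites by how well each one, combined with a fixed solar profile, tracks demand.

```python
# Toy illustration of resource complementarity (invented numbers, not the
# study's data): pick the wind site whose hourly output, added to a fixed
# solar profile, leaves the smallest gap with demand.

demand = [50, 45, 40, 60, 80, 90, 85, 70]   # hypothetical hourly demand (MW)
solar  = [0, 0, 10, 40, 60, 50, 20, 0]      # hypothetical solar output (MW)

# Two candidate wind sites: one windy at night, one windy during the day.
wind_sites = {
    "night_windy": [55, 50, 40, 20, 10, 15, 30, 50],
    "day_windy":   [10, 15, 25, 45, 55, 50, 35, 20],
}

def mismatch(wind):
    """Total absolute gap between renewable supply and demand over the day."""
    return sum(abs(d - (s + w)) for d, s, w in zip(demand, solar, wind))

best = min(wind_sites, key=lambda name: mismatch(wind_sites[name]))
for name, profile in wind_sites.items():
    print(name, mismatch(profile))   # night_windy: 110, day_windy: 225
print("best complement to solar:", best)
```

Here the night-windy site wins even though both sites produce a similar total amount of energy, which is the complementarity point: what matters is when the power arrives, not just how much.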

In New England, for example, the new analysis shows there should be more wind farms in locations where there is a strong wind resource during the night, when solar energy is unavailable. Some locations tend to be windier at night, while others tend to have more wind during the day.

These insights were revealed through the integration of high-resolution weather data and energy system optimization used by the researchers. When planning with lower resolution weather data, which was generated at a 30-kilometer resolution globally and is more commonly used in energy system planning, there was much less complementarity among renewable power plants. Consequently, the total system cost was much higher. The complementarity between wind and solar farms was enhanced by the high-resolution modeling due to improved representation of renewable resource variability.

The researchers say their framework is very flexible and can be easily adapted to any region to account for the local geophysical and other conditions. In Texas, for example, peak winds in the west occur in the morning, while along the south coast they occur in the afternoon, so the two naturally complement each other.

Khorramfar says that this work “highlights the importance of data-driven decision making in energy planning.” The work shows that using such high-resolution data coupled with carefully formulated energy planning model “can drive the system cost down, and ultimately offer more cost-effective pathways for energy transition.”

One thing that was surprising about the findings, says Amin, who is a principal investigator in the MIT Laboratory for Information and Decision Systems, is how significant the gains were from analyzing relatively short-term variations in inputs and outputs that take place in a 24-hour period. “The kind of cost-saving potential by trying to harness complementarity within a day was not something that one would have expected before this study,” he says.

In addition, Amin says, it was also surprising how much this kind of modeling could reduce the need for storage as part of these energy systems. “This study shows that there is actually a hidden cost-saving potential in exploiting local patterns in weather, that can result in a monetary reduction in storage cost.”

The system-level analysis and planning suggested by this study, Howland says, “changes how we think about where we site renewable power plants and how we design those renewable plants, so that they maximally serve the energy grid. It has to go beyond just driving down the cost of energy of individual wind or solar farms. And these new insights can only be realized if we continue collaborating across traditional research boundaries, by integrating expertise in fluid dynamics, atmospheric science, and energy engineering.”

The research was supported by the MIT Climate and Sustainability Consortium and MIT Climate Grand Challenges.

A new biodegradable material to replace certain microplastics

Microplastics are an environmental hazard found nearly everywhere on Earth, released by the breakdown of tires, clothing, and plastic packaging. Another significant source of microplastics is tiny beads that are added to some cleansers, cosmetics, and other beauty products.

In an effort to cut off some of these microplastics at their source, MIT researchers have developed a class of biodegradable materials that could replace the plastic beads now used in beauty products. These polymers break down into harmless sugars and amino acids.

“One way to mitigate the microplastics problem is to figure out how to clean up existing pollution. But it’s equally important to look ahead and focus on creating materials that won’t generate microplastics in the first place,” says Ana Jaklenec, a principal investigator at MIT’s Koch Institute for Integrative Cancer Research.

These particles could also find other applications. In the new study, Jaklenec and her colleagues showed that the particles could be used to encapsulate nutrients such as vitamin A. Fortifying foods with encapsulated vitamin A and other nutrients could help some of the 2 billion people around the world who suffer from nutrient deficiencies.

Jaklenec and Robert Langer, an MIT Institute Professor and member of the Koch Institute, are the senior authors of the paper, which appears today in Nature Chemical Engineering. The paper’s lead author is Linzixuan (Rhoda) Zhang, an MIT graduate student in chemical engineering.

Biodegradable plastics

In 2019, Jaklenec, Langer, and others reported a polymer material that they showed could be used to encapsulate vitamin A and other essential nutrients. They also found that people who consumed bread made from flour fortified with encapsulated iron showed increased iron levels.

However, since then, the European Union has classified this polymer, known as BMC, as a microplastic and included it in a ban that went into effect in 2023. As a result, the Bill and Melinda Gates Foundation, which funded the original research, asked the MIT team if they could design an alternative that would be more environmentally friendly.

The researchers, led by Zhang, turned to a type of polymer that Langer’s lab had previously developed, known as poly(beta-amino esters). These polymers, which have shown promise as vehicles for gene delivery and other medical applications, are biodegradable and break down into sugars and amino acids.

By changing the composition of the material’s building blocks, researchers can tune properties such as hydrophobicity (ability to repel water), mechanical strength, and pH sensitivity. After creating five different candidate materials, the MIT team tested them and identified one that appeared to have the optimal composition for microplastic applications, including the ability to dissolve when exposed to acidic environments such as the stomach.

The researchers showed that they could use these particles to encapsulate vitamin A, as well as vitamin D, vitamin E, vitamin C, zinc, and iron. Many of these nutrients are susceptible to heat and light degradation, but when encased in the particles, the researchers found that the nutrients could withstand exposure to boiling water for two hours.

They also showed that even after being stored for six months at high temperature and high humidity, more than half of the encapsulated vitamins were undamaged.

To demonstrate their potential for fortifying food, the researchers incorporated the particles into bouillon cubes, which are commonly consumed in many African countries. They found that when incorporated into bouillon, the nutrients remained intact after being boiled for two hours.

“Bouillon is a staple ingredient in sub-Saharan Africa, and offers a significant opportunity to improve the nutritional status of many billions of people in those regions,” Jaklenec says.

In this study, the researchers also tested the particles’ safety by exposing them to cultured human intestinal cells and measuring their effects on the cells. At the doses that would be used for food fortification, they found no damage to the cells.

Better cleansing

To explore the particles’ ability to replace the microbeads that are often added to cleansers, the researchers mixed the particles with soap foam. This mixture, they found, could remove permanent marker and waterproof eyeliner from skin much more effectively than soap alone.

Soap mixed with the new microplastic was also more effective than a cleanser that includes polyethylene microbeads, the researchers found. They also discovered that the new biodegradable particles did a better job of absorbing potentially toxic elements such as heavy metals.

“We wanted to use this as a first step to demonstrate how it’s possible to develop a new class of materials, to expand from existing material categories, and then to apply it to different applications,” Zhang says.

With a grant from Estée Lauder, the researchers are now working on further testing the microbeads as a cleanser and potentially other applications, and they plan to run a small human trial later this year. They are also gathering safety data that could be used to apply for GRAS (generally regarded as safe) classification from the U.S. Food and Drug Administration and are planning a clinical trial of foods fortified with the particles.

The researchers hope their work could help to significantly reduce the amount of microplastic released into the environment from health and beauty products.

“This is just one small part of the broader microplastics issue, but as a society we’re beginning to acknowledge the seriousness of the problem. This work offers a step forward in addressing it,” Jaklenec says. “Polymers are incredibly useful and essential in countless applications in our daily lives, but they come with downsides. This is an example of how we can reduce some of those negative aspects.”

The research was funded by the Gates Foundation and the U.S. National Science Foundation.

What do we know about the economics of AI?

For all the talk about artificial intelligence upending the world, its economic effects remain uncertain. There is massive investment in AI but little clarity about what it will produce.

Examining AI has become a significant part of Nobel-winning economist Daron Acemoglu’s work. An Institute Professor at MIT, Acemoglu has long studied the impact of technology in society, from modeling the large-scale adoption of innovations to conducting empirical studies about the impact of robots on jobs.

In October, Acemoglu also shared the 2024 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel with two collaborators, Simon Johnson PhD ’89 of the MIT Sloan School of Management and James Robinson of the University of Chicago, for research on the relationship between political institutions and economic growth. Their work shows that democracies with robust rights sustain better growth over time than other forms of government do.

Since a lot of growth comes from technological innovation, the way societies use AI is of keen interest to Acemoglu, who has published a variety of papers about the economics of the technology in recent months.

“Where will the new tasks for humans with generative AI come from?” asks Acemoglu. “I don’t think we know those yet, and that’s what the issue is. What are the apps that are really going to change how we do things?”

What are the measurable effects of AI?

Since 1947, U.S. GDP growth has averaged about 3 percent annually, with productivity growth at about 2 percent annually. Some predictions have claimed AI will double growth or at least create a higher growth trajectory than usual. By contrast, in one paper, “The Simple Macroeconomics of AI,” published in the August issue of Economic Policy, Acemoglu estimates that AI will produce a “modest increase” in GDP of between 1.1 and 1.6 percent over the next 10 years, with a roughly 0.05 percent annual gain in productivity.

Acemoglu’s assessment is based on recent estimates about how many jobs are affected by AI, including a 2023 study by researchers at OpenAI, OpenResearch, and the University of Pennsylvania, which finds that about 20 percent of U.S. job tasks might be exposed to AI capabilities. A 2024 study by researchers from MIT FutureTech, as well as the Productivity Institute and IBM, finds that about 23 percent of computer vision tasks that can ultimately be automated could be automated profitably within the next 10 years. Still more research suggests the average cost savings from AI is about 27 percent.
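As a rough back-of-the-envelope reading (our reconstruction for illustration, not Acemoglu’s actual calculation, which is more involved), multiplying the three cited figures together lands inside the GDP range reported above:

```python
# Hypothetical combination of the figures cited in the text; treat this as
# illustrative arithmetic, not the paper's model.
exposed_share     = 0.20  # share of U.S. job tasks exposed to AI (2023 study)
automatable_share = 0.23  # fraction profitably automatable within 10 years
cost_savings      = 0.27  # average cost saving on affected tasks

decade_gdp_gain = exposed_share * automatable_share * cost_savings
print(f"implied GDP gain over a decade: {decade_gdp_gain:.4f}")
# ≈ 0.0124, or about 1.2 percent
```

About 1.2 percent over ten years sits within the 1.1 to 1.6 percent range cited above, which is why the headline number is “modest” despite the large-sounding individual percentages.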

When it comes to productivity, “I don’t think we should belittle 0.5 percent in 10 years. That’s better than zero,” Acemoglu says. “But it’s just disappointing relative to the promises that people in the industry and in tech journalism are making.”

To be sure, this is an estimate, and additional AI applications may emerge: As Acemoglu writes in the paper, his calculation does not include the use of AI to predict the shapes of proteins — for which other scholars subsequently shared a Nobel Prize in October.

Other observers have suggested that “reallocations” of workers displaced by AI will create additional growth and productivity, beyond Acemoglu’s estimate, though he does not think this will matter much. “Reallocations, starting from the actual allocation that we have, typically generate only small benefits,” Acemoglu says. “The direct benefits are the big deal.”

He adds: “I tried to write the paper in a very transparent way, saying what is included and what is not included. People can disagree by saying either the things I have excluded are a big deal or the numbers for the things included are too modest, and that’s completely fine.”

Which jobs?

Conducting such estimates can sharpen our intuitions about AI. Plenty of forecasts about AI have described it as revolutionary; other analyses are more circumspect. Acemoglu’s work helps us grasp the scale of change we might expect.

“Let’s go out to 2030,” Acemoglu says. “How different do you think the U.S. economy is going to be because of AI? You could be a complete AI optimist and think that millions of people would have lost their jobs because of chatbots, or perhaps that some people have become super-productive workers because with AI they can do 10 times as many things as they’ve done before. I don’t think so. I think most companies are going to be doing more or less the same things. A few occupations will be impacted, but we’re still going to have journalists, we’re still going to have financial analysts, we’re still going to have HR employees.”

If that is right, then AI most likely applies to a bounded set of white-collar tasks, where large amounts of computational power can process a lot of inputs faster than humans can.

“It’s going to impact a bunch of office jobs that are about data summary, visual matching, pattern recognition, et cetera,” Acemoglu adds. “And those are essentially about 5 percent of the economy.”

While Acemoglu and Johnson have sometimes been regarded as skeptics of AI, they view themselves as realists.

“I’m trying not to be bearish,” Acemoglu says. “There are things generative AI can do, and I believe that, genuinely.” However, he adds, “I believe there are ways we could use generative AI better and get bigger gains, but I don’t see them as the focus area of the industry at the moment.”

Machine usefulness, or worker replacement?

When Acemoglu says we could be using AI better, he has something specific in mind.

One of his crucial concerns about AI is whether it will take the form of “machine usefulness,” helping workers gain productivity, or whether it will be aimed at mimicking general intelligence in an effort to replace human jobs. It is the difference between, say, providing new information to a biotechnologist versus replacing a customer service worker with automated call-center technology. So far, he believes, firms have been focused on the latter type of case. 

“My argument is that we currently have the wrong direction for AI,” Acemoglu says. “We’re using it too much for automation and not enough for providing expertise and information to workers.”

Acemoglu and Johnson delve into this issue in depth in their high-profile 2023 book “Power and Progress” (PublicAffairs), which has a straightforward leading question: Technology creates economic growth, but who captures that economic growth? Is it elites, or do workers share in the gains?

As Acemoglu and Johnson make abundantly clear, they favor technological innovations that increase worker productivity while keeping people employed, which should sustain growth better.

But generative AI, in Acemoglu’s view, focuses on mimicking whole people. This yields something he has for years been calling “so-so technology,” applications that perform at best only a little better than humans, but save companies money. Call-center automation is not always more productive than people; it just costs firms less than workers do. AI applications that complement workers seem generally on the back burner of the big tech players.

“I don’t think complementary uses of AI will miraculously appear by themselves unless the industry devotes significant energy and time to them,” Acemoglu says.

What does history suggest about AI?

The fact that technologies are often designed to replace workers is the focus of another recent paper by Acemoglu and Johnson, “Learning from Ricardo and Thompson: Machinery and Labor in the Early Industrial Revolution — and in the Age of AI,” published in August in the Annual Review of Economics.

The article addresses current debates over AI, especially claims that even if technology replaces workers, the ensuing growth will almost inevitably benefit society widely over time. England during the Industrial Revolution is sometimes cited as a case in point. But Acemoglu and Johnson contend that spreading the benefits of technology does not happen easily. In 19th-century England, they assert, it occurred only after decades of social struggle and worker action.

“Wages are unlikely to rise when workers cannot push for their share of productivity growth,” Acemoglu and Johnson write in the paper. “Today, artificial intelligence may boost average productivity, but it also may replace many workers while degrading job quality for those who remain employed. … The impact of automation on workers today is more complex than an automatic linkage from higher productivity to better wages.”

The paper’s title refers to the social historian E.P. Thompson and economist David Ricardo; the latter is often regarded as the discipline’s second-most influential thinker ever, after Adam Smith. Acemoglu and Johnson assert that Ricardo’s views went through their own evolution on this subject.

“David Ricardo made both his academic work and his political career by arguing that machinery was going to create this amazing set of productivity improvements, and it would be beneficial for society,” Acemoglu says. “And then at some point, he changed his mind, which shows he could be really open-minded. And he started writing about how if machinery replaced labor and didn’t do anything else, it would be bad for workers.”

This intellectual evolution, Acemoglu and Johnson contend, is telling us something meaningful today: There are not forces that inexorably guarantee broad-based benefits from technology, and we should follow the evidence about AI’s impact, one way or another.

What’s the best speed for innovation?

If technology helps generate economic growth, then fast-paced innovation might seem ideal, by delivering growth more quickly. But in another paper, “Regulating Transformative Technologies,” from the September issue of American Economic Review: Insights, Acemoglu and MIT doctoral student Todd Lensman suggest an alternative outlook. If some technologies contain both benefits and drawbacks, it is best to adopt them at a more measured tempo, while those problems are being mitigated.

“If social damages are large and proportional to the new technology’s productivity, a higher growth rate paradoxically leads to slower optimal adoption,” the authors write in the paper. Their model suggests that, optimally, adoption should happen more slowly at first and then accelerate over time.
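The intuition of slower-then-faster adoption can be illustrated with a stylized simulation. Everything below is invented for illustration and is not the model in the paper: harms per unit of adoption shrink over time as safeguards mature, so a gradual ramp can come out ahead of maximum-speed adoption.

```python
# Stylized comparison (invented parameters, not the paper's model): immediate
# full adoption vs. a gradual ramp, when per-unit harms decay over time as
# mitigation improves.

T = 10                   # planning horizon (years)
benefit_per_unit = 1.0   # productivity benefit per unit of adoption
harm0 = 3.0              # initial harm per unit of adoption
harm_decay = 0.5         # harms halve each year as safeguards mature

def net_value(adoption_path):
    """Cumulative benefit minus harm along a given adoption path."""
    total = 0.0
    for t, a in enumerate(adoption_path):
        harm = harm0 * (harm_decay ** t)
        total += a * (benefit_per_unit - harm)
    return total

fast = [1.0] * T                             # adopt fully at maximum speed
slow = [min(1.0, t / 5) for t in range(T)]   # ramp up over five years

print("fast:", round(net_value(fast), 3))    # 4.006
print("slow:", round(net_value(slow), 3))    # 5.843
```

With these invented numbers the ramp comes out ahead; shrink the initial harm or slow its decay and the ordering flips, which is the paper’s point that the optimal tempo depends on the damage profile.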

“Market fundamentalism and technology fundamentalism might claim you should always go at the maximum speed for technology,” Acemoglu says. “I don’t think there’s any rule like that in economics. More deliberative thinking, especially to avoid harms and pitfalls, can be justified.”

Those harms and pitfalls could include damage to the job market, or the rampant spread of misinformation. Or AI might harm consumers, in areas from online advertising to online gaming. Acemoglu examines these scenarios in another paper, “When Big Data Enables Behavioral Manipulation,” forthcoming in American Economic Review: Insights; it is co-authored with Ali Makhdoumi of Duke University, Azarakhsh Malekian of the University of Toronto, and Asu Ozdaglar of MIT.

“If we are using it as a manipulative tool, or too much for automation and not enough for providing expertise and information to workers, then we would want a course correction,” Acemoglu says.

Certainly others might claim innovation has less of a downside or is unpredictable enough that we should not apply any handbrakes to it. And Acemoglu and Lensman, in the September paper, are simply developing a model of innovation adoption.

That model is a response to a trend of the last decade-plus, in which many technologies were hyped as inevitable and celebrated for their disruption. By contrast, Acemoglu and Lensman are suggesting we can reasonably judge the tradeoffs involved in particular technologies, and they aim to spur additional discussion about that.

How can we reach the right speed for AI adoption?

If the idea is to adopt technologies more gradually, how would this occur?

First of all, Acemoglu says, “government regulation has that role.” However, it is not clear what kinds of long-term guidelines for AI might be adopted in the U.S. or around the world.

Secondly, he adds, if the cycle of “hype” around AI diminishes, then the rush to use it “will naturally slow down.” This may well be more likely than regulation, if AI does not produce profits for firms soon.

“The reason why we’re going so fast is the hype from venture capitalists and other investors, because they think we’re going to be closer to artificial general intelligence,” Acemoglu says. “I think that hype is making us invest badly in terms of the technology, and many businesses are being influenced too early, without knowing what to do. We wrote that paper to say, look, the macroeconomics of it will benefit us if we are more deliberative and understanding about what we’re doing with this technology.”

In this sense, Acemoglu emphasizes, hype is a tangible aspect of the economics of AI, since it drives investment in a particular vision of AI, which influences the AI tools we may encounter.

“The faster you go, and the more hype you have, that course correction becomes less likely,” Acemoglu says. “It’s very difficult, if you’re driving 200 miles an hour, to make a 180-degree turn.”

Women’s cross country runs to first NCAA Division III National Championship

Behind All-American performances from senior Christina Crow and juniors Rujuta Sane and Kate Sanderson, the MIT women’s cross country team claimed its first NCAA Division III National Championship on Nov. 23 at the LaVern Gibson Cross Country Course in Indiana.

MIT entered the race as the No. 1 ranked team in the nation after winning its 17th straight NEWMAC conference title and its fourth straight NCAA East Regional Championship in 2024. The Engineers completed a historic season with a run for the record books, taking first in the 6K race to win their first national championship.

The Engineers got out to an early advantage over the University of Chicago through the opening kilometer of the 6K race, with Sanderson among the leaders on the course in seventh place. MIT had all five scoring runners inside the top 30 early in the race.

It was still MIT and the University of Chicago leading the way at the 3K mark, but the Maroons closed the gap on the Engineers, as senior Evelyn Battleson-Gunkel moved toward the front of the pack. MIT’s top seven spread from 14th to 32nd through the 3K mark, showing off the team depth that powered the Engineers throughout the season.

Despite MIT’s early advantage, it was Chicago that had the team lead at the 5K mark, as the top five Maroons on the course spread from 3rd to 34th place to drop Chicago’s team score to 119. Sanderson and Sane found the pace to lead the Engineers in 14th and 17th place, while Crow was in a tight race for the final All-American spot in 41st place, giving MIT a score of 137 at the 5K mark. 

The final 1K of Crow’s collegiate career pushed MIT’s lone senior into an All-American finish with a 35th place performance in 21:43.6. With Sanderson finishing in 21:26.2 to take 16th and Sane in 19th with a time of 21:29.9, sophomore Liv Girand and junior Lexi Fernandez closed in 47th and 51st place, respectively, rallying the Engineers past Chicago over the final 1K to clinch the national title for MIT.

Sanderson is now a two-time All-American after finishing in 34th place during the 2023 National Championship. Crow and Sane earned the honor for the first time. Sanderson and Sane each recorded collegiate personal records in the race. Girand finished with a time of 21:54.2 (47th) while Fernandez had a time of 21:57.6 (51st).

Sophomore Heather Jensen and senior Gillian Roeder helped MIT finish with all seven runners inside the top 55, as Jensen was 54th in 21:58.2 and Roeder was 55th in 21:59.6. MIT finished with an average time of 21:42.3 and a spread of 31.4 seconds.

Study: Browsing negative content online makes mental health struggles worse

People struggling with their mental health are more likely to browse negative content online, and in turn, that negative content makes their symptoms worse, according to a series of studies by researchers at MIT.

The group behind the research has developed a web plug-in tool to help those looking to protect their mental health make more informed decisions about the content they view.

The findings were outlined in an open-access paper by Tali Sharot, an adjunct professor of cognitive neurosciences at MIT and professor at University College London, and Christopher A. Kelly, a former visiting PhD student who was a member of Sharot’s Affective Brain Lab when the studies were conducted, who is now a postdoc at Stanford University’s Institute for Human Centered AI. The findings were published Nov. 21 in the journal Nature Human Behavior.

“Our study shows a causal, bidirectional relationship between health and what you do online. We found that people who already have mental health symptoms are more likely to go online and more likely to browse for information that ends up being negative or fearful,” Sharot says. “After browsing this content, their symptoms become worse. It is a feedback loop.”

The studies analyzed the web browsing habits of more than 1,000 participants by using natural language processing to calculate a negative score and a positive score for each web page visited, as well as scores for anger, fear, anticipation, trust, surprise, sadness, joy, and disgust. Participants also completed questionnaires to assess their mental health and indicated their mood directly before and after web-browsing sessions. The researchers found that participants expressed better moods after browsing less-negative web pages, and participants with worse pre-browsing moods tended to browse more-negative web pages.
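The article does not detail the researchers’ NLP pipeline, but the eight emotion categories listed match what a standard lexicon-based approach produces: each word on a page contributes counts to the affect categories it is associated with. The sketch below illustrates that general idea with a tiny, entirely hypothetical lexicon; it is not the study’s actual method or word list.

```python
# A minimal lexicon-based affect scorer. The lexicon entries here are
# hypothetical stand-ins for illustration only -- a real system would use
# a large curated emotion lexicon.
from collections import Counter

EMOTION_LEXICON = {
    "attack": {"negative", "fear", "anger"},
    "threat": {"negative", "fear"},
    "loss":   {"negative", "sadness"},
    "win":    {"positive", "joy"},
    "gift":   {"positive", "joy", "surprise"},
}

def score_page(text: str) -> dict:
    """Return the fraction of words on a page tied to each affect category."""
    words = text.lower().split()
    counts = Counter()
    for w in words:
        for label in EMOTION_LEXICON.get(w, ()):
            counts[label] += 1
    total = max(len(words), 1)
    return {label: n / total for label, n in counts.items()}

page = "markets fall after attack as threat of loss grows"
print(score_page(page))  # e.g. negative = 3/9, fear = 2/9, ...
```

Averaging such per-page scores over a browsing session is one simple way a session-level negativity measure could be derived.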

In a subsequent study, participants were asked to read information from two web pages randomly selected from either six negative pages or six neutral ones. They then indicated their mood levels both before and after viewing the pages. An analysis found that participants exposed to negative web pages reported being in a worse mood than those who viewed neutral pages, and subsequently visited more-negative pages when asked to browse the internet for 10 minutes.

“The results contribute to the ongoing debate regarding the relationship between mental health and online behavior,” the authors wrote. “Most research addressing this relationship has focused on the quantity of use, such as screen time or frequency of social media use, which has led to mixed conclusions. Here, instead, we focus on the type of content browsed and find that its affective properties are causally and bidirectionally related to mental health and mood.”

To test whether intervention could alter web-browsing choices and improve mood, the researchers provided participants with search engine results pages with three search results for each of several queries. Some participants were provided labels for each search result on a scale of “feel better” to “feel worse.” Other participants were not provided with any labels. Those who were provided with labels were less likely to choose negative content and more likely to choose positive content. A follow-up study found that those who viewed more positive content reported a significantly better mood.

Based on these findings, Sharot and Kelly created a downloadable plug-in tool called “Digital Diet” that offers scores for Google search results in three categories: emotion (whether people find the content positive or negative, on average), knowledge (to what extent information on a webpage helps people understand a topic, on average), and actionability (to what extent information on a webpage is useful on average). MIT electrical engineering and computer science graduate student Jonatan Fontanez ’24, a former undergraduate researcher from MIT in Sharot’s lab, also contributed to the development of the tool. The tool was introduced publicly this week, along with the publication of the paper in Nature Human Behavior.

“People with worse mental health tend to seek out more-negative and fear-inducing content, which in turn exacerbates their symptoms, creating a vicious feedback loop,” Kelly says. “It is our hope that this tool can help them gain greater autonomy over what enters their minds and break negative cycles.”

Seen and heard: The new Edward and Joyce Linde Music Building

Until very recently, Mariano Salcedo, a fourth-year MIT electrical engineering and computer science student majoring in artificial intelligence and decision-making, was planning to apply for a master’s program in computer science at MIT. Then he saw the new Edward and Joyce Linde Music Building, which opened this fall for a selection of classes. “Now, instead of going into computer science, I’m thinking of applying for the master’s program in Music Technology, which is being offered here for the first time next year,” says Salcedo. “The decision is definitely linked to the building, and what the building says about music at MIT.”
 
Scheduled to open fully in February 2025, the Linde Music Building already makes a bold and elegant visual statement. But its most powerful impact will likely be heard as much as seen. Each of the facility’s elements, including the Thomas Tull Concert Hall, every performance and rehearsal space, each classroom, even the stainless-steel metal panels that form the conic canopies over the cube-like building’s three entrances — has been conceived and constructed to create an ideal environment for music. 

Students are already enjoying the ideal acoustics and customized spaces of the Linde Music Building, even as construction on the site continues. Within the building’s thick red-brick walls, they study subjects ranging from Electronic Music Composition to Conducting and Score Reading to Advanced Music Performance. Myriad musical groups, from the MIT jazz combos to the Balinese Gamelan and the Rambax Senegalese Drum Ensemble, explore and enjoy their new and improved homes, as do those students who will create and perfect the next generation of music production hardware and software. 

“For many of us at MIT, music is very close to our hearts,” notes MIT President Sally Kornbluth. “And the new building now puts music right at the heart of the campus. Its exceptional practice and recording spaces will give MIT musicians the conservatory-level tools they deserve, and the beautiful performance hall will exert its own gravitational pull, drawing audiences from across campus and the larger community who love live music.”

The need and the solution

Music has never been a minor pursuit at MIT. More than 1,500 MIT students enroll in music classes each academic year. And more than 500 student musicians participate in one of 30 on-campus ensembles. Yet until recently there was no centralized facility for music instruction or rehearsal. Practice rooms were scattered and poorly insulated, with sound seeping through the walls. Nor was there a truly suitable space for large performances; while Kresge Auditorium has sufficient capacity and splendid minimalist aesthetics, the acoustics are not optimal.

“It would be very difficult to teach biology or engineering in a studio designed for dance or music,” says Jay Scheib, recently appointed section head for Music and Theater Arts and Class of 1949 Professor. “The same goes for teaching music in a mathematics or chemistry classroom. In the past, we’ve done it, but it did limit us. In our theater program, everything changed when we opened the new theater building (W97) in 2017 and could teach theater in spaces intended for theater. We believe the new music building will have a similar effect on our music program. It will inspire our students and musicians and allow them to hear their music as it was intended to be heard. And it will provide an opportunity to convene people, to inhabit the same space, breathe the same air, and exchange ideas and perspectives.”

“Music-making from multiple musical traditions are areas of tremendous growth at MIT, both in terms of performance and academics,” says Keeril Makan, associate dean for strategic initiatives for the School of Humanities, Arts, and Social Sciences (SHASS). The Michael (1949) and Sonja Koerner Music Composition Professor and former head of the Music and Theater Arts Section, Makan was, and remains, intimately involved in the Linde Music Building project. “In this building, we wanted all forms of music to coexist, whether jazz, classical, or music from around the world. This was not easy; different types of music require different conditions. But we took the time and invested in making spaces that would support all musical genres.”

The idea of creating an epicenter for music at MIT is not new. For several decades, MIT planners and administrators studied various plans and sites on campus, including Kendall Square and areas in West Campus. Then, in 2018, one year after the completion of the Theater Arts Building on Vassar Street, and with support from then-president L. Rafael Reif, the Institute received a cornerstone gift for the music building from arts patron Joyce Linde. Along with her late husband and former MIT Corporation member Edward H. Linde ’62, the late Joyce Linde was a longtime MIT supporter. SANAA, a Tokyo-based architectural firm, was selected for the job in April 2019.

“MIT chose SANAA in part because their architecture is so beautiful,” says Vasso Mathes, the senior campus planner in the MIT Office of Campus Planning who helped select the SANAA team. “But also because they understood that this building is about acoustics. And they brought the world’s most renowned acoustics consultant, Nagata Acoustics International founder Yasuhisa Toyota, to the project.”

Where form meets function

Built on the site of a former parking lot, the Linde Music Building is both stunning and subtle. Designed by Kazuyo Sejima and Ryue Nishizawa of SANAA, which won the 2010 Pritzker Architecture Prize, the three-volume red brick structure centers both the natural and built environments of MIT’s West Campus — harmonizing effortlessly with Eero Saarinen’s Kresge Auditorium and iconic MIT Chapel, both adjacent, while blending seamlessly with surrounding athletic fields and existing landscaping. With a total of 35,000 square feet of usable space, the building’s three distinct volumes dialogue beautifully with their surroundings. The curved roof reprises elements of Kresge Auditorium, while the exterior evokes Boston and Cambridge’s archetypal facades. The glass-walled lobby, where the three cubic volumes converge, is surprisingly intimate, with ample natural light and inviting views onto three distinct segments of campus. 

“One thing I love about this project is that each program has its own identity in form,” says co-founder and principal Ryue Nishizawa of SANAA. “And there are also in-between spaces that can breathe and blend inside and outside spaces, creating a landscape while preserving the singularity of each program.”

There are myriad signature features — particularly the acoustic features designed by Nagata Acoustics. The Beatrice and Stephen Erdely Music and Culture Space offers the building’s most robust acoustic insulation. Conceived as a home for MIT’s Rambax Senegalese Drum Ensemble and Balinese Gamelan — as well as other music ensembles — the high-ceilinged box-in-box rehearsal space features alternating curved wall panels. The first set reflects sound, the second set absorbs it. The two panel styles are virtually identical to the eye. 

With a maximum seating capacity of 390, the Thomas Tull Concert Hall features a suite of gently rising rows that circle a central performance area. The hall can be configured for almost any style and size of performance, from a soloist in the round to a full jazz ensemble. A retractable curtain, an overhanging ring of glass panels, and the same alternating series of curved wall panels offer adaptable and exquisite sound conditions for performers and audience. A season of events is planned for the spring, starting on Feb. 15, 2025, with a celebratory public program and concert. Classrooms, rehearsal spaces, and technical spaces in the Jae S. and Kyuho Lim Music Maker Pavilion — where students will develop state-of-the-art production tools, software, and musical instruments — are similarly outfitted to create a nearly ideal sound environment.

While acoustic concerns drove the design process for the Linde Music Building, they did not dampen it. Architects, builders, and vendors repeatedly found ingenious and understated ways to infuse beauty into spaces conceived primarily around sound. “There are many technical specifications we had to consider and acoustic conditions we had to create,” says co-founder and principal Kazuyo Sejima of SANAA. “But we didn’t want this to be a purely technical building; rather, a building where people can enjoy creating and listening to music, enjoy coming together, in a space that was functional, but also elegant.”

Realized with sustainable methods and materials, the building features radiant-heat flooring, LED lighting, high-performance thermally broken windows, and a green roof on each volume. A new landscape and underground filters mitigate flood risk and treat rain and stormwater. A two-level 142-space parking garage occupies the space beneath the building. The outdoor scene is completed by Madrigal, a site-specific sculpture by Sanford Biggers. Biggers was selected by a committee formed for this project under MIT’s Percent-for-Art program, which commissioned the work and is administered by the List Visual Arts Center. The 18-foot metal, resin, and mixed-media piece references the African American quilting tradition, weaving, as in a choral composition, diverse patterns and voices into a colorful counterpoint. “Madrigal stands as a vibrant testament to the power of music, tradition, and the enduring spirit of collaboration across time,” says List Visual Arts Center director Paul Ha. “It connects our past and future while enriching our campus and inspiring all who encounter it.”

New harmonies

With a limited opening for classes this fall, the Linde Music Building is already humming with creative activity. There are hands-on workshops for the many sections of class 21M.030 (Introduction to Musics of the World) — one of SHASS’s most popular CI-H classes. Students of music technology hone their skills in digital instrument design and electronic music composition. MIT Balinese Gamelan and the drummers of Rambax enjoy the sublime acoustics of the Music and Culture Space, where they can hear and refine their work in exquisite detail. 

“It is exciting for me, and all the other students who love music, to be able to take classes in this space completely devoted to music and music technology,” says fourth-year student Mariano Salcedo. “To work in spaces that are made specifically for music and musicians … for us, it’s a nice way of being seen.”

The Linde Music Building will certainly help MIT musicians feel seen and heard. But it will also enrich the MIT experience for students in all schools and departments. “Music courses at MIT have been popular with students across disciplines. I’m incredibly thrilled that students will have brand-new, brilliantly designed spaces for performance, instruction, and prototyping,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer, dean of the School of Engineering, and Vannevar Bush Professor of Electrical Engineering and Computer Science. “The building will also offer tremendous opportunities for students to gather, build community, and innovate across disciplines.”

“This building and its three programs encapsulate the breadth of interest among our students,” says Melissa Nobles, MIT chancellor and Class of 1922 Professor of Political Science. Nobles was a steadfast advocate for the music building project. “It will strengthen our already-robust music community and will draw new people in.” 

The Linde Music Building has inspired other members of the MIT community. “Now faculty can use these truly wonderful spaces for their research,” says Makan. “The offices here are also studios, and have acoustic treatments and sound isolation. Musicians and music technologists can work in those spaces.” Makan is composing a piece for solo violin to be premiered in the Thomas Tull Concert Hall early next year. During the performance, student violinists will be positioned strategically at various points around the hall to accompany the piece, taking full advantage of the space’s singular acoustics.

Agustín Rayo, the Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences, expects the Linde Music Building to inspire people beyond the MIT community as well. “Of course this building brings incredible resources to MIT’s music program: top-quality rehearsal spaces, a professional-grade recording studio, and new labs for our music technology program,” he says. “But the world-class concert hall will also create new opportunities to connect with people in the Boston area. This is truly a jewel of the MIT campus.”

February open house and concert

The MIT Music and Theater Arts Section plans to host an open house in the new building on Feb. 15, 2025. Members of the MIT community and the general public will be invited to an afternoon of activities and performances. The celebration of music will continue with a series of concerts open to the public throughout the spring. Details will be available at the Music and Theater Arts website.

Want to design the car of the future? Here are 8,000 designs to get you started.

Car design is an iterative and proprietary process. Carmakers can spend several years on the design phase for a car, tweaking 3D forms in simulations before building out the most promising designs for physical testing. The details and specs of these tests, including the aerodynamics of a given car design, are typically not made public. Significant advances in performance, such as in fuel efficiency or electric vehicle range, can therefore be slow and siloed from company to company.

MIT engineers say that the search for better car designs can speed up exponentially with the use of generative artificial intelligence tools that can plow through huge amounts of data in seconds and find connections to generate a novel design. While such AI tools exist, the data they would need to learn from have not been available, at least in any sort of accessible, centralized form.

But now, the engineers have made just such a dataset available to the public for the first time. Dubbed DrivAerNet++, the dataset encompasses more than 8,000 car designs, which the engineers generated based on the most common types of cars in the world today. Each design is represented in 3D form and includes information on the car’s aerodynamics — the way air would flow around a given design, based on simulations of fluid dynamics that the group carried out for each design.

In a new dataset that includes more than 8,000 car designs, MIT engineers simulate the aerodynamics for a given car shape, which they represent in various modalities, including “surface fields” (left) and “streamlines” (right).

Credit: Courtesy of Mohamed Elrefaie

Each of the dataset’s 8,000 designs is available in several representations, such as mesh, point cloud, or a simple list of the design’s parameters and dimensions. As such, the dataset can be used by different AI models that are tuned to process data in a particular modality.
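The dataset’s published loaders and file formats are not described in this article, but the idea of one design living in several modalities can be illustrated generically: for example, converting a triangle mesh into a point cloud so a point-based model can consume the same geometry as a mesh-based one. The sketch below uses a toy mesh as a stand-in for a car surface; it is an illustration of the concept, not DrivAerNet++ code.

```python
# Sample a point cloud from a triangle mesh via barycentric coordinates.
# The "mesh" here is a toy stand-in for a car body surface.
import numpy as np

rng = np.random.default_rng(0)

vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.2]], dtype=float)
faces = np.array([[0, 1, 2], [1, 3, 2]])

def sample_point_cloud(vertices, faces, n_points=1024):
    """Uniformly sample points on the mesh surface."""
    tris = vertices[faces]                                        # (F, 3, 3)
    # Face areas: larger triangles should be sampled proportionally more often.
    cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Random barycentric coordinates give uniform points within each triangle.
    u, v = rng.random(n_points), rng.random(n_points)
    flip = u + v > 1
    u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
    t = tris[idx]
    return t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0]) + v[:, None] * (t[:, 2] - t[:, 0])

cloud = sample_point_cloud(vertices, faces)
print(cloud.shape)  # (1024, 3)
```

A parameter-list modality, by contrast, is simply a fixed-length vector of dimensions and specs, which suits tabular regressors rather than geometric networks.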

DrivAerNet++ is the largest open-source dataset for car aerodynamics that has been developed to date. The engineers envision it being used as an extensive library of realistic car designs, with detailed aerodynamics data that can be used to quickly train any AI model. These models can then just as quickly generate novel designs that could potentially lead to more fuel-efficient cars and electric vehicles with longer range, in a fraction of the time that it takes the automotive industry today.

“This dataset lays the foundation for the next generation of AI applications in engineering, promoting efficient design processes, cutting R&D costs, and driving advancements toward a more sustainable automotive future,” says Mohamed Elrefaie, a mechanical engineering graduate student at MIT.

Elrefaie and his colleagues will present a paper detailing the new dataset, and AI methods that could be applied to it, at the NeurIPS conference in December. His co-authors are Faez Ahmed, assistant professor of mechanical engineering at MIT, along with Angela Dai, associate professor of computer science at the Technical University of Munich, and Florin Marar of BETA CAE Systems.

Filling the data gap

Ahmed leads the Design Computation and Digital Engineering Lab (DeCoDE) at MIT, where his group explores ways in which AI and machine-learning tools can be used to enhance the design of complex engineering systems and products, including car technology.

“Often when designing a car, the forward process is so expensive that manufacturers can only tweak a car a little bit from one version to the next,” Ahmed says. “But if you have larger datasets where you know the performance of each design, now you can train machine-learning models to iterate fast so you are more likely to get a better design.”

And speed is particularly pressing now in advancing car technology.

“This is the best time for accelerating car innovations, as automobiles are one of the largest polluters in the world, and the faster we can shave off that contribution, the more we can help the climate,” Elrefaie says.

In looking at the process of new car design, the researchers found that, while there are AI models that could crank through many car designs to generate optimal designs, the car data that is actually available is limited. Some researchers had previously assembled small datasets of simulated car designs, while car manufacturers rarely release the specs of the actual designs they explore, test, and ultimately manufacture.

The team sought to fill the data gap, particularly with respect to a car’s aerodynamics, which plays a key role in setting the range of an electric vehicle and the fuel efficiency of an internal combustion engine. The challenge, they realized, was in assembling a dataset of thousands of car designs, each of which is physically accurate in its function and form, without the benefit of physically testing and measuring their performance.

To build a dataset of car designs with physically accurate representations of their aerodynamics, the researchers started with several baseline 3D models that were provided by Audi and BMW in 2014. These models represent three major categories of passenger cars: fastback (sedans with a sloped back end), notchback (sedans or coupes with a slight dip in their rear profile) and estateback (such as station wagons with more blunt, flat backs). The baseline models are thought to bridge the gap between simple designs and more complicated proprietary designs, and have been used by other groups as a starting point for exploring new car designs.

Library of cars

In their new study, the team applied a morphing operation to each of the baseline car models. This operation systematically made a slight change to each of 26 parameters in a given car design, such as its length, underbody features, windshield slope, and wheel tread; each result was labeled a distinct car design and added to the growing dataset. Meanwhile, the team ran an optimization algorithm to ensure that each new design was indeed distinct, and not a copy of an already-generated design. They then translated each 3D design into different modalities, such that a given design can be represented as a mesh, a point cloud, or a list of dimensions and specs.
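The morph-then-check pattern described above can be sketched in a few lines. This is not the team’s actual pipeline: the parameter vector, perturbation scale, and distance threshold below are hypothetical stand-ins, and the real optimization over 3D geometry is far more involved.

```python
# Grow a set of distinct designs by perturbing parameter vectors and
# rejecting near-duplicates. All numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(42)

N_PARAMS = 26                 # e.g. length, windshield slope, wheel tread, ...
baseline = np.ones(N_PARAMS)  # a normalized baseline design
MIN_DISTANCE = 0.05           # reject candidates too close to existing designs

def morph(design, scale=0.03):
    """Apply a small random multiplicative change to every parameter."""
    return design * (1 + rng.normal(0, scale, size=design.shape))

dataset = [baseline]
while len(dataset) < 100:
    parent = dataset[rng.integers(len(dataset))]
    candidate = morph(parent)
    # Distinctness check against every design already in the dataset.
    if min(np.linalg.norm(candidate - d) for d in dataset) >= MIN_DISTANCE:
        dataset.append(candidate)

print(len(dataset))  # 100
```

In the real workflow, each accepted design would then be meshed and handed to a fluid-dynamics solver, which is where the bulk of the 3 million CPU hours went.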

The researchers also ran complex, computational fluid dynamics simulations to calculate how air would flow around each generated car design. In the end, this effort produced more than 8,000 distinct, physically accurate 3D car forms, encompassing the most common types of passenger cars on the road today.

To produce this comprehensive dataset, the researchers spent over 3 million CPU hours using the MIT SuperCloud, and generated 39 terabytes of data. (For comparison, it’s estimated that the entire printed collection of the Library of Congress would amount to about 10 terabytes of data.)

The engineers say that researchers can now use the dataset to train a particular AI model. For instance, an AI model could be trained on a part of the dataset to learn car configurations that have certain desirable aerodynamics. Within seconds, the model could then generate a new car design with optimized aerodynamics, based on what it has learned from the dataset’s thousands of physically accurate designs.

The researchers say the dataset could also be used for the inverse goal. For instance, after training an AI model on the dataset, designers could feed the model a specific car design and have it quickly estimate the design’s aerodynamics, which can then be used to compute the car’s potential fuel efficiency or electric range — all without carrying out expensive building and testing of a physical car.
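The inverse, surrogate-model use just described — parameters in, aerodynamics estimate out — can be illustrated with a toy regression. The training data below is synthetic and the linear model is a deliberate simplification (real aerodynamics are nonlinear); a genuine surrogate would be trained on DrivAerNet++ itself.

```python
# A toy surrogate: fit a regressor from 26 design parameters to a drag
# coefficient, then query it instead of running a CFD simulation.
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in: 500 "designs" with a noisy linear ground-truth drag.
X = rng.random((500, 26))
true_w = rng.normal(0, 0.02, size=26)
cd = 0.30 + X @ true_w + rng.normal(0, 0.001, size=500)

# Fit a linear surrogate by least squares, with an intercept column.
X_aug = np.hstack([X, np.ones((500, 1))])
w, *_ = np.linalg.lstsq(X_aug, cd, rcond=None)

def predict_drag(params):
    """Estimate the drag coefficient of a new design in milliseconds."""
    return np.append(params, 1.0) @ w

new_design = rng.random(26)
estimate = predict_drag(new_design)
```

Such an estimate could then feed a range or fuel-efficiency calculation, replacing an expensive simulate-or-build step in the inner design loop.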

“What this dataset allows you to do is train generative AI models to do things in seconds rather than hours,” Ahmed says. “These models can help lower fuel consumption for internal combustion vehicles and increase the range of electric cars — ultimately paving the way for more sustainable, environmentally friendly vehicles.”

This work was supported, in part, by the German Academic Exchange Service and the Department of Mechanical Engineering at MIT.

MIT delegation mainstreams biodiversity conservation at the UN Biodiversity Convention, COP16

For the first time, MIT sent an organized engagement to the global Conference of the Parties for the Convention on Biological Diversity, which this year was held Oct. 21 to Nov. 1 in Cali, Colombia.

The 10 delegates to COP16 included faculty, researchers, and students from the MIT Environmental Solutions Initiative (ESI), the Department of Electrical Engineering and Computer Science (EECS), the Computer Science and Artificial Intelligence Laboratory (CSAIL), the Department of Urban Studies and Planning (DUSP), the Institute for Data, Systems, and Society (IDSS), and the Center for Sustainability Science and Strategy.

In previous years, MIT faculty had participated sporadically in the discussions. This organized engagement, led by the ESI, is significant because it brought representatives from many of the groups working on biodiversity across the Institute; showcased the breadth of MIT’s research in more than 15 events including panels, roundtables, and keynote presentations across the Blue and Green Zones of the conference (with the Blue Zone representing the primary venue for the official negotiations and discussions and the Green Zone representing public events); and created an experiential learning opportunity for students who followed specific topics in the negotiations and throughout side events.

The conference also gathered attendees from governments, nongovernmental organizations, businesses, other academic institutions, and practitioners focused on stopping global biodiversity loss and advancing the 23 goals of the Kunming-Montreal Global Biodiversity Framework (KMGBF), an international agreement adopted in 2022 to guide global efforts to protect and restore biodiversity through 2030.

MIT’s involvement was particularly pronounced when addressing goals related to building coalitions of sub-national governments (targets 11, 12, 14); technology and AI for biodiversity conservation (targets 20 and 21); shaping equitable markets (targets 3, 11, and 19); and informing an action plan for Afro-descendant communities (targets 3, 10, and 22).

Building coalitions of sub-national governments

The ESI’s Natural Climate Solutions (NCS) Program supported two separate coalitions of Latin American cities, the Coalition of Cities Against Illicit Economies in the Biogeographic Chocó Region and the Colombian Amazonian Cities coalition, which successfully signed declarations to advance specific targets of the KMGBF (the aforementioned targets 11, 12, and 14).

This was accomplished through roundtables and discussions where team members — including Marcela Angel, research program director at the MIT ESI; Angelica Mayolo, ESI Martin Luther King Fellow 2023-25; and Silvia Duque and Hannah Leung, MIT Master’s in City Planning students — presented a set of multi-scale actions including transnational strategies, recommendations to strengthen local and regional institutions, and community-based actions to promote the conservation of the Biogeographic Chocó as an ecological corridor.

“There is an urgent need to deepen the relationship between academia and local governments of cities located in biodiversity hotspots,” said Angel. “Given the scale and unique conditions of Amazonian cities, pilot research projects present an opportunity to test and generate a proof of concept. These could generate catalytic information needed to scale up climate adaptation and conservation efforts in socially and ecologically sensitive contexts.”

ESI’s research also provided key inputs for the creation of the Fund for the Biogeographic Chocó Region, a multi-donor fund launched within the framework of COP16 by a coalition composed of Colombia, Ecuador, Panamá, and Costa Rica. The fund aims to support biodiversity conservation, ecosystem restoration, climate change mitigation and adaptation, and sustainable development efforts across the region.

Technology and AI for biodiversity conservation

Data, technology, and artificial intelligence are playing an increasing role in how we understand biodiversity and ecosystem change globally. Professor Sara Beery’s research group at MIT focuses on this intersection, developing AI methods that enable species and environmental monitoring at unprecedented spatial, temporal, and taxonomic scales.

During the International Union of Biological Diversity Science-Policy Forum, a high-level COP16 segment dedicated to recommendations from the scientific and academic community, Beery spoke on a panel about how these technological advancements can help humanity achieve its biodiversity targets, alongside María Cecilia Londoño, scientific information manager of the Humboldt Institute and co-chair of the Global Biodiversity Observations Network, and Josh Tewksbury, director of the Smithsonian Tropical Research Institute, among others. The panel agreed that AI innovation is needed, but stressed direct human-AI partnership, AI capacity building, and data and AI policy that ensure equitable access to, and benefit from, these technologies.

As a direct outcome of the session, AI was emphasized for the first time in the statement on behalf of science and academia delivered to the high-level segment of COP16 by Hernando Garcia, director of the Humboldt Institute, and David Skorton, secretary general of the Smithsonian Institution.

That statement read, “To effectively address current and future challenges, urgent action is required in equity, governance, valuation, infrastructure, decolonization and policy frameworks around biodiversity data and artificial intelligence.”

Beery also organized a panel at the GEOBON pavilion in the Blue Zone on scaling biodiversity monitoring with AI, which brought together global leaders from AI research, infrastructure development, capacity and community building, and policy and regulation. The panel was initiated at, and its experts drawn from the participants of, the recent Aspen Global Change Institute Workshop on Overcoming Barriers to Impact in AI for Biodiversity, co-organized by Beery.

Shaping equitable markets

In a side event co-hosted by the ESI with CAF-Development Bank of Latin America, researchers from ESI’s Natural Climate Solutions Program — including Marcela Angel; Angelica Mayolo; Jimena Muzio, ESI research associate; and Martin Perez Lara, ESI research affiliate and director for Forest Climate Solutions Impact and Monitoring at World Wide Fund for Nature of the U.S. — presented results of a study titled “Voluntary Carbon Markets for Social Impact: Comprehensive Assessment of the Role of Indigenous Peoples and Local Communities (IPLC) in Carbon Forestry Projects in Colombia.” The report highlighted the structural barriers that hinder effective participation of IPLC, and proposed a conceptual framework to assess IPLC engagement in voluntary carbon markets.

Communicating these findings is important because the global carbon market has experienced a credibility crisis since 2023, driven by critical assessments in academic literature and journalism questioning the quality of mitigation results, and by persistent concerns about the engagement of private actors with IPLC. Nonetheless, carbon forestry projects have expanded rapidly in Indigenous, Afro-descendant, and local communities’ territories, and there is a need to assess the relationships between private actors and IPLC and to propose pathways for equitable participation.

Panelists pose at the equitable markets side event at the Latin American Pavilion in the Blue Zone.

The research presentation and a subsequent panel — with representatives of Asocarbono (the Colombian association of carbon project developers), Fondo Acción, and CAF — discussed recommendations for all actors in the carbon-certificate value chain, including equitable benefit-sharing, safeguards compliance, increased accountability, enhanced governance structures, stronger institutions, and the regulatory frameworks necessary to create an inclusive and transparent market.

Informing an action plan for Afro-descendant communities

The Afro-Interamerican Forum on Climate Change (AIFCC), an international network working to highlight the critical role of Afro-descendant peoples in global climate action, was also present at COP16.

At the Afro Summit, Mayolo presented key recommendations prepared collectively by the members of AIFCC to the technical secretariat of the Convention on Biological Diversity (CBD). The recommendations emphasize:

  • creating financial tools for conservation and supporting Afro-descendant land rights;
  • including a credit guarantee fund for countries that recognize Afro-descendant collective land titling and research on their contributions to biodiversity conservation;
  • calling for increased representation of Afro-descendant communities in international policy forums;
  • capacity-building for local governments; and
  • strategies for inclusive growth in green business and energy transition.

These actions aim to promote inclusive and sustainable development for Afro-descendant populations.

“Attending COP16 with a large group from MIT contributing knowledge and informed perspectives at 15 separate events was a privilege and honor,” says MIT ESI Director John E. Fernández. “This demonstrates the value of the ESI as a powerful research and convening body at MIT. Science is telling us unequivocally that climate change and biodiversity loss are the two greatest challenges that we face as a species and a planet. MIT has the capacity, expertise, and passion to address not only the former, but also the latter, and the ESI is committed to facilitating the very best contributions across the institute for the critical years that are ahead of us.”

A fuller overview of the conference is available via The MIT Environmental Solutions Initiative’s Primer of COP16.

Liquid on Mars was not necessarily all water

Dry river channels and lake beds on Mars point to the long-ago presence of a liquid on the planet’s surface, and the minerals observed from orbit and from landers seem to many to prove that the liquid was ordinary water. 

Not so fast, the authors of a new Perspectives article in Nature Geoscience suggest. Water is only one of two possible liquids under what are thought to be the conditions present on ancient Mars. The other is liquid carbon dioxide (CO2), and it may actually have been easier for CO2 in the atmosphere to condense into a liquid under those conditions than for water ice to melt. 
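The phase boundaries behind that claim can be checked with a crude bounding-box test against CO2's well-known triple point (about 216.6 K at 5.2 bar) and critical point (about 304.1 K at 73.8 bar). The sketch below is a simplification that ignores the exact saturation curve, and the early-Mars surface conditions plugged in are illustrative assumptions, not values from the paper:

```python
# Crude check: can CO2 be liquid at a given temperature and pressure?
# Liquid CO2 exists only between the triple point (~216.6 K, ~5.2 bar)
# and the critical point (~304.1 K, ~73.8 bar). This bounding-box test
# ignores the saturation curve, so it is a necessary, not sufficient, check.

CO2_TRIPLE_K, CO2_TRIPLE_BAR = 216.6, 5.2
CO2_CRITICAL_K = 304.1
WATER_FREEZE_K = 273.15

def co2_liquid_possible(temp_k: float, pressure_bar: float) -> bool:
    return CO2_TRIPLE_K < temp_k < CO2_CRITICAL_K and pressure_bar > CO2_TRIPLE_BAR

def water_liquid_possible(temp_k: float) -> bool:
    # Ignoring freezing-point depression by salts and pressure effects.
    return temp_k > WATER_FREEZE_K

# Hypothetical early-Mars surface: a dense CO2 atmosphere (~6 bar) at 230 K.
t, p = 230.0, 6.0
print(co2_liquid_possible(t, p))  # CO2 could condense to a liquid here...
print(water_liquid_possible(t))   # ...while water would remain frozen
```

Under such a cold, dense CO2 atmosphere, the atmosphere sits above CO2's triple-point pressure while the surface stays below water's melting point, which is the sense in which condensing liquid CO2 could be "easier" than melting water ice.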

While others have suggested that liquid CO2 (LCO2) might be the source of some of the river channels seen on Mars, the mineral evidence has seemed to point uniquely to water. However, the new paper cites recent studies of carbon sequestration, the process of burying liquefied CO2 recovered from Earth’s atmosphere deep in underground caverns, which show that similar mineral alteration can occur in liquid CO2 as in water, sometimes even more rapidly.

The new paper is led by Michael Hecht, principal investigator of the MOXIE instrument aboard the NASA Mars Rover Perseverance. Hecht, a research scientist at MIT’s Haystack Observatory and a former associate director, says, “Understanding how sufficient liquid water was able to flow on early Mars to explain the morphology and mineralogy we see today is probably the greatest unsettled question of Mars science. There is likely no one right answer, and we are merely suggesting another possible piece of the puzzle.”

In the paper, the authors discuss the compatibility of their proposal with current knowledge of Martian atmospheric content and implications for Mars surface mineralogy. They also explore the latest carbon sequestration research and conclude that “LCO2–mineral reactions are consistent with the predominant Mars alteration products: carbonates, phyllosilicates, and sulfates.” 

The argument for liquid CO2 on the Martian surface is not an all-or-nothing scenario: liquid CO2, liquid water, or a combination of the two may have produced the geomorphological and mineralogical evidence for a liquid on Mars.

Three plausible cases for liquid CO2 on the Martian surface are proposed and discussed: stable surface liquid, basal melting under CO2 ice, and subsurface reservoirs. The likelihood of each depends on the actual inventory of CO2 at the time, as well as the temperature conditions on the surface.

The authors acknowledge that the tested sequestration conditions, where the liquid CO2 is above room temperature at pressures of tens of atmospheres, are very different from the cold, relatively low-pressure conditions that might have produced liquid CO2 on early Mars. They call for further laboratory investigations under more realistic conditions to test whether the same chemical reactions occur.

Hecht explains, “It’s difficult to say how likely it is that this speculation about early Mars is actually true. What we can say, and we are saying, is that the likelihood is high enough that the possibility should not be ignored.”