Students research pathways for MIT to reach decarbonization goals

A number of emerging technologies hold promise for helping organizations move away from fossil fuels and achieve deep decarbonization. The challenge is deciding which technologies to adopt, and when.

MIT, which has a goal of eliminating direct campus emissions by 2050, must make such decisions sooner than most to achieve its mission. That was the challenge at the heart of the recently concluded class 4.S42 (Building Technology — Carbon Reduction Pathways for the MIT Campus).

The class brought together undergraduate and graduate students from across the Institute to learn about different technologies and decide on the best path forward. It concluded with a final report as well as student presentations to members of MIT’s Climate Nucleus on May 9.

“The mission of the class is to put together a cohesive document outlining how MIT can reach its goal of decarbonization by 2050,” says Morgan Johnson Quamina, an undergraduate in the Department of Civil and Environmental Engineering. “We’re evaluating how MIT can reach these goals on time, what sorts of technologies can help, and how quickly and aggressively we’ll have to move. The final report details a ton of scenarios for partial and full implementation of different technologies, outlines timelines for everything, and features recommendations.”

The class was taught by professor of architecture Christoph Reinhart but included presentations by other faculty about low- and zero-carbon technology areas in their fields, including advanced nuclear reactors, deep geothermal energy, carbon capture, and more.

The students’ work served as an extension of MIT’s Campus Decarbonization Working Group, which Reinhart co-chairs with Director of Sustainability Julie Newman. The group is charged with developing a technology roadmap for the campus to reach its goal of decarbonizing its energy systems.

Reinhart says the class was a way to leverage the energy and creativity of students to accelerate his group’s work.

“It’s very much focused on establishing a vision for what could happen at MIT,” Reinhart says. “We are trying to bring these technologies together so that we see how this [decarbonization process] would actually look on our campus.”

A class with impact

Throughout the semester, every Thursday from 9 a.m. to 12 p.m., around 20 students gathered to explore different decarbonization technology pathways. They also discussed energy policies, methods for evaluating risk, and future electric grid supply changes in New England.

“I love that this work can have a real-world impact,” says Emile Germonpre, a master’s student in the Department of Nuclear Science and Engineering. “You can tell people aren’t thinking about grades or workload — I think people would’ve loved it even if the workload was doubled. Everyone is just intrinsically motivated to help solve this problem.”

The classes typically began with an introduction to one of 10 different technologies. The introductions covered technical maturity, ease of implementation, costs, and how to model the technology’s impact on campus emissions. Students were then split into teams to evaluate each technology’s feasibility.

“I’ve learned a lot about decarbonization and climate change,” says Johnson Quamina. “As an undergrad, I haven’t had many focused classes like this. But it was really beneficial to learn about some of these technologies I hadn’t even heard of before. It’s awesome to be contributing to the community like this.”

As part of the class, students also developed a model that visualizes each intervention’s effect on emissions, allowing users to select interventions or combinations of interventions to see how they shape emissions trajectories.

“We have a physics-based model that takes into account every building,” says Reinhart. “You can look at variants where we retrofit buildings, where we add rooftop photovoltaics, nuclear, or carbon capture, and where we adopt different types of district underground heating systems. The point is you can start to see how fast we could do something like this and what the real game-changers are.”
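As a rough illustration of how such a scenario explorer might combine interventions, here is a minimal sketch in Python. The building-level physics, intervention names, start years, and reduction fractions below are hypothetical placeholders, not values from the class's actual model.

```python
# Hypothetical sketch of a campus decarbonization scenario explorer.
# All numbers and intervention names are illustrative assumptions.

BASELINE_TONS_CO2 = 200_000  # assumed annual campus emissions, metric tons

# Each intervention: (first year deployed, fraction of remaining emissions it
# removes once fully ramped in)
INTERVENTIONS = {
    "building_retrofits": (2026, 0.25),
    "rooftop_pv": (2027, 0.05),
    "district_geothermal": (2032, 0.40),
    "nuclear_battery": (2036, 0.30),
}

def emissions_trajectory(selected, years=range(2025, 2051), ramp_years=5):
    """Return {year: tons CO2}, assuming each selected intervention ramps in linearly."""
    trajectory = {}
    for year in years:
        remaining = BASELINE_TONS_CO2
        for name in selected:
            start, max_cut = INTERVENTIONS[name]
            ramp = min(max((year - start) / ramp_years, 0.0), 1.0)
            remaining *= 1.0 - max_cut * ramp
        trajectory[year] = remaining
    return trajectory

# Compare two pathways to see which choices are the real game-changers.
for combo in (["building_retrofits"], ["building_retrofits", "district_geothermal"]):
    print(combo, f"-> 2050 emissions: {emissions_trajectory(combo)[2050]:,.0f} tons CO2")
```

A real tool would replace these flat percentage cuts with the physics-based, building-by-building model the class describes, but the pattern of toggling interventions and comparing trajectories is the same.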

The class also designed and conducted a preliminary survey, to be expanded in the fall, that captures the MIT community’s attitudes towards the different technologies. Preliminary results were shared with the Climate Nucleus during students’ May 9 presentations.

“I think it’s this unique and wonderful intersection of the forward-looking and innovative nature of academia with real world impact and specificity that you’d typically only find in industry,” Germonpre says. “It lets you work on a tangible project, the MIT campus, while exploring technologies that companies today find too risky to be the first mover on.”

From MIT’s campus to the world

The students recommended MIT form a building energy team to audit and retrofit all campus buildings. They also suggested MIT commission a comprehensive geological feasibility survey to support planning for shallow and deep borehole fields for harvesting underground heat. A third recommendation was to communicate with the MIT community, as well as with regulators and policymakers in the area, about the deployment of nuclear batteries and deep geothermal boreholes on campus.

The students’ modeling tool can also help members of the working group explore various decarbonization pathways. For instance, installing rooftop photovoltaics now would effectively reduce emissions, but installing them a few decades from now, when the regional electricity grid is expected to have reduced its reliance on fossil fuels anyway, would have a much smaller impact.

“When you have students working together, the recommendations are a little less filtered, which I think is a good thing,” Reinhart says. “I think there’s a real sense of urgency in the class. For certain choices, we have to basically act now.”

Reinhart plans to do more activities related to the working group and the class’s recommendations in the fall, and he says he’s currently engaged with the Massachusetts Governor’s Office to explore doing something similar for the state.

Students say they plan to keep working on the survey this summer and continue studying their technology areas. In the longer term, they believe the experience will help them in their careers.

“Decarbonization is really important, and understanding how we can implement new technologies on campuses or in buildings provides me with a more well-rounded vision for what I could design in my career,” says Johnson Quamina, who wants to work as a structural or environmental engineer but says the class has also inspired her to consider careers in energy.

The students’ findings also have implications beyond MIT’s campus. In accordance with MIT’s 2015 climate plan, which committed to using the campus community as a “test bed for change,” the students’ recommendations also hold value for organizations around the world.

“The mission is definitely broader than just MIT,” Germonpre says. “We don’t just want to solve MIT’s problem. We’ve dismissed technologies that were too specific to MIT. The goal is for MIT to lead by example and help certain technologies mature so that we can accelerate their impact.”

Improving working environments amid environmental distress

In less than a decade, MIT economist Namrata Kala has produced a corpus of work too rich, inventive, and diverse to be easily summarized. Let’s try anyway.

Kala, an associate professor at the MIT Sloan School of Management, often studies environmental problems and their effects on workers and firms, with implications for government policy, corporate managers, and anyone concerned about climate change. She also examines the effects of innovation on productivity, from farms to factories, and scrutinizes firm organization in light of such major changes.

Kala has published papers on topics including the long-term effects of climate change on agriculture in Africa and India; the impact of mechanization on farmers’ incomes; the extent to which linguistic differences create barriers to trade; and even the impact of LED light bulbs on factory productivity. Characteristically, Kala looks at issues of global scale and pinpoints their effects at the level of individuals.

Consider one paper Kala and two colleagues published a couple of years ago about the effects of air pollution on garment factory workers in India. The scholars examined patterns of particulate-matter pollution and linked them to detailed, worker-level data on productivity along the production line. The study shows that air pollution reduces sewing productivity, and that some managers (not all) are adept at recognizing which workers are most affected by it.

What emerges from much of this work is a real-time picture of human adaptation in a time of environmental distress.

“I feel like I’m part of a long tradition of trying to understand resilience and adaptation, but now in the face of a changing world,” Kala says. “Understanding interventions that are good for resilience while the world is changing is what motivates me, along with the fact that the vast majority of the world is vulnerable to events that may impact economic growth.”

For her research and teaching, Kala was awarded tenure at MIT last year.

Joining academia, then staying in it

Kala, who grew up in Punjab, India, was long mindful of big issues pertaining to society, the economy, and the environment.

“Growing up in India, it’s very difficult not to be interested in some of the questions that are important for development and environmental economics,” Kala says.

However, Kala did not expect that interest to lead her into academia. She attended Delhi University as an undergraduate, earning her degree with honors in economics while expecting to find a job in the area of development. To help facilitate that, Kala enrolled in a one-year master’s program at Yale University, in international and development economics.

Before that year was out, Kala had a new realization: Studying development problems was integral to solving them. Academia is not on the sidelines when it comes to development, but helps generate crucial knowledge to foster better and smarter growth policies.

“I came to Yale for a one-year master’s because I didn’t know if I wanted to be in a university for another two years,” Kala says. “I wanted to work on problems in the world. And that’s when I became enthralled with research. It was this wonderful year where I could study anything, and it completely changed my perspective on what I could do next. So I did the PhD, and that’s how I became an economist.”

After receiving her PhD in 2015, Kala spent the next two years supported by a Prize Fellowship in Economics, History, and Politics at Harvard University and a postdoctoral fellowship at MIT’s own Abdul Latif Jameel Poverty Action Lab (J-PAL). In 2017, she joined the MIT faculty on a full-time basis, and has remained at the Institute since then.

The source material for Kala’s studies varies widely, though in all cases she is looking for ways to construct well-defined empirical studies tackling major questions, with key issues often revealed in policy or firm details.

“I find reading stuff about policy reform strangely interesting,” she quips.

Development, but with environmental quality

Indeed, sometimes the spark for Kala’s studies comes from her own broad knowledge of past policy reforms, combined with an ability to locate data that reveals their effects.

For instance, one working paper Kala and a colleague recently completed looks at an Indian policy to move industrial firms out of Delhi in order to help solve the city’s pollution problems; the policy randomly relocated companies in an industrial belt around the city. But what effect did this have on the firms? After examining the records of 20,000 companies, the researchers found that the relocated firms’ survival rate was 8 to 20 percent lower than it would have been had the policy clustered them more efficiently.

That finding suggests how related environmental policies can be designed in the future.

“This environmental policy was important in that it improved air quality in Delhi, but there’s a way to do that which also reduces the cost on firms,” Kala says.

Kala says she expects India to be the locus of many, though hardly all, of her future studies. The country provides a wide range of opportunities for research.

“India currently has both the largest number of poor people in the world as well as 21 of the 30 most polluted cities in the world,” Kala says. “Clearly, the tradeoff between development and environmental quality is extremely salient, and we need progress in understanding industrial policies that are at least environmentally neutral or improving environmental quality.”

Kala will continue to look for new ways to take pressing, large-scale issues and study their effects in daily life. But the fact that her work ranges so widely is not just due to the places she studies; it is also because of the place she studies them from. MIT, she believes, has provided her with an environment of its own, which in this case enhances her own productivity.

“One thing that helps a lot is having colleagues and co-authors to bounce ideas off of,” Kala says. “Sloan is the heart of so much interdisciplinary work. That is one big reason why I’ve had a broad set of interests and continue to work on many things.”

“At Sloan,” she adds, “there are people doing fascinating things that I’m happy to listen to, as well as people in different disciplines working on related things who have a perspective I find extremely enriching. There are excellent economists, but I also go into seminars about work or productivity or the environment and come away with a perspective I don’t think I could have come up with myself.”

A data-driven approach to making better choices

Imagine a world in which some important decision — a judge’s sentencing recommendation, a child’s treatment protocol, which person or business should receive a loan — was made more reliable because a well-designed algorithm helped a key decision-maker arrive at a better choice. A new MIT economics course is investigating these interesting possibilities.

Class 14.163 (Algorithms and Behavioral Science) is a new cross-disciplinary course focused on behavioral economics, which studies the cognitive capacities and limitations of human beings. The course was co-taught this past spring by assistant professor of economics Ashesh Rambachan and visiting lecturer Sendhil Mullainathan.

Rambachan studies the economic applications of machine learning, focusing on algorithmic tools that drive decision-making in the criminal justice system and consumer lending markets. He also develops methods for determining causation using cross-sectional and dynamic data.

Mullainathan will soon join the MIT departments of Electrical Engineering and Computer Science and Economics as a professor. His research uses machine learning to understand complex problems in human behavior, social policy, and medicine. Mullainathan co-founded the Abdul Latif Jameel Poverty Action Lab (J-PAL) in 2003.

The new course’s goals are both scientific (to understand people) and policy-driven (to improve society by improving decisions). Rambachan believes that machine-learning algorithms provide new tools for both the scientific and applied goals of behavioral economics.

“The course investigates the deployment of computer science, artificial intelligence (AI), economics, and machine learning in service of improved outcomes and reduced instances of bias in decision-making,” Rambachan says.

There are opportunities, Rambachan believes, for constantly evolving digital tools like AI, machine learning, and large language models (LLMs) to help reshape everything from discriminatory practices in criminal sentencing to health-care outcomes among underserved populations.

Students learn how to use machine learning tools with three main objectives: to understand what they do and how they do it, to formalize behavioral economics insights so they compose well within machine learning tools, and to understand areas and topics where the integration of behavioral economics and algorithmic tools might be most fruitful.

Students also produce ideas, develop associated research, and see the bigger picture. They’re led to understand where an insight fits and where the broader research agenda is heading. Participants learn to think critically about what supervised LLMs can (and cannot) do, to integrate those capacities with the models and insights of behavioral economics, and to recognize the most fruitful areas for applying what their investigations uncover.

The dangers of subjectivity and bias

According to Rambachan, behavioral economics acknowledges that biases and mistakes exist throughout our choices, even absent algorithms. “The data used by our algorithms exist outside computer science and machine learning, and instead are often produced by people,” he continues. “Understanding behavioral economics is therefore essential to understanding the effects of algorithms and how to better build them.”

Rambachan sought to make the course accessible regardless of attendees’ academic backgrounds. The class included advanced degree students from a variety of disciplines.

By offering students a cross-disciplinary, data-driven approach to investigating and discovering ways in which algorithms might improve problem-solving and decision-making, Rambachan hopes to build a foundation on which to redesign existing systems of jurisprudence, health care, consumer lending, and industry, to name a few areas.

“Understanding how data are generated can help us understand bias,” Rambachan says. “We can ask questions about producing a better outcome than what currently exists.”

Useful tools for re-imagining social operations

Economics doctoral student Jimmy Lin was skeptical about the claims Rambachan and Mullainathan made when the class began, but changed his mind as the course continued.

“Ashesh and Sendhil started with two provocative claims: The future of behavioral science research will not exist without AI, and the future of AI research will not exist without behavioral science,” Lin says. “Over the course of the semester, they deepened my understanding of both fields and walked us through numerous examples of how economics informed AI research and vice versa.”

Lin, who’d previously done research in computational biology, praised the instructors’ emphasis on the importance of a “producer mindset,” thinking about the next decade of research rather than the previous decade. “That’s especially important in an area as interdisciplinary and fast-moving as the intersection of AI and economics — there isn’t an old established literature, so you’re forced to ask new questions, invent new methods, and create new bridges,” he says.

The speed of change to which Lin alludes is a draw for him, too. “We’re seeing black-box AI methods facilitate breakthroughs in math, biology, physics, and other scientific disciplines,” Lin says. “AI can change the way we approach intellectual discovery as researchers.”

An interdisciplinary future for economics and social systems

Studying traditional economic tools and enhancing their value with AI may yield game-changing shifts in how institutions and organizations teach and empower leaders to make choices.

“We’re learning to track shifts, to adjust frameworks and better understand how to deploy tools in service of a common language,” Rambachan says. “We must continually interrogate the intersection of human judgment, algorithms, AI, machine learning, and LLMs.”

Lin enthusiastically recommended the course regardless of students’ backgrounds. “Anyone broadly interested in algorithms in society, applications of AI across academic disciplines, or AI as a paradigm for scientific discovery should take this class,” he says. “Every lecture felt like a goldmine of perspectives on research, novel application areas, and inspiration on how to produce new, exciting ideas.”

The course, Rambachan says, argues that better-built algorithms can improve decision-making across disciplines. “By building connections between economics, computer science, and machine learning, perhaps we can automate the best of human choices to improve outcomes while minimizing or eliminating the worst,” he says.

Lin remains excited about the course’s as-yet unexplored possibilities. “It’s a class that makes you excited about the future of research and your own role in it,” he says.

Paying it forward

MIT professors Erik Lin-Greenberg and Tracy Slatyer truly understand the positive impact that advisors have in the life of a graduate student. Two of the most recent faculty members to be named “Committed to Caring,” they attribute their excellence in advising to the challenging experiences and life-changing mentorship they received during their own graduate school journeys.

Tracy Slatyer: Seeing the PhD as a journey

Tracy Slatyer is a professor in the Department of Physics who works on particle physics, cosmology, and astrophysics. Focused on unraveling the mysteries of dark matter, Slatyer investigates potential new physics through the analysis of astrophysical and cosmological data, exploring scenarios involving novel forces and theoretical predictions for photon signals.

One of Slatyer’s key approaches is to prioritize students’ educational journeys over academic accomplishments alone, also acknowledging the prevalence of imposter syndrome.

Having struggled in graduate coursework themselves, Slatyer shares their personal past challenges and encourages students to see the big picture: “I try to remind [students] that the PhD is a marathon, not a sprint, and that once you have your PhD, nobody will care if it took you one year or three to get through all the qualifying exams and required classes.” Many students also expressed gratitude for how Slatyer offered opportunities to connect outside of work, including invitations to tea-time.

One of Slatyer’s key beliefs is the need for community amongst students, postdocs, and professors. Slatyer encourages students to meet with professors outside of their primary field of interest and helps advisees explore far-ranging topics. They note the importance of connecting with individuals at different career stages, often inviting students to conferences at other institutions, and hosting visiting scientists.

Advisees noted Slatyer’s realistic portrayal of expectations within the field and open discussion of work-life balance. They maintain a document with clear advising guidelines, such as placing new students on projects with experienced researchers. Slatyer also schedules weekly meetings to discuss non-research topics, including career goals and upcoming talks.

In addition, Slatyer does not shy away from the fact that their field is competitive and demanding. They are honest about their experiences in academia, noting that networking may be just as important as academic performance for a successful career.

Erik Lin-Greenberg: Empathy and enduring support

Erik Lin-Greenberg is an assistant professor in the history and culture of science and technology in the Department of Political Science. His research examines how emerging military technology affects conflict dynamics and the use of force.

Lin-Greenberg’s thoughtful supervision of his students reflects his commitment to cultivating the next generation of researchers. Students are grateful for his knack for identifying weak arguments, as well as his guidance through challenging publication processes: “For my dissertation, Erik has mastered the difficult art of giving feedback in a way that does not discourage.”

Lin-Greenberg’s personalized approach is further evidence of his exceptional teaching. In the classroom, students praise his thorough preparation, ability to facilitate rich discussions, and flexibility during high-pressure periods. In addition, his unique ability to break down complex material makes topics accessible to the diverse array of backgrounds in the classroom.

His mentorship extends far beyond academics, encompassing a genuine concern for the well-being of his students through providing personal check-ins and unwavering support.

Much of this empathy comes from Erik’s own tumultuous beginnings in graduate school at Columbia University, where he struggled to keep up with coursework and seriously considered leaving the program. He points to the care and dedication of mentors, and advisor Tonya Putnam in particular, as having an enormous impact.

“She consistently reassured me that I was doing interesting work, gave amazing feedback on my research, and was always open and transparent,” he recounts. “When I’m advising today, I constantly try to live up to Tonya’s example.”

In his own group, Erik chooses creative approaches to mentorship, including taking mentees out for refreshments to navigate difficult dissertation discussions. In his students’ moments of despair, he boosts their mood with photos of his cat, Major General Lansdale.

Ultimately, one nominator credited his ability to continue his PhD to Lin-Greenberg’s uplifting spirit and endless encouragement: “I cannot imagine anyone more deserving of recognition than Erik Lin-Greenberg.”

Researchers demonstrate the first chip-based 3D printer

Imagine a portable 3D printer you could hold in the palm of your hand. The tiny device could enable a user to rapidly create customized, low-cost objects on the go, like a fastener to repair a wobbly bicycle wheel or a component for a critical medical operation.

Researchers from MIT and the University of Texas at Austin took a major step toward making this idea a reality by demonstrating the first chip-based 3D printer. Their proof-of-concept device consists of a single, millimeter-scale photonic chip that emits reconfigurable beams of light into a well of resin that cures into a solid shape when light strikes it.

The prototype chip has no moving parts, instead relying on an array of tiny optical antennas to steer a beam of light. The beam projects up into a liquid resin that has been designed to rapidly cure when exposed to the beam’s wavelength of visible light.

By combining silicon photonics and photochemistry, the interdisciplinary research team was able to demonstrate a chip that can steer light beams to 3D print arbitrary two-dimensional patterns, including the letters M-I-T. Shapes can be fully formed in a matter of seconds.

In the long run, they envision a system where a photonic chip sits at the bottom of a well of resin and emits a 3D hologram of visible light, rapidly curing an entire object in a single step.

This type of portable 3D printer could have many applications, such as enabling clinicians to create tailor-made medical device components or allowing engineers to make rapid prototypes at a job site.

“This system is completely rethinking what a 3D printer is. It is no longer a big box sitting on a bench in a lab creating objects, but something that is handheld and portable. It is exciting to think about the new applications that could come out of this and how the field of 3D printing could change,” says senior author Jelena Notaros, the Robert J. Shillman Career Development Professor in Electrical Engineering and Computer Science (EECS), and a member of the Research Laboratory of Electronics.

Joining Notaros on the paper are Sabrina Corsetti, lead author and EECS graduate student; Milica Notaros PhD ’23; Tal Sneh, an EECS graduate student; Alex Safford, a recent graduate of the University of Texas at Austin; and Zak Page, an assistant professor in the Department of Chemical Engineering at UT Austin. The research appears today in Light: Science & Applications.

Printing with a chip

Experts in silicon photonics, the Notaros group previously developed integrated optical-phased-array systems that steer beams of light using a series of microscale antennas fabricated on a chip using semiconductor manufacturing processes. By speeding up or delaying the optical signal on either side of the antenna array, they can move the beam of emitted light in a certain direction.
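As a rough sketch of the underlying physics (standard phased-array beam steering, not a device-specific calculation from this work): for antennas spaced a distance $d$ apart and emitting light of wavelength $\lambda$, imposing a phase increment $\Delta\phi$ between neighboring elements steers the main beam to an angle $\theta$ satisfying

$$\sin\theta = \frac{\lambda\,\Delta\phi}{2\pi d},$$

so sweeping $\Delta\phi$ electronically sweeps the beam with no moving parts.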

Such systems are key for lidar sensors, which map their surroundings by emitting infrared light beams that bounce off nearby objects. Recently, the group has focused on systems that emit and steer visible light for augmented-reality applications.

They wondered if such a device could be used for a chip-based 3D printer.

At about the same time they started brainstorming, the Page Group at UT Austin demonstrated specialized resins that can be rapidly cured using wavelengths of visible light for the first time. This was the missing piece that pushed the chip-based 3D printer into reality.

“With photocurable resins, it is very hard to get them to cure all the way up at infrared wavelengths, which is where integrated optical-phased-array systems were operating in the past for lidar,” Corsetti says. “Here, we are meeting in the middle between standard photochemistry and silicon photonics by using visible-light-curable resins and visible-light-emitting chips to create this chip-based 3D printer. You have this merging of two technologies into a completely new idea.”

Their prototype consists of a single photonic chip containing an array of 160-nanometer-thick optical antennas. (A sheet of paper is about 100,000 nanometers thick.) The entire chip fits onto a U.S. quarter.

When powered by an off-chip laser, the antennas emit a steerable beam of visible light into the well of photocurable resin. The chip sits below a clear slide, like those used in microscopes, which contains a shallow indentation that holds the resin. The researchers use electrical signals to nonmechanically steer the light beam, causing the resin to solidify wherever the beam strikes it.

A collaborative approach

But effectively modulating visible-wavelength light, which involves modifying its amplitude and phase, is especially tricky. One common method requires heating the chip, but this is inefficient and takes a large amount of physical space.

Instead, the researchers used liquid crystal to fashion compact modulators they integrate onto the chip. The material’s unique optical properties enable the modulators to be extremely efficient and only about 20 microns in length.

A single waveguide on the chip carries the light from the off-chip laser. Running along the waveguide are tiny taps that siphon off a small amount of light to each of the antennas.

The researchers actively tune the modulators using an electric field, which reorients the liquid crystal molecules in a certain direction. In this way, they can precisely control the amplitude and phase of light being routed to the antennas.

But forming and steering the beam is only half the battle. Interfacing with a novel photocurable resin was a completely different challenge.

The Page Group at UT Austin worked closely with the Notaros Group at MIT, carefully adjusting the chemical combinations and concentrations to zero in on a formula that provided a long shelf life and rapid curing.

In the end, the group used their prototype to 3D print arbitrary two-dimensional shapes within seconds.

Building off this prototype, they want to move toward developing a system like the one they originally conceptualized — a chip that emits a hologram of visible light in a resin well to enable volumetric 3D printing in only one step.

“To be able to do that, we need a completely new silicon-photonics chip design. We already laid out a lot of what that final system would look like in this paper. And, now, we are excited to continue working towards this ultimate demonstration,” Jelena Notaros says.

This work was funded, in part, by the U.S. National Science Foundation, the U.S. Defense Advanced Research Projects Agency, the Robert A. Welch Foundation, the MIT Rolf G. Locher Endowed Fellowship, and the MIT Frederick and Barbara Cronin Fellowship.

The unexpected origins of a modern finance tool

In the early 1600s, the officials running Durham Cathedral, in England, had serious financial problems. Soaring prices had raised expenses. Most cathedral income came from renting land to tenant farmers, who had long leases so officials could not easily raise the rent. Instead, church leaders started charging periodic fees, but these often made tenants furious. And the 1600s, a time of religious schism, was not the moment to alienate church members.

But in 1626, Durham officials found a formula for fees that tenants would accept. If tenant farmers paid a fee equal to one year’s net value of the land, it earned them a seven-year lease. A fee equal to 7.75 years of net value earned a 21-year lease.

This was a form of discounting, the now-common technique for evaluating the present and future value of money by assuming a certain rate of return on that money. The Durham officials likely got their numbers from new books of discounting tables. Such volumes had never existed before, but suddenly local church officials were applying the technique up and down England.

As financial innovation stories go, this one is unusual. Normally, avant-garde financial tools might come from, well, the financial avant-garde — bankers, merchants, and investors hunting for short-term profits, not clergymen.

“Most people have assumed these very sophisticated calculations would have been implemented by hard-nosed capitalists, because really powerful calculations would allow you to get an economic edge and increase profits,” says MIT historian William Deringer, an expert in the deployment of quantitative reasoning in public life. “But that was not the primary or only driver in this situation.”

Deringer has published a new research article about this episode, “Mr. Aecroid’s Tables: Economic Calculations and Social Customs in the Early Modern Countryside,” appearing in the current issue of the Journal of Modern History. In it, he uses archival research to explore how the English clergy started using discounting, and where. And one other question: Why?

Enter inflation

Today, discounting is a pervasive tool. A dollar in the present is worth more than a dollar a decade from now, since one can earn money investing it in the meantime. This concept heavily informs investment markets, corporate finance, and even the NFL draft (where trading this year’s picks yields a greater haul of future picks). As the historian William N. Goetzmann has written, the related idea of net present value “is the most important tool in modern finance.” But while discounting was known as far back as the mathematician Leonardo of Pisa (often called Fibonacci) in the 1200s, why were English clergy some of its most enthusiastic early adopters?
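In modern notation, and as a general illustration rather than the specific rate the Durham tables assumed: a payment $C$ due $t$ years from now is worth

$$PV = \frac{C}{(1+r)^t}$$

today at an assumed annual return $r$; at $r = 10$ percent, a dollar due a decade from now is worth about $1/1.1^{10} \approx \$0.39$. A lease that yields the land's net annual value $C$ for $n$ years is then worth

$$PV = \sum_{t=1}^{n} \frac{C}{(1+r)^t} = C\,\frac{1-(1+r)^{-n}}{r},$$

which is exactly the kind of figure the printed tables let a church official look up rather than compute.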

The answer involves a global change in the 1500s: the “price revolution,” in which things began costing more, after a long period when prices had been constant. That is, inflation hit the world.

“People up to that point lived with the expectation that prices would stay the same,” Deringer says. “The idea that prices changed in a systematic way was shocking.”

For Durham Cathedral, inflation meant the organization had to pay more for goods while three-quarters of its revenues came from tenant rents, which were hard to alter. Many leases were complex, and some were locked in for a tenant’s lifetime. The Durham leaders did levy intermittent fees on tenants, but that led to angry responses and court cases.

Meanwhile, tenants had additional leverage against the Church of England: religious competition following the Reformation. England’s political and religious schisms would lead it to a midcentury civil war. Maybe some private landholders could drastically increase fees, but the church did not want to lose followers that way.

“Some individual landowners could be ruthlessly economic, but the church couldn’t, because it’s in the midst of incredible political and religious turmoil after the Reformation,” Deringer says. “The Church of England is in this precarious position. They’re walking a line between Catholics who don’t think there should have been a Reformation, and Puritans who don’t think there should be bishops. If they’re perceived to be hurting their flock, it would have real consequences. The church is trying to make the finances work but in a way that’s just barely tolerable to the tenants.”

Enter the books of discounting tables, which allowed local church leaders to finesse the finances. Essentially, discounting more carefully calibrated the upfront fees tenants would periodically pay. Church leaders could simply plug in the numbers as compromise solutions.

In this period, England’s first prominent discounting book with tables was published in 1613; the most enduring, Ambrose Acroyd’s “Table of Leasses and Interest,” dated to 1628-29. Acroyd was the bursar at Trinity College at Cambridge University, which as a landholder (and church-affiliated institution) faced the same issues concerning inflation and rent. Durham Cathedral began using off-the-shelf discounting formulas in 1626, resolving decades of localized disagreement as well.

Performing fairness

The discounting tables from books did not only work because the price was right. Once circulating clergy had popularized the notion throughout England, local leaders could justify using the books because others were doing it. The clergy were “performing fairness,” as Deringer puts it.

“Strict calculative rules assured tenants and courts that fines were reasonable, limiting landlords’ ability to maximize revenues,” Deringer writes in the new article.

To be sure, local church leaders in England were using discounting for their own economic self-interest. It just wasn’t the largest short-term economic self-interest possible. And it was a sound strategy.

“In Durham they would fight with tenants every 20 years [in the 1500s] and come to a new deal, but eventually that evolves into these sophisticated mechanisms, the discounting tables,” Deringer adds. “And you get standardization. By about 1700, it seems like these procedures are used everywhere.”

Thus, as Deringer writes, “mathematical tables for setting fines were not so much instruments of a capitalist transformation as the linchpin holding together what remained of an older system of customary obligations stretched nearly to breaking by macroeconomic forces.”

Once discounting was widely introduced, it never went away. Deringer’s Journal of Modern History article is part of a larger book project he is currently pursuing, about discounting in many facets of modern life.

Deringer was able to piece together the history of discounting in 17th-century England thanks in part to archival clues. For instance, Durham University owns a 1686 discounting book self-described as an update to Acroyd’s work; that copy was owned by a Durham Cathedral administrator in the 1700s. Of the 11 existing copies of Acroyd’s work, two are at Canterbury Cathedral and Lincoln Cathedral.

Hints like that helped Deringer recognize that church leaders were very interested in discounting; his further research helped him see that this chapter in the history of discounting is not merely about finance; it also opens a new window into the turbulent 1600s.

“I never expected to be researching church finances; I didn’t expect it to have anything to do with the countryside, landlord-tenant relationships, and tenant law,” Deringer says. “I was seeing this as an interesting example of a story about bottom-line economic calculation, and it wound up being more about this effort to use calculation to resolve social tensions.”

Exotic black holes could be a byproduct of dark matter

For every kilogram of matter that we can see — from the computer on your desk to distant stars and galaxies — there are 5 kilograms of invisible matter that suffuse our surroundings. This “dark matter” is a mysterious entity that evades all forms of direct observation yet makes its presence felt through its invisible pull on visible objects.

Fifty years ago, physicist Stephen Hawking offered one idea for what dark matter might be: a population of black holes, which might have formed very soon after the Big Bang. Such “primordial” black holes would not have been the goliaths that we detect today, but rather microscopic regions of ultradense matter that would have formed in the first quintillionth of a second following the Big Bang and then collapsed and scattered across the cosmos, tugging on surrounding space-time in ways that could explain the dark matter that we know today.

Now, MIT physicists have found that this primordial process also would have produced some unexpected companions: even smaller black holes with unprecedented amounts of a nuclear-physics property known as “color charge.”

These smallest, “super-charged” black holes would have been an entirely new state of matter, which likely evaporated a fraction of a second after they spawned. Yet they could still have influenced a key cosmological transition: the time when the first atomic nuclei were forged. The physicists postulate that the color-charged black holes could have affected the balance of fusing nuclei, in a way that astronomers might someday detect with future measurements. Such an observation would point convincingly to primordial black holes as the root of all dark matter today.

“Even though these short-lived, exotic creatures are not around today, they could have affected cosmic history in ways that could show up in subtle signals today,” says David Kaiser, the Germeshausen Professor of the History of Science and professor of physics at MIT. “Within the idea that all dark matter could be accounted for by black holes, this gives us new things to look for.”

Kaiser and his co-author, MIT graduate student Elba Alonso-Monsalve, have published their study today in the journal Physical Review Letters.

A time before stars

The black holes that we know and detect today are the product of stellar collapse, when the center of a massive star caves in on itself to form a region so dense that it can bend space-time such that anything — even light — gets trapped within. Such “astrophysical” black holes can be anywhere from a few times as massive as the sun to many billions of times more massive.

“Primordial” black holes, in contrast, can be much smaller and are thought to have formed in a time before stars. Before the universe had even cooked up the basic elements, let alone stars, scientists believe that pockets of ultradense, primordial matter could have accumulated and collapsed to form microscopic black holes that could have been so dense as to squeeze the mass of an asteroid into a region as small as a single atom. The gravitational pull from these tiny, invisible objects scattered throughout the universe could explain all the dark matter that we can’t see today.
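A rough order-of-magnitude check (a textbook back-of-the-envelope estimate, not a calculation reported from the new paper): a primordial black hole forming at time $t$ after the Big Bang can contain at most roughly the mass within the cosmic horizon at that moment,

$$M \sim \frac{c^3 t}{G} \approx \frac{(3\times10^{8}\ \mathrm{m/s})^3 \times 10^{-18}\ \mathrm{s}}{6.7\times10^{-11}\ \mathrm{m^3\,kg^{-1}\,s^{-2}}} \approx 4\times10^{17}\ \mathrm{kg},$$

an asteroid-scale mass whose Schwarzschild radius, $2GM/c^2 \approx 6\times10^{-10}$ meters, is indeed roughly the size of an atom.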

If that were the case, then what would these primordial black holes have been made from? That’s the question Kaiser and Alonso-Monsalve took on with their new study.

“People have studied what the distribution of black hole masses would be during this early-universe production but never tied it to what kinds of stuff would have fallen into those black holes at the time when they were forming,” Kaiser explains.

Super-charged rhinos

The MIT physicists looked first through existing theories for the likely distribution of black hole masses as they were first forming in the early universe.

“Our realization was, there’s a direct correlation between when a primordial black hole forms and what mass it forms with,” Alonso-Monsalve says. “And that window of time is absurdly early.”

She and Kaiser calculated that primordial black holes must have formed within the first quintillionth of a second following the Big Bang. This flash of time would have produced “typical” microscopic black holes that were as massive as an asteroid and as small as an atom. It would have also yielded a small fraction of exponentially smaller black holes, with the mass of a rhino and a size much smaller than a single proton.

What would these primordial black holes have been made from? For that, they looked to studies exploring the composition of the early universe, and specifically, to the theory of quantum chromodynamics (QCD) — the study of how quarks and gluons interact.

Quarks and gluons are the fundamental building blocks of protons and neutrons — particles that combined to forge the basic elements of the periodic table. Immediately following the Big Bang, physicists estimate, based on QCD, that the universe was an immensely hot plasma of quarks and gluons that then quickly cooled and combined to produce protons and neutrons.

The researchers found that, within the first quintillionth of a second, the universe would still have been a soup of free quarks and gluons that had yet to combine. Any black holes that formed in this time would have swallowed up the untethered particles, along with an exotic property known as “color charge” — a state of charge that only uncombined quarks and gluons carry.

“Once we figured out that these black holes form in a quark-gluon plasma, the most important thing we had to figure out was, how much color charge is contained in the blob of matter that will end up in a primordial black hole?” Alonso-Monsalve says.

Using QCD theory, they worked out the distribution of color charge that should have existed throughout the hot, early plasma. Then they compared that to the size of a region that would collapse to form a black hole in the first quintillionth of a second. It turns out there wouldn’t have been much color charge in most typical black holes at the time, as they would have formed by absorbing a huge number of regions that had a mix of charges, which would have ultimately added up to a “neutral” charge.

But the smallest black holes would have been packed with color charge. In fact, they would have contained the maximum amount of any type of charge allowed for a black hole, according to the fundamental laws of physics. Whereas such “extremal” black holes have been hypothesized for decades, until now no one had discovered a realistic process by which such oddities actually could have formed in our universe.

The super-charged black holes would have quickly evaporated, but possibly only after the time when the first atomic nuclei began to form. Scientists estimate that this process started around one second after the Big Bang, which would have given extremal black holes plenty of time to disrupt the equilibrium conditions that would have prevailed when the first nuclei began to form. Such disturbances could potentially affect how those earliest nuclei formed, in ways that might some day be observed.

“These objects might have left some exciting observational imprints,” Alonso-Monsalve muses. “They could have changed the balance of this versus that, and that’s the kind of thing that one can begin to wonder about.”

This research was supported, in part, by the U.S. Department of Energy. Alonso-Monsalve is also supported by a fellowship from the MIT Department of Physics. 

Nuh Gedik receives 2024 National Brown Investigator Award

Nuh Gedik, MIT’s Donner Professor of Physics, has been named a 2024 Ross Brown Investigator by the Brown Institute for Basic Sciences at Caltech.

One of eight mid-career faculty members recognized for work on fundamental challenges in the physical sciences, Gedik will receive up to $2 million over five years.

Gedik will use the award to develop a new kind of microscopy that images electrons photo-emitted from a surface while also measuring their energy and momentum. This microscope will make femtosecond movies of electrons to study the fascinating properties of two-dimensional quantum materials.  

Another awardee, professor of physics Andrea Young at the University of California Santa Barbara, was a 2011-14 Pappalardo Fellow at MIT in experimental condensed matter physics. 

The Brown Institute for Basic Sciences at Caltech was established in 2023 through a $400-million gift from entrepreneur, philanthropist, and Caltech alumnus Ross M. Brown, to support fundamental research in chemistry and physics. Initially created as the Investigator Awards in 2020, the award supports the belief that “scientific discovery is a driving force in the improvement of the human condition,” according to a news release from the Science Philanthropy Alliance.

A total of 13 investigators were recognized in the program’s first three years. Now that the Brown Investigator Award has found a long-term home at Caltech, the intent is to recognize a minimum of eight investigators each year. 

Other previous awardees with MIT connections include MIT professor of chemistry Mircea Dincă as well as physics alumni Waseem S. Bakr ’05, ’06, MNG ’06 of Princeton University; David Hsieh of Caltech, who is another former Pappalardo Fellow; Munira Khalil PhD ’04 and Mark Rudner PhD ’08 of the University of Washington; and Tanya Zelevinsky ’99 of Columbia University.

Mouth-based touchpad enables people living with paralysis to interact with computers

When Tomás Vega SM ’19 was 5 years old, he began to stutter. The experience gave him an appreciation for the adversity that can come with a disability. It also showed him the power of technology.

“A keyboard and a mouse were outlets,” Vega says. “They allowed me to be fluent in the things I did. I was able to transcend my limitations in a way, so I became obsessed with human augmentation and with the concept of cyborgs. I also gained empathy. I think we all have empathy, but we apply it according to our own experiences.”

Vega has been using technology to augment human capabilities ever since. He began programming when he was 12. In high school, he helped people manage disabilities including hand impairments and multiple sclerosis. In college, first at the University of California at Berkeley and then at MIT, Vega built technologies that helped people with disabilities live more independently.

Today Vega is the co-founder and CEO of Augmental, a startup deploying technology that lets people with movement impairments seamlessly interact with their personal computational devices.

Augmental’s first product is the MouthPad, which allows users to control their computer, smartphone, or tablet through tongue and head movements. The MouthPad’s pressure-sensitive touch pad sits on the roof of the mouth, and, working with a pair of motion sensors, translates tongue and head gestures into cursor scrolling and clicks in real time via Bluetooth.

“We have a big chunk of the brain that is devoted to controlling the position of the tongue,” Vega explains. “The tongue comprises eight muscles, and most of the muscle fibers are slow-twitch, which means they don’t fatigue as quickly. So, I thought why don’t we leverage all of that?”

People with spinal cord injuries are already using the MouthPad every day to interact with their favorite devices independently. One of Augmental’s users, who is living with quadriplegia and studying math and computer science in college, says the device has helped her write math formulas and study in the library — use cases where other assistive speech-based devices weren’t appropriate.

“She can now take notes in class, she can play games with her friends, she can watch movies or read books,” Vega says. “She is more independent. Her mom told us that getting the MouthPad was the most significant moment since her injury.”

That’s the ultimate goal of Augmental: to improve the accessibility of technologies that have become an integral part of our lives.

“We hope that a person with a severe impairment can be as competent using a phone or tablet as somebody using their hands,” Vega says.

Making computers more accessible

In 2012, as a first-year student at UC Berkeley, Vega met his eventual Augmental co-founder, Corten Singer. That year, he told Singer he was determined to join the Media Lab as a graduate student, something he achieved four years later when he joined the Media Lab’s Fluid Interfaces research group run by Pattie Maes, MIT’s Germeshausen Professor of Media Arts and Sciences.

“I only applied to one program for grad school, and that was the Media Lab,” Vega says. “I thought it was the only place where I could do what I wanted to do, which is augmenting human ability.”

At the Media Lab, Vega took classes in microfabrication, signal processing, and electronics. He also developed wearable devices to help people access information online, improve their sleep, and regulate their emotions.

“At the Media Lab, I was able to apply my engineering and neuroscience background to build stuff, which is what I love doing the most,” Vega says. “I describe the Media Lab as Disneyland for makers. I was able to just play, and to explore without fear.”

Vega had gravitated toward the idea of a brain-machine interface, but an internship at Neuralink made him seek out a different solution.

“A brain implant has the highest potential for helping people in the future, but I saw a number of limitations that pushed me from working on it right now,” Vega says. “One is the long timeline for development. I’ve made so many friends over the past years that needed a solution yesterday.”

At MIT, he decided to build a solution with all the potential of a brain implant but without the limitations.

In his last semester at MIT, Vega built what he describes as “a lollipop with a bunch of sensors” to test the mouth as a medium for computer interaction. It worked beautifully.

“At that point, I called Corten, my co-founder, and said, ‘I think this has the potential to change so many lives,’” Vega says. “It could also change the way humans interact with computers in the future.”

Vega drew on MIT resources including the Venture Mentoring Service and the MIT I-Corps program, and he received crucial early funding from MIT’s E14 Fund. Augmental was officially born when Vega graduated from MIT at the end of 2019.

Augmental generates each MouthPad design using a 3D model based on a scan of the user’s mouth. The team then 3D prints the retainer using dental-grade materials and adds the electronic components.

With the MouthPad, users can scroll up, down, left, and right by sliding their tongue. They can also right click by doing a sipping gesture and left click by pressing on their palate. For people with less control of their tongue, bites, clenches, and other gestures can be used, and people with more neck control can use head-tracking to move the cursor on their screen.
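A minimal sketch of the kind of gesture-to-pointer mapping described above, written in Python with the pynput library; the event names and structure are hypothetical and are not Augmental's actual software.

```python
# Hypothetical gesture dispatcher: maps decoded mouth/head gestures to pointer actions.
from pynput.mouse import Button, Controller

mouse = Controller()

def handle_gesture(event: dict) -> None:
    """Translate one decoded gesture event into a cursor action."""
    kind = event["kind"]
    if kind == "tongue_slide":        # sliding the tongue -> scroll
        mouse.scroll(event["dx"], event["dy"])
    elif kind == "palate_press":      # pressing on the palate -> left click
        mouse.click(Button.left)
    elif kind == "sip":               # sipping gesture -> right click
        mouse.click(Button.right)
    elif kind == "head_move":         # head tracking -> move the cursor
        mouse.move(event["dx"], event["dy"])

# Example: a decoded "sip" event triggers a right click.
handle_gesture({"kind": "sip"})
```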

“Our hope is to create an interface that is multimodal, so you can choose what works for you,” Vega says. “We want to be accommodating to every condition.”

Scaling the MouthPad

Many of Augmental’s current users have spinal cord injuries, with some users unable to move their hands and others unable to move their heads. Gamers and programmers have also used the device. The company’s most frequent users interact with the MouthPad every day for up to nine hours.

“It’s amazing because it means that it has really seamlessly integrated into their lives, and they are finding lots of value in our solution,” Vega says.

Augmental is hoping to gain U.S. Food and Drug Administration clearance over the next year to help users do things like control wheelchairs and robotic arms. FDA clearance will also unlock insurance reimbursements for users, which will make the product more accessible.

Augmental is already working on the next version of its system, which will respond to whispers and even more subtle movements of internal speech organs.

“That’s crucial to our early customer segment because a lot of them have lost or have impaired lung function,” Vega says.

Vega is also encouraged by progress in AI agents and the hardware that goes with them. No matter how the digital world evolves, Vega believes Augmental can be a tool that can benefit everyone.

“What we hope to provide one day is an always-available, robust, and private interface to intelligence,” Vega says. “We think that this is the most expressive, wearable, hands-free operating system that humans have created.”

Reducing carbon emissions from long-haul trucks

People around the world rely on trucks to deliver the goods they need, and so-called long-haul trucks play a critical role in those supply chains. In the United States, long-haul trucks moved 71 percent of all freight in 2022. But those long-haul trucks are heavy polluters, especially of the carbon emissions that threaten the global climate. According to U.S. Environmental Protection Agency estimates, in 2022 more than 3 percent of all U.S. carbon dioxide (CO2) emissions came from long-haul trucks.

The problem is that long-haul trucks run almost exclusively on diesel fuel, and burning diesel releases high levels of CO2 and other carbon emissions. Global demand for freight transport is projected to as much as double by 2050, so it’s critical to find another source of energy that will meet the needs of long-haul trucks while also reducing their carbon emissions. And conversion to the new fuel must not be costly. “Trucks are an indispensable part of the modern supply chain, and any increase in the cost of trucking will be felt universally,” notes William H. Green, the Hoyt Hottel Professor in Chemical Engineering and director of the MIT Energy Initiative.

For the past year, Green and his research team have been seeking a low-cost, cleaner alternative to diesel. Finding a replacement is difficult because diesel meets the needs of the trucking industry so well. For one thing, diesel has a high energy density — that is, energy content per pound of fuel. There’s a legal limit on the total weight of a truck and its contents, so using an energy source with a lower weight allows the truck to carry more payload — an important consideration, given the low profit margin of the freight industry. In addition, diesel fuel is readily available at retail refueling stations across the country — a critical resource for drivers, who may travel 600 miles in a day and sleep in their truck rather than returning to their home depot. Finally, diesel fuel is a liquid, so it’s easy to distribute to refueling stations and then pump into trucks.

Past studies have examined numerous alternative technology options for powering long-haul trucks, but no clear winner has emerged. Now, Green and his team have evaluated the available options based on consistent and realistic assumptions about the technologies involved and the typical operation of a long-haul truck, and assuming no subsidies to tip the cost balance. Their in-depth analysis of converting long-haul trucks to battery electric — summarized below — found a high cost and negligible emissions gains in the near term. Studies of methanol and other liquid fuels from biomass are ongoing, but already a major concern is whether the world can plant and harvest enough biomass for biofuels without destroying the ecosystem. An analysis of hydrogen — also summarized below — highlights specific challenges with using that clean-burning fuel, which is a gas at normal temperatures.

Finally, the team identified an approach that could make hydrogen a promising, low-cost option for long-haul trucks. And, says Green, “it’s an option that most people are probably unaware of.” It involves a novel way of using materials that can pick up hydrogen, store it, and then release it when and where it’s needed to serve as a clean-burning fuel.

Defining the challenge: A realistic drive cycle, plus diesel values to beat

The MIT researchers believe that the lack of consensus on the best way to clean up long-haul trucking may have a simple explanation: Different analyses are based on different assumptions about the driving behavior of long-haul trucks. Indeed, some of them don’t accurately represent actual long-haul operations. So the first task for the MIT team was to define a representative — and realistic — “drive cycle” for actual long-haul truck operations in the United States. Then the MIT researchers — and researchers elsewhere — can assess potential replacement fuels and engines based on a consistent set of assumptions in modeling and simulation analyses.

To define the drive cycle for long-haul operations, the MIT team used a systematic approach to analyze many hours of real-world driving data covering 58,000 miles. They examined 10 features and identified three — daily range, vehicle speed, and road grade — that have the greatest impact on energy demand and thus on fuel consumption and carbon emissions. The representative drive cycle that emerged covers a distance of 600 miles, an average vehicle speed of 55 miles per hour, and a road grade ranging from negative 6 percent to positive 6 percent.

The next step was to generate key values for the performance of the conventional diesel “powertrain,” that is, all the components involved in creating power in the engine and delivering it to the wheels on the ground. Based on their defined drive cycle, the researchers simulated the performance of a conventional diesel truck, generating “benchmarks” for fuel consumption, CO2 emissions, cost, and other performance parameters.
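As a rough illustration of what such a benchmark involves, the sketch below estimates wheel energy, fuel use, and CO2 for steady cruising over the 600-mile, 55-mile-per-hour drive cycle using simple road-load physics. It is not the team’s model; all parameter values, including a fully loaded 80,000-pound truck and the rolling-resistance, drag, engine-efficiency, and fuel figures, are illustrative assumptions.

```python
# Illustrative road-load benchmark for the 600-mile, 55-mph drive cycle.
# All parameter values are assumptions for illustration; the MIT team's
# physics-based model also covers grade changes, accelerations, and
# accessory loads, which push real fuel use higher.

G = 9.81        # gravitational acceleration, m/s^2
RHO_AIR = 1.2   # air density, kg/m^3

def wheel_energy_kwh(distance_mi=600, speed_mph=55, mass_lb=80_000,
                     c_rr=0.0063, cd_area_m2=5.4):
    """Energy delivered at the wheels for steady cruising on flat road."""
    m = mass_lb * 0.4536            # kg
    v = speed_mph * 0.447           # m/s
    d = distance_mi * 1609.3        # m
    force = c_rr * m * G + 0.5 * RHO_AIR * cd_area_m2 * v**2   # N
    return force * d / 3.6e6        # J -> kWh

def diesel_benchmark(distance_mi=600, engine_eff=0.42, driveline_eff=0.90,
                     diesel_kwh_per_gal=40.7, kg_co2_per_gal=10.2):
    e_wheel = wheel_energy_kwh(distance_mi)
    fuel_gal = e_wheel / (engine_eff * driveline_eff * diesel_kwh_per_gal)
    return {"wheel_energy_kWh": round(e_wheel),
            "fuel_gal": round(fuel_gal),
            "mpg": round(distance_mi / fuel_gal, 1),
            "CO2_kg": round(fuel_gal * kg_co2_per_gal)}

print(diesel_benchmark())
# With these assumptions: roughly 1,100 kWh at the wheels, about 73
# gallons of diesel, around 8 mpg and ~750 kg of CO2 for the cycle.
```

Because this sketch ignores grade, wind, stops, and accessory loads, it flatters the truck; real long-haul rigs average closer to 6 to 7 miles per gallon, which is exactly the kind of detail the full benchmark simulation is built to capture.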

Now they could perform parallel simulations — based on the same drive-cycle assumptions — of possible replacement fuels and powertrains to see how the cost, carbon emissions, and other performance parameters would compare to the diesel benchmarks.

The battery electric option

When considering how to decarbonize long-haul trucks, a natural first thought is battery power. After all, battery electric cars and pickup trucks are proving highly successful. Why not switch to battery electric long-haul trucks? “Again, the literature is very divided, with some studies saying that this is the best idea ever, and other studies saying that this makes no sense,” says Sayandeep Biswas, a graduate student in chemical engineering.

To assess the battery electric option, the MIT researchers used a physics-based vehicle model plus well-documented estimates for the efficiencies of key components such as the battery pack, generators, motor, and so on. Assuming the previously described drive cycle, they determined operating parameters, including how much power the battery-electric system needs. From there they could calculate the size and weight of the battery required to satisfy the power needs of the battery electric truck.

The outcome was disheartening. Providing enough energy to travel 600 miles without recharging would require a 2 megawatt-hour battery. “That’s a lot,” notes Kariana Moreno Sader, a graduate student in chemical engineering. “It’s the same as what two U.S. households consume per month on average.” And the weight of such a battery would significantly reduce the amount of payload that could be carried. An empty diesel truck typically weighs 20,000 pounds. With a legal limit of 80,000 pounds, there’s room for 60,000 pounds of payload. The 2 MWh battery would weigh roughly 27,000 pounds — significantly reducing the allowable capacity for carrying payload.
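The battery figures can be checked with back-of-envelope arithmetic: 2 MWh over 600 miles implies roughly 3.3 kWh per mile, and at a pack-level specific energy of about 170 watt-hours per kilogram (an illustrative assumption, not a figure from the study) the pack mass lands close to the 27,000 pounds quoted above.

```python
# Back-of-envelope check on the battery figures quoted above.
# The pack-level specific energy is an illustrative assumption.
battery_kwh = 2_000        # 2 MWh to cover the 600-mile drive cycle
range_mi = 600
pack_wh_per_kg = 170       # assumed pack-level specific energy

kwh_per_mile = battery_kwh / range_mi                        # ~3.3 kWh/mile
pack_mass_lb = battery_kwh * 1_000 / pack_wh_per_kg * 2.205  # kg -> lb

print(f"{kwh_per_mile:.1f} kWh/mile, pack mass ~{pack_mass_lb:,.0f} lb")
# -> 3.3 kWh/mile and roughly 26,000 lb, close to the ~27,000 lb cited.
```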

Accounting for that “payload penalty,” the researchers calculated that roughly four electric trucks would be required to replace every three of today’s diesel-powered trucks. Furthermore, each added truck would require an additional driver. The impact on operating expenses would be significant.
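The “four for three” ratio follows from the payload penalty. The sketch below assumes a weight credit for the diesel engine, transmission, and fuel system that the battery replaces; that credit is a hypothetical number chosen for illustration, since the article doesn’t give it, but it shows how a roughly 25 percent payload loss translates into about a third more trucks and drivers.

```python
# Sketch of the fleet-size arithmetic behind "four electric trucks for
# every three diesel trucks." The weight credit for removed diesel
# components is an assumed, illustrative value.
diesel_payload_lb = 60_000              # 80,000-lb limit minus 20,000-lb empty truck
battery_mass_lb = 27_000                # 2 MWh pack, as quoted above
removed_diesel_components_lb = 12_000   # assumed engine/transmission/fuel credit

ev_payload_lb = diesel_payload_lb - battery_mass_lb + removed_diesel_components_lb
trucks_per_diesel_truck = diesel_payload_lb / ev_payload_lb

print(f"Electric-truck payload: {ev_payload_lb:,} lb")
print(f"Trucks needed per diesel truck: {trucks_per_diesel_truck:.2f}")
# With these assumptions: 45,000 lb of payload per electric truck,
# i.e., about 1.33 electric trucks (and drivers) per diesel truck.
```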

Analyzing the emissions reductions that might result from shifting to battery electric long-haul trucks also brought disappointing results. One might assume that using electricity would eliminate CO2 emissions, but when the researchers included the emissions associated with generating that electricity, that assumption didn’t hold.

“Battery electric trucks are only as clean as the electricity used to charge them,” notes Moreno Sader. Most of the time, drivers of long-haul trucks will be charging from national grids rather than dedicated renewable energy plants. According to U.S. Energy Information Administration statistics, fossil fuels generate more than 60 percent of the electricity on the current U.S. power grid, so electric trucks would still be responsible for significant levels of carbon emissions. Manufacturing batteries for the trucks would generate additional CO2 emissions.
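A rough per-mile comparison shows why the near-term gains are small. Using the roughly 3.3 kWh per mile implied by the battery sizing above, an assumed average grid intensity of 0.4 kilograms of CO2 per kilowatt-hour, and an assumed 6.5 miles per gallon for diesel (the last two are round illustrative numbers, not figures from the study), charging emissions land in the same range as diesel tailpipe emissions, before counting battery manufacturing.

```python
# Rough per-mile CO2 comparison on today's grid. The grid intensity and
# diesel fuel economy are illustrative assumptions.
ev_kwh_per_mile = 2_000 / 600     # from the 2 MWh / 600-mile figures above
grid_kg_co2_per_kwh = 0.4         # assumed average U.S. grid intensity
diesel_mpg = 6.5                  # assumed long-haul fuel economy
kg_co2_per_gal_diesel = 10.2      # combustion emissions per gallon

ev_kg_per_mile = ev_kwh_per_mile * grid_kg_co2_per_kwh
diesel_kg_per_mile = kg_co2_per_gal_diesel / diesel_mpg
print(f"Battery electric (charging): ~{ev_kg_per_mile:.2f} kg CO2/mile")
print(f"Diesel (tailpipe):           ~{diesel_kg_per_mile:.2f} kg CO2/mile")
# With these assumptions the gap is modest, and upstream battery
# manufacturing emissions narrow it further.
```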

Building the charging infrastructure would require massive upfront capital investment, as would upgrading the existing grid to reliably meet additional energy demand from the long-haul sector. Accomplishing those changes would be costly and time-consuming, which raises further concern about electrification as a means of decarbonizing long-haul freight.

In short, switching today’s long-haul diesel trucks to battery electric power would bring major increases in costs for the freight industry and negligible carbon emissions benefits in the near term. Analyses assuming various types of batteries as well as other drive cycles produced comparable results.

However, the researchers are optimistic about where the grid is going in the future. “In the long term, say by around 2050, emissions from the grid are projected to be less than half what they are now,” says Moreno Sader. “When we do our calculations based on that prediction, we find that emissions from battery electric trucks would be around 40 percent lower than our calculated emissions based on today’s grid.”

For Moreno Sader, the goal of the MIT research is to help “guide the sector on what would be the best option.” With that goal in mind, she and her colleagues are now examining the battery electric option under different scenarios — for example, assuming battery swapping (a depleted battery isn’t recharged but replaced by a fully charged one), short-haul trucking, and other applications that might produce a more cost-competitive outcome, even for the near term.

A promising option: hydrogen

As the world looks to move away from fossil fuels across all uses, much attention is focusing on hydrogen. Could hydrogen be a good alternative for today’s diesel-burning long-haul trucks?

To find out, the MIT team performed a detailed analysis of the hydrogen option. “We thought that hydrogen would solve a lot of the problems we had with battery electric,” says Biswas. Burning hydrogen produces no CO2 emissions. Its energy content per pound is far higher than a battery’s, so it doesn’t create the weight problem posed by heavy battery packs. In addition, existing compression technology can get enough hydrogen fuel into a regular-sized tank to cover the needed range. “You can actually give drivers the range they want,” he says. “There’s no issue with ‘range anxiety.’”

But while using hydrogen for long-haul trucking would reduce carbon emissions, it would cost far more than diesel. Based on their detailed analysis, the researchers concluded that the main source of added cost is transporting the hydrogen. Hydrogen can be made in a chemical facility, but it then needs to be distributed to refueling stations across the country. Conventionally, there have been two main ways of transporting hydrogen: as a compressed gas and as a cryogenic liquid. As Biswas notes, the former is “super high pressure,” and the latter is “super cold.” The researchers’ calculations show that as much as 80 percent of the cost of delivered hydrogen is due to transportation and refueling, plus there’s the need to build dedicated refueling stations that can meet new environmental and safety standards for handling hydrogen as a compressed gas or a cryogenic liquid.

Having dismissed the conventional options for shipping hydrogen, the researchers turned to a less-common approach: transporting hydrogen using “liquid organic hydrogen carriers” (LOHCs), special organic (carbon-containing) chemical compounds that absorb hydrogen atoms under certain conditions and release them under others.

LOHCs are in use today to deliver small amounts of hydrogen for commercial use. Here’s how the process works: In a chemical plant, the carrier compound is brought into contact with hydrogen in the presence of a catalyst under elevated temperature and pressure, and the compound picks up the hydrogen. The “hydrogen-loaded” compound — still a liquid — is then transported under atmospheric conditions. When the hydrogen is needed, the compound is again exposed to a temperature increase and a different catalyst, and the hydrogen is released.
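As a concrete illustration of how much hydrogen such a carrier holds, consider the toluene/methylcyclohexane pair, one widely studied LOHC system (the article does not specify which carrier the MIT team assumes). Loading toluene with three hydrogen molecules yields methylcyclohexane, which carries about 6 percent hydrogen by weight.

```python
# Hydrogen capacity of one widely studied LOHC pair, toluene <->
# methylcyclohexane, shown purely for illustration; the carrier assumed
# in the MIT analysis is not specified here.
M_H2 = 2.016     # molar mass of H2, g/mol
M_MCH = 98.19    # molar mass of methylcyclohexane (C7H14), g/mol

wt_fraction_h2 = 3 * M_H2 / M_MCH   # toluene + 3 H2 -> methylcyclohexane
print(f"Hydrogen content: {wt_fraction_h2:.1%} by weight")   # ~6.2%
```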

LOHCs thus appear to be ideal hydrogen carriers for long-haul trucking. They’re liquid, so they can easily be delivered to existing refueling stations, where the hydrogen would be released; and they contain at least as much energy per gallon as hydrogen in a cryogenic liquid or compressed gas form. However, a detailed analysis of using hydrogen carriers showed that the approach would decrease emissions but at a considerable cost.

The problem begins with the “dehydrogenation” step at the retail station. Releasing the hydrogen from the chemical carrier requires heat, which is generated by burning some of the hydrogen being carried by the LOHC. The researchers calculate that getting the needed heat takes 36 percent of that hydrogen. (In theory, the process would take only 27 percent — but in reality, that efficiency won’t be achieved.) So out of every 100 units of starting hydrogen, 36 units are now gone.

But that’s not all. The hydrogen that comes out is at near-ambient pressure. So the facility dispensing the hydrogen will need to compress it — a process that the team calculates will use up 20-30 percent of the starting hydrogen.

Because of the needed heat and compression, there’s now less than half of the starting hydrogen left to be delivered to the truck — and as a result, the hydrogen fuel becomes twice as expensive. The bottom line is that the technology works, but “when it comes to really beating diesel, the economics don’t work. It’s quite a bit more expensive,” says Biswas. In addition, the refueling stations would require expensive compressors and auxiliary units such as cooling systems. The capital investment and the operating and maintenance costs together imply that the market penetration of hydrogen refueling stations will be slow.
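The arithmetic behind “less than half” is straightforward: of every 100 units of hydrogen that leave the plant, about 36 are burned to supply dehydrogenation heat and another 20 to 30 are consumed compressing what remains, and the cost per delivered unit scales inversely with the fraction that actually reaches the truck.

```python
# Hydrogen losses at the refueling station, using the figures above.
start = 100.0                       # units of hydrogen leaving the plant
heat_loss = 0.36 * start            # burned to supply dehydrogenation heat

for compression_loss in (0.20, 0.30):
    delivered = start - heat_loss - compression_loss * start
    cost_multiplier = start / delivered
    print(f"compression loss {compression_loss:.0%}: "
          f"{delivered:.0f} units delivered, "
          f"~{cost_multiplier:.1f}x cost per delivered unit")
# -> 44 units down to 34 units delivered: less than half remains, so the
#    delivered hydrogen costs roughly twice as much, or more.
```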

A better strategy: onboard release of hydrogen from LOHCs

Given the potential benefits of using LOHCs, the researchers focused on how to deal with both the heat needed to release the hydrogen and the energy needed to compress it. “That’s when we had the idea,” says Biswas. “Instead of doing the dehydrogenation [hydrogen release] at the refueling station and then loading the truck with hydrogen, why don’t we just take the LOHC and load that onto the truck?” Like diesel, LOHC is a liquid, so it’s easily transported and pumped into trucks at existing refueling stations. “We’ll then make hydrogen as it’s needed based on the power demands of the truck — and we can capture waste heat from the engine exhaust and use it to power the dehydrogenation process,” says Biswas.

In their proposed plan, hydrogen-loaded LOHC is created at a chemical “hydrogenation” plant and then delivered to a retail refueling station, where it’s pumped into a long-haul truck. Onboard the truck, the loaded LOHC goes into the fuel-storage tank. From there it moves to the “dehydrogenation unit” — the reactor where heat and a catalyst together promote chemical reactions that separate the hydrogen from the LOHC. The hydrogen is sent to the powertrain, where it burns, producing energy that propels the truck forward.

Hot exhaust from the powertrain goes to a “heat-integration unit,” where its waste heat energy is captured and returned to the reactor to help drive the reaction that releases hydrogen from the loaded LOHC. The unloaded LOHC is pumped back into the fuel-storage tank, where it’s kept in a separate compartment to keep it from mixing with the loaded LOHC. From there, it’s pumped back out at the retail refueling station and then transported back to the hydrogenation plant to be loaded with more hydrogen.

Switching to onboard dehydrogenation brings down costs by eliminating the need for extra hydrogen compression and by using waste heat in the engine exhaust to drive the hydrogen-release process. So how does their proposed strategy look compared to diesel? Based on a detailed analysis, the researchers determined that using their strategy would be 18 percent more expensive than using diesel, and emissions would drop by 71 percent.

But those results need some clarification. The 18 percent cost premium of using LOHC with onboard hydrogen release is based on the price of diesel fuel in 2020. In spring of 2023 the price was about 30 percent higher. Assuming the 2023 diesel price, the LOHC option is actually cheaper than using diesel.
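The flip happens because the 18 percent premium was measured against the 2020 diesel price. Holding the LOHC-based cost fixed, a quick calculation (normalized costs, not actual dollar figures) shows it coming in below diesel once diesel is about 30 percent more expensive.

```python
# Why the onboard-LOHC option beats diesel at spring-2023 prices,
# using the normalized figures above (not actual dollar costs).
diesel_2020 = 1.00                   # 2020 diesel cost per mile, normalized
lohc_cost = 1.18 * diesel_2020       # 18 percent premium vs. the 2020 baseline
diesel_2023 = 1.30 * diesel_2020     # diesel about 30 percent higher in 2023

print(f"LOHC cost relative to 2023 diesel: {lohc_cost / diesel_2023:.2f}")
# -> 0.91: about 9 percent cheaper than diesel at spring-2023 prices.
```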

Both the cost and emissions outcomes are affected by another assumption: the use of “blue hydrogen,” which is hydrogen produced from natural gas with carbon capture and storage. Another option is to assume the use of “green hydrogen,” which is hydrogen produced using electricity generated from renewable sources, such as wind and solar. Green hydrogen is currently much more expensive than blue hydrogen, so assuming green hydrogen would drive the costs up dramatically.

If in the future the price of green hydrogen drops, the researchers’ proposed plan would shift to green hydrogen — and then the decline in emissions would no longer be 71 percent but rather close to 100 percent. There would be almost no emissions associated with the researchers’ proposed plan for using LOHCs with onboard hydrogen release.

Comparing the options on cost and emissions

To compare the options, Moreno Sader prepared bar charts showing the per-mile cost of shipping by truck in the United States and the CO2 emissions that result using each of the fuels and approaches discussed above: diesel fuel, battery electric, hydrogen as a cryogenic liquid or compressed gas, and LOHC with onboard hydrogen release. The LOHC strategy with onboard dehydrogenation looked promising on both the cost and the emissions charts. In addition to such quantitative measures, the researchers believe that their strategy addresses two other, less-obvious challenges in finding a less-polluting fuel for long-haul trucks.

First, the introduction of the new fuel and trucks to use it must not disrupt the current freight-delivery setup. “You have to keep the old trucks running while you’re introducing the new ones,” notes Green. “You cannot have even a day when the trucks aren’t running because it’d be like the end of the economy. Your supermarket shelves would all be empty; your factories wouldn’t be able to run.” The researchers’ plan would be completely compatible with the existing diesel supply infrastructure and would require relatively minor retrofits to today’s long-haul trucks, so the current supply chains would continue to operate while the new fuel and retrofitted trucks are introduced.

Second, the strategy has the potential to be adopted globally. Long-haul trucking is important in other parts of the world, and Moreno Sader thinks that “making this approach a reality is going to have a lot of impact, not only in the United States but also in other countries,” including her own country of origin, Colombia. “This is something I think about all the time.” The approach is compatible with the current diesel infrastructure, so the only requirement for adoption is to build the chemical hydrogenation plant. “And I think the capital expenditure related to that will be less than the cost of building a new fuel-supply infrastructure throughout the country,” says Moreno Sader.

Testing in the lab

“We’ve done a lot of simulations and calculations to show that this is a great idea,” notes Biswas. “But there’s only so far that math can go to convince people.” The next step is to demonstrate their concept in the lab.

To that end, the researchers are now assembling all the core components of the onboard hydrogen-release reactor as well as the heat-integration unit that’s key to transferring heat from the engine exhaust to the hydrogen-release reactor. They estimate that this spring they’ll be ready to demonstrate their ability to release hydrogen and confirm the rate at which it’s formed. And — guided by their modeling work — they’ll be able to fine-tune critical components for maximum efficiency and best performance.

The next step will be to add an appropriate engine, specially equipped with sensors to provide the critical readings they need to optimize the performance of all their core components together. By the end of 2024, the researchers hope to achieve their goal: the first experimental demonstration of a power-dense, robust onboard hydrogen-release system with highly efficient heat integration.

In the meantime, they believe that results from their work to date should help spread the word, bringing their novel approach to the attention of other researchers and experts in the trucking industry who are now searching for ways to decarbonize long-haul trucking.

Financial support for development of the representative drive cycle and the diesel benchmarks as well as the analysis of the battery electric option was provided by the MIT Mobility Systems Center of the MIT Energy Initiative. Analysis of LOHC-powered trucks with onboard dehydrogenation was supported by the MIT Climate and Sustainability Consortium. Sayandeep Biswas is supported by a fellowship from the Martin Family Society of Fellows for Sustainability, and Kariana Moreno Sader received fellowship funding from MathWorks through the MIT School of Science.