3 Questions: Paloma Duong on the complexities of Cuban culture

As a state run by a Communist Party, Cuba appears set apart from many of its neighbors in the Americas. One thing largely lost as a result is a nuanced understanding of the perspectives of Cuban citizens. MIT’s Paloma Duong, an associate professor in the program in Comparative Media Studies/Writing, has helped fill this void with a new book that closely examines contemporary media — especially online communities and music — to explore what Cubans think about the contemporary world and what outsiders think about Cuba. The book, “Portable Postsocialisms: New Cuban Mediascapes after the End of History,” has just been published by the University of Texas Press. MIT News spoke with Duong about her work.

Q: What is the book about?

A: The book looks at a specific moment in Cuban history, the first two decades of the 21st century, as a case study of the relationship between culture, politics, and emergent media technologies. This period brought greater access to the internet and digital media technologies. The 1990s are known as the “Special Period” in Cuba, a decade of economic collapse and disorientation. Yet while the turn of the 21st century is a moment of profound change, images of a Cuba frozen in time endure.

One of the book’s focal points is to delve into the cultural and political discourses of change and continuity produced in this new media context. What is this telling us about Cubans’ experience of postsocialism — that is, the moment when the old referents of socialism still exist in everyday experience but socialism as a radical project of social transformation no longer appears as a viable collective goal? And, in turn, what can this tell us about the more general global experience concerning the demise of and desire for socialist utopias in this time period?

That question also requires a look at how global narratives and images about Cuba circulate. The symbolic weight of Cuba as the last bastion of socialism, as inspiration or cautionary tale existing outside of historical time, is one of them. I examine Cuba as a traveling media object invested with competing political desires. Even during the Prohibition Era in the U.S. you can already hear and see Cuba as a provider of transgressive desires to the American imagination in songs and advertising from that time.

Top-down narratives are routinely imposed on Cubans, either by their own government or by foreign observers exoticizing Cubans. I wanted to understand how Cubans were narrating their own experience of change. But I also wanted to recognize the international impact of the Cuban Revolution of 1959 and account for how its global constituents experienced its denouement. 

Q: The book looks at Cuban culture with reference to music, fashion, online communities, and more. Why did you decide to explore all these cultural artifacts?

A: Because I was looking at both Cubans’ accounts of postsocialism, and at Cuba as an object of imagination traveling around the world, it seemed to me impossible to just choose one medium. The way we construct our images of the world, and ourselves, is intrinsically multimedia. We don’t just get all our information from literature, or film, or news media alone. Instead, I focus on specific narratives and images of change — of womanhood, of economic reform, of Internet access, and so on — looking at how they are reproduced or contested across media practices and cultural objects.

I use the term “portable” in different ways to describe these operations. A song, for instance, can be portable in many ways. Digital and especially streaming media open new circuits of music exchange and consumption. But the aesthetic experience of a song is itself a portable one; it lingers and remains with you. And whether analyzing songs, advertising, memes, or more, I study objects and practices that allow us to see the double status of Cuba, as a symbol and as an experience.

In this sense the book is about Cuba, but it is also about ourselves. We tend to look at Cuba through a Cold War framework that casts the country as an exception with respect to former socialist countries, to Latin America, to the capitalist world. But what happens if we look at Cuba as [also] participating in that world, not as an exception but as a particular experience of broader transformations? I’m not saying Cuba is the same as everywhere else. But the premise of the book is that Cuba is not an exceptional place outside of history. In fact, I argue that the narrative of its exceptionality is the key to understanding our shared historical moment and the political dimensions of our cultural and media practices.

Q: How would you say this approach sits with reference to other studies of modern Cuba?

A: There are other, more traditional scholarly ways of looking at Cuba. Some perspectives emphasize the liberal individual confronting an authoritarian state, foregrounding repression and censorship. Others focus instead on the Cuban nation-state as resisting global markets and transnational capital.

There are merits to these perspectives. But when only those perspectives predominate we miss the ways in which both the state and markets might dispossess everyday citizens. In looking at the cultural responses of people, you see citizens picking up on the fact that the global markets are leaving them behind, that the state is leaving them behind. They are not getting either what the state promises, which is social welfare, or what the markets promise, which is upward mobility. The book shows how abandoning Cold War frameworks of analysis, and how taking into account the ways in which cultural and media practices shape our political experiences, can offer a new understanding of Cuba but also of our own global present.

A new test could predict how heart attack patients will respond to mechanical pumps

Every year, around 50,000 people in the United States experience cardiogenic shock — a life-threatening condition, usually caused by a severe heart attack, in which the heart can’t pump enough blood for the body’s needs.

Many of these patients end up receiving help from a mechanical pump that can temporarily help the heart pump blood until it recovers enough to function on its own. However, in nearly half of these patients, the extra help leads to an imbalance between the left and right ventricles, which can pose danger to the patient.

In a new study, MIT researchers have discovered why that imbalance occurs, and identified factors that make it more likely. They also developed a test that doctors could use to determine whether this dysfunction will occur in a particular patient, which could give doctors more confidence when deciding whether to use these pumps, known as ventricular assist devices (VADs).

“As we improve the mechanistic understanding of how these technologies interact with the native physiology, we can improve device utility. And if we have more algorithms and metrics-based guidance, that will ease use for clinicians. This will both improve outcomes across these patients and increase use of these devices more broadly,” says Kimberly Lamberti, an MIT graduate student and the lead author of the study.

Elazer Edelman, the Edward J. Poitras Professor in Medical Engineering and Science and the director of MIT’s Institute for Medical Engineering and Science (IMES), is the senior author of the paper, which appears today in Science Translational Medicine. Steven Keller, an assistant professor of medicine at Johns Hopkins School of Medicine, is also an author of the paper.

Edelman notes that “the beauty of this study is that it uses pathophysiologic insight and advanced computational analyses to provide clinicians with straightforward guidelines as to how to deal with the exploding use of these valuable mechanical devices. We use these devices increasingly in our sickest patients and now have greater strategies as to how to optimize their utility.”

Imbalance in the heart

To treat patients who are experiencing cardiogenic shock, a percutaneous VAD can be inserted through the arteries until it is positioned across the aortic valve, where it helps to pump blood out of the left ventricle. The left ventricle is responsible for pumping blood to most of the organs of the body, while the right ventricle pumps blood to the lungs.

In most cases, the device may be removed after a week or so, once the heart is able to pump on its own. While effective for many patients, in some people the devices can disrupt the coordination and balance between the right and left ventricles, which contract and relax synchronously. Studies have found that this disruption occurs in up to 43 percent of patients who receive VADs.

“The left and right ventricles are highly coupled, so as the device disrupts flow through the system, that can unmask or induce right heart failure in many patients,” Lamberti says. “Across the field it’s well-known that this is a concern, but the mechanism that’s creating that is unclear, and there are limited metrics to predict which patients will experience it.”

In this study, the researchers wanted to figure out why this failure occurs, and come up with a way to help doctors predict whether it will happen for a given patient. If doctors knew that the right heart would also need support, they could implant another VAD that helps the right ventricle.

“What we were trying to do with this study was predict any issues earlier in the patient’s course, so that action can be taken before that extreme state of failure has been reached,” Lamberti says.

To do that, the researchers studied the devices in an animal model of heart failure. A VAD was implanted in the left ventricle of each animal, and the researchers analyzed several different metrics of heart function as the pumping speed of the device was increased and decreased.

The researchers found that the most important factor in how the right ventricle responded to VAD implantation was how well the pulmonary vascular system — the network of vessels that carries blood between the heart and lungs — adapted to changes in blood volume and flow induced by the VAD.

This system was best able to handle that extra flow if it could adjust its resistance (how much it impedes steady blood flow through the vessels) and compliance (how well the vessels stretch to buffer large pulses of blood volume).

“We found that in the healthy state, compliance and resistance could change pretty rapidly to accommodate the changes in volume due to the device. But with progressive disease, that ability to adapt becomes diminished,” Lamberti says.

A dynamic test

The researchers also showed that measuring this pulmonary vascular compliance and its adaptability could offer a way to predict how a patient will respond to left ventricular assistance. In a dataset of eight patients who had received a left VAD, those measurements correlated with right heart state and predicted how well each patient adapted to the device, validating the findings from the animal study.

To do this test, doctors would need to implant the device as usual and then ramp up the speed while measuring the compliance of the pulmonary vascular system. The researchers determined a metric that can assess this compliance by using just the VAD itself and a pulmonary artery catheter that is commonly implanted in these patients.

“We created this way to dynamically test the system while simultaneously maintaining support of the heart,” Lamberti says. “Once the device is initiated, this quick test could be run, which would inform clinicians of whether the patient might need right heart support.”
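
The study derives its own compliance metric from the VAD and catheter signals; as a rough illustration of the kind of calculation involved, the sketch below computes textbook estimates of pulmonary vascular resistance (mean pressure drop over flow) and compliance (stroke volume over pulse pressure) at each pump-speed step. The variable names, numbers, and two-element simplification are assumptions for illustration, not the paper's actual algorithm.

```python
# Illustrative only: textbook estimates of pulmonary vascular resistance and
# compliance at each VAD speed step, from quantities a pulmonary artery
# catheter can measure. The study's own metric is derived differently.

def resistance(mean_pa_pressure, wedge_pressure, cardiac_output):
    """Pulmonary vascular resistance (mmHg*min/L): mean pressure drop / flow."""
    return (mean_pa_pressure - wedge_pressure) / cardiac_output

def compliance(stroke_volume, systolic_pa, diastolic_pa):
    """Pulmonary arterial compliance (mL/mmHg): stroke volume / pulse pressure."""
    return stroke_volume / (systolic_pa - diastolic_pa)

# Hypothetical ramp: (speed level, mPAP, wedge, CO [L/min], SV [mL], sPAP, dPAP)
ramp = [
    (1, 28, 12, 3.8, 48, 40, 20),
    (2, 26, 11, 4.4, 50, 37, 19),
    (3, 25, 10, 5.0, 52, 35, 18),
]

for level, mpap, pcwp, co, sv, spap, dpap in ramp:
    print(f"speed {level}: R = {resistance(mpap, pcwp, co):.2f} mmHg*min/L, "
          f"C = {compliance(sv, spap, dpap):.2f} mL/mmHg")
```

In the study's framing, a pulmonary vascular system whose resistance and compliance keep adapting as pump speed rises signals likely tolerance of left-sided support, while diminished adaptation flags a risk of right heart failure.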

The researchers now plan to expand these findings with additional animal studies and to continue collaborating with the manufacturers of these devices, with the goal of running clinical studies to evaluate whether the test provides information valuable to doctors.

“Right now, there are few metrics being used to predict device tolerance. Device selection and decision-making are most often based on experiential evidence from the physicians at each institution. Having this understanding will hopefully allow physicians to determine which patients will be intolerant to device support and provide guidance for how to best treat each patient based on right heart state,” Lamberti says.

The research was funded by the National Heart, Lung, and Blood Institute; the National Institute of General Medical Sciences; and Abiomed.

Anantha Chandrakasan named MIT’s inaugural chief innovation and strategy officer

Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science, has been named MIT’s first chief innovation and strategy officer, effective immediately. He will continue to serve as dean of engineering, a role he has held since 2017.

As chief innovation and strategy officer, Chandrakasan will work closely with MIT President Sally Kornbluth to advance the ambitious agenda that she has laid out in the first year of her presidency. He will collaborate with key stakeholders across MIT, as well as external partners, to launch initiatives and new collaborations in support of these strategic priorities.

“I was immediately impressed by Anantha’s can-do attitude and his clear interest in working with us to develop and advance our priorities for the Institute,” President Kornbluth says. “With his signature energy, creativity, and enthusiasm, he has a gift for organizing complex initiatives and ideas and making sure they move forward with alacrity. Combined with his strategic insight, deep knowledge across many subject areas, and terrific record in raising funds for important ideas, Anantha is uniquely suited to serve MIT in this new role, and I’m delighted he has agreed to take it on.”

In his new role, Chandrakasan will help develop and implement plans to advance research, education, and innovation in areas that President Kornbluth has identified as her top priorities — such as climate change and sustainability, artificial intelligence, and the life sciences. He will also play a leading role in efforts to secure the resources needed for MIT researchers to pursue bold work in these key areas.

“I am thrilled and honored to help advance President Kornbluth’s vision for MIT in this new role,” Chandrakasan says. “Working closely with faculty, staff, and students across the Institute, I am excited to help shape and launch initiatives that will accelerate research and innovation on some of the world’s most urgent needs. My hope is to enable our researchers with the support, resources, and infrastructure they need to maximize the impact of their work.”

Working closely with MIT’s existing programs in entrepreneurship, Chandrakasan will develop strategies to accelerate innovation across the Institute. These efforts will aim to grow and support these programs, while identifying new opportunities to support student and faculty entrepreneurs and maximize their impact.

In addition to examining ways to advance research, entrepreneurship, and collaborations, Chandrakasan will work with Provost Cynthia Barnhart and Chancellor Melissa Nobles to advance new educational initiatives. This will include developing new programs and tracks to optimize students’ preparation for a variety of career paths.

“In many ways, this role is a natural extension of the significant work Anantha has already been doing to help shape strategic priorities on an Institute level,” Barnhart says. “All of MIT stands to benefit from his extensive experience launching and building new programs and initiatives.”

As dean of engineering since 2017, Chandrakasan has implemented a variety of interdisciplinary programs, creating new models for how academia and industry can work together to accelerate the pace of research. This has resulted in the launch of initiatives including the MIT Climate and Sustainability Consortium, the MIT-IBM Watson AI Lab, the MIT-Takeda Program, the MIT and Accenture Convergence Initiative, the MIT Mobility Initiative, the MIT Quest for Intelligence, the MIT AI Hardware Program, the MIT-Northpond Program, the MIT Faculty Founder Initiative, and the MIT-Novo Nordisk Artificial Intelligence Postdoctoral Fellows Program.

Chandrakasan has also played a role as dean in establishing a variety of initiatives beyond the School of Engineering. He was instrumental in the 2018 founding of the Schwarzman College of Computing, the most significant structural change to MIT in nearly 70 years. He also has served in leadership roles on MIT Fast Forward, an Institute-wide plan for addressing climate change; as the inaugural chair of the Abdul Latif Jameel Clinic for Machine Learning in Health; and as the co-chair of the academic workstream for MIT’s Task Force 2021. Before becoming dean, Chandrakasan led an Institute-wide working group to guide the development of policies and procedures related to MIT’s 2016 launch of The Engine, and also served on The Engine’s inaugural board.

Chandrakasan has focused as dean on fostering a sense of community within MIT’s largest school. He has launched several programs to give students and staff a more active role in shaping the initiatives and operations of the school, including the Staff Advice & Implementation Committee, the undergraduate Student Advisory Group, the Graduate Student Advisory Group (GradSage), the Gender Equity Committee, and the MIT School of Engineering Postdoctoral Fellowship Program for Engineering Excellence. Working closely with GradSage, Chandrakasan has also played a role in establishing the Daniel J. Riccio Graduate Engineering Leadership Program.

Prior to becoming dean in 2017, Chandrakasan served for six years as head of the Department of Electrical Engineering and Computer Science (EECS), MIT’s largest academic department. As department head, he led the development of initiatives that continue to have an impact across MIT. He created Rising Stars in EECS, an academic career workshop that rotates among various universities and has become a model for similar efforts in other disciplines. Under his leadership, EECS also launched the SuperUROP program as well as Start6, which has since become StartMIT, a program supporting students interested in entrepreneurship.

MIT researchers remotely map crops, field by field

Crop maps help scientists and policymakers track global food supplies and estimate how they might shift with climate change and growing populations. But getting accurate maps of the types of crops that are grown from farm to farm often requires on-the-ground surveys that only a handful of countries have the resources to maintain.

Now, MIT engineers have developed a method to quickly and accurately label and map crop types without requiring in-person assessments of every single farm. The team’s method uses a combination of Google Street View images, machine learning, and satellite data to automatically determine the crops grown throughout a region, from one fraction of an acre to the next. 

The researchers used the technique to automatically generate the first nationwide crop map of Thailand — a smallholder country where small, independent farms make up the predominant form of agriculture. The team created a border-to-border map of Thailand’s four major crops — rice, cassava, sugarcane, and maize — and determined which of the four types was grown at each 10-meter interval, without gaps, across the entire country. The resulting map achieved an accuracy of 93 percent, which the researchers say is comparable to on-the-ground mapping efforts in high-income, big-farm countries.

The team is now applying its mapping technique to other countries such as India, where small farms sustain most of the population but the types of crops grown from farm to farm have historically been poorly recorded.

“It’s a longstanding gap in knowledge about what is grown around the world,” says Sherrie Wang, the d’Arbeloff Career Development Assistant Professor in MIT’s Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS). Wang, who is one of the new shared faculty hires between the MIT Schwarzman College of Computing and departments across MIT, says, “The final goal is to understand agricultural outcomes like yield, and how to farm more sustainably. One of the key preliminary steps is to map what is even being grown — the more granularly you can map, the more questions you can answer.”

Wang, along with MIT graduate student Jordi Laguarta Soler and Thomas Friedel of the agtech company PEAT GmbH, will present a paper detailing their mapping method later this month at the AAAI Conference on Artificial Intelligence.

Ground truth

Smallholder farms are often run by a single family or farmer who subsists on the crops and livestock that they raise. It’s estimated that smallholder farms support two-thirds of the world’s rural population and produce 80 percent of the world’s food. Keeping tabs on what is grown and where is essential to tracking and forecasting food supplies around the world. But the majority of these small farms are in low- to middle-income countries, where few resources are devoted to keeping track of individual farms’ crop types and yields.

Crop mapping efforts are mainly carried out in high-income regions such as the United States and Europe, where government agricultural agencies oversee crop surveys and send assessors to farms to label crops from field to field. These “ground truth” labels are then fed into machine-learning models that make connections between the ground labels of actual crops and satellite signals of the same fields. They then label and map wider swaths of farmland that assessors don’t cover but that satellites automatically do.

“What’s lacking in low- and middle-income countries is this ground label that we can associate with satellite signals,” Laguarta Soler says. “Getting these ground truths to train a model in the first place has been limited in most of the world.”

The team realized that, while many developing countries do not have the resources to maintain crop surveys, they could potentially use another source of ground data: roadside imagery, captured by services such as Google Street View and Mapillary, which send cars throughout a region to take continuous 360-degree images with dashcams and rooftop cameras.

In recent years, such services have expanded into low- and middle-income countries. While the goal of these services is not specifically to capture images of crops, the MIT team saw that they could search the roadside images to identify crops.

Cropped image

In their new study, the researchers worked with Google Street View (GSV) images taken throughout Thailand — a country that the service has recently imaged fairly thoroughly, and which consists predominantly of smallholder farms.

Starting with over 200,000 GSV images randomly sampled across Thailand, the team filtered out images that depicted buildings, trees, and general vegetation. About 81,000 images were crop-related. They set aside 2,000 of these, which they sent to an agronomist, who determined and labeled each crop type by eye. They then trained a convolutional neural network to automatically generate crop labels for the other 79,000 images, drawing on tools including iNaturalist, a web-based crowdsourced biodiversity database, and GPT-4V, a “multimodal large language model” that enables a user to input an image and ask the model to identify what it is depicting. For each of the 81,000 images, the model generated a label of one of four crops that the image was likely depicting — rice, maize, sugarcane, or cassava.

The researchers then paired each labeled image with the corresponding satellite data taken of the same location throughout a single growing season. These satellite data include measurements across multiple wavelengths, such as a location’s greenness and its reflectivity (which can be a sign of water). 

“Each type of crop has a certain signature across these different bands, which changes throughout a growing season,” Laguarta Soler notes.

The team trained a second model to make associations between a location’s satellite data and its corresponding crop label. They then used this model to process satellite data from the rest of the country, where crop labels were not available. From the associations it learned, the model assigned crop labels across Thailand, generating a country-wide map of crop types at a resolution of 10 meters.
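
The paper details the actual models; as a minimal sketch of this second stage, the snippet below trains a generic classifier to map a location's season-long satellite signal (several spectral bands sampled across the growing season) to one of the four crop labels produced in the first stage. The synthetic data, feature shapes, and choice of a random forest are illustrative assumptions, not the study's architecture.

```python
# Minimal sketch of stage two, under assumed shapes: classify each location's
# season-long satellite band time series into one of four crops. Synthetic
# random data stands in for real imagery, so accuracy here is chance-level.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
CROPS = ["rice", "maize", "sugarcane", "cassava"]

n_locations, n_timesteps, n_bands = 5000, 24, 6        # e.g., biweekly, 6 bands
X = rng.normal(size=(n_locations, n_timesteps * n_bands))  # flattened series
y = rng.integers(0, len(CROPS), size=n_locations)      # labels from stage one

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Applied wall-to-wall, every 10-meter pixel's band time series gets a label,
# yielding a gap-free national crop map.
```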

This first-of-its-kind crop map included locations corresponding to the 2,000 GSV images that the researchers had originally set aside and sent to the agronomist. These human-labeled images were used to validate the map’s labels, and the map matched the expert “gold standard” labels 93 percent of the time.

“In the U.S., we’re also looking at over 90 percent accuracy, whereas with previous work in India, we’ve only seen 75 percent because ground labels are limited,” Wang says. “Now we can create these labels in a cheap and automated way.”

The researchers are moving to map crops across India, where roadside images via Google Street View and other services have recently become available.

“There are over 150 million smallholder farmers in India,” Wang says. “India is covered in agriculture, almost wall-to-wall farms, but very small farms, and historically it’s been very difficult to create maps of India because there are very sparse ground labels.”

The team is working to generate crop maps in India, which could be used to inform policies for assessing and bolstering yields as global temperatures and populations rise.

“What would be interesting would be to create these maps over time,” Wang says. “Then you could start to see trends, and we can try to relate those things to anything like changes in climate and policies.”

With just a little electricity, MIT researchers boost common catalytic reactions

A simple technique that uses small amounts of energy could boost the efficiency of some key chemical processing reactions, by up to a factor of 100,000, MIT researchers report. These reactions are at the heart of petrochemical processing, pharmaceutical manufacturing, and many other industrial chemical processes.

The surprising findings are reported today in the journal Science, in a paper by MIT graduate student Karl Westendorff, professors Yogesh Surendranath and Yuriy Roman-Leshkov, and two others.

“The results are really striking,” says Surendranath, a professor of chemistry and chemical engineering. Rate increases of that magnitude have been seen before but in a different class of catalytic reactions known as redox half-reactions, which involve the gain or loss of an electron. The dramatically increased rates reported in the new study “have never been observed for reactions that don’t involve oxidation or reduction,” he says.

The non-redox chemical reactions studied by the MIT team are catalyzed by acids. “If you’re a first-year chemistry student, probably the first type of catalyst you learn about is an acid catalyst,” Surendranath says. There are many hundreds of such acid-catalyzed reactions, “and they’re super important in everything from processing petrochemical feedstocks to making commodity chemicals to doing transformations in pharmaceutical products. The list goes on and on.”

“These reactions are key to making many products we use daily,” adds Roman-Leshkov, a professor of chemical engineering and chemistry.

But the people who study redox half-reactions, also known as electrochemical reactions, are part of an entirely different research community than those studying non-redox chemical reactions, known as thermochemical reactions. As a result, even though the technique used in the new study, which involves applying a small external voltage, was well-known in the electrochemical research community, it had not been systematically applied to acid-catalyzed thermochemical reactions.

People working on thermochemical catalysis, Surendranath says, “usually don’t consider” the role of the electrochemical potential at the catalyst surface, “and they often don’t have good ways of measuring it. And what this study tells us is that relatively small changes, on the order of a few hundred millivolts, can have huge impacts — orders of magnitude changes in the rates of catalyzed reactions at those surfaces.”

“This overlooked parameter of surface potential is something we should pay a lot of attention to because it can have a really, really outsized effect,” he says. “It changes the paradigm of how we think about catalysis.”
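
As a back-of-envelope sense of scale (an illustration only, not the paper's analysis): if a reaction rate scaled exponentially with surface potential in a Tafel/Nernst-like form, exp(F·ΔV/RT), then a few hundred millivolts would indeed span roughly five orders of magnitude.

```python
# Back-of-envelope only: IF rate scaled as exp(F*dV/(R*T)), an assumed
# Tafel/Nernst-style dependence rather than the mechanism established in the
# paper, a few hundred millivolts spans several orders of magnitude in rate.
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol*K)
T = 298.0     # room temperature, K

for dv_mv in (100, 200, 300):
    factor = math.exp(F * (dv_mv / 1000.0) / (R * T))
    print(f"{dv_mv} mV -> rate factor ~ {factor:.1e}")
# 300 mV -> ~1.2e+05, the same order as the reported factor of 100,000
```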

Chemists traditionally think about surface catalysis based on the chemical binding energy of molecules to active sites on the surface, which influences the amount of energy needed for the reaction, he says. But the new findings show that the electrostatic environment is “equally important in defining the rate of the reaction.”

The team has already filed a provisional patent application on parts of the process and is working on ways to apply the findings to specific chemical processes. Westendorff says their findings suggest that “we should design and develop different types of reactors to take advantage of this sort of strategy. And we’re working right now on scaling up these systems.”

While their experiments so far were done with a two-dimensional planar electrode, most industrial reactions are run in three-dimensional vessels filled with powders. Catalysts are distributed through those powders, providing a lot more surface area for the reactions to take place. “We’re looking at how catalysis is currently done in industry and how we can design systems that take advantage of the already existing infrastructure,” Westendorff says.

Surendranath adds that these new findings “raise tantalizing possibilities: Is this a more general phenomenon? Does electrochemical potential play a key role in other reaction classes as well? In our mind, this reshapes how we think about designing catalysts and promoting their reactivity.”

Roman-Leshkov adds that “traditionally people who work in thermochemical catalysis would not associate these reactions with electrochemical processes at all. However, introducing this perspective to the community will redefine how we can integrate electrochemical characteristics into thermochemical catalysis. It will have a big impact on the community in general.”

While there has typically been little interaction between electrochemical and thermochemical catalysis researchers, Surendranath says, “this study shows the community that there’s really a blurring of the line between the two, and that there is a huge opportunity in cross-fertilization between these two communities.”

Westendorff adds that to make it work, “you have to design a system that’s pretty unconventional to either community to isolate this effect.” And that helps explain why such a dramatic effect had never been seen before. He notes that even their paper’s editor asked them why this effect hadn’t been reported before. The answer has to do with “how disparate those two ideologies were before this,” he says. “It’s not just that people don’t really talk to each other. There are deep methodological differences between how the two communities conduct experiments. And this work is really, we think, a great step toward bridging the two.”

In practice, the findings could lead to far more efficient production of a wide variety of chemical materials, the team says. “You get orders of magnitude changes in rate with very little energy input,” Surendranath says. “That’s what’s amazing about it.”

The findings, he says, “build a more holistic picture of how catalytic reactions at interfaces work, irrespective of whether you’re going to bin them into the category of electrochemical reactions or thermochemical reactions.” He adds that “it’s rare that you find something that could really revise our foundational understanding of surface catalytic reactions in general. We’re very excited.”

“This research is of the highest quality,” says Costas Vayenas, a professor of engineering at the University of Patras, in Greece, who was not associated with the study. The work “is very promising for practical applications, particularly since it extends previous related work in redox catalytic systems,” he says.

The team included MIT postdoc Max Hulsey PhD ’22 and graduate student Thejas Wesley PhD ’23, and was supported by the Air Force Office of Scientific Research and the U.S. Department of Energy’s Office of Basic Energy Sciences.

Hitchhiking cancer vaccine makes progress in the clinic

Therapeutic cancer vaccines are an appealing strategy for treating malignancies. In theory, when a patient is injected with peptide antigens — protein fragments from mutant proteins only expressed by tumor cells — T cells learn to recognize and attack cancer cells expressing the corresponding protein. By teaching the patient’s own immune system to attack cancer cells, these vaccines ideally would not only eliminate tumors but prevent them from recurring. 

In practice, however, effective cancer vaccines have not materialized, despite decades of research.  

“There has been a lot of work to make cancer vaccines more effective,” says Darrell Irvine, a professor in the MIT departments of Biological Engineering and Materials Science and Engineering and a member of the Koch Institute for Integrative Cancer Research at MIT. “But even in mouse and other models, they typically only provoke a weak immune response. And once those vaccines are tested in a clinical setting, their efficacy evaporates.” 

New hope may now be on the horizon. A vaccine based on a novel approach developed by Irvine and colleagues at MIT, and refined by researchers at Elicio Therapeutics, an MIT spinout that Irvine founded to translate experiments into treatment, is showing promising results in clinical trials — including Phase 1 data suggesting the vaccine could serve as a viable option for the treatment of pancreatic and other cancers.

Formulating a question 

When Haipeng Liu joined Irvine’s laboratory as a postdoc almost 15 years ago, he wanted to dive into the problem of why cancer vaccines have failed to deliver on their promise. He discovered that one important reason peptide vaccines for cancer and other diseases tend not to elicit a strong immune response is because they do not travel in sufficient quantities to lymph nodes, where populations of teachable T cells are concentrated. He knew that attempts to target peptides to the lymph nodes had been imprecise: Even when delivered with nanoparticles or attached to antibodies for lymphatic immune cells, too many vaccine peptides were taken up by the wrong cells in the tissues or never even made it to the lymph nodes.  

But Liu, now an associate professor of chemical engineering and materials science at Wayne State University, also had a simple, unanswered question: If vaccine peptides did not make it to the lymph nodes, where did they go? 

In the pursuit of an answer, Liu and his Irvine Lab colleagues would make discoveries crucial to trafficking peptides to the lymph nodes and developing a vaccine that provoked surprisingly strong immune responses in mice. That vaccine, now in the hands of Irvine Lab spinout Elicio Therapeutics, Inc., has produced early clinical results showing a similarly strong immune response in human patients. 

Liu began by testing peptide vaccines in mouse models, finding that peptides injected in the skin or muscle generally leak rapidly into the bloodstream, where they are diluted and degraded rather than traveling to the lymph nodes. He tried bulking up and protecting the peptide vaccine by enclosing it within a micellar nanoparticle. This type of nanoparticle is composed of “amphiphilic” molecules, with hydrophilic heads that, in a water-based solution, encase a payload attached to their hydrophobic lipid tails. Liu tested two versions, one that locked the micellar molecules together to securely enclose the peptide vaccine and another, the control, that did not. Despite all the sophisticated chemistry that went into the locked micellar nanoparticles, they induced a weak immune response. Liu was crushed.

Irvine, however, was elated. The loosely bound control micelles produced the strongest immune response he had ever seen. Liu had hit on a potential solution — just not the one he expected. 

Formulating a vaccine 

While Liu was working on micellar nanoparticles, he had also been delving into the biology of the lymph node. He learned that after removing a tumor, surgeons use a small blue dye to image lymph nodes to determine the extent of metastasis. Contrary to the expectation raised by the dye molecule’s small molecular weight, it does not vanish into the bloodstream after administration. Instead, the dye binds to albumin, the most common protein in blood and tissue fluids, and tracks reliably to the lymph nodes.

The amphiphiles in Liu’s control group behaved similarly to the imaging dye. Once injected into the tissue, the “loose” micelles were broken up by albumin, which then carried the peptide payload just where it needed to go.  

Taking the imaging dye as a model, the lab began to develop a vaccine that used lipid tails to bind their peptide chains to lymph node-targeting albumin molecules. 

Once their albumin-hitchhiking vaccine was assembled, they tested it in mouse models of HIV, melanoma, and cervical cancer. In the resulting 2014 study, they observed that peptides modified to bind albumin produced a T cell response that was five to 10 times greater than the response to peptides alone.  

In later work, Irvine lab researchers were able to generate even larger immune responses. In one study, the Irvine Lab paired a cancer-targeted vaccine with CAR T cell therapy. CAR T has been used to treat blood cancers such as leukemia successfully but has not worked well for solid tumors, which suppress T cells in their immediate vicinity. The vaccine and CAR T cell therapy together dramatically increased antitumor T cell populations and the number of T cells that successfully invaded the tumor. The combination resulted in the elimination of 60% of solid tumors in mice, while CAR T cell therapy alone had almost no effect.

A model for patient impact 

By 2016, Irvine was ready to begin translating the vaccine from lab bench experiments to a patient-ready treatment, spinning out a new company, Elicio. 

“We made sure we were setting a high bar in the lab,” said Irvine. “In addition to leveraging albumin biology that is the same in mouse and humans, we aimed for and achieved 10-, 30-, 40-fold greater responses in the animal model relative to other gold standard vaccine approaches, and this gave us hope that these results would translate to greater immune responses in patients.” 

At Elicio, Irvine’s vaccine has evolved into a platform combining lipid-linked peptides with an immune adjuvant—no CAR T cells required. In 2021, the company began a clinical trial, AMPLIFY-201, of a vaccine named ELI-002, targeting cancers with mutations in the KRAS gene, with a focus on pancreatic ductal adenocarcinoma (PDAC). The vaccine has the potential to fill an urgent need in cancer treatment: PDAC accounts for 90% of pancreatic cancers, is highly aggressive, and has limited options for effective treatment. KRAS mutations drive 90-95% of all PDAC cases, but there are several variations that must be individually targeted for effective treatment. Elicio’s cancer vaccine has the potential to target up to seven KRAS variants at once, covering 88% of PDAC cases. The company has initially tested a version that targets two, and Phase 1 and 2 studies of the version targeting all seven KRAS mutants are ongoing.

Data published last month in Nature Medicine from the Phase 1 clinical trial suggests that an effective therapeutic cancer vaccine could be on the horizon. The robust responses seen in the Irvine Lab’s mouse models have so far translated to the 25 patients (20 pancreatic, 5 colorectal) in the trial: 84% of patients showed an average 56-fold increase in the number of antitumor T cells, with complete elimination of blood biomarkers of residual tumor in 24%. Patients who had a strong immune response saw an 86% reduction in the risk of cancer progression or death. The vaccine was tolerated well by patients, with no serious side effects.  

“The reason I joined Elicio was, in part, because my father had KRAS-mutated colorectal cancer,” said Christopher Haqq, executive vice president, head of research and development, and chief medical officer at Elicio. “His journey made me realize the enormous need for new therapy for KRAS-mutated tumors. It gives me hope that we are on the right path to be able to help people just like my dad and many others.” 

In the next phase of the PDAC clinical trial, Elicio is currently testing the formulation of the vaccine that targets seven KRAS mutations. The company has plans to address other KRAS-driven cancers, such as colorectal and non-small cell lung cancers. Peter DeMuth PhD ’13, a former graduate student in the Irvine Lab and now chief scientific officer at Elicio, credits the Koch Institute’s research culture with shaping the evolution of the vaccine and the company.  

“The model adopted by the KI to bring together basic science and engineering while encouraging collaboration at the intersection of complementary disciplines was critical to shaping my view of innovation and passion for technology that can deliver real-world impact,” he recalls. “This proved to be a very special ecosystem for me and many others to cultivate an engineering mindset while building a comprehensive interdisciplinary knowledge of immunology, applied chemistry, and materials science. These themes have become central to our work at Elicio.” 

Funding for research on which Elicio’s vaccine platform is based was provided, in part, by a Koch Institute Quinquennial Cancer Research Fellowship, the Marble Center for Cancer Nanomedicine, and the Bridge Project, a partnership between the Koch Institute for Integrative Cancer Research at MIT and the Dana-Farber/Harvard Cancer Center.

This story was updated on Feb. 16 to clarify the goal of a vaccine currently in clinical trials.

Stitch3D is powering a new wave of 3D data collaboration

Workers are increasingly using 3D files to do things like assess construction projects, understand damage from natural disasters, map out crime scenes, and more. But as the importance of 3D files has grown, the problems associated with sharing, analyzing, and even viewing them have become more apparent.

The issue is that many popular cloud service providers aren’t compatible with 3D files. That means that to preview a 3D scan, users need to download the file and open it in a desktop 3D app, such as the app for the popular computer-aided design software AutoCAD. It also makes collaborating on 3D files difficult unless people are huddled around the same computer.

Now Stitch3D, founded by Clark Yuan MBA ’22, is helping workers get the most out of 3D data with a cloud platform that allows users to manage, analyze, and share 3D files of any size and format. The company’s suite of tools lets workers collaborate on 3D files, visualize their data on any browser or mobile device, and even layer 3D scans onto real-world maps.

“Think of Stitch3D as three different layers of technology,” Yuan says. “The base layer is similar to DropBox — a secure way to share files. On top of that we have a web browser-based 3D viewer that can render 3D data efficiently and apply analysis to that. We can measure distance, height, slope angle, volume, etc. The third layer, which is coming out later this year, is a mobile application that allows you to tap into any smartphone that has light detection and ranging (lidar) sensors embedded into it.”

Stitch3D is currently working with land and aerial surveyors, architects, and construction firms. In the longer term, Yuan believes 3D data is poised to go mainstream. That’s because 3D sensors continue getting cheaper and more ubiquitous, which should bring a wave of new 3D use cases.

“We see so many enabling technologies coming up around 3D data,” Yuan says. “Our bet is that in the next one to three years, 3D data is really going to start taking off.”

Tech with a mission

On July 12, 2020, the U.S. Navy ship USS Bonhomme Richard caught fire and burned for four days in a San Diego port. To investigate the fire and assess the damage, the Navy conducted 3D scans of the ship. But officers had no easy way to share the scans with other agencies.

Yuan, who served in the U.S. Army for seven years until 2019, was among the people the Navy asked to help. He first went through an accelerator with the Navy in 2020, where he formulated the idea for a cloud-based 3D sharing system. In September of that year, he entered the MBA program at the MIT Sloan School of Management, where he took as many entrepreneurship classes as he could.

“I had two years at MIT to mature the idea,” Yuan recalls.

During that time, he received guidance from the Venture Mentoring Service (VMS), participated in the MIT $100K Pitch Competition, and received financial support from MIT Sandbox.

“The Sandbox funding was huge because we could put it toward building prototypes and cloud computing services,” Yuan says. “As far as I know, no other school has that kind of structured program where you can pitch your idea and apply for nondilutive funding.”

Yuan also participated in the Industrial Liaison Program’s (ILP) Startup Exchange (STEX) accelerator, which gave him some important industry connections early on.

The year of the USS Bonhomme Richard fire, 2020, was also the year the iPhone 12 debuted as the first phone equipped with a lidar sensor. In the years since, the costs of 3D sensors and the things that carry them, like specialized cameras and drones, have continued to fall, making 3D data generation simple. Yuan sees the trend as an opportunity for the industry.

“The value of 3D is slowly being recognized in the consumer world, whether for going to IKEA to scan a couch and see if it fits in your living room or using a virtual reality headset, there’s just so much you can do with 3D data.”

Helping 3D data go viral

Stitch3D’s platform can create 3D models from scans instantly and provide a number of high-end analytics useful for different industries. Surveyors, for instance, wanted Stitch3D’s platform to provide measurements and angles from their scans. The platform can also connect 3D data to satellite imagery from sources like Google Earth to provide context and points of reference, and it can visualize feature classes like buildings, vegetation, and water.
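
As a toy illustration of the measurement layer described above (assumed code, not Stitch3D's actual API), the snippet below computes distance, slope angle, and height directly from 3D point coordinates with numpy.

```python
# Illustrative sketch of point-cloud measurements (assumed, not Stitch3D code):
# the kinds of quantities surveyors ask for, computed with plain numpy.
import numpy as np

def distance(p, q):
    """Straight-line distance between two 3D points (meters)."""
    return float(np.linalg.norm(np.asarray(q, float) - np.asarray(p, float)))

def slope_angle(p, q):
    """Slope from p to q, in degrees above horizontal."""
    d = np.asarray(q, float) - np.asarray(p, float)
    horizontal = np.linalg.norm(d[:2])   # run in the x-y plane
    return float(np.degrees(np.arctan2(d[2], horizontal)))

def height_range(points):
    """Vertical extent of a scan: max z minus min z."""
    z = np.asarray(points, float)[:, 2]
    return float(z.max() - z.min())

scan = np.array([[0, 0, 0], [4, 3, 0], [4, 3, 2.5], [1, 1, 0.2]])
print(distance(scan[0], scan[2]))      # corner-to-corner span, ~5.59 m
print(slope_angle(scan[0], scan[2]))   # grade of a ramp or roofline, ~26.6 deg
print(height_range(scan))              # overall height of the scene, 2.5 m
```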

“We’re not just helping with sharing and viewing 3D files,” Yuan says. “We’re also trying to help derive business insights from the data. We see these easy-to-use yet powerful analytics tools as a key driver of our long-term success once [large cloud platforms] start focusing on 3D data.”

Stitch3D began by working with land surveyors, who have been using 3D technology for decades, but it has since gotten interest from law enforcement agencies, insurers, and construction firms in addition to its work with the U.S. Navy, Air Force, and Department of Defense. Yuan believes the number of industries it works in will continue to grow as 3D data becomes more common.

“Once sharing 3D data becomes as easy as sharing a URL link, which you can embed in an email or a LinkedIn post, we hope that our technology will help accelerate the proliferation of 3D data,” Yuan says.

In the longer term, as 3D data matures, Yuan believes the sky’s the limit for the industry.

“The cool thing about 3D, and what gets us excited, is that really anything that you take a photo or a video of right now, you can substitute with 3D,” Yuan says. “Right now, people are taking tons of pictures for every traffic accident, crime scene, etc. But if you can go in and quickly do a laser scan, which might only take two minutes, you don’t have to worry about missing a picture or needing to zoom into some specific video frame to get all the details. If you’re talking about traffic accidents, that means emergency responders can focus on responding to the emergency rather than trying to preserve and document evidence for insurance purposes.”

This tiny, tamper-proof ID tag can authenticate almost anything

A few years ago, MIT researchers invented a cryptographic ID tag that is several times smaller and significantly cheaper than the traditional radio frequency tags (RFIDs) that are often affixed to products to verify their authenticity.

This tiny tag, which offers improved security over RFIDs, uses terahertz waves, which have much shorter wavelengths and far higher frequencies than radio waves. But this terahertz tag shared a major security vulnerability with traditional RFIDs: A counterfeiter could peel the tag off a genuine item and reattach it to a fake, and the authentication system would be none the wiser.

The researchers have now surmounted this security vulnerability by leveraging terahertz waves to develop an antitampering ID tag that still offers the benefits of being tiny, cheap, and secure.

They mix microscopic metal particles into the glue that sticks the tag to an object, and then use terahertz waves to detect the unique pattern those particles form on the item’s surface. Akin to a fingerprint, this random glue pattern is used to authenticate the item, explains Eunseok Lee, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on the antitampering tag.

“These metal particles are essentially like mirrors for terahertz waves. If I spread a bunch of mirror pieces onto a surface and then shine light on that, depending on the orientation, size, and location of those mirrors, I would get a different reflected pattern. But if you peel the chip off and reattach it, you destroy that pattern,” adds Ruonan Han, an associate professor in EECS, who leads the Terahertz Integrated Electronics Group in the Research Laboratory of Electronics.

The researchers produced a light-powered antitampering tag that is about 4 square millimeters in size. They also demonstrated a machine-learning model that helps detect tampering by identifying similar glue pattern fingerprints with more than 99 percent accuracy.

Because the terahertz tag is so cheap to produce, it could be implemented throughout a massive supply chain. And its tiny size enables the tag to attach to items too small for traditional RFIDs, such as certain medical devices.

The paper, which will be presented at the IEEE Solid State Circuits Conference, is a collaboration between Han’s group and the Energy-Efficient Circuits and Systems Group of Anantha P. Chandrakasan, MIT’s chief innovation and strategy officer, dean of the MIT School of Engineering, and the Vannevar Bush Professor of EECS. Co-authors include EECS graduate students Xibi Chen, Maitryi Ashok, and Jaeyeon Won.

Preventing tampering

This research project was partly inspired by Han’s favorite car wash. The business stuck an RFID tag onto his windshield to authenticate his car wash membership. For added security, the tag was made from fragile paper so it would be destroyed if a less-than-honest customer tried to peel it off and stick it on a different windshield.

But that is not a terribly reliable way to prevent tampering. For instance, someone could use a solution to dissolve the glue and safely remove the fragile tag.

Rather than authenticating the tag, a better security solution is to authenticate the item itself, Han says. To achieve this, the researchers targeted the glue at the interface between the tag and the item’s surface.

Their antitampering tag contains a series of minuscule slots that enable terahertz waves to pass through the tag and strike microscopic metal particles that have been mixed into the glue.

Terahertz waves are small enough to detect the particles, whereas larger radio waves would not have enough sensitivity to see them. Also, using terahertz waves with a 1-millimeter wavelength allowed the researchers to make a chip that does not need a larger, off-chip antenna.
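
For a sense of scale (standard wave arithmetic, not a figure from the paper): a 1-millimeter wavelength corresponds to a frequency of about 300 gigahertz, at the low edge of the terahertz band and far above typical RFID frequencies.

```python
# Wave arithmetic only: frequency for a 1 mm wavelength, f = c / wavelength.
c = 3.0e8          # speed of light, m/s
wavelength = 1e-3  # 1 millimeter, in meters
print(f"{c / wavelength:.1e} Hz")  # 3.0e+11 Hz = 300 GHz, edge of the THz band
```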

After passing through the tag and striking the object’s surface, terahertz waves are reflected, or backscattered, to a receiver for authentication. How those waves are backscattered depends on the distribution of metal particles that reflect them.

The researchers put multiple slots onto the chip so waves can strike different points on the object’s surface, capturing more information on the random distribution of particles.

“These responses are impossible to duplicate, as long as the glue interface is destroyed by a counterfeiter,” Han says.

A vendor would take an initial reading of the antitampering tag once it was stuck onto an item, and then store those data in the cloud, using them later for verification.

AI for authentication

But when it came time to test the antitampering tag, Lee ran into a problem: It was very difficult and time-consuming to take precise enough measurements to determine whether two glue patterns are a match.

He reached out to a friend in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and together they tackled the problem using AI. They trained a machine-learning model that could compare glue patterns and calculate their similarity with more than 99 percent accuracy.
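
The paper's matcher is a trained neural network; as a toy illustration of the verification flow, the sketch below enrolls a tag's backscatter "fingerprint" and later accepts or rejects new readings by cosine similarity against a threshold. The vector size, noise levels, threshold, and similarity measure are all assumptions for illustration.

```python
# Toy verification flow (assumptions throughout): enroll a tag's terahertz
# backscatter pattern, then authenticate later readings against it. The real
# system uses a trained machine-learning model, not cosine similarity.
import numpy as np

THRESHOLD = 0.95  # assumed decision boundary

def enroll(reading):
    """Store the normalized backscatter pattern measured at tagging time."""
    v = np.asarray(reading, dtype=float)
    return v / np.linalg.norm(v)

def authenticate(stored, new_reading, threshold=THRESHOLD):
    """Accept if the new pattern is close enough to the enrolled one."""
    w = np.asarray(new_reading, dtype=float)
    w = w / np.linalg.norm(w)
    return float(stored @ w) >= threshold

rng = np.random.default_rng(1)
fingerprint = rng.normal(size=64)        # backscatter across the tag's slots
stored = enroll(fingerprint)

genuine = fingerprint + 0.05 * rng.normal(size=64)  # same glue, noisy re-read
tampered = rng.normal(size=64)                      # glue interface destroyed
print(authenticate(stored, genuine))    # True: pattern survives re-reading
print(authenticate(stored, tampered))   # False: peeling scrambled the pattern
```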

“One drawback is that we had a limited data sample for this demonstration, but we could improve the neural network in the future if a large number of these tags were deployed in a supply chain, giving us a lot more data samples,” Lee says.

The authentication system is also limited by the fact that terahertz waves suffer from high levels of loss during transmission, so the sensor can only be about 4 centimeters from the tag to get an accurate reading. This distance wouldn’t be an issue for an application like barcode scanning, but it would be too short for some potential uses, such as in an automated highway toll booth. Also, the angle between the sensor and tag needs to be less than 10 degrees or the terahertz signal will degrade too much.

They plan to address these limitations in future work, and hope to inspire other researchers to be more optimistic about what can be accomplished with terahertz waves, despite the many technical challenges, says Han.

“One thing we really want to show here is that the application of the terahertz spectrum can go well beyond broadband wireless. In this case, you can use terahertz for ID, security, and authentication. There are a lot of possibilities out there,” he adds.

This work is supported, in part, by the U.S. National Science Foundation and the Korea Foundation for Advanced Studies.

The MIT Press announces Grant Program for Diverse Voices recipients for 2024

Launched in 2021, the Grant Program for Diverse Voices from the MIT Press provides direct support for new work by authors who bring excluded or chronically underrepresented perspectives to the fields in which the press publishes, which include the sciences, arts, and humanities.

Recipients are selected after submitting a book proposal and completing a successful peer review. Grants can support a variety of needs, including research travel, copyright permission fees, parental/family care, developmental editing, and other costs associated with the research and writing process. 

For 2024, the press will support five projects, including “Our Own Language: The Power of Kreyòl and Other Native Languages for Liberation and Justice in Haiti and Beyond,” by MIT professor of linguistics Michel DeGraff. The book will provide a much-needed reassessment of what learning might look like in Kreyòl-based, as opposed to French-language, classrooms in Haiti. 

Additionally, Kimberly Juanita Brown has been selected for “Black Elegies,” the second book in the “On Seeing” series, which is published in simultaneous print and expanded digital formats. Brown says, “I am thrilled to be a recipient of the Grant Program for Diverse Voices. This award is an investment in the work that we do; work that responds to sites of inquiry that deserve illumination.”

“The recipients of this year’s grant program have produced exceptional proposals that surface new ideas, voices, and perspectives within their respective fields,” says Amy Brand, director and publisher, the MIT Press. “We are proud to lend our support and look forward to publishing these works in the near future.”

Recipients for 2024 include: 

“Black Elegies,” by Kimberly Juanita Brown

“Black Elegies” explores the art of mourning in contemporary cultural productions. Structured around the sensorial, the book moves through sight, sound, and touch in order to complicate what Okwui Enwezor calls the “national emergency of black grief.” Using fiction, photography, music, film, and poetry, “Black Elegies” delves into explorations of mourning that take into account the multiple losses sustained by black subjects, from forced migration and enslavement to bodily violations, imprisonment, and death. “Black Elegies” is in the “On Seeing” series and will be published in collaboration with Brown University Digital Publications.

Kimberly Juanita Brown is the inaugural director of the Institute for Black Intellectual and Cultural Life at Dartmouth College, where she is also an associate professor of English and creative writing. She is the author of “The Repeating Body: Slavery’s Visual Resonance in the Contemporary” and “Mortevivum.”

“Our Own Language: The Power of Kreyòl and Other Native Languages for Liberation and Justice in Haiti and Beyond,” by Michel DeGraff

Kreyòl is the only language spoken by all Haitians in Haiti. Yet, most schoolchildren in Haiti are still being taught with manuals written in a language they do not speak — French. DeGraff challenges and corrects the assumptions and errors in the linguistics discipline that regard Creole languages as inferior, and puts forth what learning might look like in Kreyòl-based classrooms in Haiti. Published in a dual-language edition, “Our Own Language” will use Haiti and Kreyòl as a case study of linguistic and educational justice for human rights, liberation, sovereignty, and nation building.

Michel DeGraff is an MIT professor of linguistics, co-founder and co-director of the MIT-Haiti Initiative, founding member of Akademi Kreyòl Ayisyen, and in 2022 was named a fellow of the Linguistic Society of America. 

“Glitchy Vision: A Feminist History of the Social Photo,” by Amanda K. Greene

“Glitchy Vision” examines how new photographic social media cultures can change human bodies through the glitches they introduce into quotidian habits of feeling and seeing. Focusing on glitchiness provides new, needed vantages on the familiar by troubling the typical trajectories of bodies and technologies. Greene’s research operates at the nexus of visual culture, digital studies, and the health humanities, attending especially to the relationship between new media and chronic pain and vulnerability. Shining a light on an underserved area of analysis, her scholarship focuses on how illness, pain, and disability are encountered and “read” in everyday life.

Amanda Greene is a researcher at the Center for Bioethics and Social Sciences in Medicine at the University of Michigan.

“Data by Design: A Counterhistory of Data Visualization, 1789-1900,” by Silas Munro, et al.

“Data by Design: A Counterhistory of Data Visualization, 1789-1900” excavates the hidden history of data visualization through evocative argument and bold visual detail. Developed by the project team of Lauren F. Klein with Tanvi Sharma, Jay Varner, Nicholas Yang, Dan Jutan, Jianing Fu, Anna Mola, Zhou Fang, Marguerite Adams, Shiyao Li, Yang Li, and Silas Munro, “Data by Design” is both an interactive website and a lavishly illustrated book expertly adapted for print by Munro. The project interweaves cultural-critical analyses of historical visualization examples, culled from archival research, with new visualizations. 

Silas Munro is founder of the LGBTQ+ and BIPOC (Black, Indigenous, and people of color)-owned graphic design studio Polymode, based in Los Angeles and Raleigh, North Carolina. Munro is faculty co-chair for the Museum of Fine Arts Program in Graphic Design at the Vermont College of Fine Arts.

“Attention is Discovery: The Life and Work of Henrietta Leavitt,” by Anna Von Mertens

“Attention is Discovery” is a layered portrait of Henrietta Leavitt, the woman who laid the foundation for modern cosmology. Through her attentive study of the two-dimensional surface of thousands of glass plates, Leavitt revealed a way to calculate the distance to faraway stars and envision a previously inconceivable three-dimensional universe. In this compelling story of an underrecognized female scientist, Leavitt’s achievement, long subsumed under the headlining work of Edwin Hubble, receives its due spotlight. 

Anna Von Mertens received her MFA from the California College of the Arts and her BA from Brown University.