A bioinspired capsule can pump drugs directly into the walls of the GI tract

Inspired by the way that squids use jets to propel themselves through the ocean and shoot ink clouds, researchers from MIT and Novo Nordisk have developed an ingestible capsule that releases a burst of drugs directly into the wall of the stomach or other organs of the digestive tract.

This capsule could offer an alternative way to deliver drugs that normally have to be injected, such as insulin and other large proteins, including antibodies. This needle-free strategy could also be used to deliver RNA, either as a vaccine or a therapeutic molecule to treat diabetes, obesity, and other metabolic disorders.

“One of the longstanding challenges that we’ve been exploring is the development of systems that enable the oral delivery of macromolecules that usually require an injection to be administered. This work represents one of the next major advances in that progression,” says Giovanni Traverso, director of the Laboratory for Translational Engineering and an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, an associate member of the Broad Institute, and the senior author of the study.

Traverso and his students at MIT developed the new capsule along with researchers at Brigham and Women’s Hospital and Novo Nordisk. Graham Arrick SM ’20 and Novo Nordisk scientists Drago Sticker and Aghiad Ghazal are the lead authors of the paper, which appears today in Nature.

Inspired by cephalopods

Drugs that consist of large proteins or RNA typically can’t be taken orally because they are easily broken down in the digestive tract. For several years, Traverso’s lab has been working on ways to deliver such drugs orally by encapsulating them in small devices that protect the drugs from degradation and then inject them directly into the lining of the digestive tract.

Most of these capsules use a small needle or set of microneedles to deliver drugs once the device arrives in the digestive tract. In the new study, Traverso and his colleagues wanted to explore ways to deliver these molecules without any kind of needle, which could reduce the possibility of tissue damage.

To achieve that, they took inspiration from cephalopods. Squids and octopuses can propel themselves by filling their mantle cavity with water, then rapidly expelling it through their siphon. By changing the force of water expulsion and pointing the siphon in different directions, the animals can control their speed and direction of travel. The siphon organ also allows cephalopods to shoot jets of ink, forming decoy clouds to distract predators.

The researchers came up with two ways to mimic this jetting action, using compressed carbon dioxide or tightly coiled springs to generate the force needed to propel liquid drugs out of the capsule. The gas or spring is kept in a compressed state by a carbohydrate trigger, which is designed to dissolve when exposed to humidity or an acidic environment such as the stomach. When the trigger dissolves, the gas or spring is allowed to expand, propelling a jet of drugs out of the capsule.

In a series of experiments using tissue from the digestive tract, the researchers calculated the pressures needed to expel the drugs with enough force that they would penetrate the submucosal tissue and accumulate there, creating a depot that would then release drugs into the tissue.
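For a sense of the scale involved, an ideal liquid jet driven by a pressure difference $\Delta P$ leaves its nozzle at the Bernoulli velocity. This is a simplified, illustrative estimate; the study's actual pressures and jet speeds are not quoted here, and the 5-bar drive pressure and water-like density below are assumptions, not measured values:

$$ v = \sqrt{\frac{2\,\Delta P}{\rho}} \approx \sqrt{\frac{2 \times 5\times10^{5}\ \text{Pa}}{10^{3}\ \text{kg/m}^{3}}} \approx 32\ \text{m/s} $$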

“Aside from the elimination of sharps, another potential advantage of high-velocity collimated jets is their robustness to localization issues. In contrast to a small needle, which needs to have intimate contact with the tissue, our experiments indicated that a jet may be able to deliver most of the dose from a distance or at a slight angle,” Arrick says.

The researchers also designed the capsules so that they can target different parts of the digestive tract. One version of the capsule, which has a flat bottom and a high dome, can sit on a surface, such as the lining of the stomach, and eject drug downward into the tissue. This capsule, which was inspired by previous research from Traverso’s lab on self-orienting capsules, is about the size of a blueberry and can carry 80 microliters of drug.

The second version has a tube-like shape that allows it to align itself within a long tubular organ such as the esophagus or small intestine. In that case, the drug is ejected out toward the side wall, rather than downward. This version can deliver 200 microliters of drug.

Made of metal and plastic, the capsules can pass through the digestive tract and are excreted after releasing their drug payload.

Needle-free drug delivery

In tests in animals, the researchers showed that they could use these capsules to deliver insulin, a GLP-1 receptor agonist similar to the diabetes drug Ozempic, and a type of RNA called short interfering RNA (siRNA). This type of RNA can be used to silence genes, making it potentially useful in treating many genetic disorders.

They also showed that the concentration of the drugs in the animals’ bloodstream reached levels on the same order of magnitude as those seen when the drugs were injected with a syringe, and they did not detect any tissue damage.

The researchers envision that the ingestible capsule could be used at home by patients who need to take insulin or other injected drugs frequently. In addition to making it easier to administer drugs, especially for patients who don’t like needles, this approach also eliminates the need to dispose of sharp needles. The researchers also created and tested a version of the device that could be attached to an endoscope, allowing doctors to use it in an endoscopy suite or operating room to deliver drugs to a patient.

“This technology is a significant leap forward in oral drug delivery of macromolecule drugs like insulin and GLP-1 agonists. While many approaches for oral drug delivery have been attempted in the past, they tend to be inefficient in achieving high bioavailability. Here, the researchers demonstrate the ability to deliver drugs in animal models with high efficiency and bioavailability. This is an exciting approach which could be impactful for many biologics which are currently administered through injections or intravascular infusions,” says Omid Veiseh, a professor of bioengineering at Rice University, who was not involved in the research.

The researchers now plan to further develop the capsules, in hopes of testing them in humans.

The research was funded by Novo Nordisk, the Natural Sciences and Engineering Research Council of Canada, the MIT Department of Mechanical Engineering, Brigham and Women’s Hospital, and the U.S. Advanced Research Projects Agency for Health.

Undergraduates with family income below $200,000 can expect to attend MIT tuition-free starting in 2025

Undergraduates with family income below $200,000 can expect to attend MIT tuition-free starting next fall, thanks to newly expanded financial aid. Eighty percent of American households meet this income threshold.

And for the 50 percent of American families with income below $100,000, parents can expect to pay nothing at all toward the full cost of their students’ MIT education, which includes tuition as well as housing, dining, fees, and an allowance for books and personal expenses.

This $100,000 threshold is up from $75,000 this year, while next year’s $200,000 threshold for tuition-free attendance will increase from its current level of $140,000.

These new steps to enhance MIT’s affordability for students and families are the latest in a long history of efforts by the Institute to free up more resources to make an MIT education as affordable and accessible as possible. Toward that end, MIT has earmarked $167.3 million in need-based financial aid this year for undergraduate students — up some 70 percent from a decade ago.

“MIT’s distinctive model of education — intense, demanding, and rooted in science and engineering — has profound practical value to our students and to society,” MIT President Sally Kornbluth says. “As the Wall Street Journal recently reported, MIT is better at improving the financial futures of its graduates than any other U.S. college, and the Institute also ranks number one in the world for the employability of its graduates.” 

“The cost of college is a real concern for families across the board,” Kornbluth adds, “and we’re determined to make this transformative educational experience available to the most talented students, whatever their financial circumstances. So, to every student out there who dreams of coming to MIT: Don’t let concerns about cost stand in your way.”

MIT is one of only nine colleges in the U.S. that does not consider applicants’ ability to pay as part of its admissions process and that meets the full demonstrated financial need for all undergraduates. MIT does not expect students on aid to take loans, and, unlike many other institutions, MIT does not provide an admissions advantage to the children of alumni or donors. Indeed, 18 percent of current MIT undergraduates are first-generation college students.

“We believe MIT should be the preeminent destination for the most talented students in the country interested in an education centered on science and technology, and accessible to the best students regardless of their financial circumstances,” says Stu Schmill, MIT’s dean of admissions and student financial services.

“With the need-based financial aid we provide today, our education is much more affordable now than at any point in the past,” adds Schmill, who graduated from MIT in 1986, “even though the ‘sticker price’ of MIT is higher now than it was when I was an undergraduate.”

Last year, the median annual cost paid by an MIT undergraduate receiving financial aid was $12,938, allowing 87 percent of students in the Class of 2024 to graduate debt-free. Those who did borrow graduated with median debt of $14,844. At the same time, graduates benefit from the lifelong value of an MIT degree, with an average starting salary of $126,438 for graduates entering industry, according to MIT’s most recent survey of its graduating students.

MIT’s endowment — made up of generous gifts made by individual alumni and friends — allows the Institute to provide this level of financial aid, both now and into the future.

“Today’s announcement is a powerful expression of how much our graduates value their MIT experience,” Kornbluth says, “because our ability to provide financial aid of this scope depends on decades of individual donations to our endowment, from generations of MIT alumni and other friends. In effect, our endowment is an inter-generational gift from past MIT students to the students of today and tomorrow.”

What MIT families can expect in 2025

As noted earlier: Starting next fall, for families with income below $100,000, with typical assets, parents can expect to pay nothing for the full cost of attendance, which includes tuition, housing, dining, fees, and allowances for books and personal expenses.

For families with income from $100,000 to $200,000, with typical assets, parents can expect to pay on a sliding scale from $0 up to a maximum of around $23,970, which is this year’s total cost for MIT housing, dining, fees, and allowances for books and personal expenses.

Put another way, next year all MIT families with income below $200,000 can expect to contribute well below $27,146, which is the annual average cost for in-state students to attend and live on campus at public universities in the US, according to the Education Data Initiative. And even among families with income above $200,000, many still receive need-based financial aid from MIT, based on their unique financial circumstances. Families can use MIT’s online calculators to estimate the cost of attendance for their specific family.

This past summer, MIT’s faculty-led Committee on Undergraduate Admissions and Financial Aid was publicly charged by President Kornbluth with undertaking a review of the Institute’s admissions and financial aid policies, to ensure that MIT remains as fully accessible as possible to all students, regardless of their financial circumstances. The steps announced today are the first of these recommendations to be reviewed and adopted.

TBIRD technology could help image black holes’ photon rings

In April 2019, a group of astronomers from around the globe stunned the world when they revealed the first image of a black hole — the monstrous accumulation of collapsed stars and gas that lets nothing escape, not even light. The image, which was of the black hole that sits at the core of a galaxy called Messier 87 (M87), revealed glowing gas around the center of the black hole. In March 2021, the same team produced yet another stunning image that showed the polarization of light around the black hole, revealing its magnetic field.

The “camera” that took both images is the Event Horizon Telescope (EHT), which is not one singular instrument but rather a collection of radio telescopes situated around the globe that work together to create high-resolution images by combining data from each individual telescope. Now, scientists are looking to extend the EHT into space to get an even sharper look at M87’s black hole. But producing the sharpest images in the history of astronomy presents a challenge: transmitting the telescope’s massive dataset back to Earth for processing. A small but powerful laser communications (lasercom) payload developed at MIT Lincoln Laboratory operates at the high data rates needed to image the features of the black hole that scientists most want to see.

Extending baseline distances into space

The EHT created the two existing images of M87’s black hole via interferometry — specifically, very long-baseline interferometry. Interferometry works by collecting light in the form of radio waves simultaneously with multiple telescopes in separate places on the globe and then comparing the phase difference of the radio waves at the various locations in order to pinpoint the direction of the source. By taking measurements with different combinations of the telescopes around the planet, the EHT collaboration — which included staff members at the Harvard-Smithsonian Center for Astrophysics (CfA) and MIT Haystack Observatory — essentially created an Earth-sized telescope in order to image the incredibly faint black hole 55 million light-years away from Earth.

With interferometry, the longer the baseline, the better the resolution of the image. Therefore, in order to home in on even finer characteristics of these black holes, a bigger instrument is needed. Details that astronomers hope to resolve include the turbulence of the gas falling into a black hole (which drives the accumulation of matter onto the black hole through a process called accretion) and a black hole’s shadow (which could be used to help pin down where the jet emerging from M87 draws its energy). The ultimate goal is to observe a photon ring (the place where light orbits closest before escaping) around the black hole. Capturing an image of the photon ring would enable scientists to put Albert Einstein’s general theory of relativity to the test.
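The scaling at work is the diffraction limit: an interferometer’s finest resolvable angle is roughly the observing wavelength divided by its longest baseline. As a rough check, using the EHT’s approximately 1.3-millimeter observing wavelength and an Earth-diameter baseline (the figures here are illustrative round numbers):

$$ \theta \approx \frac{\lambda}{D} \approx \frac{1.3\times10^{-3}\ \text{m}}{1.3\times10^{7}\ \text{m}} = 10^{-10}\ \text{rad} \approx 20\ \mu\text{as} $$

which is on the order of the resolution the EHT achieved. A space baseline several times longer would sharpen $\theta$ proportionally.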

Video: “The Black Hole Photon Ring” | Black Hole Explorer

With Earth-based telescopes, the farthest that two telescopes could be from one another is on opposite sides of the Earth, or about 13,000 kilometers apart. In addition to this maximum baseline distance, Earth-based instruments are limited by the atmosphere, which makes observing shorter wavelengths difficult. Earth’s atmospheric limitations can be overcome by extending the EHT’s baselines and putting at least one of the telescopes in space, which is exactly what the proposed CfA-led Black Hole Explorer (BHEX) mission aims to do.

One of the most significant challenges that comes with this space-based concept is transfer of information. The dataset to produce the first EHT image was so massive (totaling 4 petabytes) that the data had to be put on disks and shipped to a facility for processing. Gathering information from a telescope in orbit would be even more difficult; the team would need a system that can downlink data from the space telescope to Earth at approximately 100 gigabits per second (Gbps) in order to image the desired aspects of the black hole.
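Some rough arithmetic shows why that target is so demanding. Even at a sustained 100 Gbps (an idealization that ignores orbital pass windows and protocol overhead), the 4-petabyte first EHT dataset would take days to downlink:

$$ t = \frac{4\times10^{15}\ \text{bytes} \times 8\ \text{bits/byte}}{10^{11}\ \text{bits/s}} = 3.2\times10^{5}\ \text{s} \approx 3.7\ \text{days} $$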

Enter TBIRD

Here is where Lincoln Laboratory comes in. In May 2023, the laboratory’s TeraByte InfraRed Delivery (TBIRD) lasercom payload achieved the fastest data transfer from space, transmitting at a rate of 200 Gbps — which is 1,000 times faster than typical satellite communication systems — from low Earth orbit (LEO).

Video: “NASA’s TeraByte InfraRed Delivery (TBIRD) Lasercom Demo Completes Mission” | MIT Lincoln Laboratory

“We developed a novel technology for high-volume data transport from space to ground,” says Jade Wang, assistant leader of the laboratory’s Optical and Quantum Communications Group. “In the process of developing that technology, we looked for collaborations and other potential follow-on missions that could leverage this unprecedented data capability. The BHEX is one such mission. These high data rates will enable scientists to image the photon ring structure of a black hole for the first time.”

A lasercom team led by Wang, in partnership with the CfA, is developing the long-distance, high-rate downlink needed for the BHEX mission in medium Earth orbit (MEO).

“Laser communications is completely upending our expectations for what astrophysical discoveries are possible from space,” says CfA astrophysicist Michael Johnson, principal investigator for the BHEX mission. “In the next decade, this incredible new technology will bring us to the edge of a black hole, creating a window into the region where our current understanding of physics breaks down.”

Though TBIRD is incredibly powerful, the technology needs some modifications to support the higher orbit that BHEX requires for its science mission. The small TBIRD payload (a CubeSat) will be upgraded to a larger aperture size and higher transmit power. In addition, the TBIRD automatic repeat request (ARQ) protocol — the error-control mechanism for ensuring data make it to Earth without loss due to atmospheric effects — will be adjusted to account for the longer round-trip times that come with a mission in MEO. Finally, the TBIRD LEO “buffer and burst” architecture for data delivery will shift to a streaming approach.
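The round-trip timing change is easy to estimate from the geometry. Taking TBIRD’s roughly 500-kilometer LEO and a representative 20,000-kilometer MEO altitude (an assumption; BHEX’s exact orbit is not given here), the light round-trip time $t = 2d/c$ grows from a few milliseconds to over a hundred:

$$ t_{\text{LEO}} \approx \frac{2 \times 5\times10^{5}\ \text{m}}{3\times10^{8}\ \text{m/s}} \approx 3\ \text{ms}, \qquad t_{\text{MEO}} \approx \frac{2 \times 2\times10^{7}\ \text{m}}{3\times10^{8}\ \text{m/s}} \approx 133\ \text{ms} $$

so retransmission timers and ARQ window sizes must be rescaled accordingly.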

“With TBIRD and other lasercom missions, we have demonstrated that the lasercom technology for such an impactful science mission is available today,” Wang says. “Having the opportunity to contribute to an area of really interesting scientific discovery is an exciting prospect.”

The BHEX mission concept has been in development since 2019. Technical and concept studies for BHEX have been supported by the Smithsonian Astrophysical Observatory, the Internal Research and Development program at NASA Goddard Space Flight Center, the University of Arizona, and the ULVAC-Hayashi Seed Fund from the MIT-Japan Program at MIT International Science and Technology Initiatives. BHEX studies of lasercom have been supported by Fred Ehrsam and the Gordon and Betty Moore Foundation. 

Making a mark in the nation’s capital

Anoushka Bose ’20 spent the summer of 2018 as an MIT Washington program intern, applying her nuclear physics education to arms control research with a D.C. nuclear policy think tank.

“It’s crazy how much three months can transform people,” says Bose, now an attorney at the Department of Justice.

“Suddenly, I was learning far more than I had expected about treaties, nuclear arms control, and foreign relations,” adds Bose. “But once I was hooked, I couldn’t be stopped as that summer sparked a much broader interest in diplomacy and set me on a different path.”

Bose is one of hundreds of MIT undergraduates whose academic and career trajectories were influenced by their time in the nation’s capital as part of the internship program.

Leah Nichols ’00 is a former D.C. intern, and now executive director of George Mason University’s Institute for a Sustainable Earth. In 1998, Nichols worked in the office of U.S. Senator Max Baucus, D-Mont., developing options for protecting open space on private land.

“I really started to see how science and policy needed to interact in order to solve environmental challenges,” she says. “I’ve actually been working at that interface between science and policy ever since.”

Marking its 30th anniversary this year, the MIT Washington Summer Internship Program has shaped the lives of alumni, and expanded MIT’s capital in the capital city.

Bose believes the MIT Washington summer internship is more vital than ever.

“This program helps steer more technical expertise, analytical thinking, and classic MIT innovation into policy spaces to make them better-informed and better equipped to solve challenges,” she says. With so much at stake, she suggests, it is increasingly important “to invest in bringing the MIT mindset of extreme competence as well as resilience to D.C.”

MIT missionaries

Over the past three decades, students across MIT — whether studying aeronautics or nuclear engineering, management or mathematics, chemistry or computer science — have competed for and won an MIT Washington summer internship. Many describe it as a springboard into high-impact positions in politics, public policy, and the private sector.

The program was launched in 1994 by Charles Stewart III, the Kenan Sahin (1963) Distinguished Professor of Political Science, who still serves as the director.

“The idea 30 years ago was to make this a bit of a missionary program, where we demonstrate to Washington the utility of having MIT students around for things they’re doing,” says Stewart. “MIT’s reputation benefits because our students are unpretentious, down-to-earth, interested in how the world actually works, and dedicated to fixing things that are broken.”

The outlines of the program have remained much the same: A cohort of 15 to 20 students is selected from a pool of fall applicants. With the help of MIT’s Washington office, the students are matched with potential supervisors in search of technical and scientific talent. They travel in the spring to meet potential supervisors and receive a stipend and housing for the summer. In the fall, students take a course that Stewart describes as an “Oxbridge-type tutorial, where they contextualize their experiences and reflect on the political context of the place where they worked.”

Stewart remains as enthusiastic about the internship program as when he started and has notions for building on its foundations. His wish list includes running the program at other times of the year, and for longer durations. “Six months would really change and deepen the experience,” he says. He envisions a real-time tutorial while the students are in Washington. And he would like to draw more students from the data science world. “Part of the goal of this program is to hook non-obvious people into knowledge of the public policy realm,” he says.

Prized in Washington

MIT Vice Provost Philip Khoury, who helped get the program off the ground, praised Stewart’s vision for developing the initial idea.

“Charles understood why science- and technology-oriented students would be great beneficiaries of an experience in Washington, and why they had something to contribute that other internship students could not: their prowess, their prodigious abilities in the technology-engineering-science world,” says Khoury.

Khoury adds that the program has benefited both the host organizations and the students.

“Members of Congress and senior staff who were developing policies prized MIT students, because they were powerful thinkers and workaholics, and students in the program learned that they really mattered to adults in Washington, wherever they went.”

David Goldston, director of the MIT Washington Office, says government is “kind of desperate for people who understand science and technology.” One example: The National Institute of Standards and Technology has launched an artificial intelligence safety division that is “almost begging for students to help conduct research and carry out the ever-expanding mission of worrying about AI issues,” he says.

Holly Krambeck ’06 MST/MCP, program manager of the World Bank Data Lab, can attest to this impact. She hired her first MIT summer intern, Chae Won Lee, in 2013, to analyze road crash data from the Philippines. “Her findings were so striking, we invited her to join the team on a mission to present her work to the government,” says Krambeck.

Subsequent interns have helped the World Bank demonstrate effective, low-cost, transit-fare collection systems; identify houses eligible for hurricane protection retrofits under World Bank loans; and analyze heatwave patterns in the Philippines to inform a lending program for mitigation measures.

“Every year, I’ve been so impressed by the maturity, energy, willingness to learn new skills, and curiosity of the MIT students,” says Krambeck. “At the end of each summer, we ask students to present their projects to World Bank staff, who are invariably amazed to learn that these are undergraduates and not PhD candidates!”

Career springboard

“It absolutely changed my career pathway,” says Samuel Rodarte Jr. ’13, a 2011 program alumnus who interned at the MIT Washington Office, where he tracked congressional hearings related to research at the Institute. Today, he serves as a legislative assistant to Senate Majority Leader Charles E. Schumer. An aerospace engineering and Latin American studies double major, Rodarte says the opportunity to experience policymaking from the inside came “at just the right time, when I was trying to figure out what I really wanted to do post-MIT.”

Miranda Priebe ’03 is director of the Center for Analysis of U.S. Grand Strategy for the Rand Corp. She briefs groups within the Pentagon, the U.S. Department of State, and the National Security Council, among others. “My job is to ask the big question: Does the United States have the right approach in the world in terms of advancing our interests with our capabilities and resources?”

Priebe was a physics major with an evolving interest in political science when she arrived in Washington in 2001 to work in the office of Senator Carl Levin, D-Mich., the chair of the Senate Armed Services Committee. “I was working really hard at MIT, but just hadn’t found my passion until I did this internship,” she says. “Once I came to D.C. I saw all the places I could fit in using my analytical skills — there were a million things I wanted to do — and the internship convinced me that this was the right kind of work for me.”

During her internship in 2022, Anushree Chaudhuri ’24, an urban studies and planning and economics major, worked in the U.S. Department of Energy’s Building Technologies Office, where she hoped to experience day-to-day life in a federal agency — with an eye toward a career in high-level policymaking. She developed a web app to help local governments determine which census tracts qualified for environmental justice funds.

“I was pleasantly surprised to see that even as a lower-level civil servant you can make change if you know how to work within the system.” Chaudhuri is now a Marshall Scholar, pursuing a PhD at the University of Oxford on the socioeconomic impacts of energy infrastructure. “I’m pretty sure I want to work in the policy space long term,” she says.

A model of virtuosity

A crowd gathered at the MIT Media Lab in September for a concert by musician Jordan Rudess and two collaborators. One of them, violinist and vocalist Camilla Bäckman, has performed with Rudess before. The other — an artificial intelligence model informally dubbed the jam_bot, which Rudess developed with an MIT team over the preceding several months — was making its public debut as a work in progress.

Throughout the show, Rudess and Bäckman exchanged the signals and smiles of experienced musicians finding a groove together. Rudess’ interactions with the jam_bot suggested a different and unfamiliar kind of exchange. During one duet inspired by Bach, Rudess alternated between playing a few measures and allowing the AI to continue the music in a similar baroque style. Each time the model took its turn, a range of expressions moved across Rudess’ face: bemusement, concentration, curiosity. At the end of the piece, Rudess admitted to the audience, “That is a combination of a whole lot of fun and really, really challenging.”

Rudess is an acclaimed keyboardist — the best of all time, according to one Music Radar magazine poll — known for his work with the platinum-selling, Grammy-winning progressive metal band Dream Theater, which embarks this fall on a 40th anniversary tour. He is also a solo artist whose latest album, “Permission to Fly,” was released on Sept. 6; an educator who shares his skills through detailed online tutorials; and the founder of software company Wizdom Music. His work combines a rigorous classical foundation (he began his piano studies at The Juilliard School at age 9) with a genius for improvisation and an appetite for experimentation.

Last spring, Rudess became a visiting artist with the MIT Center for Art, Science and Technology (CAST), collaborating with the MIT Media Lab’s Responsive Environments research group on the creation of new AI-powered music technology. Rudess’ main collaborators in the enterprise are Media Lab graduate students Lancelot Blanchard, who researches musical applications of generative AI (informed by his own studies in classical piano), and Perry Naseck, an artist and engineer specializing in interactive, kinetic, light- and time-based media. Overseeing the project is Professor Joseph Paradiso, head of the Responsive Environments group and a longtime Rudess fan. Paradiso arrived at the Media Lab in 1994 with a CV in physics and engineering and a sideline designing and building synthesizers to explore his avant-garde musical tastes. His group has a tradition of investigating musical frontiers through novel user interfaces, sensor networks, and unconventional datasets.

The researchers set out to develop a machine learning model channeling Rudess’ distinctive musical style and technique. In a paper published online by MIT Press in September, co-authored with MIT music technology professor Eran Egozy, they articulate their vision for what they call “symbiotic virtuosity”: for human and computer to duet in real time, learning from each duet they perform together, and making performance-worthy new music in front of a live audience.

Rudess contributed the data on which Blanchard trained the AI model. Rudess also provided continuous testing and feedback, while Naseck experimented with ways of visualizing the technology for the audience.

“Audiences are used to seeing lighting, graphics, and scenic elements at many concerts, so we needed a platform to allow the AI to build its own relationship with the audience,” Naseck says. In early demos, this took the form of a sculptural installation with illumination that shifted each time the AI changed chords. During the concert on Sept. 21, a grid of petal-shaped panels mounted behind Rudess came to life through choreography based on the activity and future generation of the AI model.

“If you see jazz musicians make eye contact and nod at each other, that gives anticipation to the audience of what’s going to happen,” says Naseck. “The AI is effectively generating sheet music and then playing it. How do we show what’s coming next and communicate that?”

Naseck designed and programmed the structure from scratch at the Media Lab with assistance from Brian Mayton (mechanical design) and Carlo Mandolini (fabrication), drawing some of its movements from an experimental machine learning model developed by visiting student Madhav Lavakare that maps music to points moving in space. With the ability to spin and tilt its petals at speeds ranging from subtle to dramatic, the kinetic sculpture distinguished the AI’s contributions during the concert from those of the human performers, while conveying the emotion and energy of its output: swaying gently when Rudess took the lead, for example, or furling and unfurling like a blossom as the AI model generated stately chords for an improvised adagio. The latter was one of Naseck’s favorite moments of the show.

“At the end, Jordan and Camilla left the stage and allowed the AI to fully explore its own direction,” he recalls. “The sculpture made this moment very powerful — it allowed the stage to remain animated and intensified the grandiose nature of the chords the AI played. The audience was clearly captivated by this part, sitting at the edges of their seats.”

“The goal is to create a musical visual experience,” says Rudess, “to show what’s possible and to up the game.”

Musical futures

As the starting point for his model, Blanchard used a music transformer, an open-source neural network architecture developed by MIT Assistant Professor Anna Huang SM ’08, who joined the MIT faculty in September.

“Music transformers work in a similar way to large language models,” Blanchard explains. “The same way that ChatGPT would generate the most probable next word, the model we have would predict the most probable next notes.”
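To make the analogy concrete, here is a minimal sketch of the autoregressive sampling loop such a model implies, with the network itself left as a parameter. The team’s actual tokenization, architecture, and sampling settings are not described in this article, so everything here is illustrative:

```python
import torch

def continue_performance(model, prompt_tokens, n_steps=64, temperature=1.0):
    """Sample the most probable next notes one token at a time, the way
    an LLM samples next words. `model` is any placeholder network that
    maps a token sequence to next-token logits of shape (batch, seq, vocab)."""
    tokens = list(prompt_tokens)
    for _ in range(n_steps):
        x = torch.tensor(tokens).unsqueeze(0)        # shape (1, seq_len)
        logits = model(x)[0, -1] / temperature       # logits for the next token
        probs = torch.softmax(logits, dim=-1)
        tokens.append(torch.multinomial(probs, 1).item())  # sample, not argmax
    return tokens[len(prompt_tokens):]               # only the new notes
```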

Blanchard fine-tuned the model using Rudess’ own playing of elements from bass lines to chords to melodies, variations of which Rudess recorded in his New York studio. Along the way, Blanchard ensured the AI would be nimble enough to respond in real-time to Rudess’ improvisations.

“We reframed the project,” says Blanchard, “in terms of musical futures that were hypothesized by the model and that were only being realized at the moment based on what Jordan was deciding.”

As Rudess puts it: “How can the AI respond — how can I have a dialogue with it? That’s the cutting-edge part of what we’re doing.”

Another priority emerged: “In the field of generative AI and music, you hear about startups like Suno or Udio that are able to generate music based on text prompts. Those are very interesting, but they lack controllability,” says Blanchard. “It was important for Jordan to be able to anticipate what was going to happen. If he could see the AI was going to make a decision he didn’t want, he could restart the generation or have a kill switch so that he can take control again.”

In addition to giving Rudess a screen previewing the musical decisions of the model, Blanchard built in different modalities the musician could activate as he plays — prompting the AI to generate chords or lead melodies, for example, or initiating a call-and-response pattern.
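As a rough illustration of that control layer (all names and interfaces below are hypothetical, not the team’s software), the performer-facing logic amounts to a small state machine wrapped around the generator:

```python
import threading

class JamController:
    """Hypothetical sketch: preview pending output, switch generation
    modes, or cut the model off entirely with a kill switch."""
    MODES = ("chords", "lead", "call_and_response")

    def __init__(self, generate_fn):
        self.generate_fn = generate_fn      # any model-backed generator
        self.mode = "chords"                # conditions what gets generated
        self.kill = threading.Event()       # performer's emergency stop
        self.preview = []                   # upcoming notes shown on screen

    def set_mode(self, mode):
        if mode not in self.MODES:
            raise ValueError(mode)
        self.mode = mode

    def step(self, live_tokens):
        if self.kill.is_set():              # human takes full control back
            return []
        self.preview = self.generate_fn(live_tokens, self.mode)
        return self.preview                 # displayed before it is played
```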

“Jordan is the mastermind of everything that’s happening,” he says.

What would Jordan do

Though the residency has wrapped up, the collaborators see many paths for continuing the research. For example, Naseck would like to experiment with more ways Rudess could interact directly with his installation, through features like capacitive sensing. “We hope in the future we’ll be able to work with more of his subtle motions and posture,” Naseck says.

While the MIT collaboration focused on how Rudess can use the tool to augment his own performances, it’s easy to imagine other applications. Paradiso recalls an early encounter with the tech: “I played a chord sequence, and Jordan’s model was generating the leads. It was like having a musical ‘bee’ of Jordan Rudess buzzing around the melodic foundation I was laying down, doing something like Jordan would do, but subject to the simple progression I was playing,” he recalls, his face echoing the delight he felt at the time. “You’re going to see AI plugins for your favorite musician that you can bring into your own compositions, with some knobs that let you control the particulars,” he posits. “It’s that kind of world we’re opening up with this.”

Rudess is also keen to explore educational uses. Because the samples he recorded to train the model were similar to ear-training exercises he’s used with students, he thinks the model itself could someday be used for teaching. “This work has legs beyond just entertainment value,” he says.

The foray into artificial intelligence is a natural progression for Rudess’ interest in music technology. “This is the next step,” he believes. When he discusses the work with fellow musicians, however, his enthusiasm for AI often meets with resistance. “I can have sympathy or compassion for a musician who feels threatened, I totally get that,” he allows. “But my mission is to be one of the people who moves this technology toward positive things.”

“At the Media Lab, it’s so important to think about how AI and humans come together for the benefit of all,” says Paradiso. “How is AI going to lift us all up? Ideally it will do what so many technologies have done — bring us into another vista where we’re more enabled.”

“Jordan is ahead of the pack,” Paradiso adds. “Once it’s established with him, people will follow.”

Jamming with MIT

The Media Lab first landed on Rudess’ radar before his residency because he wanted to try out the Knitted Keyboard created by another member of Responsive Environments, textile researcher Irmandy Wicaksono PhD ’24. From that moment on, “It’s been a discovery for me, learning about the cool things that are going on at MIT in the music world,” Rudess says.

During two visits to Cambridge last spring (assisted by his wife, theater and music producer Danielle Rudess), Rudess reviewed final projects in Paradiso’s course on electronic music controllers, the syllabus for which included videos of his own past performances. He brought a new gesture-driven synthesizer called Osmose to a class on interactive music systems taught by Egozy, whose credits include the co-creation of the video game “Guitar Hero.” Rudess also provided tips on improvisation to a composition class; played GeoShred, a touchscreen musical instrument he co-created with Stanford University researchers, with student musicians in the MIT Laptop Ensemble and Arts Scholars program; and experienced immersive audio in the MIT Spatial Sound Lab. During his most recent trip to campus in September, he taught a masterclass for pianists in MIT’s Emerson/Harris Program, which provides a total of 67 scholars and fellows with support for conservatory-level musical instruction.

“I get a kind of rush whenever I come to the university,” Rudess says. “I feel the sense that, wow, all of my musical ideas and inspiration and interests have come together in this really cool way.”

Can robots learn from machine dreams?

For roboticists, one challenge towers above all others: generalization — the ability to create machines that can adapt to any environment or condition. Since the 1970s, the field has evolved from writing sophisticated programs to using deep learning, teaching robots to learn directly from human behavior. But a critical bottleneck remains: data quality. To improve, robots need to encounter scenarios that push the boundaries of their capabilities, operating at the edge of their mastery. This process traditionally requires human oversight, with operators carefully challenging robots to expand their abilities. As robots become more sophisticated, this hands-on approach hits a scaling problem: the demand for high-quality training data far outpaces humans’ ability to provide it.

Now, a team of MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers has developed a novel approach to robot training that could significantly accelerate the deployment of adaptable, intelligent machines in real-world environments. The new system, called “LucidSim,” uses recent advances in generative AI and physics simulators to create diverse and realistic virtual training environments, helping robots achieve expert-level performance in difficult tasks without any real-world data.

Video: “LucidSim: Can Robots Learn from Machine Dreams?” | MIT CSAIL

LucidSim combines physics simulation with generative AI models, addressing one of the most persistent challenges in robotics: transferring skills learned in simulation to the real world. “A fundamental challenge in robot learning has long been the ‘sim-to-real gap’ — the disparity between simulated training environments and the complex, unpredictable real world,” says MIT CSAIL postdoc Ge Yang, a lead researcher on LucidSim. “Previous approaches often relied on depth sensors, which simplified the problem but missed crucial real-world complexities.”

The multipronged system is a blend of different technologies. At its core, LucidSim uses large language models to generate various structured descriptions of environments. These descriptions are then transformed into images using generative models. To ensure that these images reflect real-world physics, an underlying physics simulator is used to guide the generation process.
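A schematic sketch of that pipeline might look as follows. Every component below is a pure-Python stand-in for illustration; the team’s actual models, simulator, and interfaces are not shown in this article:

```python
import random

def llm_scene_descriptions(n):
    """Stand-in for the LLM step: write structured environment prompts."""
    settings = ["mossy stone stairs", "rain-slick alley", "cluttered warehouse"]
    return [f"quadruped obstacle course, {random.choice(settings)}, photoreal"
            for _ in range(n)]

def simulate_geometry():
    """Stand-in for the physics simulator: per-pixel depth and labels
    that the generated image must respect."""
    depth_map = [[1.0] * 4 for _ in range(4)]
    semantic_mask = [["ground"] * 4 for _ in range(4)]
    return depth_map, semantic_mask

def generate_image(prompt, depth_map, semantic_mask):
    """Stand-in for a geometry-conditioned generative image model."""
    return {"prompt": prompt, "depth": depth_map, "mask": semantic_mask}

# LLM -> prompts; simulator -> geometry; generative model -> grounded images.
dataset = [generate_image(p, *simulate_geometry())
           for p in llm_scene_descriptions(3)]
print(len(dataset), "synthetic training frames")
```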

The birth of an idea: From burritos to breakthroughs

The inspiration for LucidSim came from an unexpected place: a conversation outside Beantown Taqueria in Cambridge, Massachusetts. “We wanted to teach vision-equipped robots how to improve using human feedback. But then, we realized we didn’t have a pure vision-based policy to begin with,” says Alan Yu, an undergraduate student in electrical engineering and computer science (EECS) at MIT and co-lead author on LucidSim. “We kept talking about it as we walked down the street, and then we stopped outside the taqueria for about half an hour. That’s where we had our moment.”

To cook up their data, the team generated realistic images by extracting depth maps, which provide geometric information, and semantic masks, which label different parts of an image, from the simulated scene. They quickly realized, however, that with tight control on the composition of the image content, the model would produce nearly identical images from the same prompt. So, they devised a way to source diverse text prompts from ChatGPT.

This approach, however, only resulted in a single image. To make short, coherent videos that serve as little “experiences” for the robot, the scientists combined the image generation with another novel technique the team created, called “Dreams In Motion.” The system computes the movements of each pixel between frames, warping a single generated image into a short, multi-frame video. Dreams In Motion does this by considering the 3D geometry of the scene and the relative changes in the robot’s perspective.
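A minimal numerical sketch of that idea is below, under stated assumptions (a pinhole camera, nearest-neighbor splatting, no hole filling). It illustrates depth-based reprojection in general, not the team’s implementation:

```python
import numpy as np

def warp_frame(image, depth, fx, fy, cx, cy, cam_shift):
    """Re-project every pixel of one generated image through its depth
    and a small camera translation (tx, ty, tz), yielding the next
    frame of a short synthetic video."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)
    z = depth                                  # per-pixel distance
    x = (u - cx) * z / fx                      # back-project to 3D camera coords
    y = (v - cy) * z / fy
    tx, ty, tz = cam_shift                     # camera motion between frames
    x, y, z = x - tx, y - ty, z - tz           # points in the new camera frame
    u2 = np.clip(np.round(fx * x / z + cx), 0, w - 1).astype(int)
    v2 = np.clip(np.round(fy * y / z + cy), 0, h - 1).astype(int)
    out = np.zeros_like(image)
    out[v2, u2] = image                        # nearest-neighbor splat; holes stay black
    return out

# Example: drift a 64x64 frame 5 cm to the camera's right.
frame = np.random.rand(64, 64, 3)
depth = np.full((64, 64), 2.0)                 # everything 2 m away
next_frame = warp_frame(frame, depth, 60, 60, 32, 32, (0.05, 0.0, 0.0))
```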

“We outperform domain randomization, a method developed in 2017 that applies random colors and patterns to objects in the environment, which is still considered the go-to method these days,” says Yu. “While this technique generates diverse data, it lacks realism. LucidSim addresses both diversity and realism problems. It’s exciting that even without seeing the real world during training, the robot can recognize and navigate obstacles in real environments.”

The team is particularly excited about the potential of applying LucidSim to domains outside quadruped locomotion and parkour, their main test bed. One example is mobile manipulation, where a mobile robot is tasked with handling objects in an open area and where color perception is critical. “Today, these robots still learn from real-world demonstrations,” says Yang. “Although collecting demonstrations is easy, scaling a real-world robot teleoperation setup to thousands of skills is challenging because a human has to physically set up each scene. We hope to make this easier, thus qualitatively more scalable, by moving data collection into a virtual environment.”

Who’s the real expert?

The team put LucidSim to the test against an alternative, where an expert teacher demonstrates the skill for the robot to learn from. The results were surprising: Robots trained by the expert struggled, succeeding only 15 percent of the time — and even quadrupling the amount of expert training data barely moved the needle. But when robots collected their own training data through LucidSim, the story changed dramatically. Just doubling the dataset size catapulted success rates to 88 percent. “And giving our robot more data monotonically improves its performance — eventually, the student becomes the expert,” says Yang.

“One of the main challenges in sim-to-real transfer for robotics is achieving visual realism in simulated environments,” says Stanford University assistant professor of electrical engineering Shuran Song, who wasn’t involved in the research. “The LucidSim framework provides an elegant solution by using generative models to create diverse, highly realistic visual data for any simulation. This work could significantly accelerate the deployment of robots trained in virtual environments to real-world tasks.”

From the streets of Cambridge to the cutting edge of robotics research, LucidSim is paving the way toward a new generation of intelligent, adaptable machines — ones that learn to navigate our complex world without ever setting foot in it.

Yu and Yang wrote the paper with four fellow CSAIL affiliates: Ran Choi, an MIT postdoc in mechanical engineering; Yajvan Ravan, an MIT undergraduate in EECS; John Leonard, the Samuel C. Collins Professor of Mechanical and Ocean Engineering in the MIT Department of Mechanical Engineering; and Phillip Isola, an MIT associate professor in EECS. Their work was supported, in part, by a Packard Fellowship, a Sloan Research Fellowship, the Office of Naval Research, Singapore’s Defence Science and Technology Agency, Amazon, MIT Lincoln Laboratory, and the National Science Foundation Institute for Artificial Intelligence and Fundamental Interactions. The researchers presented their work at the Conference on Robot Learning (CoRL) in early November.
