New AI JetPack accelerates the entrepreneurial process

Apple co-founder Steve Jobs described the computer as a bicycle for the mind. What the Martin Trust Center for MIT Entrepreneurship just launched has a bit more horsepower.

“Maybe it’s not a Ferrari yet, but we have a car,” says Bill Aulet, the center’s managing director. The vehicle: the MIT Entrepreneurship JetPack, a generative artificial intelligence tool trained on Aulet’s 24-step Disciplined Entrepreneurship framework, which it uses to shape the prompts it feeds to large language models.

Introduce a startup idea to the Eship JetPack, “and it’s like having five or 10 or 12 MIT undergraduates who instantaneously run out and do all the research you want based on the question you asked, and then they bring back the answer,” Aulet says.
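
The article does not describe the JetPack’s internals, but the pattern it points to, turning framework steps into research prompts for a large language model, can be illustrated with a minimal Python sketch. Everything below, including the step subset, the prompt wording, and the `query_llm` stub, is hypothetical rather than the Trust Center’s actual implementation.

```python
# Minimal sketch of framework-guided prompting, in the spirit of the JetPack
# described above. The step list is a small subset of Disciplined
# Entrepreneurship; query_llm is a hypothetical stand-in for a real LLM API.

DE_STEPS = [
    "Market Segmentation",
    "Select a Beachhead Market",
    "Build an End User Profile",
    "Calculate the TAM for the Beachhead Market",
]

def build_prompt(step: str, startup_idea: str) -> str:
    """Wrap the founder's idea in a research prompt for one framework step."""
    return (
        "You are assisting a founder using the Disciplined Entrepreneurship "
        f"framework. For the step '{step}', research and summarize concrete "
        f"findings for this idea:\n\n{startup_idea}"
    )

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client in practice."""
    return f"[model response for a prompt of {len(prompt)} characters]"

if __name__ == "__main__":
    idea = "Mobile electric-vehicle charging delivered to parked cars."
    for step in DE_STEPS:
        print(f"--- {step} ---")
        print(query_llm(build_prompt(step, idea)))
```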

The tool is currently being used by entrepreneurship students and piloted outside MIT, and there is a waitlist that prospective users can join. The tool is accessed through the Trust Center’s Orbit digital entrepreneurship platform, which was launched for student use in 2019. Orbit grew out of a need for an alternative to the static Trust Center website, Aulet says.

“We weren’t following our own protocols of entrepreneurship,” he says. “You meet the students where they are, and more and more of them were on their phones. I said, ‘Let’s build an app that’s more dynamic than a static website, and that will be the way that we can get to the students.’”

With the help of Trust Center Executive Director Paul Cheek and Product Lead Doug Williams, Orbit has become a one-stop shop for student entrepreneurs. On the platform’s back end, leaders at the center are able to see what users are and are not clicking on.

Aulet and his team have been studying that user information since Orbit’s launch. It’s enabled them to learn what students want from the platform: not just information about course offerings or startup competition applications, but also guidance on an idea they’re working on and connections to an entrepreneurial community of co-founders and advisers. The team also received advice from Ethan Mollick SM ’04, PhD ’10, an associate professor of management at the Wharton School and author of a new book, “Co-Intelligence: Living and Working With AI.”

Official work on the Eship JetPack began about six months ago. The name was inspired by the acceleration a jet pack provides, and the need for a human to take advantage of the boost and guide its direction.

“As we moved from our initial focus on capturing information to providing guidance, MIT’s Disciplined Entrepreneurship and Startup Tactics frameworks were the perfect place to start,” Williams says.

One of the earliest beta users, Shari Van Cleave, MBA ’15, demonstrated how to use the AI tool in a YouTube video.

She submitted an experimental idea for mobile electric vehicle charging, and within seconds the AI tool suggested market segments, beachhead markets, a business model, pricing, assumptions, testing, and a product plan — and that’s only seven of the 24 steps of the Disciplined Entrepreneurship framework that she explored.

“I was impressed by how quickly the AI, with just a few details, generated recommendations for everything from market-sizing (TAM) to lifetime customer value models,” Van Cleave said in an email. “Having a high-quality rough draft means founders, whether new or experienced, can execute and fundraise faster.”
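
As a rough illustration of the kinds of quick calculations Van Cleave mentions, here is a toy bottom-up TAM and customer lifetime value computation. All numbers and formulas below are generic placeholders, not figures from her demo or from the JetPack itself.

```python
# Illustrative only: the sort of back-of-the-envelope math a first draft
# might contain. Every number here is made up for the example.

def tam(num_end_users: int, annual_revenue_per_user: float) -> float:
    """Bottom-up total addressable market for a beachhead segment."""
    return num_end_users * annual_revenue_per_user

def customer_ltv(annual_profit: float, retention_years: int, discount_rate: float = 0.1) -> float:
    """Simple discounted lifetime value of a single customer."""
    return sum(
        annual_profit / (1 + discount_rate) ** t
        for t in range(1, retention_years + 1)
    )

print(f"TAM: ${tam(50_000, 1_200):,.0f}")    # 50,000 fleet vehicles x $1,200/year
print(f"LTV: ${customer_ltv(400, 5):,.2f}")  # $400/year profit over 5 years
```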

The tool can also be useful for entrepreneurs who already have an idea and are well on their way through the 24-step process, Aulet says. For example, they might want insights and quotes about how their company can improve its performance, or want to determine whether there’s a better market to be targeting.

“Our goal is to lift the field of entrepreneurship, and a tool like this would allow more people to be entrepreneurs, and be better entrepreneurs,” Aulet says.

AI model can reveal the structures of crystalline materials

For more than 100 years, scientists have been using X-ray crystallography to determine the structure of crystalline materials such as metals, rocks, and ceramics.

This technique works best when the crystal is intact, but in many cases, scientists have only a powdered version of the material, which contains random fragments of the crystal. This makes it more challenging to piece together the overall structure.

MIT chemists have now come up with a new generative AI model that can make it much easier to determine the structures of these powdered crystals. The prediction model could help researchers characterize materials for use in batteries, magnets, and many other applications.

“Structure is the first thing that you need to know for any material. It’s important for superconductivity, it’s important for magnets, it’s important for knowing what photovoltaic you created. It’s important for any application that you can think of which is materials-centric,” says Danna Freedman, the Frederick George Keyes Professor of Chemistry at MIT.

Freedman and Jure Leskovec, a professor of computer science at Stanford University, are the senior authors of the new study, which appears today in the Journal of the American Chemical Society. MIT graduate student Eric Riesel and Yale University undergraduate Tsach Mackey are the lead authors of the paper.

Distinctive patterns

Crystalline materials, which include metals and most other inorganic solid materials, are made of lattices that consist of many identical, repeating units. These units can be thought of as “boxes” with a distinctive shape and size, with atoms arranged precisely within them.

When X-rays are beamed at these lattices, they diffract off atoms at different angles and intensities, revealing information about the positions of the atoms and the bonds between them. Since the early 1900s, this technique has been used to analyze materials, including biological molecules that have a crystalline structure, such as DNA and some proteins.
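
The article stays at this qualitative level; for reference, the relation being exploited is the standard Bragg condition from textbook crystallography (not anything specific to the study described here), which ties the angles of constructive interference to the spacing between lattice planes:

```latex
% Bragg condition for constructive interference from lattice planes
n\lambda = 2d\sin\theta
% n: integer diffraction order, \lambda: X-ray wavelength,
% d: spacing between lattice planes, \theta: angle of incidence
```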

For materials that exist only as a powdered crystal, solving these structures becomes much more difficult because the fragments don’t carry the full 3D structure of the original crystal.

“The precise lattice still exists, because what we call a powder is really a collection of microcrystals. So, you have the same lattice as a large crystal, but they’re in a fully randomized orientation,” Freedman says.

For thousands of these materials, X-ray diffraction patterns exist but remain unsolved. To try to crack the structures of these materials, Freedman and her colleagues trained a machine-learning model on data from a database called the Materials Project, which contains more than 150,000 materials. First, they fed tens of thousands of these materials into an existing model that can simulate what the X-ray diffraction patterns would look like. Then, they used those patterns to train their AI model, which they call Crystalyze, to predict structures based on the X-ray patterns.
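
A minimal sketch of that data-generation step appears below, under stated assumptions: pymatgen’s XRDCalculator is one openly available forward simulator (the paper may well use a different one), and the final training call is left as a hypothetical placeholder for Crystalyze itself.

```python
# Sketch of the training-data step described above: simulate powder XRD
# patterns from known structures, then learn the inverse mapping.
# This is not the authors' code; pymatgen is used only as a convenient,
# openly available pattern simulator.

from pymatgen.core import Structure
from pymatgen.analysis.diffraction.xrd import XRDCalculator

def simulate_pattern(structure: Structure) -> list[tuple[float, float]]:
    """Forward model: crystal structure -> simulated powder diffraction pattern."""
    calc = XRDCalculator(wavelength="CuKa")
    pattern = calc.get_pattern(structure, two_theta_range=(5, 90))
    return list(zip(pattern.x, pattern.y))  # (two-theta, intensity) pairs

def build_training_set(structures: list[Structure]):
    """Pair each known structure with its simulated pattern."""
    return [(simulate_pattern(s), s) for s in structures]

# The learning step itself is out of scope here; conceptually:
# model = train_structure_predictor(build_training_set(materials_project_structures))
```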

The model breaks the process of predicting structures into several subtasks. First, it determines the size and shape of the lattice “box” and which atoms will go into it. Then, it predicts the arrangement of atoms within the box. For each diffraction pattern, the model generates several possible structures, which can be tested by feeding the structures into a model that determines diffraction patterns for a given structure.

“Our model is generative AI, meaning that it generates something that it hasn’t seen before, and that allows us to generate several different guesses,” Riesel says. “We can make a hundred guesses, and then we can predict what the powder pattern should look like for our guesses. And then if the input looks exactly like the output, then we know we got it right.”
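
The generate-and-check loop Riesel describes can be sketched as below. Both stub functions are placeholders standing in for the generative model and the pattern simulator, and the mean-squared mismatch score is illustrative, not the paper’s metric.

```python
# Generate-and-check sketch: propose many candidate structures, re-simulate
# their powder patterns, and keep the candidate whose simulated pattern best
# matches the measured one. The two stubs below are placeholders only.

import numpy as np

rng = np.random.default_rng(0)

def propose_candidate(observed: np.ndarray) -> dict:
    """Placeholder for the generative model: returns a fake 'structure'."""
    return {"lattice": rng.uniform(3, 12, size=3), "n_atoms": int(rng.integers(1, 20))}

def simulate_candidate_pattern(candidate: dict, n_points: int = 500) -> np.ndarray:
    """Placeholder forward simulator: structure -> intensity profile."""
    return rng.random(n_points)

def solve_structure(observed: np.ndarray, n_guesses: int = 100):
    """Keep the guess whose simulated pattern best matches the input pattern."""
    best, best_score = None, float("inf")
    for _ in range(n_guesses):
        candidate = propose_candidate(observed)
        score = float(np.mean((observed - simulate_candidate_pattern(candidate)) ** 2))
        if score < best_score:
            best, best_score = candidate, score
    return best, best_score

observed = rng.random(500)  # stand-in for a measured powder pattern
print(solve_structure(observed))
```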

Solving unknown structures

The researchers tested the model on several thousand simulated diffraction patterns from the Materials Project. They also tested it on more than 100 experimental diffraction patterns, none of which were included in the training data, from the RRUFF database, which contains powdered X-ray diffraction data for nearly 14,000 natural crystalline minerals. On these data, the model was accurate about 67 percent of the time. Then, they began testing the model on diffraction patterns that hadn’t been solved before. These data came from the Powder Diffraction File, which contains diffraction data for more than 400,000 solved and unsolved materials.

Using their model, the researchers came up with structures for more than 100 of these previously unsolved patterns. They also used their model to discover structures for three materials that Freedman’s lab created by forcing elements that do not react at atmospheric pressure to form compounds under high pressure. This approach can be used to generate new materials that have radically different crystal structures and physical properties, even though their chemical composition is the same.

Graphite and diamond — both made of pure carbon — are examples of such materials. The materials that Freedman has developed, which each contain bismuth and one other element, could be useful in the design of new materials for permanent magnets.

“We found a lot of new materials from existing data, and most importantly, solved three unknown structures from our lab that comprise the first new binary phases of those combinations of elements,” Freedman says.

Being able to determine the structures of powdered crystalline materials could help researchers working in nearly any materials-related field, according to the MIT team, which has posted a web interface for the model at crystalyze.org.

The research was funded by the U.S. Department of Energy and the National Science Foundation.

Study: AI could lead to inconsistent outcomes in home surveillance

A new study from researchers at MIT and Penn State University reveals that if large language models were to be used in home surveillance, they could recommend calling the police even when surveillance videos show no criminal activity.

In addition, the models the researchers studied were inconsistent in which videos they flagged for police intervention. For instance, a model might flag one video that shows a vehicle break-in but not flag another video that shows a similar activity. Models often disagreed with one another over whether to call the police for the same video.

Furthermore, the researchers found that some models flagged videos for police intervention relatively less often in neighborhoods where most residents are white, controlling for other factors. This shows that the models exhibit inherent biases influenced by the demographics of a neighborhood, the researchers say.

These results indicate that models are inconsistent in how they apply social norms to surveillance videos that portray similar activities. This phenomenon, which the researchers call norm inconsistency, makes it difficult to predict how models would behave in different contexts.

“The move-fast, break-things modus operandi of deploying generative AI models everywhere, and particularly in high-stakes settings, deserves much more thought since it could be quite harmful,” says co-senior author Ashia Wilson, the Lister Brothers Career Development Professor in the Department of Electrical Engineering and Computer Science and a principal investigator in the Laboratory for Information and Decision Systems (LIDS).

Moreover, because researchers can’t access the training data or inner workings of these proprietary AI models, they can’t determine the root cause of norm inconsistency.

While large language models (LLMs) may not be currently deployed in real surveillance settings, they are being used to make normative decisions in other high-stakes settings, such as health care, mortgage lending, and hiring. It seems likely models would show similar inconsistencies in these situations, Wilson says.

“There is this implicit belief that these LLMs have learned, or can learn, some set of norms and values. Our work is showing that is not the case. Maybe all they are learning is arbitrary patterns or noise,” says lead author Shomik Jain, a graduate student in the Institute for Data, Systems, and Society (IDSS).

Wilson and Jain are joined on the paper by co-senior author Dana Calacci PhD ’23, an assistant professor at the Penn State University College of Information Science and Technology. The research will be presented at the AAAI Conference on AI, Ethics, and Society.

“A real, imminent, practical threat”

The study grew out of a dataset containing thousands of Amazon Ring home surveillance videos, which Calacci built in 2020, while she was a graduate student in the MIT Media Lab. Ring, a maker of smart home surveillance cameras that was acquired by Amazon in 2018, provides customers with access to a social network called Neighbors where they can share and discuss videos.

Calacci’s prior research indicated that people sometimes use the platform to “racially gatekeep” a neighborhood by determining who does and does not belong there based on the skin tones of video subjects. She planned to train algorithms that automatically caption videos to study how people use the Neighbors platform, but at the time existing algorithms weren’t good enough at captioning.

The project pivoted with the explosion of LLMs.

“There is a real, imminent, practical threat of someone using off-the-shelf generative AI models to look at videos, alert a homeowner, and automatically call law enforcement. We wanted to understand how risky that was,” Calacci says.

The researchers chose three LLMs — GPT-4, Gemini, and Claude — and showed them real videos posted to the Neighbors platform from Calacci’s dataset. They asked the models two questions: “Is a crime happening in the video?” and “Would the model recommend calling the police?”
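
A rough sketch of that querying setup is below; `ask_model` is a hypothetical stand-in for whichever vision-capable API is being evaluated, since the article does not publish the study’s actual prompting or video-handling code.

```python
# Illustrative sketch of the evaluation loop described above, not the
# researchers' code. ask_model is a hypothetical placeholder for a real
# multimodal LLM client (GPT-4, Gemini, or Claude).

QUESTIONS = [
    "Is a crime happening in the video?",
    "Would you recommend calling the police?",
]

def ask_model(model_name: str, video_path: str, question: str) -> str:
    """Hypothetical call into a vision-capable LLM; replace with a real client."""
    return f"[{model_name} answer to '{question}' for {video_path}]"

def query_all_models(video_path: str, models=("gpt-4", "gemini", "claude")) -> dict:
    """Collect each model's answers so their recommendations can be compared."""
    return {m: {q: ask_model(m, video_path, q) for q in QUESTIONS} for m in models}

print(query_all_models("neighbors_clip_0001.mp4"))
```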

They had humans annotate each video, identifying whether it was day or night, the type of activity, and the gender and skin tone of the subject. The researchers also used census data to collect demographic information about the neighborhoods the videos were recorded in.

Inconsistent decisions

They found that all three models nearly always said that no crime was occurring in the videos, or gave an ambiguous response, even though 39 percent of the videos did show a crime.

“Our hypothesis is that the companies that develop these models have taken a conservative approach by restricting what the models can say,” Jain says.

But even though the models said most videos contained no crime, they recommended calling the police for between 20 and 45 percent of videos.

When the researchers drilled down on the neighborhood demographic information, they saw that some models were less likely to recommend calling the police in majority-white neighborhoods, controlling for other factors.

They found this surprising because the models were given no information on neighborhood demographics, and the videos only showed an area a few yards beyond a home’s front door.

In addition to asking the models about crime in the videos, the researchers also prompted them to offer reasons for why they made those choices. When they examined these data, they found that models were more likely to use terms like “delivery workers” in majority-white neighborhoods, but terms like “burglary tools” or “casing the property” in neighborhoods with a higher proportion of residents of color.
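
One simple way to quantify that kind of difference is to compare how often such phrases appear in the models’ explanations for videos from different neighborhood groups, as in the sketch below. The phrases come from the article; the counting scheme is illustrative rather than the researchers’ actual analysis.

```python
# Toy rationale analysis: fraction of model explanations mentioning each
# phrase, computed separately per neighborhood group. Illustrative only.

from collections import Counter

PHRASES = ["delivery worker", "burglary tools", "casing the property"]

def phrase_rates(explanations: list[str]) -> dict[str, float]:
    """Fraction of explanations that mention each phrase (case-insensitive)."""
    counts = Counter()
    for text in explanations:
        lowered = text.lower()
        for phrase in PHRASES:
            if phrase in lowered:
                counts[phrase] += 1
    n = max(len(explanations), 1)
    return {phrase: counts[phrase] / n for phrase in PHRASES}

# Usage (with hypothetical lists of explanation strings):
# phrase_rates(explanations_majority_white) vs. phrase_rates(explanations_other)
```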

“Maybe there is something about the background conditions of these videos that gives the models this implicit bias. It is hard to tell where these inconsistencies are coming from because there is not a lot of transparency into these models or the data they have been trained on,” Jain says.

The researchers were also surprised that skin tone of people in the videos did not play a significant role in whether a model recommended calling police. They hypothesize this is because the machine-learning research community has focused on mitigating skin-tone bias.

“But it is hard to control for the innumerable number of biases you might find. It is almost like a game of whack-a-mole. You can mitigate one and another bias pops up somewhere else,” Jain says.

Many mitigation techniques require knowing the bias at the outset. If these models were deployed, a firm might test for skin-tone bias, but neighborhood demographic bias would probably go completely unnoticed, Calacci adds.

“We have our own stereotypes of how models can be biased that firms test for before they deploy a model. Our results show that is not enough,” she says.

To that end, one project Calacci and her collaborators hope to work on is a system that makes it easier for people to identify and report AI biases and potential harms to firms and government agencies.

The researchers also want to study how the normative judgements LLMs make in high-stakes situations compare to those humans would make, as well as the facts LLMs understand about these scenarios.

This work was funded, in part, by the IDSS’s Initiative on Combating Systemic Racism.

Improving biology education here, there, and everywhere

When she was a child, Mary Ellen Wiltrout PhD ’09 didn’t want to follow in her mother’s footsteps as a K-12 teacher. Growing up in southwestern Pennsylvania, Wiltrout was studious with an early interest in science — and ended up pursuing biology as a career. 

But following her doctorate at MIT, she pivoted toward education after all. Now, as the director of blended and online initiatives and a lecturer with the Department of Biology, she’s shaping biology pedagogy at MIT and beyond.

Establishing MOOCs at MIT

To this day, E.C. Whitehead Professor of Biology and Howard Hughes Medical Institute (HHMI) investigator emeritus Tania Baker considers creating a permanent role for Wiltrout one of the most consequential decisions she made as department head.

Since launching the very first MITxBio massive open online course, 7.00x (Introduction to Biology – The Secret of Life), with professor of biology Eric Lander in 2013, Wiltrout’s team has worked with MIT Open Learning and biology faculty to build an award-winning repertoire of MITxBio courses.

MITxBio is part of the online learning platform edX, established by MIT and Harvard University in 2012, which today connects 86 million people worldwide to online learning opportunities. Within MITxBio, Wiltrout leads a team of instructional staff and students to develop online learning experiences for MIT students and the public while researching effective methods for learner engagement and course design.

“Mary Ellen’s approach has an element of experimentation that embodies a very MIT ethos: applying rigorous science to creatively address challenges with far-reaching impact,” says Darcy Gordon, instructor of blended and online initiatives.

Mentee to motivator

Wiltrout was inspired to pursue both teaching and research by the late geneticist Elizabeth “Beth” Jones at Carnegie Mellon University, where Wiltrout earned a degree in biological sciences and served as a teaching assistant in lab courses.

“I thought it was a lot of fun to work with students, especially at the higher level of education, and especially with a focus on biology,” Wiltrout recalls, noting she developed her love of teaching in those early experiences.

Though her research advisor at the time discouraged her from teaching, Jones assured Wiltrout that it was possible to pursue both.

Jones, who received her postdoctoral training with late Professor Emeritus Boris Magasanik at MIT, encouraged Wiltrout to apply to the Institute and join American Cancer Society and HHMI Professor Graham Walker’s lab. In 2009, Wiltrout earned a PhD in biology for thesis work in the Walker lab, where she continued to learn from enthusiastic mentors.

“When I joined Graham’s lab, everyone was eager to teach and support a new student,” she reflects. After watching Walker aid a struggling student, Wiltrout was further affirmed in her choice. “I knew I could go to Graham if I ever needed to.”

After graduation, Wiltrout taught molecular biology at Harvard for a few years until Baker facilitated her move back to MIT. Now, she’s a resource for faculty, postdocs, and students.

“She is an incredibly rich source of knowledge for everything from how to implement the increasingly complex tools for running a class to the best practices for ensuring a rigorous and inclusive curriculum,” says Iain Cheeseman, the Herman and Margaret Sokol Professor of Biology and associate head of the biology department.

Stephen Bell, the Uncas and Helen Whitaker Professor of Biology and instructor of the Molecular Biology series of MITxBio courses, notes Wiltrout is known for staying on the “cutting edge of pedagogy.”

“She has a comprehensive knowledge of new online educational tools and is always ready to help any professor to implement them in any way they wish,” he says.

Gordon finds Wiltrout’s experiences as a biologist and learning engineer instrumental to her own professional development and a model for their colleagues in science education.

“Mary Ellen has been an incredibly supportive supervisor. She facilitates a team environment that centers on frequent feedback and iteration,” says Tyler Smith, instructor for pedagogy training and biology.

Prepared for the pandemic, and beyond

Wiltrout believes blended learning, combining in-person and online components, is the best path forward for education at MIT. Building personal relationships in the classroom is critical, but online material and supplemental instruction are also key to providing immediate feedback, formative assessments, and other evidence-based learning practices.

“A lot of people have realized that they can’t ignore online learning anymore,” Wiltrout noted during an interview on The Champions Coffee Podcast in 2023. That couldn’t have been truer than in 2020, when academic institutions were forced to suddenly shift to virtual learning.

“When Covid hit, we already had all the infrastructure in place,” Baker says. “Mary Ellen helped not just our department, but also contributed to MIT education’s survival through the pandemic.”

For Wiltrout’s efforts, she received a COVID-19 Hero Award, a recognition from the School of Science for staff members who went above and beyond during that extraordinarily difficult time.

“Mary Ellen thinks deeply about how to create the best learning opportunities possible,” says Cheeseman, one of almost a dozen faculty members who nominated her for the award.

Recently, Wiltrout expanded beyond higher education and into high schools, taking on several interns in collaboration with Empowr, a nonprofit organization that teaches software development skills to Black students to create a school-to-career pipeline. Wiltrout is proud to report that one of these interns is now a student at MIT in the class of 2028.

Looking forward, Wiltrout aims to stay ahead of the curve with the latest educational technology and is excited to see how modern tools can be incorporated into education.

“Everyone is pretty certain that generative AI is going to change education,” she says. “We need to be experimenting with how to take advantage of technology to improve learning.”

Ultimately, she is grateful to continue developing her career at MIT biology.

“It’s exciting to come back to the department after being a student and to work with people as colleagues to produce something that has an impact on what they’re teaching current MIT students and sharing with the world for further reach,” she says.

As for Wiltrout’s own daughter, she’s declared she would like to follow in her mother’s footsteps — a fitting symbol of Wiltrout’s impact on the future of education.