Consortium led by MIT, Harvard University, and Mass General Brigham spurs development of 408 MW of renewable energy

MIT is co-leading an effort to enable the development of two new large-scale renewable energy projects in regions with carbon-intensive electrical grids: Big Elm Solar in Bell County, Texas, came online this year, and the Bowman Wind Project in Bowman County, North Dakota, is expected to be operational in 2026. Together, they will add a combined 408 megawatts (MW) of new renewable energy capacity to the power grid. This work is a critical part of MIT’s strategy to achieve its goal of net-zero carbon emissions by 2026.

The Consortium for Climate Solutions, which includes MIT and 10 other Massachusetts organizations, seeks to eliminate close to 1 million metric tons of greenhouse gases each year — more than five times the annual direct emissions from MIT’s campus — by committing to purchase an estimated 1.3 million megawatt-hours of new solar and wind electricity generation annually.

“MIT has mobilized on multiple fronts to expedite solutions to climate change,” says Glen Shor, executive vice president and treasurer. “Catalyzing these large-scale renewable projects is an important part of our comprehensive efforts to reduce carbon emissions from generating energy. We are pleased to work in partnership with other local enterprises and organizations to amplify the impact we could achieve individually.”

The two new projects complement MIT’s existing 25-year power purchase agreement established with Summit Farms in 2016, which enabled the construction of a roughly 650-acre, 60 MW solar farm on farmland in North Carolina, leading to the early retirement of a coal-fired plant nearby. Its success has inspired other institutions to implement similar aggregation models.

A collective approach to enable global impact

MIT, Harvard University, and Mass General Brigham formed the consortium in 2020 to provide a structure to accelerate global emissions reductions through the development of large-scale renewable energy projects — accelerating and expanding the impact of each institution’s greenhouse gas reduction initiatives. As the project’s anchors, they collectively procured the largest volume of energy through the aggregation.  

The consortium engaged with PowerOptions, a nonprofit energy-buying consortium, which offered its members the opportunity to participate in the projects. The City of Cambridge, Beth Israel Lahey Health, Boston Children’s Hospital, Dana-Farber Cancer Institute, Tufts University, the Mass Convention Center Authority, the Museum of Fine Arts, and GBH later joined the consortium through PowerOptions.
 
The consortium vetted over 125 potential projects against its rigorous project evaluation criteria. With faculty and MIT stakeholder input on a short list of the highest-ranking projects, it ultimately chose Bowman Wind and Big Elm Solar. Collectively, these two projects will achieve large greenhouse gas emissions reductions in two of the most carbon-intensive electrical grid regions in the United States and create clean energy generation sources to reduce negative health impacts.

“Enabling these projects in regions where the grids are most carbon-intensive allows them to have the greatest impact. We anticipate these projects will prevent two times more emissions per unit of generated electricity than would a similar-scale project in New England,” explains Vice President for Campus Services and Stewardship Joe Higgins.

Because all consortium institutions made significant 15-to-20-year financial commitments to buy electricity, the developer was able to obtain the external project financing needed to build the projects. Owned and operated by Apex Clean Energy, the projects will add new renewable electricity to the grid equivalent to powering 130,000 households annually, displacing over 950,000 metric tons of greenhouse gas emissions each year from highly carbon-intensive power plants in the region.

Complementary decarbonization work underway 

In addition to investing in offsite renewable energy projects, many consortium members have developed strategies to reduce and eliminate their own direct emissions. At MIT, accomplishing this requires transformative change in how energy is generated, distributed, and used on campus. Efforts underway include the installation of solar panels on campus rooftops that will increase renewable energy generation four-fold by 2026; continuing to transition its heat distribution infrastructure from steam-based to hot water-based; utilizing design and construction that minimizes emissions and increases energy efficiency; employing AI-enabled sensors to optimize temperature set points and reduce energy use in buildings; and converting MIT’s vehicle fleet to all-electric vehicles while adding more electric car charging stations.

The Institute has also upgraded the Central Utilities Plant, which uses advanced co-generation technology to produce power that is up to 20 percent less carbon-intensive than that from the regional power grid. MIT is charting the course toward a next-generation district energy system, with a comprehensive planning initiative to revolutionize its campus energy infrastructure. The effort is exploring leading-edge technology, including industrial-scale heat pumps, geothermal exchange, micro-reactors, bio-based fuels, and green hydrogen derived from renewable sources as solutions to achieve full decarbonization of campus operations by 2050.

“At MIT, we are focused on decarbonizing our own campus as well as the role we can play in solving climate at the largest of scales, including supporting a cleaner grid in line with the call to triple renewables globally by 2030. By enabling these large-scale renewable projects, we can have an immediate and significant impact of reducing emissions through the urgently needed decarbonization of regional power grids,” says Julie Newman, MIT’s director of sustainability.  

A vision for U.S. science success

White House science advisor Arati Prabhakar expressed confidence in U.S. science and technology capacities during a talk on Wednesday about major issues the country must tackle.

“Let me start with the purpose of science and technology and innovation, which is to open possibilities so that we can achieve our great aspirations,” said Prabhakar, who is the director of the Office of Science and Technology Policy (OSTP) and a co-chair of the President’s Council of Advisors on Science and Technology (PCAST). 

“The aspirations that we have as a country today are as great as they have ever been,” she added.

Much of Prabhakar’s talk focused on three major issues in science and technology development: cancer prevention, climate change, and AI. In the process, she also emphasized the necessity for the U.S. to sustain its global leadership in research across domains of science and technology, which she called “one of America’s long-time strengths.”

“Ever since the end of the Second World War, we said we’re going in on basic research, we’re going to build our universities’ capacity to do it, we have an unparalleled basic research capacity, and we should always have that,” said Prabhakar.

“We have gotten better, I think, in recent years at commercializing technology from our basic research,” Prabhakar added, noting, “Capital moves when you can see profit and growth.” The Biden administration, she said, has invested in a variety of new ways for the public and private sector to work together to massively accelerate the movement of technology into the market.

Wednesday’s talk drew a capacity audience of nearly 300 people in MIT’s Wong Auditorium and was hosted by the Manufacturing@MIT Working Group. The event included introductory remarks by Suzanne Berger, an Institute Professor and a longtime expert on the innovation economy, and Nergis Mavalvala, dean of the School of Science and an astrophysicist and leader in gravitational-wave detection.

Introducing Mavalvala, Berger said the 2015 announcement of the discovery of gravitational waves “was the day I felt proudest and most elated to be a member of the MIT community,” and noted that U.S. government support helped make the research possible. Mavalvala, in turn, said MIT was “especially honored” to hear Prabhakar discuss leading-edge research and acknowledge the role of universities in strengthening the country’s science and technology sectors.

Prabhakar has extensive experience in both government and the private sector. She has been OSTP director and co-chair of PCAST since October of 2022. She served as director of the Defense Advanced Research Projects Agency (DARPA) from 2012 to 2017 and director of the National Institute of Standards and Technology (NIST) from 1993 to 1997.

She has also held executive positions at Raychem and Interval Research, and spent a decade at the investment firm U.S. Venture Partners. An engineer by training, Prabhakar earned a BS in electrical engineering from Texas Tech University in 1979, an MS in electrical engineering from Caltech in 1980, and a PhD in applied physics from Caltech in 1984.

Among other remarks about medicine, Prabhakar touted the Biden administration’s “Cancer Moonshot” program, which aims to cut the cancer death rate in half over the next 25 years through multiple approaches, from better health care provision and cancer detection to limiting public exposure to carcinogens. We should be striving, Prabhakar said, for “a future in which people take good health for granted and can get on with their lives.”

On AI, she highlighted both the promise of and the concerns about the technology, saying, “I think it’s time for active steps to get on a path to where it actually allows people to do more and earn more.”

When it comes to climate change, Prabhakar said, “We all understand that the climate is going to change. But it’s in our hands how severe those changes get. And it’s possible that we can build a better future.” She noted the bipartisan infrastructure bill signed into law in 2021 and the Biden administration’s Inflation Reduction Act as important steps forward in this fight.

“Together those are making the single biggest investment anyone anywhere on the planet has ever made in the clean energy transition,” she said. “I used to feel hopeless about our ability to do that, and it gives me tremendous hope.”

After her talk, Prabhakar was joined onstage for a group discussion with the three co-presidents of the MIT Energy and Climate Club: Laurentiu Anton, a doctoral candidate in electrical engineering and computer science; Rosie Keller, an MBA candidate at the MIT Sloan School of Management; and Thomas Lee, a doctoral candidate in MIT’s Institute for Data, Systems, and Society.

Asked about the seemingly sagging public confidence in science today, Prabhakar offered a few thoughts.

“The first thing I would say is, don’t take it personally,” Prabhakar said, noting that any dip in public regard for science is less severe than the diminished public confidence in other institutions.

Adding some levity, she observed that in polling about which occupations are regarded as being desirable for a marriage partner to have, “scientist” still ranks highly.

“Scientists still do really well on that front, we’ve got that going for us,” she quipped.

More seriously, Prabhakar observed, rather than “preaching” at the public, scientists should recognize that “part of the job for us is to continue to be clear about what we know are the facts, and to present them clearly but humbly, and to be clear that we’re going to continue working to learn more.” At the same time, she continued, scientists can always reinforce that “oh, by the way, facts are helpful things that can actually help you make better choices about how the future turns out. I think that would be better in my view.”

Prabhakar said that her White House work had been guided, in part, by one of the overarching themes that President Biden has often reinforced.

“He thinks about America as a nation that can be described in a single word, and that word is ‘possibilities,’” she said. “And that idea, that is such a big idea, it lights me up. I think of what we do in the world of science and technology and innovation as really part and parcel of creating those possibilities.”

Ultimately, Prabhakar said, at all times and all points in American history, scientists and technologists must continue “to prove once more that when people come together and do this work … we do it in a way that builds opportunity and expands opportunity for everyone in our country. I think this is the great privilege we all have in the work we do, and it’s also our responsibility.”

Catherine Wolfram: High-energy scholar

In the mid-2000s, Catherine Wolfram PhD ’96 reached what she calls “an inflection point” in her career. After about a decade of studying U.S. electricity markets, she had come to recognize that “you couldn’t study the energy industries without thinking about climate mitigation,” as she puts it.

At the same time, Wolfram understood that the trajectory of energy use in the developing world was a massively important part of the climate picture. To get a comprehensive grasp on global dynamics, she says, “I realized I needed to start thinking about the rest of the world.”

An accomplished scholar and policy expert, Wolfram has been on the faculty at Harvard University, the University of California at Berkeley — and now MIT, where she is the William Barton Rogers Professor in Energy. She has also served as deputy assistant secretary for climate and energy economics at the U.S. Treasury.

Yet even leading experts want to keep learning. So, when she hit that inflection point, Wolfram started carving out a new phase of her research career.

“One of the things I love about being an academic is, I could just decide to do that,” Wolfram says. “I didn’t need to check with a boss. I could just pivot my career to being more focused on thinking about energy in the developing world.”

Over the last decade, Wolfram has published a wide array of original studies about energy consumption in the developing world. From Kenya to Mexico to South Asia, she has shed light on the dynamics of economic growth and energy consumption — while also spending part of that time in government service. Last year, Wolfram joined the faculty of the MIT Sloan School of Management, where her work bolsters the Institute’s growing effort to combat climate change.

Studying at MIT

Wolfram largely grew up in Minnesota, where her father was a legal scholar, although he moved to Cornell University around the time she started high school. As an undergraduate, she majored in economics at Harvard University, and after graduation she worked first for a consultant, then for the Massachusetts Department of Public Utilities, the agency regulating energy rates. 

In the latter job, Wolfram kept noticing that people were often citing the research of MIT scholars Paul Joskow (now the Elizabeth and James Killian Professor of Economics Emeritus in MIT’s Department of Economics) and Richard Schmalensee (a former dean of the MIT Sloan School of Management and now the Howard W. Johnson Professor of Management Emeritus). Seeing how consequential economics research could be for policymaking, Wolfram decided to get a PhD in the field and was accepted into MIT’s doctoral program.

“I went into graduate school with an unusually specific view of what I wanted to do,” Wolfram says. “I wanted to work with Paul Joskow and Dick Schmalensee on electricity markets, and that’s how I wound up here.”

At MIT, Wolfram also ended up working extensively with Nancy Rose, the Charles P. Kindleberger Professor of Applied Economics and a former head of the Department of Economics, who helped oversee Wolfram’s thesis; Rose has extensively studied market regulation as well.

Wolfram’s dissertation research largely focused on price-setting behavior in the U.K.’s newly deregulated electricity markets, which, it turned out, applied handily to the U.S., where a similar process was taking place. “I was fortunate because this was around the time California was thinking about restructuring, as it was known,” Wolfram says. She spent four years on the faculty at Harvard, then moved to UC Berkeley. Wolfram’s studies have shown that deregulation has had some medium-term benefits, for instance in making power plants operate more efficiently.

Turning on the AC

By around 2010, though, Wolfram began shifting her scholarly focus in earnest, conducting innovative studies about energy in the developing world. One strand of her research has centered on Kenya, to better understand how more energy access for people without electricity might fit into growth in the developing world.

In this case, Wolfram’s perhaps surprising conclusion is that electrification itself is not a magic ticket to prosperity; people without electricity are more eager to adopt it when they have a practical economic need for it. Meanwhile, they have other essential needs that are not necessarily being addressed.

“The 800 million people in the world who don’t have electricity also don’t have access to good health care or running water,” Wolfram says. “Giving them better housing infrastructure is important, and harder to tackle. It’s not clear that bringing people electricity alone is the single most useful thing from a development perspective. Although electricity is a super-important component of modern living.”

Wolfram has even delved into topics such as air conditioner use in the developing world — an important driver of energy use. As her research shows, many countries with a combined population far bigger than that of the U.S. are among the fastest-growing adopters of air conditioners, and have an even greater need for them based on their climates. Adoption of air conditioning within those countries is also characterized by marked economic inequality.

From early 2021 until late 2022, Wolfram also served in the administration of President Joe Biden, where her work centered on global energy issues. Among other things, she was part of the team working out a price-cap policy for Russian oil exports, a concept she thinks could be applied to many other products globally. She notes, though, that working with countries heavily dependent on exporting energy materials will always require careful engagement.

“We need to be mindful of that dependence and importance as we go through this massive effort to decarbonize the energy sector and shift it to a whole new paradigm,” Wolfram says.

At MIT again

Still, she notes, the world does need a whole new energy paradigm, and fast. Her arrival at MIT overlaps with the emergence of a new Institute-wide effort, the Climate Project at MIT, that aims to accelerate and scale climate solutions and good climate policy, including through the new Climate Policy Center at MIT Sloan. That kind of effort, Wolfram says, matters to her.

“It’s part of why I’ve come to MIT,” Wolfram says. “Technology will be one part of the climate solution, but I do think an innovative mindset, how can we think about doing things better, can be productively applied to climate policy.” On being at MIT, she adds: “It’s great, it’s awesome. One of the things that pleasantly surprised me is how tight-knit and friendly the MIT faculty all are, and how many interactions I’ve had with people from other departments.”

Wolfram has also been enjoying her teaching at MIT, and will be offering a large class in spring 2025, 15.016 (Climate and Energy in the Global Economy), that she debuted this past academic year.

“It’s super fun to have students from around the world, who have personal stories and knowledge of energy systems in their countries and can contribute to our discussions,” she says.

When it comes to tackling climate change, many things seem daunting. But there is still a world of knowledge to be acquired while we try to keep the planet from overheating, and Wolfram has a can-do attitude about learning more and applying those lessons.

“We’ve made a lot of progress,” Wolfram says. “But we still have a lot more to do.”

MIT researchers develop an efficient way to train more reliable AI agents

Fields ranging from robotics to medicine to political science are attempting to train AI systems to make meaningful decisions of all kinds. For example, using an AI system to intelligently control traffic in a congested city could help motorists reach their destinations faster, while improving safety or sustainability.

Unfortunately, teaching an AI system to make good decisions is no easy task.

Reinforcement learning models, which underlie these AI decision-making systems, still often fail when faced with even small variations in the tasks they are trained to perform. In the case of traffic, a model might struggle to control a set of intersections with different speed limits, numbers of lanes, or traffic patterns.

To boost the reliability of reinforcement learning models for complex tasks with variability, MIT researchers have introduced a more efficient algorithm for training them.

The algorithm strategically selects the best tasks for training an AI agent so it can effectively perform all tasks in a collection of related tasks. In the case of traffic signal control, each task could be one intersection in a task space that includes all intersections in the city.

By focusing on a smaller number of intersections that contribute the most to the algorithm’s overall effectiveness, this method maximizes performance while keeping the training cost low.

The researchers found that their technique was between five and 50 times more efficient than standard approaches on an array of simulated tasks. This gain in efficiency helps the algorithm learn a better solution faster, ultimately improving the performance of the AI agent.

“We were able to see incredible performance improvements, with a very simple algorithm, by thinking outside the box. An algorithm that is not very complicated stands a better chance of being adopted by the community because it is easier to implement and easier for others to understand,” says senior author Cathy Wu, the Thomas D. and Virginia W. Cabot Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS), and a member of the Laboratory for Information and Decision Systems (LIDS).

She is joined on the paper by lead author Jung-Hoon Cho, a CEE graduate student; Vindula Jayawardana, a graduate student in the Department of Electrical Engineering and Computer Science (EECS); and Sirui Li, an IDSS graduate student. The research will be presented at the Conference on Neural Information Processing Systems.

Finding a middle ground

To train an algorithm to control traffic lights at many intersections in a city, an engineer would typically choose between two main approaches. She can train one algorithm for each intersection independently, using only that intersection’s data, or train a larger algorithm using data from all intersections and then apply it to each one.

But each approach comes with its share of downsides. Training a separate algorithm for each task (such as a given intersection) is a time-consuming process that requires an enormous amount of data and computation, while training one algorithm for all tasks often leads to subpar performance.

Wu and her collaborators sought a sweet spot between these two approaches.

For their method, they choose a subset of tasks and train one algorithm for each task independently. Importantly, they strategically select individual tasks which are most likely to improve the algorithm’s overall performance on all tasks.

They leverage a common trick from the reinforcement learning field called zero-shot transfer learning, in which an already trained model is applied to a new task without any further training. With transfer learning, the model often performs remarkably well on the new, closely related task.
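To make that concrete, here is a minimal, self-contained toy in Python — our illustration, not the researchers’ code. The “tasks” are small bandit problems whose rewards shift smoothly with a task parameter (loosely analogous to neighboring intersections), and a decision rule trained on one task is applied to others with no further training. All names here (`arm_means`, `train_best_arm`, `zero_shot_regret`) are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Tasks" are 5-armed bandits whose arm rewards shift smoothly with a task
# parameter -- loosely analogous to intersections that differ a little.
def arm_means(task_param, n_arms=5):
    return np.sin(task_param + np.arange(n_arms))

def train_best_arm(task_param, pulls=2000, eps=0.1):
    """Epsilon-greedy training on a single task; returns the chosen arm."""
    means = arm_means(task_param)
    est = np.zeros(means.size)
    counts = np.zeros(means.size)
    for _ in range(pulls):
        a = rng.integers(means.size) if rng.random() < eps else int(np.argmax(est))
        reward = means[a] + rng.normal(0.0, 0.1)
        counts[a] += 1
        est[a] += (reward - est[a]) / counts[a]
    return int(np.argmax(est))

def zero_shot_regret(arm, task_param):
    """Apply the already-trained choice to another task, with no retraining."""
    means = arm_means(task_param)
    return float(means.max() - means[arm])  # 0.0 means the transfer was perfect

arm = train_best_arm(task_param=0.0)
for t in (0.0, 0.25, 0.5, 1.0, 2.0):
    print(f"task {t:.2f}: zero-shot regret {zero_shot_regret(arm, t):.3f}")
```

Run as-is, the regret stays near zero for task parameters close to the training task and grows as tasks drift further away — the graceful-then-steep degradation that makes choosing which tasks to train on matter.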

“We know it would be ideal to train on all the tasks, but we wondered if we could get away with training on a subset of those tasks, apply the result to all the tasks, and still see a performance increase,” Wu says.

To identify which tasks they should select to maximize expected performance, the researchers developed an algorithm called Model-Based Transfer Learning (MBTL).

The MBTL algorithm has two pieces. First, it models how well each algorithm would perform if it were trained independently on one task. Second, it models how much each algorithm’s performance would degrade if it were transferred to each other task, a concept known as generalization performance.

Explicitly modeling generalization performance allows MBTL to estimate the value of training on a new task.

MBTL does this sequentially, choosing the task which leads to the highest performance gain first, then selecting additional tasks that provide the biggest subsequent marginal improvements to overall performance.

Since MBTL only focuses on the most promising tasks, it can dramatically improve the efficiency of the training process.
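The following Python sketch shows the greedy selection loop under simple assumptions of our own; it is an illustration of the idea, not the released MBTL implementation. It takes a modeled score `generalization(i, j)` — the estimated performance on task j of a model trained on task i — and repeatedly picks the task with the largest marginal improvement.

```python
import numpy as np

def greedy_mbtl_selection(generalization, n_tasks, budget):
    """Greedily pick which tasks to train on (toy sketch of the MBTL idea).

    generalization(i, j) -- modeled performance on task j of a model trained
    on task i (training performance minus an estimated transfer degradation);
    assumed here to lie in [0, 1].
    """
    selected = []
    best = np.zeros(n_tasks)  # best modeled performance achieved on each task
    for _ in range(budget):
        gains = np.full(n_tasks, -np.inf)
        for i in range(n_tasks):
            if i in selected:
                continue
            # Modeled performance on every task if we also train on task i.
            cover = np.maximum(best, [generalization(i, j) for j in range(n_tasks)])
            gains[i] = cover.sum() - best.sum()  # marginal improvement
        i_star = int(np.argmax(gains))
        selected.append(i_star)
        best = np.maximum(best, [generalization(i_star, j) for j in range(n_tasks)])
    return selected

# Toy 1-D task space (e.g., intersections along a corridor): training
# performance is 1.0, and transfer degrades linearly with task distance.
gen = lambda i, j: max(0.0, 1.0 - 0.03 * abs(i - j))
print(greedy_mbtl_selection(gen, n_tasks=100, budget=3))
```

In this toy one-dimensional task space, the three greedy picks spread out across the 100 tasks, matching the intuition that a few well-chosen intersections can cover a whole corridor.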

Reducing training costs

When the researchers tested this technique on simulated tasks, including controlling traffic signals, managing real-time speed advisories, and executing several classic control tasks, it was five to 50 times more efficient than other methods.

This means they could arrive at the same solution by training on far less data. For instance, with a 50x efficiency boost, the MBTL algorithm could train on just two tasks and achieve the same performance as a standard method which uses data from 100 tasks.

“From the perspective of the two main approaches, that means data from the other 98 tasks was not necessary or that training on all 100 tasks is confusing to the algorithm, so the performance ends up worse than ours,” Wu says.

With MBTL, adding even a small amount of additional training time could lead to much better performance.

In the future, the researchers plan to design MBTL algorithms that can extend to more complex problems, such as high-dimensional task spaces. They are also interested in applying their approach to real-world problems, especially in next-generation mobility systems.

The research is funded, in part, by a National Science Foundation CAREER Award, the Kwanjeong Educational Foundation PhD Scholarship Program, and an Amazon Robotics PhD Fellowship.

Advancing urban tree monitoring with AI-powered digital twins

The Irish philosopher George Berkeley, best known for his theory of immaterialism, once famously mused, “If a tree falls in a forest and no one is around to hear it, does it make a sound?”

What about AI-generated trees? They probably wouldn’t make a sound, but they will be critical nonetheless for applications such as adaptation of urban flora to climate change. To that end, the novel “Tree-D Fusion” system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Google, and Purdue University merges AI and tree-growth models with Google’s Auto Arborist data to create accurate 3D models of existing urban trees. The project has produced the first-ever large-scale database of 600,000 environmentally aware, simulation-ready tree models across North America.

“We’re bridging decades of forestry science with modern AI capabilities,” says Sara Beery, MIT electrical engineering and computer science (EECS) assistant professor, MIT CSAIL principal investigator, and a co-author on a new paper about Tree-D Fusion. “This allows us to not just identify trees in cities, but to predict how they’ll grow and impact their surroundings over time. We’re not ignoring the past 30 years of work in understanding how to build these 3D synthetic models; instead, we’re using AI to make this existing knowledge more useful across a broader set of individual trees in cities around North America, and eventually the globe.”

Tree-D Fusion builds on previous urban forest monitoring efforts that used Google Street View data, but branches it forward by generating complete 3D models from single images. While earlier attempts at tree modeling were limited to specific neighborhoods, or struggled with accuracy at scale, Tree-D Fusion can create detailed models that include typically hidden features, such as the back side of trees that aren’t visible in street-view photos.

The technology’s practical applications extend far beyond mere observation. City planners could use Tree-D Fusion to one day peer into the future, anticipating where growing branches might tangle with power lines, or identifying neighborhoods where strategic tree placement could maximize cooling effects and air quality improvements. These predictive capabilities, the team says, could change urban forest management from reactive maintenance to proactive planning.

A tree grows in Brooklyn (and many other places)

The researchers took a hybrid approach to their method, using deep learning to create a 3D envelope of each tree’s shape, then using traditional procedural models to simulate realistic branch and leaf patterns based on the tree’s genus. This combo helped the model predict how trees would grow under different environmental conditions and climate scenarios, such as different possible local temperatures and varying access to groundwater.
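For a feel for the procedural half of that hybrid, here is a deliberately tiny two-dimensional sketch in Python — our simplification, not the Tree-D Fusion pipeline. A recursive rule grows branch segments and keeps only those whose tips stay inside a circular “envelope,” which stands in for the crown shape a learned model would infer from an image; the branching angle and length ratio stand in for genus-specific procedural parameters.

```python
import numpy as np

def grow(pos, direction, length, depth, center, radius,
         angle_deg=25.0, ratio=0.7, segments=None):
    """Toy 2D procedural branching, clipped to a circular crown envelope."""
    if segments is None:
        segments = []
    if depth == 0 or length < 0.05:
        return segments
    tip = pos + direction * length
    # Keep a branch only if its tip stays inside the envelope (the crown
    # shape that, in the real system, would come from the learned 3D model).
    if np.linalg.norm(tip - center) > radius:
        return segments
    segments.append((tuple(pos), tuple(tip)))
    theta = np.radians(angle_deg)
    for sign in (+1, -1):
        c, s = np.cos(sign * theta), np.sin(sign * theta)
        # Rotate the parent direction by +/- angle_deg to get a child branch.
        child_dir = np.array([c * direction[0] - s * direction[1],
                              s * direction[0] + c * direction[1]])
        grow(tip, child_dir, length * ratio, depth - 1, center, radius,
             angle_deg, ratio, segments)
    return segments

# Trunk starts at the origin growing straight up; envelope sits overhead.
segments = grow(np.array([0.0, 0.0]), np.array([0.0, 1.0]),
                length=2.0, depth=8,
                center=np.array([0.0, 3.5]), radius=2.5)
print(f"{len(segments)} branch segments generated inside the envelope")
```

Varying the angle, ratio, or envelope in this sketch loosely mirrors how the real system swaps in genus-specific growth rules and per-tree learned shapes.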

Now, as cities worldwide grapple with rising temperatures, this research offers a new window into the future of urban forests. In a collaboration with MIT’s Senseable City Lab, the Purdue University and Google team is embarking on a global study that re-imagines trees as living climate shields. Their digital modeling system captures the intricate dance of shade patterns throughout the seasons, revealing how strategic urban forestry could turn sweltering city blocks into more naturally cooled neighborhoods.

“Every time a street mapping vehicle passes through a city now, we’re not just taking snapshots — we’re watching these urban forests evolve in real-time,” says Beery. “This continuous monitoring creates a living digital forest that mirrors its physical counterpart, offering cities a powerful lens to observe how environmental stresses shape tree health and growth patterns across their urban landscape.”

AI-based tree modeling has emerged as an ally in the quest for environmental justice: By mapping urban tree canopy in unprecedented detail, a sister project from the Google AI for Nature team has helped uncover disparities in green space access across different socioeconomic areas. “We’re not just studying urban forests — we’re trying to cultivate more equity,” says Beery. The team is now working closely with ecologists and tree health experts to refine these models, ensuring that as cities expand their green canopies, the benefits branch out to all residents equally.

It’s a breeze

While Tree-D Fusion marks some major “growth” in the field, trees can be uniquely challenging for computer vision systems. Unlike the rigid structures of buildings or vehicles that current 3D modeling techniques handle well, trees are nature’s shape-shifters — swaying in the wind, interweaving branches with neighbors, and constantly changing their form as they grow. The Tree-D Fusion models are “simulation-ready” in that they can estimate the shape of the trees in the future, depending on the environmental conditions.

“What makes this work exciting is how it pushes us to rethink fundamental assumptions in computer vision,” says Beery. “While 3D scene understanding techniques like photogrammetry or NeRF [neural radiance fields] excel at capturing static objects, trees demand new approaches that can account for their dynamic nature, where even a gentle breeze can dramatically alter their structure from moment to moment.”

The team’s approach of creating rough structural envelopes that approximate each tree’s form has proven remarkably effective, but certain issues remain unsolved. Perhaps the most vexing is the “entangled tree problem”: when neighboring trees grow into each other, their intertwined branches create a puzzle that no current AI system can fully unravel.

The scientists see their dataset as a springboard for future innovations in computer vision, and they’re already exploring applications beyond street view imagery, looking to extend their approach to platforms like iNaturalist and wildlife camera traps.

“This marks just the beginning for Tree-D Fusion,” says Jae Joong Lee, a Purdue University PhD student who developed, implemented, and deployed the Tree-D Fusion algorithm. “Together with my collaborators, I envision expanding the platform’s capabilities to a planetary scale. Our goal is to use AI-driven insights in service of natural ecosystems — supporting biodiversity, promoting global sustainability, and ultimately, benefiting the health of our entire planet.”

Beery and Lee’s co-authors are Jonathan Huang, Scaled Foundations head of AI (formerly of Google), and four others from Purdue University: PhD student Bosheng Li, Professor and Dean’s Chair of Remote Sensing Songlin Fei, Assistant Professor Raymond Yeh, and Professor and Associate Head of Computer Science Bedrich Benes. Their work is based on efforts supported by the United States Department of Agriculture’s (USDA) Natural Resources Conservation Service and is directly supported by the USDA’s National Institute of Food and Agriculture. The researchers presented their findings at the European Conference on Computer Vision this month.
