New hope for early pancreatic cancer intervention via AI-based risk prediction

The first documented case of pancreatic cancer dates back to the 18th century. Since then, researchers have undertaken a protracted and challenging odyssey to understand this elusive and deadly disease. To date, there is no better cancer treatment than early intervention. Unfortunately, the pancreas, nestled deep within the abdomen, is particularly difficult to screen for early detection.

MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) scientists, alongside Limor Appelbaum, a staff scientist in the Department of Radiation Oncology at Beth Israel Deaconess Medical Center (BIDMC), were eager to better identify potential high-risk patients. They set out to develop two machine-learning models for early detection of pancreatic ductal adenocarcinoma (PDAC), the most common form of the cancer. To access a broad and diverse database, the team synced up with a federated network company, using electronic health record data from various institutions across the United States. This vast pool of data helped ensure the models’ reliability and generalizability, making them applicable across a wide range of populations, geographical locations, and demographic groups.

The two models, the “PRISM” neural network and a logistic regression model (a statistical technique for estimating probability), outperformed current methods. The team’s comparison showed that while standard screening criteria identify about 10 percent of PDAC cases using a five-times higher relative risk threshold, PRISM can detect 35 percent of PDAC cases at this same threshold.
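
To make the threshold comparison concrete, a relative-risk cutoff flags anyone whose predicted risk exceeds some multiple of the average risk in the screened population, and sensitivity is the fraction of true PDAC cases that land above that cutoff. The short sketch below walks through that arithmetic on invented numbers; it is an illustration of the metric, not the study's data or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration only: simulated risk scores for a screened
# population in which roughly 1 in 1,000 patients develops PDAC.
n = 100_000
has_pdac = rng.random(n) < 0.001
risk = np.where(has_pdac,
                rng.beta(2, 50, n),   # cases tend to score higher
                rng.beta(1, 200, n))  # non-cases tend to score lower

# A "five-times higher relative risk" cutoff flags anyone whose score
# exceeds 5x the population's average predicted risk.
cutoff = 5 * risk.mean()
flagged = risk >= cutoff

sensitivity = (flagged & has_pdac).sum() / has_pdac.sum()
workload = flagged.mean()  # fraction of the population flagged for follow-up

print(f"cases detected at the 5x threshold: {sensitivity:.0%}")
print(f"patients flagged for follow-up: {workload:.1%}")
```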

Using AI to detect cancer risk is not a new phenomenon: algorithms analyze mammograms and CT scans for lung cancer, and assist in the analysis of Pap smear tests and HPV testing, to name a few applications. “The PRISM models stand out for their development and validation on an extensive database of over 5 million patients, surpassing the scale of most prior research in the field,” says Kai Jia, an MIT PhD student in electrical engineering and computer science (EECS), MIT CSAIL affiliate, and first author on an open-access paper in eBioMedicine outlining the new work. “The model uses routine clinical and lab data to make its predictions, and the diversity of the U.S. population is a significant advancement over other PDAC models, which are usually confined to specific geographic regions, like a few health-care centers in the U.S. Additionally, using a unique regularization technique in the training process enhanced the models’ generalizability and interpretability.”

“This report outlines a powerful approach to use big data and artificial intelligence algorithms to refine our approach to identifying risk profiles for cancer,” says David Avigan, a Harvard Medical School professor and the cancer center director and chief of hematology and hematologic malignancies at BIDMC, who was not involved in the study. “This approach may lead to novel strategies to identify patients with high risk for malignancy that may benefit from focused screening with the potential for early intervention.” 

Prismatic perspectives

The journey toward the development of PRISM began over six years ago, fueled by firsthand experiences with the limitations of current diagnostic practices. “Approximately 80-85 percent of pancreatic cancer patients are diagnosed at advanced stages, where cure is no longer an option,” says senior author Appelbaum, who is also a Harvard Medical School instructor as well as a radiation oncologist. “This clinical frustration sparked the idea to delve into the wealth of data available in electronic health records (EHRs).”

The CSAIL group’s close collaboration with Appelbaum made it possible to understand the combined medical and machine learning aspects of the problem better, eventually leading to a much more accurate and transparent model. “The hypothesis was that these records contained hidden clues — subtle signs and symptoms that could act as early warning signals of pancreatic cancer,” she adds. “This guided our use of federated EHR networks in developing these models, for a scalable approach for deploying risk prediction tools in health care.”

Both PrismNN and PrismLR models analyze EHR data, including patient demographics, diagnoses, medications, and lab results, to assess PDAC risk. PrismNN uses artificial neural networks to detect intricate patterns in data features like age, medical history, and lab results, yielding a risk score for PDAC likelihood. PrismLR uses logistic regression for a simpler analysis, generating a probability score of PDAC based on these features. Together, the models offer a thorough evaluation of different approaches in predicting PDAC risk from the same EHR data.

One paramount point for gaining the trust of physicians, the team notes, is better understanding how the models work, known in the field as interpretability. The scientists pointed out that while logistic regression models are inherently easier to interpret, recent advancements have made deep neural networks somewhat more transparent. This helped the team to refine the thousands of potentially predictive features derived from the EHR of a single patient to approximately 85 critical indicators. These indicators, which include patient age, diabetes diagnosis, and an increased frequency of visits to physicians, are automatically discovered by the model but match physicians’ understanding of risk factors associated with pancreatic cancer.
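
The published models themselves are not reproduced here, but the general recipe the article describes, a regularized logistic regression over routine EHR features that keeps only a sparse, interpretable set of indicators, can be sketched with standard tools. Everything below (the feature names, the L1 penalty, the synthetic data) is an illustrative assumption rather than the PRISM code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical EHR-derived features; the real models draw on thousands.
feature_names = ["age", "diabetes_dx", "visit_frequency",
                 "weight_loss_dx", "abdominal_pain_dx", "statin_rx"]
n = 5_000
X = rng.normal(size=(n, len(feature_names)))
# Synthetic outcome in which only a few features actually matter.
logits = 1.2 * X[:, 0] + 0.9 * X[:, 1] + 0.7 * X[:, 2] - 4.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

# An L1 penalty drives uninformative coefficients to zero, leaving a
# small, interpretable set of indicators (analogous in spirit to the
# roughly 85 indicators described above).
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
model.fit(X, y)

kept = [(name, round(coef, 3))
        for name, coef in zip(feature_names, model.coef_[0])
        if abs(coef) > 1e-6]
print("retained indicators:", kept)

# Risk score for a new patient: a probability between 0 and 1.
new_patient = rng.normal(size=(1, len(feature_names)))
print("predicted PDAC risk:", float(model.predict_proba(new_patient)[0, 1]))
```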

The path forward

Despite the promise of the PRISM models, as with all research, some parts are still a work in progress. The models are currently trained on U.S. data alone, necessitating testing and adaptation for global use. The path forward, the team notes, includes expanding the models’ applicability to international datasets and integrating additional biomarkers for more refined risk assessment.

“A subsequent aim for us is to facilitate the models’ implementation in routine health care settings. The vision is to have these models function seamlessly in the background of health care systems, automatically analyzing patient data and alerting physicians to high-risk cases without adding to their workload,” says Jia. “A machine-learning model integrated with the EHR system could empower physicians with early alerts for high-risk patients, potentially enabling interventions well before symptoms manifest. We are eager to deploy our techniques in the real world to help all individuals enjoy longer, healthier lives.” 

Jia wrote the paper alongside Appelbaum and MIT EECS Professor and CSAIL Principal Investigator Martin Rinard, who are both senior authors of the paper. Researchers on the paper were supported during their time at MIT CSAIL, in part, by the Defense Advanced Research Projects Agency, Boeing, the National Science Foundation, and Aarno Labs. TriNetX provided resources for the project, and the Prevent Cancer Foundation also supported the team.

Researchers improve blood tests’ ability to detect and monitor cancer

Tumors constantly shed DNA from dying cells, which briefly circulates in the patient’s bloodstream before it is quickly broken down. Many companies have created blood tests that can pick out this tumor DNA, potentially helping doctors diagnose or monitor cancer or choose a treatment.

The amount of tumor DNA circulating at any given time, however, is extremely small, so it has been challenging to develop tests sensitive enough to pick up that tiny signal. A team of researchers from MIT and the Broad Institute of MIT and Harvard has now come up with a way to significantly boost that signal, by temporarily slowing the clearance of tumor DNA circulating in the bloodstream.

The researchers developed two different types of injectable molecules that they call “priming agents,” which can transiently interfere with the body’s ability to remove circulating tumor DNA from the bloodstream. In a study of mice, they showed that these agents could boost DNA levels enough that the percentage of detectable early-stage lung metastases leapt from less than 10 percent to above 75 percent.

This approach could enable not only earlier diagnosis of cancer, but also more sensitive detection of tumor mutations that could be used to guide treatment. It could also help improve detection of cancer recurrence.

“You can give one of these agents an hour before the blood draw, and it makes things visible that previously wouldn’t have been. The implication is that we should be able to give everybody who’s doing liquid biopsies, for any purpose, more molecules to work with,” says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science.

Bhatia is one of the senior authors of the new study, along with J. Christopher Love, the Raymond A. and Helen E. St. Laurent Professor of Chemical Engineering at MIT and a member of the Koch Institute and the Ragon Institute of MGH, MIT, and Harvard; and Viktor Adalsteinsson, director of the Gerstner Center for Cancer Diagnostics at the Broad Institute.

Carmen Martin-Alonso PhD ’23, MIT and Broad Institute postdoc Shervin Tabrizi, and Broad Institute scientist Kan Xiong are the lead authors of the paper, which appears today in Science.

Better biopsies

Liquid biopsies, which enable detection of small quantities of DNA in blood samples, are now used in many cancer patients to identify mutations that could help guide treatment. With greater sensitivity, however, these tests could become useful for far more patients. Most efforts to improve the sensitivity of liquid biopsies have focused on developing new sequencing technologies to use after the blood is drawn.

While brainstorming ways to make liquid biopsies more informative, Bhatia, Love, Adalsteinsson, and their trainees came up with the idea of trying to increase the amount of DNA in a patient’s bloodstream before the sample is taken.

“A tumor is always creating new cell-free DNA, and that’s the signal that we’re attempting to detect in the blood draw. Existing liquid biopsy technologies, however, are limited by the amount of material you collect in the tube of blood,” Love says. “Where this work intercedes is thinking about how to inject something beforehand that would help boost or enhance the amount of signal that is available to collect in the same small sample.”

The body uses two primary strategies to remove circulating DNA from the bloodstream. Enzymes called DNases circulate in the blood and break down DNA that they encounter, while immune cells known as macrophages take up cell-free DNA as blood is filtered through the liver.

The researchers decided to target each of these processes separately. To prevent DNases from breaking down DNA, they designed a monoclonal antibody that binds to circulating DNA and protects it from the enzymes.

“Antibodies are well-established biopharmaceutical modalities, and they’re safe in a number of different disease contexts, including cancer and autoimmune treatments,” Love says. “The idea was, could we use this kind of antibody to help shield the DNA temporarily from degradation by the nucleases that are in circulation? And by doing so, we shift the balance to where the tumor is generating DNA slightly faster than is being degraded, increasing the concentration in a blood draw.”

The other priming agent they developed is a nanoparticle designed to block macrophages from taking up cell-free DNA. These cells have a well-known tendency to eat up synthetic nanoparticles.

“DNA is a biological nanoparticle, and it made sense that immune cells in the liver were probably taking this up just like they do synthetic nanoparticles. And if that were the case, which it turned out to be, then we could use a safe dummy nanoparticle to distract those immune cells and leave the circulating DNA alone so that it could be at a higher concentration,” Bhatia says.

Earlier tumor detection

The researchers tested their priming agents in mice that received transplants of cancer cells that tend to form tumors in the lungs. Two weeks after the cells were transplanted, the researchers showed that these priming agents could boost the amount of circulating tumor DNA recovered in a blood sample by up to 60-fold.

Once the blood sample is taken, it can be run through the same kinds of sequencing tests now used on liquid biopsy samples. These tests can pick out tumor DNA, including specific sequences used to determine the type of tumor and potentially what kinds of treatments would work best.

Early detection of cancer is another promising application for these priming agents. The researchers found that when mice were given the nanoparticle priming agent before blood was drawn, it allowed them to detect circulating tumor DNA in the blood of 75 percent of the mice with low cancer burden, while none were detectable without this boost.

“One of the greatest hurdles for cancer liquid biopsy testing has been the scarcity of circulating tumor DNA in a blood sample,” Adalsteinsson says. “It’s thus been encouraging to see the magnitude of the effect we’ve been able to achieve so far and to envision what impact this could have for patients.”

After either of the priming agents is injected, it takes an hour or two for the DNA levels to increase in the bloodstream, and then they return to normal within about 24 hours.

“The ability to get peak activity of these agents within a couple of hours, followed by their rapid clearance, means that someone could go into a doctor’s office, receive an agent like this, and then give their blood for the test itself, all within one visit,” Love says. “This feature bodes well for the potential to translate this concept into clinical use.”

The researchers have launched a company called Amplifyer Bio that plans to further develop the technology, in hopes of advancing to clinical trials.

“A tube of blood is a much more accessible diagnostic than colonoscopy screening or even mammography,” Bhatia says. “Ultimately, if these tools really are predictive, then we should be able to get many more patients into the system who could benefit from cancer interception or better therapy.”

The research was funded by the Koch Institute Support (core) Grant from the National Cancer Institute, the Marble Center for Cancer Nanomedicine, the Gerstner Family Foundation, the Ludwig Center at MIT, the Koch Institute Frontier Research Program via the Casey and Family Foundation, and the Bridge Project, a partnership between the Koch Institute and the Dana-Farber/Harvard Cancer Center.

Meeting the clean energy needs of tomorrow

Yuri Sebregts, chief technology officer at Shell, succinctly laid out the energy dilemma facing the world over the rest of this century. On one hand, demand for energy is quickly growing as countries in the developing world modernize and the global population grows, with 100 gigajoules of energy per person needed annually to enable quality-of-life benefits and industrialization around the globe. On the other, traditional energy sources are quickly warming the planet, with the world already seeing the devastating effects of increasingly frequent extreme weather events. 

While the goals of energy security and energy sustainability are seemingly at odds with one another, the two must be pursued in tandem, Sebregts said during his address at the MIT Energy Initiative Fall Colloquium.

“An environmentally sustainable energy system that isn’t also a secure energy system is not sustainable,” Sebregts said. “And conversely, a secure energy system that is not environmentally sustainable will do little to ensure long-term energy access and affordability. Therefore, security and sustainability must go hand-in-hand. You can’t trade off one for the other.”

Sebregts noted that there are several potential pathways to help strike this balance, including investments in renewable energy sources, the use of carbon offsets, and the creation of more efficient tools, products, and processes. However, he acknowledged that meeting growing energy demands while minimizing environmental impacts is a global challenge requiring an unprecedented level of cooperation among countries and corporations across the world. 

“At Shell, we recognize that this will require a lot of collaboration between governments, businesses, and civil society,” Sebregts said. “That’s not always easy.”

Global conflict and global warming

In 2021, Sebregts noted, world leaders gathered in Glasgow, Scotland and collectively promised to deliver on the “stretch goal” of the 2015 Paris Agreement, which would limit global warming to 1.5 degrees Celsius — a level that scientists believe will help avoid the worst potential impacts of climate change. But, just a few months later, Russia invaded Ukraine, resulting in chaos in global energy markets and illustrating the massive impact that geopolitical friction can have on efforts to reduce carbon emissions.

“Even though global volatility has been a near constant of this century, the situation in Ukraine is proving to be a turning point,” Sebregts said. “The stress it placed on the global supply of energy, food, and other critical materials was enormous.”

In Europe, Sebregts noted, countries affected by the loss of Russia’s natural gas supply began importing from the Middle East and the United States. This, in turn, drove up prices. While this did result in some efforts to limit energy use, such as Europeans lowering their thermostats in the winter, it also caused some energy buyers to turn to coal. For instance, the German government approved additional coal mining to boost its energy security — temporarily reversing a decades-long transition away from the fuel. To put this into wider perspective, in a single quarter, China increased its coal generation capacity by as much as Germany had reduced its own over the previous 20 years.

The promise of electrification

Sebregts noted the strides being made toward electrification, which is expected to have a significant impact on global carbon emissions. To meet net-zero emissions (the point at which humans are adding no more carbon to the atmosphere than they are removing) by 2050, the share of electricity as a portion of total worldwide energy consumption must reach 37 percent by 2030, up from 20 percent in 2020, Sebregts said.

He pointed out that Shell has become one of the world’s largest electric vehicle charging companies, with more than 30,000 public charge points. By 2025, that number will increase to 70,000, and it is expected to soar to 200,000 by 2030. While demand and infrastructure for electric vehicles are growing, Sebregts said that the “real needle-mover” will be industrial electrification, especially in so-called “hard-to-abate” sectors.

This progress will depend heavily on global cooperation — Sebregts pointed out that China dominates the international market for many rare elements that are key components of electrification infrastructure. “It shouldn’t be a surprise that the political instability, shifting geopolitical tensions, and environmental and social governance issues are significant risks for the energy transition,” he said. “It is imperative that we reduce, control, and mitigate these risks as much as possible.”

Two possible paths

For decades, Sebregts said, Shell has created scenarios to help senior managers think through the long-term challenges facing the company. While Sebregts stressed that these scenarios are not predictions, they do take into account real-world conditions, and they are meant to give leaders the opportunity to grapple with plausible situations.

With this in mind, Sebregts outlined Shell’s most recent Energy Security Scenarios, describing the potential future consequences of attempts to balance growing energy demand with sustainability — scenarios that envision vastly different levels of global cooperation, with huge differences in projected results. 

The first scenario, dubbed “Archipelagos,” imagines countries pursuing energy security through self-interest — a fragmented, competitive process that would result in a global temperature increase of 2.2 degrees Celsius by the end of this century. The second scenario, “Sky 2050,” envisions countries around the world collaborating to change the energy system for their mutual benefit. This more optimistic scenario would see a much lower global temperature increase of 1.2 C by 2100.

“The good news is that in both scenarios, the world is heading for net-zero emissions at some point,” Sebregts said. “The difference is a question of when it gets there. In Sky 2050, it is the middle of the century. In Archipelagos, it is early in the next century.”

On the other hand, Sebregts added, the average global temperature will increase by more than 1.5 C for some period of time in either scenario. But, in the Archipelagos scenario, this overshoot will be much larger, and will take much longer to come down. “So, two very different futures,” Sebregts said. “Two very different worlds.”

The work ahead

Questioned about the costs of transitioning to a net-zero energy ecosystem, Sebregts said that it is “very hard” to provide an accurate answer. “If you impose an additional constraint … you’re going to have to add some level of cost,” he said. “But then, of course, there’s 30 years of technology development pathway that might counteract some of that.”

In some cases, such as air travel, Sebregts said, it will likely remain impractical to either rely on electrification or sequester carbon at the source of emission. Direct air capture (DAC) methods, which mechanically pull carbon directly from the atmosphere, will have a role to play in offsetting these emissions, he said. Sebregts predicted that the price of DAC could come down significantly by the middle of this century. “I would venture that a price of $200 to $250 a ton of CO2 by 2050 is something that the world would be willing to spend, at least in developed economies, to offset those very hard-to-abate instances.”

Sebregts noted that Shell is working on demonstrating DAC technologies in Houston, Texas, constructing what will become Europe’s largest hydrogen plant in the Netherlands, and taking other steps to profitably transition to a net-zero emissions energy company by 2050. “We need to understand what can help our customers transition quicker and how we can continue to satisfy their needs,” he said. “We must ensure that energy is affordable, accessible, and sustainable, as soon as possible.”

Reasoning and reliability in AI

In order for natural language to be an effective form of communication, the parties involved need to be able to understand words and their context, assume that the content is largely shared in good faith and is trustworthy, reason about the information being shared, and then apply it to real-world scenarios. MIT PhD students interning with the MIT-IBM Watson AI Lab — Athul Paul Jacob SM ’22, Maohao Shen SM ’23, Victor Butoi, and Andi Peng SM ’23 — are working to attack each step of this process that’s baked into natural language models, so that the AI systems can be more dependable and accurate for users.

To achieve this, Jacob’s research strikes at the heart of existing natural language models to improve the output, using game theory. His interests, he says, are two-fold: “One is understanding how humans behave, using the lens of multi-agent systems and language understanding, and the second thing is, ‘How do you use that as an insight to build better AI systems?’” His work stems from the board game “Diplomacy,” where his research team developed a system that could learn and predict human behaviors and negotiate strategically to achieve a desired, optimal outcome.

“This was a game where you need to build trust; you need to communicate using language. You need to also play against six other players at the same time, which were very different from all the kinds of task domains people were tackling in the past,” says Jacob, referring to other games like poker and Go that researchers have tackled with neural networks. “In doing so, there were a lot of research challenges. One was, ‘How do you model humans? How do you know when humans tend to act irrationally?’” Jacob and his research mentors — including Associate Professor Jacob Andreas and Assistant Professor Gabriele Farina of the MIT Department of Electrical Engineering and Computer Science (EECS), and the MIT-IBM Watson AI Lab’s Yikang Shen — recast the problem of language generation as a two-player game.

Using “generator” and “discriminator” models, Jacob’s team developed a natural language system to produce answers to questions and then observe the answers and determine if they are correct. If they are, the AI system receives a point; if not, no point is rewarded. Language models notoriously tend to hallucinate, making them less trustworthy; this no-regret learning algorithm collaboratively takes a natural language model and encourages the system’s answers to be more truthful and reliable, while keeping the solutions close to the pre-trained language model’s priors. Jacob says that using this technique in conjunction with a smaller language model could likely make it competitive with the performance of a model many times bigger.
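
The mechanics of such a generator-discriminator game can be sketched in miniature. The toy example below uses a Hedge-style no-regret update in which both players are pulled toward their pretrained priors while being rewarded for settling on the same answer; the candidate answers, prior scores, and hyperparameters are all invented, and the published method is considerably more elaborate.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Illustrative sketch only: a tiny consensus game between a generator and a
# discriminator played over a fixed set of candidate answers.
candidates = ["Paris", "Lyon", "Marseille"]
gen_prior = np.array([0.50, 0.35, 0.15])   # generator's language-model probabilities
disc_prior = np.array([0.80, 0.15, 0.05])  # discriminator's correctness beliefs

eta = 0.05  # no-regret learning rate
lam = 2.0   # strength of the anchor toward each player's pretrained prior

gen_cum = np.zeros(len(candidates))   # cumulative payoffs for the generator
disc_cum = np.zeros(len(candidates))  # cumulative payoffs for the discriminator

for _ in range(500):
    # Policies: exponential weights on cumulative payoff, anchored to the priors.
    gen_policy = softmax(eta * gen_cum + lam * np.log(gen_prior))
    disc_policy = softmax(eta * disc_cum + lam * np.log(disc_prior))
    # Coordination payoff: an answer is rewarded in proportion to how likely
    # the other player is to settle on that same answer.
    gen_cum += disc_policy
    disc_cum += gen_policy

for name, g, d in zip(candidates, gen_policy, disc_policy):
    print(f"{name:10s} generator={g:.3f} discriminator={d:.3f}")
```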

Once a language model generates a result, researchers ideally want its confidence in its generation to align with its accuracy, but this frequently isn’t the case. Hallucinations can occur with the model reporting high confidence when it should be low. Maohao Shen and his group, with mentors Gregory Wornell, Sumitomo Professor of Engineering in EECS, and researchers Subhro Das, Prasanna Sattigeri, and Soumya Ghosh of IBM Research, are looking to fix this through uncertainty quantification (UQ). “Our project aims to calibrate language models when they are poorly calibrated,” says Shen. Specifically, they’re looking at the classification problem. For this, Shen allows a language model to generate free text, which is then converted into a multiple-choice classification task. For instance, they might ask the model to solve a math problem and then ask it whether the answer it generated is correct, responding “yes, no, or maybe.” This helps to determine if the model is over- or under-confident.

Automating this, the team developed a technique that helps tune the confidence output by a pre-trained language model. The researchers trained an auxiliary model using the ground-truth information in order for their system to be able to correct the language model. “If your model is over-confident in its prediction, we are able to detect it and make it less confident, and vice versa,” explains Shen. The team evaluated their technique on multiple popular benchmark datasets to show how well it generalizes to unseen tasks to realign the accuracy and confidence of language model predictions. “After training, you can just plug in and apply this technique to new tasks without any other supervision,” says Shen. “The only thing you need is the data for that new task.”
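
As a stand-in for that auxiliary calibration model, the sketch below applies classic temperature scaling: a single temperature parameter is fit on held-out ground truth and then reused to soften (or sharpen) the model's stated confidence on new questions. The confidences and labels are made up, and the team's actual technique is richer than this.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Invented calibration set: the model's stated confidence that each of its
# answers is correct, plus the ground-truth outcome for those answers.
conf = np.array([0.99, 0.95, 0.90, 0.85, 0.80, 0.70, 0.65, 0.60])
correct = np.array([1, 1, 0, 1, 0, 0, 1, 0])

def nll(temperature):
    # Rescale the confidence logits by the temperature, then score the
    # rescaled probabilities with negative log-likelihood.
    logits = np.log(conf / (1 - conf)) / temperature
    p = np.clip(1 / (1 + np.exp(-logits)), 1e-6, 1 - 1e-6)
    return -np.mean(correct * np.log(p) + (1 - correct) * np.log(1 - p))

result = minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded")
T = result.x
print(f"fitted temperature: {T:.2f}  (T > 1 softens an over-confident model)")

# At inference time the same temperature is applied to any new confidence,
# with no further supervision needed for the new task.
new_conf = 0.97
calibrated = 1 / (1 + np.exp(-np.log(new_conf / (1 - new_conf)) / T))
print(f"raw confidence {new_conf:.2f} -> calibrated {calibrated:.2f}")
```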

Victor Butoi also enhances model capability, but instead, his lab team — which includes John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering in EECS; lab researchers Leonid Karlinsky and Rogerio Feris of IBM Research; and lab affiliates Hilde Kühne of the University of Bonn and Wei Lin of Graz University of Technology — is creating techniques to allow vision-language models to reason about what they’re seeing, and is designing prompts to unlock new learning abilities and understand key phrases.

Compositional reasoning is just another aspect of the decision-making process that we ask machine-learning models to perform in order for them to be helpful in real-world situations, explains Butoi. “You need to be able to think about problems compositionally and solve subtasks,” says Butoi, “like, if you’re saying the chair is to the left of the person, you need to recognize both the chair and the person. You need to understand directions.” And then once the model understands “left,” the research team wants the model to be able to answer other questions involving “left.”

Surprisingly, vision-language models do not reason well about composition, Butoi explains, but they can be helped to do so, using a model that can “lead the witness,” if you will. The team developed a model that was tweaked using a technique called low-rank adaptation of large language models (LoRA) and trained on an annotated dataset called Visual Genome, which has objects in an image and arrows denoting relationships, like directions. In this case, the trained LoRA model would be guided to say something about “left” relationships, and this caption output would then be used to provide context and prompt the vision-language model, making it a “significantly easier task,” says Butoi.
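
LoRA itself is a general recipe: the pretrained weights stay frozen and only a small low-rank correction is trained, which is what keeps adapting a captioner on a dataset like Visual Genome cheap. The PyTorch sketch below shows one such adapted layer; the layer sizes and hyperparameters are arbitrary, and this is a generic illustration, not the team's training code.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal sketch of low-rank adaptation (LoRA): a frozen pretrained
    projection plus a small trainable low-rank update. Illustrative only;
    real adapters are inserted into specific attention/MLP layers."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # pretrained weights stay frozen
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output = frozen projection + scaled low-rank correction (B @ A) applied to x.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

# Only the tiny A and B matrices are updated during fine-tuning.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters in this layer: {trainable}")  # 2 * 8 * 768 = 12288
```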

In the world of robotics, AI systems also engage with their surroundings using computer vision and language. The settings may range from warehouses to the home. Andi Peng and mentors MIT’s H.N. Slater Professor in Aeronautics and Astronautics Julie Shah and Chuang Gan, of the lab and the University of Massachusetts at Amherst, are focusing on assisting people with physical constraints, using virtual worlds. For this, Peng’s group is developing two embodied AI models — a “human” that needs support and a helper agent — in a simulated environment called ThreeDWorld. Focusing on human/robot interactions, the team leverages semantic priors captured by large language models to aid the helper AI to infer what abilities the “human” agent might not be able to do and the motivation behind actions of the “human,” using natural language. The team’s looking to strengthen the helper’s sequential decision-making, bidirectional communication, ability to understand the physical scene, and how best to contribute.

“A lot of people think that AI programs should be autonomous, but I think that an important part of the process is that we build robots and systems for humans, and we want to convey human knowledge,” says Peng. “We don’t want a system to do something in a weird way; we want them to do it in a human way that we can understand.”

Evidence that gamma rhythm stimulation can treat neurological disorders is emerging

A surprising MIT study published in Nature at the end of 2016 helped to spur interest in the possibility that light flickering at the frequency of a particular gamma-band brain rhythm could produce meaningful therapeutic effects for people with Alzheimer’s disease. In a new review paper in the Journal of Internal Medicine, the lab that led those studies takes stock of what a growing number of scientists worldwide have been finding out since then in dozens of clinical and lab benchtop studies.

Brain rhythms (also called brain “waves” or “oscillations”) arise from the synchronized network activity of brain cells and circuits as they coordinate to enable brain functions such as perception or cognition. Lower-range gamma-frequency rhythms, those around 40 cycles a second, or hertz (Hz), are particularly important for memory processes, and MIT’s research has shown that they are also associated with specific changes at the cellular and molecular level. The 2016 study and many others since then have produced evidence, initially in animals and more recently in humans, that various noninvasive means of enhancing the power and synchrony of 40Hz gamma rhythms help to reduce Alzheimer’s pathology and its consequences.

“What started in 2016 with optogenetic and visual stimulation in mice has expanded to a multitude of stimulation paradigms, a wide range of human clinical studies with promising results, and is narrowing in on the mechanisms underlying this phenomenon,” write the authors, including Li-Huei Tsai, Picower Professor in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT.

Though the number of studies and methods has increased and the data have typically suggested beneficial clinical effects, the article’s authors also clearly caution that the clinical evidence remains preliminary and that animal studies intended to discern how the approach works have been instructive, but not definitive.

“Research into the clinical potential of these interventions is still in its nascent stages,” the researchers, led by MIT postdoc Cristina Blanco-Duque, write in introducing the review. “The precise mechanisms underpinning the beneficial effects of gamma stimulation in Alzheimer’s disease are not yet fully elucidated, but preclinical studies have provided relevant insights.”

Preliminarily promising

The authors list and summarize results from 16 clinical studies published over the last several years. These employ gamma-frequency sensory stimulation (e.g., exposure to light, sound, tactile vibration, or a combination); transcranial alternating current stimulation (tACS), in which a brain region is stimulated via scalp electrodes; or transcranial magnetic stimulation (TMS), in which electric currents are induced in a brain region using magnetic fields. The studies also vary in their sample size, design, duration, and in what effects they assessed. Some of the sensory studies using light have tested different colors and different exact frequencies. And while some studies show that sensory stimulation appears to affect multiple regions in the brain, tACS and TMS are more regionally focused (though those brain regions still connect and interact with others).

Given the variances, the clinical studies taken together offer a blend of uneven but encouraging evidence, the authors write. Across clinical studies involving patients with Alzheimer’s disease, sensory stimulation has proven safe and well-tolerated. Multiple sensory studies have measured increases in gamma power and brain network connectivity. Sensory studies have also reported improvements in memory and/or cognition, as well as sleep. Some have yielded apparent physiological benefits such as reduction of brain atrophy, in one case, and changes in immune system activity in another. So far, sensory studies have not shown reductions in Alzheimer’s hallmark proteins, amyloid or tau.

Clinical studies stimulating 40Hz rhythms using tACS, ranging in sample size from only one to as many as 60, are the most numerous so far, and many have shown similar benefits. Most report benefits to cognition, executive function, and/or memory (depending sometimes on the brain region stimulated), and some have assessed that benefits endure even after treatment concludes. Some have shown effects on measures of tau and amyloid, blood flow, neuromodulatory chemical activity, or immune activity. Finally, a 40Hz stimulation clinical study using TMS in 37 patients found improvements in cognition, prevention of brain atrophy, and increased brain connectivity.

“The most important test for gamma stimulation is without a doubt whether it is safe and beneficial for patients,” the authors write. “So far, results from several small trials on sensory gamma stimulation suggest that it is safe, evokes rhythmic EEG brain responses, and there are promising signs for AD [Alzheimer’s disease] symptoms and pathology. Similarly, studies on transcranial stimulation report the potential to benefit memory and global cognitive function even beyond the end of treatment.”

Studying underlying mechanisms

In parallel, dozens more studies have shown significant benefits in mice including reductions in amyloid and tau, preservation of brain tissue, and improvements in memory. But animal studies also have offered researchers a window into the cellular and molecular mechanisms by which gamma stimulation might have these effects.

Before MIT’s original studies in 2016 and 2019, researchers had not attributed molecular changes in brain cells to changes in brain rhythms, but those and other studies have now shown that they affect not only the molecular state of neurons, but also the brain’s microglia immune cells, astrocyte cells that play key roles in regulating circulation, and indeed the brain’s vasculature system. A hypothesis of Tsai’s lab right now is that sensory gamma stimulation might promote the clearance of amyloid and tau via increased circulatory activity of brain fluids.

A hotly debated aspect of gamma stimulation is how it affects the electrical activity of neurons, and how pervasively. Studies indicate that inhibitory “interneurons” are especially affected, though, offering a clue about how increased gamma activity, and its physiological effects, might propagate.

“The field has generated tantalizing leads on how gamma stimulation may translate into beneficial effects on the cellular and molecular level,” the authors write.

Gamma going forward

While making clear that more definitive clinical studies are needed, the authors note that 15 new clinical studies of gamma stimulation are now underway. Among these is a phase 3 clinical trial by the company Cognito Therapeutics, which has licensed MIT’s technology. That study plans to enroll hundreds of participants.

Meanwhile, some recent or new clinical and preclinical studies have begun looking at whether gamma stimulation may be applicable to neurological disorders other than Alzheimer’s, including stroke or Down syndrome. In experiments with mouse models, for example, an MIT team has been testing gamma stimulation’s potential to help with cognitive effects of chemotherapy, or “chemobrain.”

“Larger clinical studies are required to ascertain the long-term benefits of gamma stimulation,” the authors conclude. “In animal models the focus should be on delineating the mechanism of gamma stimulation and providing further proof of principle studies on what other applications gamma stimulation may have.”

In addition to Tsai and Blanco-Duque, the paper’s other authors are Diane Chan, Martin Kahn, and Mitch Murdock.

New State Of The Industry Report Provides Developer Thoughts On Layoffs, Acquisitions, A.I., And More

Introduction

Last year was a fantastic year for games, with standout triple-A releases like Baldur’s Gate 3, The Legend of Zelda: Tears of the Kingdom, Marvel’s Spider-Man 2, and great independent games like Jusant, Sea of Stars, and countless others. But in 2023 alone, more than 10,000 developers and people in games-adjacent industries were laid off. Plus, the unchecked rise of A.I. continued, and Unity burned developers with its controversial new game engine policies, to name a few of the not-so-great parts of 2023 – it was a great year for games but one of the worst for those who make them.

Now, just as 2024 has begun, the Game Developers Conference has released its 12th annual State of the Game Industry report. Its data was collected from 3000 developers surveyed back in October of last year. GDC and GameDeveloper.com partnered with Omdia, a research firm, to dissect the data. In its State of the Game Industry 2024 report, which you can view in full here, developers share their thoughts on A.I., layoffs, social media and its role in marketing, game engines, and more.

“The most striking observation derived from job losses in the industry – naturally a pressing concern for many,” Omdia research director Dom Tait writes in the report. “Among the insightful developer comments on the subject was the following: ‘Studios grew too quickly during the pandemic.’ This statement is born out of games industry data, which shows a Covid-driven hump of extra revenue in 2020 and 2021, collectively totaling $50 billion over expected figures.

“But 2022 and 2023 showed a reversion to the spend trendline seen prior to 2020, thus this reduction in headcount is partly caused by companies belatedly adjusting to the new, less positive market reality. However, with the forecast returning to steady growth to 2027, this ought to present a more stable picture for employment levels in the future.”

Below, we’ll break down some of the highlights of the State of the Game Industry 2024 report. 

Demographics

Here are the ages of the 3000 developers surveyed for this report: 

  • 18 to 24: 9 percent
  • 25 to 34: 35 percent
  • 35 to 44: 33 percent
  • 45 to 54: 17 percent
  • 55 to 64: 5 percent
  • 65 or older: 1 percent

And here are the races/ethnicities of the 3000 developers surveyed for this report: 

  • White/Caucasian: 65 percent
  • Hispanic, Latino, or Spanish origin: 9 percent
  • East Asian: 7 percent
  • South or Southeast Asian: 5 percent
  • Black/African/Caribbean: 3 percent
  • Middle Eastern or North African: 1 percent
  • American Indian or Alaska Native: less than 1 percent
  • Native Hawaiian or Other Pacific Islander: less than 1 percent
  • Multiple ethnicities/not listed: 5 percent
  • Prefer not to answer: 5 percent

Here are the genders of those surveyed:

  • Men: 69 percent
  • Women: 23 percent
  • Non-Binary: 5 percent
  • Not listed: less than 1 percent
  • Prefer not to answer: 3 percent

And here are the regions of the world where the developers surveyed reside: 

  • North America: 62 percent
  • Europe: 26 percent
  • Asia: 6 percent
  • South America: 3 percent
  • Australia/New Zealand: 3 percent
  • Africa: less than 1 percent
  • Not listed: 1 percent 

87 percent of game developers with 21 years or more of experience in the games industry surveyed for this year’s report are men, and 92 percent of those men are White. Asian men represent 15 percent of game developers with 21 years or more of experience, Hispanic, Latino, or Spanish-origin men make up 8 percent, and Black men make up 6 percent. White women represent 5 percent of game developers with 21 years or more of experience, as do Asian women. Zero Black women or Hispanic, Latino, or Spanish-origin women are represented in this category. 

The majority of those surveyed – 56 percent – have 10 or fewer years in the games industry. 

Platforms

66 percent of developers surveyed said PC remains their platform of choice for developing current projects, and 57 percent said the same for developing upcoming projects. 

Here’s how the other platforms fare:

  • PC: 66 percent
  • PlayStation 5: 35 percent
  • Xbox Series X/S: 34 percent
  • Android: 24 percent
  • iOS: 23 percent
  • Nintendo Switch: 18 percent
  • Xbox One: 18 percent
  • PlayStation 4: 16 percent
  • Mac: 16 percent
  • VR: 10 percent
  • Web browser: 10 percent
  • Nintendo Switch successor: 8 percent
  • Linux: 7 percent
  • Cloud services like Xbox Cloud Gaming, PlayStation Plus, etc.: 7 percent
  • AR: 4 percent
  • Tabletop: 3 percent
  • Media platforms like Netflix: 2 percent
  • UGC platforms like Roblox and Minecraft: 1 percent
  • Playdate: less than 1 percent
  • Other: 4 percent
  • Not involved in development: 13 percent

Layoffs

According to the report, 35 percent of developers surveyed have been impacted by layoffs personally or have worked at a company where layoffs occurred, with quality assurance testers affected most. 22 percent of QA developers said they were laid off in 2023, compared to 7 percent for all developers. Those in game development business and finance were affected the least, at 2 percent. 

However, more than half of those surveyed – 56 percent – expressed some level of concern that the place they work could be hit with layoffs in 2024. GDC says one-third of responders said they aren’t concerned about layoffs at their company at all. When asked about the rise of layoffs that gained widespread attention last year, developers cited “post-pandemic course correction, studio conglomeration, and economic uncertainty,” with some expressing a desire to unionize. 

A.I.

When asked about A.I. and its rise in game development, 84 percent said they are somewhat or very concerned about the ethics of generative A.I., while 12 percent said they have no concerns with it. GDC notes that those working in business, marketing, and programming were more likely to say the use of A.I. would have a positive effect, while those on the creative side of development, such as narrative and quality assurance, were more likely to say it would have a negative impact. 

Developers noted in surveys that they are concerned generative A.I. could lead to more layoffs, while others worried about how it affects copyright infringement, especially in regards to how the training material this kind of A.I. uses is obtained. 

51 percent of developers said their companies have implemented some kind of workplace policy regarding the use of generative A.I., “with many of them saying their companies have made use optional,” GDC writes in a press release. 2 percent of responders said generative A.I. is mandatory in their workplace, and 12 percent said it’s not allowed. 

Triple-A studios were more likely to have policies regarding the use of generative A.I. in place compared to indie studios. 21 percent of triple-A developers said it’s banned at their workplace; 9 percent of indie developers said the same. 

However, 37 percent of indie developers said they are using generative A.I. compared to 21 percent at triple-A and double-A studios. 

Digital Downloads

51 percent of developers who responded to the survey said the game they’re currently developing will be a “digital premium game.” Here’s how other models fare:

  • Digital premium game: 51 percent
  • Free to download: 32 percent
  • DLC/Updates: 24 percent
  • Physical premium game: 21 percent
  • Paid in-game items: 21 percent
  • Paid in-game currency: 19 percent
  • Inclusion in a paid subscription library like Xbox Game Pass: 10 percent
  • Paid item crates/gacha: 6 percent
  • Community-funded like Kickstarter: 6 percent
  • In-game product placement: 5 percent
  • Premium tier subscriptions like in Fallout 76: 4 percent
  • Blockchain-driven monetization: 3 percent
  • Other: 6 percent
  • Not involved in game development: 14 percent

Game Adaptations

10 percent of respondents said their company has a game that has been or is being adapted, while 20 percent have said their company has talked about it. 6 percent have been approached for an adaptation, while 2 percent have pitched an adaptation. 44 percent said they aren’t adapting a game, while 13 percent don’t know. 4 percent responded with N/A. 

63 percent of developers surveyed think film and TV adaptations are good for the game industry, 26 percent said maybe, 4 percent said no, and 7 percent had no opinion. 

Acquisitions

According to GDC’s survey, 5 percent of developers believe the ongoing wave of acquisitions happening in games is good for the industry, down from 17 percent in last year’s report. 43 percent think it will have a negative impact, and 2 percent think it will have no impact; 42 percent responded with “mixed impact,” noting negative and positive feelings about it. 

Game Development Engines

Epic’s Unreal Engine and Unity’s engine are the most-used game engines, according to the report; 33 percent of developers said one of these engines is their primary development engine. In third place (technically second since Unity and Unreal were tied) were proprietary in-house engines (think EA’s Frostbite engine), with open-source engine Godot in fourth.

Following Unity’s runtime fee fiasco that happened last year, one-third of developers surveyed said they considered switching engines within the past year (or that they have already done so); almost half said they haven’t considered switching. Developers cited Unity’s policies as the biggest motivator for switching, and 51 percent of responders said they were interested in switching from either Unity or Unreal to Godot. 

Accessibility

48 percent of developers who took part in the survey said their companies have implemented accessibility options into their games, which is up from 38 percent in 2023’s report. 27 percent of responders said their companies have implemented zero accessibility measures, which is down from 32 percent in last year’s report. 

The top accessibility measures include closed captioning, control remapping, and colorblind modes, according to the report. Other features include phobia accommodations, accessible hardware and controls, and content warnings. 

Social Media and Marketing

Developers said that social media and word-of-mouth are the “most-used marketing tools,” with 76 percent saying they utilize X (formerly Twitter) the most compared to other platforms. However, GDC says many developers noted they aren’t happy with the state of X. When asked about how their approach to social media marketing has changed, 97 percent of developers touched on changes to X and expressed negative views about it and its owner, Elon Musk.

Remote Work

26 percent of respondents said their company has some kind of mandatory return-to-office policy, be it a full-time return to the physical workplace or a hybrid schedule that includes remote/work-from-home. The other 74 percent of developers said their companies either have no return-to-office policy or make working in-office optional.

The report notes that 40 percent of triple-A developers – the largest group affected by these kinds of policies – said they have mandatory return-to-office rules, although the majority of this 40 percent said it’s a hybrid mix. 15 percent of indie developers and 28 percent of double-A developers work somewhere with a mandatory return-to-office policy.

Developers with the option to work from home reported the most satisfaction with their work schedule, the report notes, while those with mandatory return-to-office policies in place reported the most dissatisfaction. 

You can check out GDC’s full State of the Industry 2024 report here for additional information on these topics and more. 


What is the most surprising statistic in this report to you? Let us know in the comments below!

MedLinks volunteers aid students in residence halls with minor medical issues

For 30 years, MIT MedLinks liaisons have volunteered to support MIT students with first-line medical care. Living in each of MIT’s residence halls and in numerous fraternities, sororities, and independent living groups, MedLinks administer basic first aid, share over-the-counter medicines when needed, explain MIT Health’s policies and procedures, and often simply listen to classmates talk about their health and well-being. MedLinks also help build community and plan events that bring people in their residence halls together. Recent events include ice cream sundae building, canvas painting, and tie-dyeing T-shirts.

Students who need ibuprofen in the middle of the night, twist their ankle and need an ice pack, or just need some throat lozenges can knock on their MedLinks volunteer’s door to get help with any of these and a host of other medical matters.

Greg Baker, senior program manager for community wellness at MIT Health, says the 150 MedLinks volunteers play a crucial role in connecting students to MIT Health and a host of other services.

“There is a 12-hour training for new volunteers that includes a review of MIT Health’s clinical offerings, campus and community resources, the supplies they receive and in what situations they should or should not be distributed, as well as active listening, and caregiver burnout. We’re also lucky to have our campus partners host sessions to share more about their departments — including the Ombuds Office, DoingWell, Alcohol and Other Drug Services, Institute Discrimination and Harassment Response Office, DAPER [Department of Athletics, Physical Education and Recreation], and Student Mental Health and Counseling,” says Baker.

After a year as a MedLinks volunteer, students can become a MedLinks residential director (RD) after going through additional training. The RD coordinates monthly meetings and events with the other MedLinks in their living group, checks supplies, and along with other MedLinks submits reports to MIT Medical.

Em Ball and Maia DeMeyer are residential directors for Burton Connor and Random Hall, respectively. Ball, a junior majoring in chemistry who is originally from Iowa, became a MedLinks volunteer because she is interested in going to medical school when she completes her undergraduate studies.

“One of the best things about being an RD is meeting and helping people. I especially enjoy putting together our events. We just had a cupcake-decorating event, and the people who came had a great time and said they had fun. The ability to take a break for your mental health is undervalued and very important,” says Ball.

DeMeyer, a sophomore majoring in computer science and engineering who is originally from Washington state, became a MedLinks volunteer for similar reasons: “I like to take care of people. I would rather someone knock on my door in the middle of the night seeking help than ignore a medical problem. I also enjoy being a resource for our community because Random Hall is small; it feels like family there.”

Flu season tends to be busier than the rest of the year. Ball and DeMeyer often advise students when they should go to MIT Health, Student Support Services, or Urgent Care. They also interview potential MedLinks liaisons and help onboard them once they have completed training.

Baker observes, “They have a lot of responsibility, as Em is the RD for 14 other MedLinks volunteers and Maia is the RD for five other volunteers. We appreciate their help, as well as the help of all our MedLinks volunteers. We hold celebration dinners and give them small gifts of appreciation at year’s end.”

DeMeyer and Ball love their residential communities and still make time to sing with the MIT Centrifugues co-ed a cappella group, where Ball is co-music director and DeMeyer is the business manager. Ball is also a member of track and field and cross-country teams, and DeMeyer serves on several of Random Hall’s governing committees.

“I found my niche here at MIT and it feels like home. It’s challenging, and MIT pushes everyone to be their best, so I know I can prosper here,” says DeMeyer. Ball agrees, “MIT fits my personality. It’s a very supportive community.”

MIT students who are interested in learning more about the MedLinks program can visit the website for more information.

LiveU Live Sports Summit – Diving Deep into Case Studies & Demonstrations

Discover the latest advancements and strategies in live sports production through the exclusive Live Sports Summit held at the Etihad Stadium, home of Manchester City. As industry leaders, including Manchester City FC, BBC, HBS, and Clubber TV, came together, they shared invaluable insights into leveraging LiveU technology for unparalleled live productions.

Content:

  1. Engaging Fragmented Audiences in Live Sports: Explore effective techniques to engage fragmented audiences, ensuring your live sports content resonates with diverse viewer preferences.

  2. Staying Ahead of Accelerating Audience Demands: Learn how top broadcasters and production houses keep pace with the ever-increasing demands of today’s dynamic audience, delivering content that captivates and satisfies.

  3. Scaling Niche Sports Coverage for Wider Impact: Dive into successful case studies showcasing the scalability of niche sports coverage, extending your reach beyond traditional boundaries and attracting a broader audience.

  4. Crafting Compelling Stories for Sports Enthusiasts: Uncover the art of storytelling in live sports production, with insights on how to create narratives that captivate and resonate with sports enthusiasts.

  5. Exploring New Revenue Streams in Sports Broadcasting: Discover innovative approaches to identifying and exploring new revenue streams in the sports broadcasting industry, ensuring financial sustainability for your live productions.

The Live Sports Summit not only provided a platform for industry leaders to share their experiences but also offered a glimpse into the future of live sports production. With a focus on LiveU technology, sustainability, and cost-efficiency, this summit is a game-changer for those looking to elevate their sports broadcasting endeavors.

Watch the full video for more.


DeepMind AlphaGeometry solves complex geometry problems

DeepMind, the UK-based AI lab owned by Google’s parent company Alphabet, has developed an AI system called AlphaGeometry that can solve complex geometry problems at a level approaching that of human Olympiad gold medalists. In a new paper in Nature, DeepMind revealed that AlphaGeometry was able to solve 25 out…