Dragon Quest Monsters: The Dark Prince Review – A Surprisingly Common Experience

Dragon Quest possesses so much history that any new game carries a degree of raised expectation. Dragon Quest Monsters: The Dark Prince delivers many of the conventions I’ve come to expect from the series: the vibrant opening song, the charismatic Slime, and the emotional storytelling I already associate with the franchise. But this game goes beyond that well-trodden territory, offering an intelligent and elegant yet simple approach to combat and dungeon design that makes it a solid spin-off experience.

In The Dark Prince, you play as Psaro, a half-human, half-monster boy who becomes a powerful monster wrangler because a curse prevents him from fighting monsters with his own hands. Wrangling is all about capturing monsters and commanding them in turn-based battles. As I progressed through the boy’s journey, I found stronger creatures to add to my roster. The game also has an online mode that lets you fight other players, which is a good way to test different group compositions; in my case, though, it took so long to find a match that my time was better spent adventuring solo.

Synthesizing new monsters by fusing two parent creatures is the best way to build a stronger team, and this system makes all the effort you previously put into maxing out weaker monsters worth it. Whenever you create a new monster, it can keep some of the skill points you invested in the creatures you’re fusing. Through this system, I crafted some extremely powerful monsters that surpassed the regular versions found in the wild. The system pushes you toward excessive grinding, though. Whenever you fuse a new creature, it starts at level 1, regardless of its parents’ levels. In the final sections of the game, fusing a new monster at the wrong moment means spending a lot of time leveling up before you can get back on track and try to defeat a boss.

With the vast number of possible monster combinations you can create in The Dark Prince, I was surprised by how streamlined combat is. The game allows you to set up tactics that define whether a party member will focus on attacking enemies or healing other party members, for example. At the same time, it’s possible to order specific actions for each monster. However, outside of boss fights, engaging more strategically in battles rarely felt necessary. The system waters down so much of each encounter that I usually entered automatic mode and let the A.I. do the thinking. 

As the boy works on his craft, we learn about Psaro’s past and his journey alongside his friends, Rose and Toilen, to become strong enough to challenge his father. This is a classic, almost too familiar, premise, but even with the absence of heavily foreshadowed surprises or plot twists, The Dark Prince captivated me, making for a cozy adventure with the charm of an old-fashioned fairy tale. 

The game has the same slow-paced introduction as other Dragon Quest games, making the first few hours a slog. However, I slowly became entangled in the story. Initially, I was progressing only to unlock new monsters, but I realized I was as excited about learning more about Psaro’s tale as I was about finding new creatures. Unfortunately, very few situations offer even a glimpse of what he’s thinking, and the game never gave me a chance to better understand the reasoning behind his actions. In this respect, the game’s reverence for its roots hinders its ability to develop an intriguing character who has no options besides nodding or saying yes or no.

While perfectly capable as a standalone title, The Dark Prince is directly connected to Dragon Quest IV. It gives us a chance to learn more about Psaro, a crucial figure in the older title, and also to look over some events related to the previous game from a different perspective.


Psaro’s journey takes us through Nadiria, a magical dimension with different regions called circles. Each circle splits into three tiers and ends in a final dungeon. Sadly, this structure makes for a repetitive and predictable pattern; after completing the first four circles, I knew exactly what to expect from every new region. These areas are made worse by noticeable dips in performance, as the framerate suffers considerably. While I could overlook the performance issues, the circles’ repetitive design became more tiresome during long play sessions.

On the other hand, the dungeons are the most surprising element of each circle. They all share a similar structure: multiple floors, a traversal gimmick, and a teleporter before the boss room. While they might feel as repetitive as the circles, the puzzles inside each dungeon make them fun and varied, and the developers found a solid balance between difficulty and enjoyment when designing them. The Dark Prince leans toward traditional dungeon design, with treadmills you need to activate to advance, or ladders and holes in the floor you use to reach the top of a building. Though most dungeons are forgettable, they offer a refreshing intellectual exercise even without any design extravagance.

By rigidly following Dragon Quest traditions, The Dark Prince ends up with flat, cartoonish characters who inhabit a repetitive, cyclical world. But the game plays to its strengths, delivering a solid RPG experience with a cozy narrative seasoned by a long list of charismatic creatures and entertaining dungeons.

Nanoparticle-delivered RNA reduces neuroinflammation in lab tests

Some Covid-19 vaccines safely and effectively used lipid nanoparticles (LNPs) to deliver messenger RNA to cells. A new MIT study shows that different nanoparticles could be used for a potential Alzheimer’s disease (AD) therapy. In tests in multiple mouse models and with cultured human cells, a newly tailored LNP formulation effectively delivered small interfering RNA (siRNA) to the brain’s microglia immune cells to suppress expression of a protein linked to excessive inflammation in Alzheimer’s disease.

In a prior study, the researchers showed that blocking the consequences of PU.1 protein activity helps to reduce Alzheimer’s disease-related neuroinflammation and pathology. The new open-access results, reported in the journal Advanced Materials, achieve a reduction in inflammation by directly tamping down expression of the Spi1 gene that encodes PU.1. More generally, the new study also demonstrates a new way to deliver RNA to microglia, which have so far been difficult to target.

Study co-senior author Li-Huei Tsai, the Picower Professor of Neuroscience and director of The Picower Institute for Learning and Memory and the Aging Brain Initiative at MIT, says she hypothesized that LNPs might work as a way to bring siRNA into microglia because the cells, which clear waste in the brain, have a strong proclivity to take up lipid molecules. She discussed this with Robert Langer, the David H. Koch Institute Professor, who is widely known for his influential work on nanoparticle drug delivery. They decided to test the idea of reducing PU.1 expression with an LNP-delivered siRNA.

“I still remember the day when I asked to meet with Bob to discuss the idea of testing LNPs as a payload to target inflammatory microglia,” says Tsai, a faculty member in the Department of Brain and Cognitive Sciences. “I am very grateful to The JPB Foundation, who supported this idea without any preliminary evidence.”

Langer Lab graduate student Jason Andresen and former Tsai Lab postdoc William Ralvenius led the work and are the study’s co-lead authors. Owen Fenton, a former Langer Lab postdoc who is now an assistant professor at the University of North Carolina’s Eshelman School of Pharmacy, is a co-corresponding author along with Tsai and Langer. Langer is a professor in the departments of Chemical Engineering and Biological Engineering, and the Koch Institute for Integrative Cancer Research.

Perfecting a particle

The simplest way to test whether siRNA could therapeutically suppress PU.1 expression would have been to make use of an already available delivery device, but one of the first discoveries in the study is that none of eight commercially available reagents could safely and effectively transfect cultured human microglia-like cells in the lab.

Instead, the team had to optimize an LNP to do the job. LNPs have four main components; by changing the structures of two of them, and by varying the ratio of lipids to RNA, the researchers were able to come up with seven formulations to try. Importantly, their testing included trying their formulations on cultured microglia that they had induced into an inflammatory state. That state, after all, is the one in which the proposed treatment is needed.

Among the seven candidates, one that the team named “MG-LNP” stood out for delivering a test RNA cargo with especially high efficiency and safety.

What works in a dish sometimes doesn’t work in a living organism, so the team next tested their LNP formulations’ effectiveness and safety in mice. Testing two different methods of injection, into the body or into the cerebrospinal fluid (CSF), they found that injection into the CSF ensured much greater efficacy in targeting microglia without affecting cells in other organs. Among the seven formulations, MG-LNP again proved the most effective at transfecting microglia. Langer said he believes this could potentially open new ways of treating certain brain diseases with nanoparticles someday. 

A targeted therapy

Once they knew MG-LNP could deliver a test cargo to microglia both in human cell cultures and mice, the scientists then tested whether using it to deliver a PU.1-suppressing siRNA could reduce inflammation in microglia. In the cell cultures, a relatively low dose achieved a 42 percent reduction of PU.1 expression (which is good because microglia need at least some PU.1 to live). Indeed, MG-LNP transfection did not cause the cells any harm. It also significantly reduced the transcription of the genes that PU.1 expression increases in microglia, indicating that it can reduce multiple inflammatory markers.

In all these measures, and others, MG-LNP outperformed a commercially available reagent called RNAiMAX that the scientists tested in parallel.

“These findings support the use of MG-LNP-mediated anti-PU.1 siRNA delivery as a potential therapy for neuroinflammatory diseases,” the researchers wrote.

The final set of tests evaluated MG-LNP’s performance delivering the siRNA in two mouse models of inflammation in the brain. In one, mice were exposed to LPS, a molecule that simulates infection and stimulates a systemic inflammation response. In the other model, mice exhibit severe neurodegeneration and inflammation when an enzyme called CDK5 becomes hyperactivated by a protein called p25.

In both models, injection of MG-LNPs carrying the anti-PU.1 siRNA reduced expression of PU.1 and inflammatory markers, much like in the cultured human cells.

“MG-LNP delivery of anti-PU.1 siRNA can potentially be used as an anti-inflammatory therapeutic in mice with systemic inflammation and in the CK-p25 mouse model of AD-like neuroinflammation,” the scientists concluded, calling the results a “proof-of-principle.” More testing will be required before the idea could be tried in human patients.

In addition to Andresen, Ralvenius, Langer, Tsai, and Fenton, the paper’s other authors are Margaret Huston, Jay Penney, and Julia Maeve Bonner.

In addition to The JPB Foundation and The Picower Institute for Learning and Memory, the Robert and Renee Belfer Family, Eduardo Eurnekian, Lester A. Gimpelson, Jay L. and Carroll Miller, the Koch Institute, the Swiss National Science Foundation, and the Alzheimer’s Association provided funding for the study.

“MIT can give you ‘superpowers’”

Speaking at the virtual MITx MicroMasters Program Joint Completion Celebration last summer, Diogo da Silva Branco Magalhães described watching a Spider-Man movie with his 8-year-old son and realizing that his son thought MIT was a fictional entity that existed only in the Marvel universe.

“I had to tell him that MIT also exists in the real world, and that some of the programs are available online for everyone,” says da Silva Branco Magalhães, who earned his credential in the MicroMasters in Statistics and Data Science program. “You don’t need to be a superhero to participate in an MIT program, but MIT can give you ‘superpowers.’ In my case, the superpower that I was looking to acquire was a better understanding of the key technologies that are shaping the future of transportation.”

Part of MIT Open Learning, the MicroMasters programs have drawn in almost 1.4 million learners, spanning nearly every country in the world. More than 7,500 people have earned their credentials across the MicroMasters programs, including: Statistics and Data Science; Supply Chain Management; Data, Economics, and Design of Policy; Principles of Manufacturing; and Finance. 

Earning his MicroMasters credential not only gave da Silva Branco Magalhães a strong foundation to tackle more complex transportation problems, but it also opened the door to pursuing an accelerated graduate degree via a Northwestern University online program.

Learners who earn their MicroMasters credentials gain the opportunity to apply to and continue their studies at a pathway school. The MicroMasters in Statistics and Data Science credential can be applied as credit for a master’s program at more than 30 universities, as well as MIT’s PhD Program in Social and Engineering Systems. Da Silva Branco Magalhães, originally from Portugal and now based in Australia, seized this opportunity and enrolled in Northwestern University’s Master’s in Data Science for MIT MicroMasters Credential Holders.

The pathway to an enhanced career

The pathway model launched in 2016 with the MicroMasters in Supply Chain Management. Now, there are over 50 pathway institutions that offer more than 100 different programs for master’s degrees. With pathway institutions located around the world, MicroMasters credential holders can obtain master’s degrees from local residential or virtual programs, at a location convenient to them. They can receive credit for their MicroMasters courses upon acceptance, providing flexibility for online programs and also shortening the time needed on site for residential programs.

“The pathways expand opportunities for learners, and also help universities attract a broader range of potential students, which can enrich their programs,” says Dana Doyle, senior director for the MicroMasters Program at MIT Open Learning. “This is a tangible way we can achieve our mission of expanding education access.”

Da Silva Branco Magalhães began the MicroMasters in Statistics and Data Science program in 2020, ultimately completing the program in 2022.

“After having worked for 20 years in the transportation sector in various roles, I realized I was no longer equipped as a professional to deal with the new technologies that were set to disrupt the mobility sector,” says da Silva Branco Magalhães. “It became clear to me that data and AI were the driving forces behind new products and services such as autonomous vehicles, on-demand transport, or mobility as a service, but I didn’t really understand how data was being used to achieve these outcomes, so I needed to improve my knowledge.”

July 2023 MicroMasters Program Joint Completion Celebration for SCM, DEDP, PoM, SDS, and Fin
Video: MIT Open Learning

The MicroMasters in Statistics and Data Science was developed by the MIT Institute for Data, Systems, and Society and MITx. Credential holders are required to complete four courses equivalent to graduate-level courses in statistics and data science at MIT and a capstone exam comprising four two-hour proctored exams.

“The content is world-class,” da Silva Branco Magalhães says of the program. “Even the most complex concepts were explained in a very intuitive way. The exercises and the capstone exam are challenging and stimulating — and MIT-level — which makes this credential highly valuable in the market.”

Da Silva Branco Magalhães also found the discussion forum very useful, and valued conversations with his colleagues, noting that many of these discussions later continued after completion of the program.

Gaining analysis and leadership skills

Now in the Northwestern pathway program, da Silva Branco Magalhães finds that the MicroMasters in Statistics and Data Science program prepared him well for this next step in his studies. The nine-course, accelerated, online master’s program is designed to offer the same depth and rigor as Northwestern’s 12-course MS in Data Science program, aiming to help students build essential analysis and leadership skills that can be directly applied in the professional realm. Students learn how to make reliable predictions using traditional statistics and machine learning methods.

Da Silva Branco Magalhães says he has appreciated the remote nature of the Northwestern program, as he started it in France and then completed the first three courses in Australia. He also values the high number of elective courses, allowing students to design the master’s program according to personal preferences and interests.

“I want to be prepared to meet the challenges and seize the opportunities that AI and data science technologies will bring to the professional realm,” he says. “With this credential, there are no limits to what you can achieve in the field of data science.”

Image recognition accuracy: An unseen challenge confounding today’s AI

Imagine you are scrolling through the photos on your phone and you come across an image that at first you can’t recognize. It looks like maybe something fuzzy on the couch; could it be a pillow or a coat? After a couple of seconds it clicks — of course! That ball of fluff is your friend’s cat, Mocha. While some of your photos could be understood in an instant, why was this cat photo much more difficult?

MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers were surprised to find that despite the critical importance of understanding visual data in pivotal areas ranging from health care to transportation to household devices, the notion of an image’s recognition difficulty for humans has been almost entirely ignored. One of the major drivers of progress in deep learning-based AI has been datasets, yet we know little about how data drives progress in large-scale deep learning beyond the fact that bigger is better.

In real-world applications that require understanding visual data, humans outperform object recognition models despite the fact that models perform well on current datasets, including those explicitly designed to challenge machines with debiased images or distribution shifts. This problem persists, in part, because we have no guidance on the absolute difficulty of an image or dataset. Without controlling for the difficulty of images used for evaluation, it’s hard to objectively assess progress toward human-level performance, to cover the range of human abilities, and to increase the challenge posed by a dataset.

To fill in this knowledge gap, David Mayo, an MIT PhD student in electrical engineering and computer science and a CSAIL affiliate, delved into the deep world of image datasets, exploring why certain images are more difficult for humans and machines to recognize than others. “Some images inherently take longer to recognize, and it’s essential to understand the brain’s activity during this process and its relation to machine learning models. Perhaps there are complex neural circuits or unique mechanisms missing in our current models, visible only when tested with challenging visual stimuli. This exploration is crucial for comprehending and enhancing machine vision models,” says Mayo, a lead author of a new paper on the work.

This led to the development of a new metric, the “minimum viewing time” (MVT), which quantifies the difficulty of recognizing an image based on how long a person needs to view it before making a correct identification. Using a subset of ImageNet, a popular dataset in machine learning, and ObjectNet, a dataset designed to test object recognition robustness, the team showed images to participants for varying durations from as short as 17 milliseconds to as long as 10 seconds, and asked them to choose the correct object from a set of 50 options. After over 200,000 image presentation trials, the team found that existing test sets, including ObjectNet, appeared skewed toward easier, shorter MVT images, with the vast majority of benchmark performance derived from images that are easy for humans.
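
To make the metric concrete, here is a minimal Python sketch of how a per-image MVT could be estimated from trial records. The data layout (image ID, presentation duration, correctness) and the 50 percent accuracy threshold are illustrative assumptions; the article does not describe the team’s actual analysis pipeline.

```python
from collections import defaultdict

# Hypothetical trial records: (image_id, presentation duration in ms, whether
# the participant picked the correct object). Field names and the threshold
# below are assumptions for illustration, not the study's actual code.
trials = [
    ("img_001", 17, False), ("img_001", 50, False), ("img_001", 150, True),
    ("img_002", 17, True),  ("img_002", 50, True),
]

def minimum_viewing_time(trials, accuracy_threshold=0.5):
    """Estimate each image's MVT: the shortest presentation duration at which
    participants identify it correctly at or above the accuracy threshold."""
    by_image = defaultdict(lambda: defaultdict(list))
    for image_id, duration_ms, correct in trials:
        by_image[image_id][duration_ms].append(correct)

    mvt = {}
    for image_id, durations in by_image.items():
        for duration_ms in sorted(durations):
            outcomes = durations[duration_ms]
            if sum(outcomes) / len(outcomes) >= accuracy_threshold:
                mvt[image_id] = duration_ms
                break
        else:
            mvt[image_id] = float("inf")  # never reliably recognized
    return mvt

print(minimum_viewing_time(trials))  # {'img_001': 150, 'img_002': 17}
```

Under this reading, a short MVT marks an easy image and a long (or unbounded) MVT marks a hard one, which is how difficulty-tagged image sets of the kind the team released could be assembled.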

The project identified interesting trends in model performance — particularly in relation to scaling. Larger models showed considerable improvement on simpler images but made less progress on more challenging images. The CLIP models, which incorporate both language and vision, stood out as they moved in the direction of more human-like recognition.

“Traditionally, object recognition datasets have been skewed towards less-complex images, a practice that has led to an inflation in model performance metrics, not truly reflective of a model’s robustness or its ability to tackle complex visual tasks. Our research reveals that harder images pose a more acute challenge, causing a distribution shift that is often not accounted for in standard evaluations,” says Mayo. “We released image sets tagged by difficulty along with tools to automatically compute MVT, enabling MVT to be added to existing benchmarks and extended to various applications. These include measuring test set difficulty before deploying real-world systems, discovering neural correlates of image difficulty, and advancing object recognition techniques to close the gap between benchmark and real-world performance.”

“One of my biggest takeaways is that we now have another dimension to evaluate models on. We want models that are able to recognize any image even if — perhaps especially if — it’s hard for a human to recognize. We’re the first to quantify what this would mean. Our results show that not only is this not the case with today’s state of the art, but also that our current evaluation methods don’t have the ability to tell us when it is the case because standard datasets are so skewed toward easy images,” says Jesse Cummings, an MIT graduate student in electrical engineering and computer science and co-first author with Mayo on the paper.

From ObjectNet to MVT

A few years ago, the team behind this project identified a significant challenge in the field of machine learning: Models were struggling with out-of-distribution images, or images that were not well represented in the training data. Enter ObjectNet, a dataset composed of images collected from real-life settings. The dataset helped illuminate the performance gap between machine learning models and human recognition abilities by eliminating spurious correlations present in other benchmarks (for example, between an object and its background). ObjectNet exposed how differently machine vision models perform on curated datasets versus in real-world applications, encouraging many researchers and developers to use it, which subsequently improved model performance.

Fast forward to the present, and the team has taken their research a step further with MVT. Unlike traditional methods that focus on absolute performance, this new approach assesses how models perform by contrasting their responses to the easiest and hardest images. The study further explored how image difficulty could be explained and tested for similarity to human visual processing. Using metrics like c-score, prediction depth, and adversarial robustness, the team found that harder images are processed differently by networks. “While there are observable trends, such as easier images being more prototypical, a comprehensive semantic explanation of image difficulty continues to elude the scientific community,” says Mayo.
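
The easy-versus-hard contrast can be illustrated with a short sketch that reports a model’s accuracy separately on low-MVT and high-MVT images. The function, data structures, and 100 ms cutoff below are hypothetical, chosen only to show the idea of difficulty-stratified evaluation rather than the team’s released tooling.

```python
def stratified_accuracy(predictions, labels, mvt_by_image, easy_cutoff_ms=100):
    """Report accuracy separately on easy (low-MVT) and hard (high-MVT) images.
    `predictions` and `labels` map image_id -> class label; `mvt_by_image`
    maps image_id -> estimated MVT in milliseconds. All names are illustrative."""
    buckets = {"easy": [], "hard": []}
    for image_id, label in labels.items():
        is_easy = mvt_by_image.get(image_id, float("inf")) <= easy_cutoff_ms
        buckets["easy" if is_easy else "hard"].append(predictions.get(image_id) == label)
    return {name: (sum(hits) / len(hits) if hits else None)
            for name, hits in buckets.items()}

# Example: a model that nails the easy image but misses the hard one.
preds  = {"img_001": "cat", "img_002": "dog"}
labels = {"img_001": "cat", "img_002": "cat"}
mvts   = {"img_001": 17, "img_002": 900}
print(stratified_accuracy(preds, labels, mvts))  # {'easy': 1.0, 'hard': 0.0}
```

A gap between the two numbers is the kind of hidden distribution shift the authors argue standard, easy-skewed benchmarks fail to surface.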

In the realm of health care, for example, the pertinence of understanding visual complexity becomes even more pronounced. The ability of AI models to interpret medical images, such as X-rays, is subject to the diversity and difficulty distribution of the images. The researchers advocate for a meticulous analysis of difficulty distribution tailored for professionals, ensuring AI systems are evaluated based on expert standards, rather than layperson interpretations.

Mayo and Cummings are currently looking at neurological underpinnings of visual recognition as well, probing into whether the brain exhibits differential activity when processing easy versus challenging images. The study aims to unravel whether complex images recruit additional brain areas not typically associated with visual processing, hopefully helping demystify how our brains accurately and efficiently decode the visual world.

Toward human-level performance

Looking ahead, the researchers are not only exploring ways to enhance AI’s ability to predict image difficulty; the team is also working on identifying correlations with viewing-time difficulty in order to generate harder or easier versions of images.

Despite the study’s significant strides, the researchers acknowledge limitations, particularly the lack of separation between object recognition and visual search tasks. The current methodology concentrates on recognizing objects, leaving out the complexities introduced by cluttered images.

“This comprehensive approach addresses the long-standing challenge of objectively assessing progress towards human-level performance in object recognition and opens new avenues for understanding and advancing the field,” says Mayo. “With the potential to adapt the Minimum Viewing Time difficulty metric for a variety of visual tasks, this work paves the way for more robust, human-like performance in object recognition, ensuring that models are truly put to the test and are ready for the complexities of real-world visual understanding.”

“This is a fascinating study of how human perception can be used to identify weaknesses in the ways AI vision models are typically benchmarked, which overestimate AI performance by concentrating on easy images,” says Alan L. Yuille, Bloomberg Distinguished Professor of Cognitive Science and Computer Science at Johns Hopkins University, who was not involved in the paper. “This will help develop more realistic benchmarks leading not only to improvements to AI but also make fairer comparisons between AI and human perception.” 

“It’s widely claimed that computer vision systems now outperform humans, and on some benchmark datasets, that’s true,” says Anthropic technical staff member Simon Kornblith PhD ’17, who was also not involved in this work. “However, a lot of the difficulty in those benchmarks comes from the obscurity of what’s in the images; the average person just doesn’t know enough to classify different breeds of dogs. This work instead focuses on images that people can only get right if given enough time. These images are generally much harder for computer vision systems, but the best systems are only a bit worse than humans.”

Mayo, Cummings, and Xinyu Lin MEng ’22 wrote the paper alongside CSAIL Research Scientist Andrei Barbu, CSAIL Principal Research Scientist Boris Katz, and MIT-IBM Watson AI Lab Principal Researcher Dan Gutfreund. The researchers are affiliates of the MIT Center for Brains, Minds, and Machines.

The team is presenting their work at the 2023 Conference on Neural Information Processing Systems (NeurIPS).

Philip Erickson named director of MIT Haystack Observatory

Philip J. Erickson has been named the new director of MIT Haystack Observatory, effective Jan. 1, 2024. In leading the radio science observatory in Westford, Massachusetts, Erickson, who is currently Haystack’s associate director, succeeds longtime director Colin J. Lonsdale, who earlier this year shared his intent to step down.

Maria Zuber, MIT’s vice president for research, announced Erickson’s appointment today, saying, “Phil is an accomplished radio scientist and ionosphere-magnetosphere researcher with a strong track record of leadership within the Haystack research community and well beyond.”

“The observatory and its community of researchers are in excellent hands,” she added.

An interdisciplinary research center, MIT Haystack Observatory was built in 1961 as part of MIT Lincoln Laboratory and gained independent status in 1970. The Haystack mission is to develop technology for radio science applications, to study the structure of our galaxy and the larger universe, to advance scientific knowledge of our planet and its space environment, and to contribute to the education of future scientists and engineers. Research groups in astronomy, geodesy, geospace, and space technology are united by a focus on radio science. Located approximately 30 miles northwest of MIT’s Cambridge campus, the Haystack facility supports a number of large radio telescopes and antennas. Research and engineering projects encompass both local radio science and technology development as well as global leadership and collaboration in the field.

Erickson obtained his doctorate in space plasma physics from Cornell University in 1998; he began science and technical work at Haystack in 1995 and has served as head of its atmospheric and geospace sciences group since 2015. Also in 2015, he joined the Haystack director’s office as assistant director, and he was appointed associate director in 2020. He is the lead principal investigator for several projects, including the National Science Foundation-sponsored Millstone Hill Geospace Facility. He co-directs Haystack’s educational and public outreach programs as well as the observatory’s student research programs.

The observatory will continue to build upon its history of radio science innovation, Erickson says: “Haystack Observatory has long been recognized as a leader in the science and technology of radio and radar remote sensing for fundamental research on regions that stretch from our planet’s polar caps and upper atmosphere to the solar system, black holes, stars, galaxies, and even the structure of the early universe. I am excited to work with the dedicated and very talented Haystack staff to continue and expand our research in all these areas, for the benefit of humanity’s constant quest to expand knowledge of our world and universe.”

Within the geospace science field, which studies the coupled Earth-sun system, Erickson’s primary interests focus on fundamental physics and dynamics of the ionized and neutral portions of Earth’s upper atmosphere and surrounding magnetosphere, along with the observational technique of collective Thomson/incoherent scatter radar theory and practice.

In addition to the various leadership roles he holds within Haystack, Erickson is dedicated to fostering community engagement in the fields of geospace sciences and radio wave remote sensing; he serves as a member of numerous national and international groups, including the National Academies of Sciences, Engineering, and Medicine (NASEM) Committee on Radio Frequencies, and is a co-chair of the ionosphere-atmosphere panel in the 2024-33 NASEM Heliophysics Decadal Survey. He is a frequent journal reviewer and serves as editor for several peer-reviewed publications. Erickson is also an active member of the amateur radio community, with a focus on engaging operators across the United States in productive ionospheric citizen science.

Erickson succeeds Colin Lonsdale, who is stepping down as Haystack director after more than 15 years at the helm. Lonsdale, a radio astronomer, earned his doctorate in 1981 from the Jodrell Bank Observatory, part of the Victoria University of Manchester in the U.K. He joined Haystack in 1986 and was named director in 2008. Lonsdale will continue at Haystack as a principal research scientist, focusing on research involving a range of topics, including active galaxies, solar emissions at low radio frequencies, and concepts for innovative radio science space missions.

“Phil Erickson is an outstanding choice for the new director, and the observatory will prosper under his capable leadership,” says Lonsdale.