Infinity Nikki Preview – A Promising Dress Rehearsal

Infinity Nikki’s big PlayStation State of Play spotlight in May garnered an array of reactions. Fans of the popular Nikki series of mobile fashion games were excited to see the pink-haired fashionista make her triple-A debut. Unfamiliar viewers either dismissed it as an overly whimsical dress-up game or found themselves unexpectedly intrigued by its delightful and unique spin on an open-world experience. I count myself in that third camp. Something about the game’s premise stuck with me, but after playing several hours of a beta version, I’ve gone from admiring Infinity Nikki conceptually to being genuinely excited to wear the full experience. 

The adventure sees the titular Nikki and her furry best pal, Momo, preparing for a ball only to be unwillingly transported to the magical world of Miraland. A god-like being named Ena the Curator tells Nikki she’s been chosen to find and restore the Miracle Outfits, a collection of magical and powerful dresses, to uncover a divine truth. Along the way, Nikki also learns that she’s a Stylist. These are special people with the ability to find and create outfits anytime, anywhere. 

I begin the game with three Ability Dresses, special outfits that each bestow a unique power. One is the blue dress/blonde hair ensemble seen in the State of Play trailer, which allows me to perform a floating, Princess Peach-style jump. Not only is this good for crossing large gaps or bounding between rooftops, but Nikki can also perform a plummeting slam to defeat enemies or shatter fragile objects. A Purification dress is used for combat, letting Nikki fire orbs of cleansing energy to purify (not kill) demonically corrupted enemies called Esselings, as well as corrupted collectibles. Another cute dress lets Nikki groom certain animals to collect materials from them. Primary abilities like floating and purification are mapped to buttons, while specialized dresses, like the grooming dress or a bug-catching dress I unlock later, can be activated via a selection wheel.


Based on the beta slice, Miraland is vast and inviting, thanks to colorful and lush flower fields, rolling hills, and the quaint village of Florawish. Infinity Nikki looks as pretty as her wardrobe. There’s plenty to do and explore, whether it’s completing side quests or, the main hook, hunting for Whimstars. These special stars are used to unlock new items and perks from a skill tree and are scattered everywhere. While some can merely be found (with help from Momo’s special vision that can highlight and tag distant Whimstars), others have small challenges tied to them. Examples include defeating enemies or searching for hidden gold stars, such as an ornament atop an umbrella. You can also gain Whimstars by entering special portals that warp players to platforming/puzzle challenge rooms reminiscent of shrines from The Legend of Zelda: Breath of the Wild/Tears of the Kingdom, albeit simpler. I like gathering Whimstars as it captures the familiar joy of collecting stars in a 3D Mario game. 

Whimstars are spent unlocking new outfits and other items in a skill tree called the Heart of Infinity. However, this only makes the outfit available to be crafted – you still have to make it yourself. While exploring Miraland, you’ll gather materials like various fruits, flowers, threads, furs, and more to craft a desired ensemble, consisting of parts like hairstyles, upper and lower body wear, make-up, and accessories. Sketches for outfits can be found or earned by completing quests and other activities. 

The more outfit sketches you find and unlock, the higher your level as a Stylist rises. Early on, Nikki joins the Stylist Guild in Florawish. Here, she receives a tablet-like device called a Pear-Pal that keeps track of a litany of goals, such as defeating a quota of enemies or taking photos with an in-game camera. Completing goals raises your rank, which rewards money and materials to unlock more outfits. This should appeal to objective-oriented players, as there’s no shortage of meters to fill. With main story and side quests, Stylist Rank, and daily challenges, Infinity Nikki constantly tracks and rewards all aspects of play in a manner similar to games like Genshin Impact. 

Checking a box usually means getting new crafting materials or outfit sketches. This is an effective hook, as the Nikki series’ bread and butter is dressing up the heroine with a staggering collection of clothing options. You can outfit Nikki however you wish, regardless of any abilities tied to outfit pieces. If you really dig the bug-catching dress and want Nikki to rock that look 24/7, you can do that. With so many apparel options, players will likely spend a lot of time making Nikki look as girly, regal, or edgy as they see fit.

Outside of roaming the scenic open world and collecting stars and materials, a main story quest adds some meaty narrative intrigue. Cutscenes are nicely rendered and feature solid comedic writing at times. Characters like a talking, dragon-like poet, and moments like watching a girl crash a giant origami paper plane into a building and level it, add an oddball charm to the perpetually saccharine vibe.

Infinity Nikki is chock full of charm at every turn. Even manipulating the world’s day/night cycle involves playing a cute Flappy Bird-esque minigame. Every corner aims to make you smile while completing myriad objectives, and earning new outfits provides effective dopamine hits. While I enjoy basking in Infinity Nikki’s cozy atmosphere, its cinematic teases of a grander mystery intrigue me even more. I still have little idea what to make of this world and how it works, but I want to learn more, and several lore books and other notes suggest what could be unexpectedly deep world-building. The game also features a multiplayer component, but it was not available in the beta, and developer Infold Games isn’t ready to discuss it yet.

Most of all, having an open-world game that doesn’t feature overt violence is refreshing. I wouldn’t consider any obstacle I’ve encountered thus far to be challenging, but there’s an allure to just being in this world that’s hard to deny – everything is just so darn pleasant. I’ve got my outfit picked out, so hopefully, Infinity Nikki’s release won’t leave us waiting too much longer. 

Infinity Nikki is coming to PlayStation 5, PC, iOS, and Android. A closed PC public beta is now available for select registered players. 

Study reveals why AI models that analyze medical images can be biased

Artificial intelligence models often play a role in medical diagnoses, especially when it comes to analyzing images such as X-rays. However, studies have found that these models don’t always perform well across all demographic groups, usually faring worse on women and people of color.

These models have also been shown to develop some surprising abilities. In 2022, MIT researchers reported that AI models can make accurate predictions about a patient’s race from their chest X-rays — something that the most skilled radiologists can’t do.

That research team has now found that the models that are most accurate at making demographic predictions also show the biggest “fairness gaps” — that is, discrepancies in their ability to accurately diagnose images of people of different races or genders. The findings suggest that these models may be using “demographic shortcuts” when making their diagnostic evaluations, which lead to incorrect results for women, Black people, and other groups, the researchers say.

“It’s well-established that high-capacity machine-learning models are good predictors of human demographics such as self-reported race or sex or age. This paper re-demonstrates that capacity, and then links that capacity to the lack of performance across different groups, which has never been done,” says Marzyeh Ghassemi, an MIT associate professor of electrical engineering and computer science, a member of MIT’s Institute for Medical Engineering and Science, and the senior author of the study.

The researchers also found that they could retrain the models in a way that improves their fairness. However, their approaches to “debiasing” worked best when the models were tested on the same types of patients they were trained on, such as patients from the same hospital. When these models were applied to patients from different hospitals, the fairness gaps reappeared.

“I think the main takeaways are, first, you should thoroughly evaluate any external models on your own data because any fairness guarantees that model developers provide on their training data may not transfer to your population. Second, whenever sufficient data is available, you should train models on your own data,” says Haoran Zhang, an MIT graduate student and one of the lead authors of the new paper. MIT graduate student Yuzhe Yang is also a lead author of the paper, which appears today in Nature Medicine. Judy Gichoya, an associate professor of radiology and imaging sciences at Emory University School of Medicine, and Dina Katabi, the Thuan and Nicole Pham Professor of Electrical Engineering and Computer Science at MIT, are also authors of the paper.

Removing bias

As of May 2024, the FDA has approved 882 AI-enabled medical devices, with 671 of them designed to be used in radiology. Since 2022, when Ghassemi and her colleagues showed that these diagnostic models can accurately predict race, they and other researchers have shown that such models are also very good at predicting gender and age, even though the models are not trained on those tasks.

“Many popular machine learning models have superhuman demographic prediction capacity — radiologists cannot detect self-reported race from a chest X-ray,” Ghassemi says. “These are models that are good at predicting disease, but during training are learning to predict other things that may not be desirable.”

In this study, the researchers set out to explore why these models don’t work as well for certain groups. In particular, they wanted to see if the models were using demographic shortcuts to make predictions that ended up being less accurate for some groups. These shortcuts can arise in AI models when they use demographic attributes to determine whether a medical condition is present, instead of relying on other features of the images.

Using publicly available chest X-ray datasets from Beth Israel Deaconess Medical Center in Boston, the researchers trained models to predict whether patients had one of three different medical conditions: fluid buildup in the lungs, collapsed lung, or enlargement of the heart. Then, they tested the models on X-rays that were held out from the training data.

Overall, the models performed well, but most of them displayed “fairness gaps” — that is, discrepancies between accuracy rates for men and women, and for white and Black patients.
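
To make the “fairness gap” idea concrete, here is a minimal sketch of how such a gap can be measured for a trained classifier. The labels, predictions, and group assignments below are made-up placeholders, not the study’s data or evaluation code; the paper works with chest X-ray diagnoses and per-group performance, but the bookkeeping is the same idea.

```python
import numpy as np

def fairness_gap(y_true, y_pred, groups):
    """Largest difference in accuracy between any two demographic subgroups."""
    per_group = {}
    for g in np.unique(groups):
        mask = groups == g
        per_group[str(g)] = float(np.mean(y_true[mask] == y_pred[mask]))
    return max(per_group.values()) - min(per_group.values()), per_group

# Toy example with made-up labels and predictions for two groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
gap, per_group = fairness_gap(y_true, y_pred, groups)
print(per_group, gap)  # {'A': 0.75, 'B': 0.5} and a gap of 0.25
```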

The models were also able to predict the gender, race, and age of the X-ray subjects. Additionally, there was a significant correlation between each model’s accuracy in making demographic predictions and the size of its fairness gap. This suggests that the models may be using demographic categorizations as a shortcut to make their disease predictions.

The researchers then tried to reduce the fairness gaps using two types of strategies. They trained one set of models to optimize “subgroup robustness,” meaning that the models are rewarded for having better performance on the subgroup for which they have the worst performance and are penalized if their error rate for one group is higher than the others.
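
As a rough illustration of what a subgroup-robustness objective can look like, here is a minimal PyTorch-style sketch of a worst-group loss. The model, batch contents, and group labels are assumed placeholders, and the study’s exact training recipe may differ; this only shows the general technique of optimizing for the worst-performing group.

```python
import torch
import torch.nn.functional as F

def worst_group_loss(logits, labels, group_ids):
    """Return the mean loss of the worst-performing subgroup in the batch."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    group_losses = [per_sample[group_ids == g].mean() for g in group_ids.unique()]
    return torch.stack(group_losses).max()

# Inside an otherwise ordinary training loop (model, optimizer, and the
# batch's images/labels/group_ids are assumed to exist):
#   logits = model(images)
#   loss = worst_group_loss(logits, labels, group_ids)
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
```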

For another set of models, the researchers used “group adversarial” approaches to force them to remove any demographic information from the images. Both strategies worked fairly well, the researchers found.
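
The group-adversarial idea can likewise be sketched in a few lines: an auxiliary head tries to recover the demographic group from the learned features, while a gradient-reversal layer trains the feature extractor to defeat it. Everything below (layer sizes, heads, the toy architecture) is a hypothetical illustration of the general technique, not the models used in the paper.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class DebiasedClassifier(nn.Module):
    def __init__(self, feat_dim=128, n_diseases=3, n_groups=2):
        super().__init__()
        # Toy feature extractor; a real system would use an image backbone.
        self.features = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.disease_head = nn.Linear(feat_dim, n_diseases)
        self.group_head = nn.Linear(feat_dim, n_groups)  # the adversary

    def forward(self, x):
        z = self.features(x)
        # The reversed gradient pushes the features to carry no group signal.
        return self.disease_head(z), self.group_head(GradReverse.apply(z))

# Training sketch: minimize disease loss + group loss; because of the reversal,
# improving the adversary's loss strips demographic information from z.
```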

“For in-distribution data, you can use existing state-of-the-art methods to reduce fairness gaps without making significant trade-offs in overall performance,” Ghassemi says. “Subgroup robustness methods force models to be sensitive to mispredicting a specific group, and group adversarial methods try to remove group information completely.”

Not always fairer

However, those approaches only worked when the models were tested on data from the same types of patients that they were trained on — for example, only patients from the Beth Israel Deaconess Medical Center dataset.

When the researchers tested the models that had been “debiased” using the BIDMC data to analyze patients from five other hospital datasets, they found that the models’ overall accuracy remained high, but some of them exhibited large fairness gaps.

“If you debias the model in one set of patients, that fairness does not necessarily hold as you move to a new set of patients from a different hospital in a different location,” Zhang says.

This is worrisome because hospitals often use models that have been developed on data from other hospitals, especially when they purchase an off-the-shelf model, the researchers say.

“We found that even state-of-the-art models which are optimally performant in data similar to their training sets are not optimal — that is, they do not make the best trade-off between overall and subgroup performance — in novel settings,” Ghassemi says. “Unfortunately, this is actually how a model is likely to be deployed. Most models are trained and validated with data from one hospital, or one source, and then deployed widely.”

The researchers found that the models that were debiased using group adversarial approaches showed slightly more fairness when tested on new patient groups than those debiased with subgroup robustness methods. They now plan to try to develop and test additional methods to see if they can create models that do a better job of making fair predictions on new datasets.

The findings suggest that hospitals that use these types of AI models should evaluate them on their own patient population before beginning to use them, to make sure they aren’t giving inaccurate results for certain groups.

The research was funded by a Google Research Scholar Award, the Robert Wood Johnson Foundation Harold Amos Medical Faculty Development Program, RSNA Health Disparities, the Lacuna Fund, the Gordon and Betty Moore Foundation, the National Institute of Biomedical Imaging and Bioengineering, and the National Heart, Lung, and Blood Institute.

Leaning into the immune system’s complexity

At any given time, millions of T cells circulate throughout the human body, looking for potential invaders. Each of those T cells sports a different T cell receptor, which is specialized to recognize a foreign antigen.

To make it easier to understand how that army of T cells recognizes its targets, MIT Associate Professor Michael Birnbaum has developed tools that can be used to study huge numbers of these interactions at the same time.

Deciphering those interactions could eventually help researchers find new ways to reprogram T cells to target specific antigens, such as mutations found in a cancer patient’s tumor.

“T-cells are so diverse in terms of what they recognize and what they do, and there’s been incredible progress in understanding this on an example-by-example basis. Now, we want to be able to understand the entirety of this process with some of the same level of sophistication that we understand the individual pieces. And we think that once we have that understanding, then we can be much better at manipulating it to positively affect disease,” Birnbaum says.

This approach could lead to improvements in immunotherapy to treat cancer, as well as potential new treatments for autoimmune disorders such as type 1 diabetes, or infections such as HIV and Covid-19.

Tackling difficult problems

Birnbaum’s interest in immunology developed early, when he was a high school student in Philadelphia. His school offered a program allowing students to work in research labs in the area, so starting in tenth grade, he did research in an immunology lab at Fox Chase Cancer Center.

“I got exposed to some of the same things I study now, actually, and so that really set me on the path of realizing that this is what I wanted to do,” Birnbaum says.

As an undergraduate at Harvard University, he enrolled in a newly established major known as chemical and physical biology. During an introductory immunology course, Birnbaum was captivated by the complexity and beauty of the immune system. He went on to earn a PhD in immunology at Stanford University, where he began to study how T cells recognize their target antigens.

T cell receptors are protein complexes found on the surfaces of T cells. These receptors are made of gene segments that can be mixed and matched to form up to 10^15 different sequences. When a T cell receptor finds a foreign antigen that it recognizes, it signals the T cell to multiply and begin the process of eliminating the cells that display that antigen.
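
To see how a modest menu of gene segments can compound into numbers that large, here is a back-of-the-envelope sketch. Every count in it is a hypothetical placeholder chosen only to show the multiplication at work, not a measured value from the immunology literature.

```python
# Every number below is a hypothetical placeholder used only to illustrate how
# a few multiplicative choices compound; real segment counts and junctional
# diversity estimates come from the immunology literature.
v_segments = 50            # hypothetical count of V gene segments
d_segments = 2             # hypothetical count of D gene segments
j_segments = 13            # hypothetical count of J gene segments
junctional_variants = 1e6  # hypothetical extra diversity from imprecise joining

per_chain = v_segments * d_segments * j_segments * junctional_variants
paired_receptors = per_chain ** 2  # a receptor pairs two independently built chains

print(f"per chain: {per_chain:.1e}, paired: {paired_receptors:.1e}")
# With these made-up inputs, pairing alone lands around 1.7e18 -- the point is
# simply that a few combinatorial choices multiply into enormous repertoires.
```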

As a graduate student, Birnbaum worked on building tools to study interactions between antigens and T cells at large scales. After finishing his PhD, he spent a year doing a postdoc in a neuroscience lab at Stanford, but quickly realized he wanted to get back to immunology.

In 2016, Birnbaum was hired as a faculty member in MIT’s Department of Biological Engineering and the Koch Institute for Integrative Cancer Research. He was drawn to MIT, he says, by the willingness of scientists and engineers at the Institute to work together to take on difficult, important problems.

“There’s a fearlessness to how people were willing to do that,” he says. “And the community, particularly the immunology community here, was second to none, both in terms of its quality, but also in terms of how supportive it was.”

Billions of targets

At MIT, Birnbaum’s lab focuses on T cell-antigen interactions, with the hope of eventually being able to reprogram those interactions to help fight diseases such as cancer. In 2022, he reported a new technique for analyzing these interactions at large scales.

Until then, most existing tools for studying the immune system were designed to allow for the study of a large pool of antigens exposed to one T cell (or B cell), or a large pool of immune cells encountering a small number of antigens. Birnbaum’s new method uses engineered viruses to present many different antigens to huge populations of immune cells, allowing researchers to screen huge libraries of both antigens and immune cells at the same time.

“The immune system works with millions of unique T cell receptors in each of us, and billions of possible antigen targets,” Birnbaum says. “In order to be able to really understand the immune system at scale, we spend a lot of time trying to build tools that can work at similar scales.”

This approach could enable researchers to eventually screen thousands of antigens against an entire population of B cells and T cells from an individual, which could reveal why some people naturally fight off certain viruses, such as HIV, better than others.

Using this method, Birnbaum also hopes to develop ways to reprogram T cells inside a patient’s body. Currently, T cell reprogramming requires T cells to be removed from a patient, genetically altered, and then reinfused into the patient. All of these steps could be skipped if instead the T cells were reprogrammed using the same viruses that Birnbaum’s screening technology uses. A company called Kelonia, co-founded by Birnbaum, is also working toward this goal.

To model T cell interactions at even larger scales, Birnbaum is now working with collaborators around the world to use artificial intelligence to make computational predictions of T cell-antigen interactions. The research team, which Birnbaum is leading, includes 12 labs from five countries, funded by Cancer Grand Challenges. The researchers hope to build predictive models that may help them design engineered T cells that could help treat many different diseases.

“The program is put together with a focus on whether these types of predictions are possible, but if they are, it could lead to much better understanding of what immunotherapies may work with different people. It could lead to personalized vaccine design, and it could lead to personalized T cell therapy design,” Birnbaum says.