HONOR CEO George Zhao Outlines AI Vision

In the bustling halls of MWC 2024, an event that garners the attention of the world’s tech enthusiasts, HONOR stands out with its innovative flair. At the helm, HONOR CEO George Zhao offers a glimpse into a future where artificial intelligence (AI) seamlessly integrates into our…

AI-Driven Healthcare Revolution: MWC Conference Insights

In an era where technology intertwines with every aspect of our lives, the realm of healthcare stands on the brink of a monumental transformation, poised at the heart of the AI-driven health revolution. The recent MWC, short for Mobile World Congress, is the world’s largest…

Final Fantasy VII Rebirth Is The Second-Highest Rated Game In The Series On Metacritic

Final Fantasy VII Rebirth hits PlayStation 5 exclusively on February 29, but reviews for Square Enix’s latest entry in its Final Fantasy VII remake trilogy dropped yesterday. We enjoyed the game a lot, and you can read about why in Game Informer’s Final Fantasy VII Rebirth review. It turns out, a lot of reviewers enjoyed the game too, because at the time of this writing, it’s the second-highest rated game in the Final Fantasy series on Metacritic, as noted by video game sales analyst Benji-Sales on X (formerly Twitter). 

Final Fantasy VII Rebirth currently sits at a 93 on Metacritic, which aggregates various reviews into a single score. Only one other Final Fantasy game has been rated higher: Final Fantasy IX on PlayStation 1, which holds a 94.

With a 93 on Metacritic, Final Fantasy VII Rebirth joins a prestigious list of games in the series with a Metacritic score of 90 or higher, as pointed out by AndSpaceY on X. It’s important to note that some Final Fantasy games are not rated on Metacritic because the aggregation site was not around when they released. For example, Final Fantasy VI, released in 1994 on the SNES, is one of the most beloved entries in the franchise and arguably the best, but because Metacritic did not exist at the time, it has no rating on the site.

Still, Final Fantasy VII Rebirth’s high rating is another example of how Square Enix’s Final Fantasy VII remake project is landing with critics and fans alike.

For more about the game, read Game Informer’s Final Fantasy VII Rebirth review and then check out our Final Fantasy VII Rebirth cover story hub for exclusive features, videos, and more. After that, check out Game Informer’s ranking of every mainline Final Fantasy game.


Are you picking up Final Fantasy VII Rebirth on day one next week? Let us know in the comments below!

Death Stranding Documentary, ‘Hideo Kojima: Connecting Worlds,’ Streaming Now On Disney Plus

Death Stranding hit PlayStation 4 exclusively back in 2019 before launching on PC in 2020, with a Director’s Cut following later. As we await Death Stranding 2: On The Beach, which is due out on PlayStation 5 in 2025, Disney Plus has dropped a documentary that might tide fans over. Titled Hideo Kojima: Connecting Worlds, the film “explores the creative process behind Hideo Kojima launching his independent studio up to the completion of Death Stranding.”

The documentary was announced last year and debuted at the 2023 Tribeca Film Festival, but starting today, you can stream it on Disney Plus.

We haven’t yet watched it, but given Kojima’s love of movies, we’re excited to see what a documentary about the man himself is like. The release of Hideo Kojima: Connecting Worlds follows the debut of Grounded II: Making The Last of Us Part II, which also provided a behind-the-scenes look at video game development.

For more, read Game Informer’s Death Stranding review and then read about why Death Stranding Director’s Cut is worth another trek across America. After that, watch the latest trailer for Death Stranding 2: On The Beach, which is expectedly weird. 


Are you going to check out Hideo Kojima: Connecting Worlds this weekend? Let us know what you think of it in the comments below!

Check Out New Rise Of The Rōnin Gameplay In Behind-The-Scenes Video

Developer Team Ninja has released a new behind-the-scenes video for the upcoming PlayStation 5 exclusive, Rise of the Rōnin, and it features new gameplay showing off the game’s tough and fast action combat. Unsurprisingly, the game continues to look great. 

This new look at the game’s action features game producer and director Fumihiko Yasuda and animation lead Kosuke Wakamatsu discussing how Team Ninja approached the combat. They say it started with determining the different ideologies within Rōnin history, why certain individuals fought the way they did, and how that translates to a game aiming to be authentic to Bakumatsu-era history.

Check out the new Rise of the Rōnin gameplay for yourself below.

[embedded content]

Yasuda and Wakamatsu discuss the Bakumatsu period of Japan’s history and how the introduction of Western weapons, like firearms, dramatically changed the way Rōnin went about battle. Instead of just katanas and up-close weaponry, firearms forced warriors to adapt to new types of ranged combat. This led to the use of firearms like pistols alongside swords, bayonets, and more.

If the use of guns has you worried Rise of the Rōnin won’t deliver the fast and difficult up-close action Team Ninja is known for, fear not. “Action remains at the core of Rise of the Rōnin [and] it is the culmination of our previous work on action games at Team Ninja,” Yasuda says in the video. 

Wakamatsu adds, “Team Ninja’s action is known for its high speed and difficulty, [and] we feel it’s part of the players’ expectations as well.” 

Elsewhere in the video, the two developers discuss Rise of the Rōnin’s bond system, which allows NPC characters to join the protagonist’s journey. You can even switch to different characters for special moves. The video ends with another look at the counter system in the game’s combat. 

For more about the game, check out this Rise of the Rōnin gameplay trailer from last month, and then check out Game Informer’s preview thoughts on Rise of the Rōnin after talking with the team in December. 


Are you excited for Rise of the Rōnin? Let us know in the comments below!

Nintendo Direct Recap, Princess Peach: Showtime! Preview | All Things Nintendo

This week on All Things Nintendo, Brian is first joined by Kyle Hilliard to recap all the news out of the Nintendo Direct Partner Showcase, as well as some news items that happened outside of that livestream. Then, Kyle leaves, and Alex Van Aken joins the episode to deliver his hands-on impressions of Princess Peach: Showtime! and an eShop Gem of the Week.

[embedded content]

If you’d like to follow Brian on social media, you can do so on his Instagram/Threads @BrianPShea or Twitter @BrianPShea. You can follow Kyle on Twitter: @KyleMHilliard and BlueSky: @KyleHilliard. You can follow Alex Van Aken on Twitter @ItsVanAken.

The All Things Nintendo podcast is a weekly show where we celebrate, discuss, and break down all the latest games, news, and announcements from the industry’s most recognizable name. Each week, Brian is joined by different guests to talk about what’s happening in the world of Nintendo. Along the way, they’ll share personal stories, uncover hidden gems in the eShop, and even look back on the classics we all grew up with. A new episode hits every Friday!

Be sure to subscribe to All Things Nintendo on your favorite podcast platform. The show is available on Apple Podcasts, Spotify, Google Podcasts, and YouTube.


00:00:00 – Introduction
00:01:45 – New Reports of Switch 2 Release Window
00:06:28 – Pokémon Presents Announced
00:11:47 – New Pokémon Concierge Episodes Coming
00:12:34 – Time Magazine Pokémon Covers
00:14:05 – Spirit Airlines Super Nintendo World Plane
00:16:18 – Mother 3 Coming to Japan Nintendo Switch Online
00:21:13 – Nintendo Direct Partner Showcase Recap
00:54:37 – Listener Email About Epic Universe Theme Park
00:59:59 – Princess Peach: Showtime Preview
01:24:09 – eShop Gem of the Week: Balatro


If you’d like to get in touch with the All Things Nintendo podcast, you can email AllThingsNintendo@GameInformer.com, message Brian on Instagram (@BrianPShea), or join the official Game Informer Discord server. You can do that by linking your Discord account to your Twitch account and subscribing to the Game Informer Twitch channel. From there, find the All Things Nintendo channel under “Community Spaces.”


For Game Informer’s other podcast, be sure to check out The Game Informer Show with hosts Alex Van Aken, Marcus Stewart, and Kyle Hilliard, which covers the weekly happenings of the video game industry!

Street Fighter 6 Ed Early Impressions

When it launched last June, Street Fighter 6 immediately earned critical acclaim, differentiating itself from its predecessor. With 18 fighters and a wealth of single-player content on offer, Street Fighter 6 gave players plenty to enjoy in its earliest days, but that hasn’t stopped the team from pursuing a post-launch content plan similar to the one that eventually righted the ship for Street Fighter V. Last July, SFV debutant Rashid joined the roster, followed by new challenger AKI in September. During Capcom Cup X, we went hands-on with Ed, the latest Street Fighter 6 DLC character, before he joins the fight this week.

Ed first appeared as a non-playable character in Street Fighter IV before joining the actual roster in the second wave of Street Fighter V’s DLC. Raised by Shadaloo as a potential replacement body for M. Bison, Ed can wield Psycho Power, which he uses to great effect in Street Fighter 6.

As a protege of Balrog, Ed uses a combination of boxing and Psycho Power. He can pull off a mix of close- and far-range attacks, but his bread and butter feels like his mid-range offerings, including a medium kick that triggers a low-to-high punch combo opponents must react to quickly in order to block both punches.

His special moves include a charging uppercut and a Psycho Power projectile that feels similar to Luke’s fireball, but again, his mid-range attacks are most effective, including a charged attack that grabs his opponent and launches them into the air, opening them up for a combo. You can also charge his heavy punch to pull off a Superman Punch that passes through the opponent and opens them up to additional attacks. Meanwhile, his Super Arts are also powerful, including one that throws a bevy of fast punches, one that launches a slow-moving, multi-hit projectile, and a cinematic Critical Art that ties the opponent up with ropes made of Psycho Power as Ed unloads on their body.

Ed is also fast, letting him get in and out of range quickly. He feels like he has a high skill ceiling, but even after playing him for just 45 minutes, I was learning devastating combos, particularly when I was able to corner my opponent.

Ed arrives in Street Fighter 6 on February 27 as part of the Year 1 Character Pass. That pass also includes the aforementioned Rashid and AKI, as well as the upcoming Akuma. For more on Street Fighter 6, be sure to check out our review and our exclusive coverage hub.

How to avoid a “winner’s curse” for social programs

Back in the 1980s, researchers tested a job-training program called JOBSTART in 13 U.S. cities. In 12 locations, the program had a minimal benefit. But in San Jose, California, results were good: After a few years, workers earned about $6,500 more annually than peers not participating in it. So, in the 1990s, U.S. Department of Labor researchers implemented the program in another 12 cities. The results were not replicated, however. The initial San Jose numbers remained an outlier.

This scenario could be a consequence of something scholars call the “winner’s curse.” When programs or policies or ideas get tested, even in rigorous randomized experiments, things that function well one time may perform worse the next time out. (The term “winner’s curse” also refers to high winning bids at an auction, a different, but related, matter.)

This winner’s curse presents a problem for public officials, private-sector firm leaders, and even scientists: In choosing something that has tested well, they may be buying into decline. What goes up will often come down.

“In cases where people have multiple options, they pick the one they think is best, often based on the results of a randomized trial,” says MIT economist Isaiah Andrews. “What you will find is that if you try that program again, it will tend to be disappointing relative to the initial estimate that led people to pick it.”

Andrews is co-author of a newly published study that examines this phenomenon and provides new tools to study it, which could also help people avoid it.  

The paper, “Inference on Winners,” appears in the February issue of the Quarterly Journal of Economics. The authors are Andrews, a professor in the MIT Department of Economics and an expert in econometrics, the statistical methods of the field; Toru Kitagawa, a professor of economics at Brown University; and Adam McCloskey, an associate professor of economics at the University of Colorado.

Distinguishing differences

The kind of winner’s curse addressed in this study dates back a few decades as a social science concept, and also comes up in the natural sciences: As the scholars note in the paper, the winner’s curse has been observed in genome-wide association studies, which attempt to link genes to traits.

When seemingly notable findings fail to hold up, there may be varying reasons for it. Sometimes experiments or programs are not all run the same way when people attempt to replicate them. At other times, random variation by itself can create this kind of situation.

“Imagine a world where all these programs are exactly equally effective,” Andrews says. “Well, by chance, one of them is going to look better, and you will tend to pick that one. What that means is you overestimated how effective it is, relative to the other options.” Analyzing the data well can help distinguish whether the outlier result was due to true differences in effectiveness or to random fluctuation.
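To make this thought experiment concrete, here is a minimal simulation sketch in Python: 13 equally effective programs (echoing JOBSTART’s 13 sites) are each evaluated with noisy estimates, the best-looking one is selected, and that winner is then re-tested. The effect size and noise level are illustrative assumptions, not values from the study.

```python
# Minimal winner's-curse simulation: all programs share the same true effect,
# but the one with the luckiest estimate is selected, so its estimate is
# biased upward relative to a fresh replication. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_programs, n_repetitions = 13, 10_000
true_effect, noise_sd = 1.0, 1.0   # every program is equally effective

winner_estimates, replications = [], []
for _ in range(n_repetitions):
    first_run = true_effect + noise_sd * rng.standard_normal(n_programs)
    winner = int(np.argmax(first_run))           # pick the best-looking program
    winner_estimates.append(first_run[winner])
    replications.append(true_effect + noise_sd * rng.standard_normal())  # re-test it

print(f"true effect:              {true_effect:.2f}")
print(f"mean estimate of winner:  {np.mean(winner_estimates):.2f}")  # biased upward
print(f"mean replication result:  {np.mean(replications):.2f}")      # near the truth
```

Averaged over many repetitions, the selected program’s initial estimate substantially overstates its true effect, while the replication falls back toward the truth, the same disappointment Andrews describes.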

To distinguish between these two possibilities, Andrews, Kitagawa, and McCloskey have developed new methods for analyzing results. In particular, they have proposed new estimators — a means of projecting results — which are “median unbiased.” That is, they are equally likely to over- and underestimate effectiveness, even in settings with a winner’s curse. The methods also produce confidence intervals that help quantify the uncertainty of these estimates. Additionally, the scholars propose “hybrid” inference approaches, which combine multiple methods of weighing research data, and, as they show, often yield more precise results than alternative methods.

With these new methods, Andrews, Kitagawa, and McCloskey establish firmer boundaries on the use of data from experiments — including confidence intervals, median unbiased estimates, and more. And to test their method’s viability, the scholars applied it to multiple instances of social science research, beginning with the JOBSTART experiment.

Intriguingly, of the different ways experimental results can become outliers, the scholars found that the San Jose result from JOBSTART was probably not just the result of random chance. The results are sufficiently different that there may have been differences in the way the program was administered, or in its setting, compared to the other programs.

The Seattle test

To further test the hybrid inference method, Andrews, Kitagawa, and McCloskey then applied it to another research issue: programs providing housing vouchers to help people move into neighborhoods where residents have greater economic mobility.

Nationwide economics studies have shown that some areas generate greater economic mobility than others, all things being equal. Spurred by these findings, other researchers collaborated with officials in King County, Washington, to develop a program to help voucher recipients move to higher-opportunity areas. However, predictions for the performance of such programs might be susceptible to a winner’s curse, since the level of opportunity in each neighborhood is imperfectly estimated.

Andrews, Kitagawa, and McCloskey thus applied the hybrid inference method to a test of this neighborhood-level data, in 50 “commuting zones” (essentially, metro areas) across the U.S. The hybrid method again helped them understand how certain the previous estimates were.

Simple estimates in this setting suggested that for children growing up in households at the 25th percentile of annual income in the U.S., housing relocation programs would create a 12.25 percentage-point gain in adult income. The hybrid inference method suggests there would instead be a 10.27 percentage-point gain — lower, but still a substantial impact.

Indeed, as the authors write in the paper, “even this smaller estimate is economically large,” and “we conclude that targeting tracts based on estimated opportunity succeeds in selecting higher-opportunity tracts on average.” At the same time, the scholars saw that their method does make a difference.

Overall, Andrews says, “the ways we measure uncertainty can actually become themselves unreliable.” That problem is compounded, he notes, “when the data tells us very little, but we’re wrongly overconfident and think the data is telling us a lot. … Ideally you would like something that is both reliable and telling us as much as possible.”

Support for the research was provided, in part, by the U.S. National Science Foundation, the Economic and Social Research Council of the U.K., and the European Research Council.

Doctors have more difficulty diagnosing disease when looking at images of darker skin

When diagnosing skin diseases based solely on images of a patient’s skin, doctors do not perform as well when the patient has darker skin, according to a new study from MIT researchers.

The study, which included more than 1,000 dermatologists and general practitioners, found that dermatologists accurately characterized about 38 percent of the images they saw, but only 34 percent of those that showed darker skin. General practitioners, who were less accurate overall, showed a similar decrease in accuracy with darker skin.

The research team also found that assistance from an artificial intelligence algorithm could improve doctors’ accuracy, although those improvements were greater when diagnosing patients with lighter skin.

While this is the first study to demonstrate physician diagnostic disparities across skin tone, other studies have found that the images used in dermatology textbooks and training materials predominantly feature lighter skin tones. That may be one factor contributing to the discrepancy, the MIT team says, along with the possibility that some doctors may have less experience in treating patients with darker skin.

“Probably no doctor is intending to do worse on any type of person, but it might be the fact that you don’t have all the knowledge and the experience, and therefore on certain groups of people, you might do worse,” says Matt Groh PhD ’23, an assistant professor at the Northwestern University Kellogg School of Management. “This is one of those situations where you need empirical evidence to help people figure out how you might want to change policies around dermatology education.”

Groh is the lead author of the study, which appears today in Nature Medicine. Rosalind Picard, an MIT professor of media arts and sciences, is the senior author of the paper.

Diagnostic discrepancies

Several years ago, an MIT study led by Joy Buolamwini PhD ’22 found that facial-analysis programs had much higher error rates when predicting the gender of darker-skinned people. That finding inspired Groh, who studies human-AI collaboration, to look into whether AI models, and possibly doctors themselves, might have difficulty diagnosing skin diseases on darker shades of skin — and whether those diagnostic abilities could be improved.

“This seemed like a great opportunity to identify whether there’s a social problem going on and how we might want to fix that, and also identify how to best build AI assistance into medical decision-making,” Groh says. “I’m very interested in how we can apply machine learning to real-world problems, specifically around how to help experts be better at their jobs. Medicine is a space where people are making really important decisions, and if we could improve their decision-making, we could improve patient outcomes.”

To assess doctors’ diagnostic accuracy, the researchers compiled an array of 364 images from dermatology textbooks and other sources, representing 46 skin diseases across many shades of skin.

Most of these images depicted one of eight inflammatory skin diseases, including atopic dermatitis, Lyme disease, and secondary syphilis, as well as a rare form of cancer called cutaneous T-cell lymphoma (CTCL), which can appear similar to an inflammatory skin condition. Many of these diseases, including Lyme disease, can present differently on dark and light skin.

The research team recruited subjects for the study through Sermo, a social networking site for doctors. The total study group included 389 board-certified dermatologists, 116 dermatology residents, 459 general practitioners, and 154 other types of doctors.

Each of the study participants was shown 10 of the images and asked for their top three predictions for what disease each image might represent. They were also asked if they would refer the patient for a biopsy. In addition, the general practitioners were asked if they would refer the patient to a dermatologist.

“This is not as comprehensive as in-person triage, where the doctor can examine the skin from different angles and control the lighting,” Picard says. “However, skin images are more scalable for online triage, and they are easy to input into a machine-learning algorithm, which can estimate likely diagnoses speedily.”

The researchers found that, not surprisingly, specialists in dermatology had higher accuracy rates: They classified 38 percent of the images correctly, compared to 19 percent for general practitioners.

Both of these groups lost about four percentage points in accuracy when trying to diagnose skin conditions based on images of darker skin — a statistically significant drop. Dermatologists were also less likely to refer images of CTCL on darker skin for biopsy, but more likely to refer darker-skin images of noncancerous skin conditions for biopsy.

“This study demonstrates clearly that there is a disparity in diagnosis of skin conditions in dark skin. This disparity is not surprising; however, I have not seen it demonstrated in the literature in such a robust way. Further research should be performed to try and determine more precisely what the causative and mitigating factors of this disparity might be,” says Jenna Lester, an associate professor of dermatology and director of the Skin of Color Program at the University of California at San Francisco, who was not involved in the study.

A boost from AI

After evaluating how doctors performed on their own, the researchers also gave them additional images to analyze with assistance from an AI algorithm the researchers had developed. The researchers trained this algorithm on about 30,000 images, asking it to classify the images as one of the eight diseases that most of the images represented, plus a ninth category of “other.”
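For readers curious what such a training setup can look like in code, the sketch below fine-tunes a pretrained image backbone to output the eight disease classes plus “other.” It is a hypothetical illustration: the study’s actual architecture, data pipeline, and hyperparameters are not described here, so the model choice, directory layout, and training settings are assumptions.

```python
# Hypothetical sketch of a nine-class skin-condition classifier built by
# fine-tuning a pretrained torchvision backbone. Not the study's model.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 9  # eight diseases plus a ninth "other" category

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes images are organized as train_dir/<class_name>/<image>.jpg
train_data = datasets.ImageFolder("train_dir", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Replace the final layer of a pretrained ResNet with a nine-way head
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Transfer learning from a pretrained backbone is a standard starting point when labeled medical images number only in the tens of thousands.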

This algorithm had an accuracy rate of about 47 percent. The researchers also created another version of the algorithm with an artificially inflated success rate of 84 percent, allowing them to evaluate whether the accuracy of the model would influence doctors’ likelihood to take its recommendations.
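One simple way to obtain an assistant with a fixed accuracy, sketched below, is to return the ground-truth label with the target probability and a random wrong label otherwise. This is an illustrative assumption, not necessarily the procedure the researchers used.

```python
# Simulate diagnostic advice that is correct with a fixed probability
# (illustrative; not necessarily how the study built its 84 percent model).
import random

CLASSES = list(range(9))  # eight diseases plus a ninth "other" category
rng = random.Random(0)

def simulated_advice(true_label: int, target_acc: float = 0.84) -> int:
    """Return the true label with probability target_acc, else a wrong class."""
    if rng.random() < target_acc:
        return true_label
    return rng.choice([c for c in CLASSES if c != true_label])

# Sanity check: measured accuracy should land near 84 percent.
truth = [rng.randrange(len(CLASSES)) for _ in range(10_000)]
hits = sum(simulated_advice(t) == t for t in truth)
print(f"simulated assistant accuracy: {hits / len(truth):.1%}")
```

Pinning the advice to a known accuracy makes it possible to ask how the quality of the recommendations, rather than anything about the images, changes doctors’ willingness to follow them.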

“This allows us to evaluate AI assistance with models that are currently the best we can do, and with AI assistance that could be more accurate, maybe five years from now, with better data and models,” Groh says.

Both of these classifiers were equally accurate on light and dark skin. The researchers found that using either of these AI algorithms improved accuracy for both dermatologists (up to 60 percent) and general practitioners (up to 47 percent).

They also found that doctors were more likely to take suggestions from the higher-accuracy algorithm after it provided a few correct answers, but they rarely incorporated AI suggestions that were incorrect. This suggests that the doctors are highly skilled at ruling out diseases and won’t take AI suggestions for a disease they have already ruled out, Groh says.

“They’re pretty good at not taking AI advice when the AI is wrong and the physicians are right. That’s something that is useful to know,” he says.

While dermatologists using AI assistance showed similar increases in accuracy when looking at images of light or dark skin, general practitioners showed greater improvement on images of lighter skin than darker skin.

“This study allows us to see not only how AI assistance influences, but how it influences across levels of expertise,” Groh says. “What might be going on there is that the PCPs don’t have as much experience, so they don’t know if they should rule a disease out or not because they aren’t as deep into the details of how different skin diseases might look on different shades of skin.”

The researchers hope that their findings will help stimulate medical schools and textbooks to incorporate more training on patients with darker skin. The findings could also help to guide the deployment of AI assistance programs for dermatology, which many companies are now developing.

The research was funded by the MIT Media Lab Consortium and the Harold Horowitz Student Research Fund.