Researchers use large language models to help robots navigate

Someday, you may want your home robot to carry a load of dirty clothes downstairs and deposit them in the washing machine in the far-left corner of the basement. The robot will need to combine your instructions with its visual observations to determine the steps it should take to complete this task.

For an AI agent, this is easier said than done. Current approaches often use multiple hand-crafted machine-learning models to tackle different parts of the task, each of which requires a great deal of human effort and expertise to build. These methods, which use visual representations to make navigation decisions directly, also demand massive amounts of visual data for training, which are often hard to come by.

To overcome these challenges, researchers from MIT and the MIT-IBM Watson AI Lab devised a navigation method that converts visual representations into pieces of language, which are then fed into one large language model that achieves all parts of the multistep navigation task.

Rather than encoding visual features from images of a robot’s surroundings as visual representations, which is computationally intensive, their method creates text captions that describe the robot’s point-of-view. A large language model uses the captions to predict the actions a robot should take to fulfill a user’s language-based instructions.

Because their method utilizes purely language-based representations, they can use a large language model to efficiently generate a huge amount of synthetic training data.

While this approach does not outperform techniques that use visual features, it performs well in situations that lack enough visual data for training. The researchers found that combining their language-based inputs with visual signals leads to better navigation performance.

“By purely using language as the perceptual representation, ours is a more straightforward approach. Since all the inputs can be encoded as language, we can generate a human-understandable trajectory,” says Bowen Pan, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this approach.

Pan’s co-authors include his advisor, Aude Oliva, director of strategic industry engagement at the MIT Schwarzman College of Computing, MIT director of the MIT-IBM Watson AI Lab, and a senior research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL); Philip Isola, an associate professor of EECS and a member of CSAIL; senior author Yoon Kim, an assistant professor of EECS and a member of CSAIL; and others at the MIT-IBM Watson AI Lab and Dartmouth College. The research will be presented at the Conference of the North American Chapter of the Association for Computational Linguistics.

Solving a vision problem with language

Since large language models are the most powerful machine-learning models available, the researchers sought to incorporate them into the complex task known as vision-and-language navigation, Pan says.

But such models take text-based inputs and can’t process visual data from a robot’s camera. So, the team needed to find a way to use language instead.

Their technique utilizes a simple captioning model to obtain text descriptions of a robot’s visual observations. These captions are combined with language-based instructions and fed into a large language model, which decides what navigation step the robot should take next.

The large language model outputs a caption of the scene the robot should see after completing that step. This is used to update the trajectory history so the robot can keep track of where it has been.

The model repeats these processes to generate a trajectory that guides the robot to its goal, one step at a time.

To streamline the process, the researchers designed templates so observation information is presented to the model in a standard form — as a series of choices the robot can make based on its surroundings.

For instance, a caption might say “to your 30-degree left is a door with a potted plant beside it, to your back is a small office with a desk and a computer,” etc. The model chooses whether the robot should move toward the door or the office.
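The loop described above — caption the view, present templated choices, let the large language model pick a step, and update the language-only history — can be sketched as follows. The function names, prompt format, and `llm` interface are hypothetical stand-ins, not the authors' actual implementation.

```python
# A minimal sketch of the caption-and-prompt navigation loop.
# `get_observation_captions` and `llm` are hypothetical callables,
# not the authors' real captioner or language model.

def navigate(instruction, get_observation_captions, llm, max_steps=20):
    """Pick one templated choice per step until the LLM replies 'stop'."""
    history = []  # language-only trajectory history
    for _ in range(max_steps):
        # e.g. ["to your 30-degree left is a door with a potted plant",
        #       "to your back is a small office with a desk"]
        choices = get_observation_captions()
        prompt = (
            f"Instruction: {instruction}\n"
            f"History: {' -> '.join(history) or 'none'}\n"
            "Choices:\n"
            + "\n".join(f"{i}. {c}" for i, c in enumerate(choices))
            + "\nReply with the number of the next move, or 'stop'."
        )
        reply = llm(prompt).strip()
        if reply == "stop":
            break
        history.append(choices[int(reply)])  # record the chosen step
    return history
```

Because every input and output is plain text, the same loop also doubles as a generator of synthetic training trajectories when the observations themselves come from a language model.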

“One of the biggest challenges was figuring out how to encode this kind of information into language in a proper way to make the agent understand what the task is and how they should respond,” Pan says.

Advantages of language

When they tested this approach, it could not outperform vision-based techniques, but they found that it offered several advantages.

First, because text requires fewer computational resources to synthesize than complex image data, their method can be used to rapidly generate synthetic training data. In one test, they generated 10,000 synthetic trajectories based on 10 real-world, visual trajectories.

The technique can also help bridge the gap that can prevent an agent trained in a simulated environment from performing well in the real world. This gap often occurs because computer-generated images can appear quite different from real-world scenes due to elements like lighting or color. But language describing a synthetic image and language describing a real one would be much harder to tell apart, Pan says.

Also, the representations their model uses are easier for a human to understand because they are written in natural language.

“If the agent fails to reach its goal, we can more easily determine where it failed and why it failed. Maybe the history information is not clear enough or the observation ignores some important details,” Pan says.

In addition, their method could be applied more easily to varied tasks and environments because it uses only one type of input. As long as data can be encoded as language, they can use the same model without making any modifications.

But one disadvantage is that their method naturally loses some information that would be captured by vision-based models, such as depth information.

However, the researchers were surprised to see that combining language-based representations with vision-based methods improves an agent’s ability to navigate.

“Maybe this means that language can capture some higher-level information that cannot be captured with pure vision features,” he says.

This is one area the researchers want to continue exploring. They also want to develop a navigation-oriented captioner that could boost the method’s performance. In addition, they want to probe the ability of large language models to exhibit spatial awareness and see how this could aid language-based navigation.

This research is funded, in part, by the MIT-IBM Watson AI Lab.

Metaphor: ReFantazio Preview – A New World Of Fantasy – Game Informer

Whenever a legendary trio of game creators known for a single franchise embarks on a new IP together, it rightfully captures a lot of attention. That’s exactly what director Katsura Hashino, art director Shigenori Soejima, and composer Shoji Meguro, the trio best known for the beloved Persona series, is doing with Metaphor: ReFantazio. Though it derives inspiration from many of Persona’s most prominent elements, Metaphor is a wholly new experience disconnected from the Persona franchise.

Taking place in the fantasy kingdom of Euchronia, Metaphor tells the tale of a world riddled with prejudice. The main protagonist is of the Elda tribe, and during the earliest stages of my demo, his fairy companion, Gallica, expresses shock at how out in the open the prejudice against him was. The two then talk of a fabled world where true equality exists and fantasize about such a society. One of the things people love about Persona is its willingness to tackle challenging topics other games might veer away from, and Metaphor seems to carry that same quality into its narrative.

“What we’ve tried to do in the past with Persona and in this game… both of them are a little bit different from each other,” Hashino says. “In the past for games like Persona, it’s not like our goal was to challenge difficult social problems. What we were really trying to achieve was, ‘We have a story about a young, kind of naïve person growing up and entering into the world of adulthood, and that’s not an easy thing to do. There are a lot of challenges you will face. There’s a lot of ways that you need to grow in order to do that. So by facing difficult problems in those characters’ lives, they can grow into being who they are and figure out who they want to be.’ That was our goal with Persona. So, it’s not like we have this problem we want to cover; we have this character we want to develop, and since it’s set in Japan, it was a method for us to explore these characters’ personalities, by having them grow up in a Japanese society and facing problems in that way.”

“However, Metaphor is not really so much about that sense of growth. It’s more about how we can explore the concept of human imagination and human feelings and thoughts, and how we can learn from these experiences to grow and be better people,” Hashino continues. “That’s what we are more looking at in this game. For Metaphor, what we’re trying to think of was going as broad as we possibly could. We are trying to achieve something where we’re talking about things that affect people of all times, all ages, everywhere in the world. That’s why we focus on this concept of fear and anxiety, because I don’t think there’s anybody at any time who hasn’t lived with some fear or anxiety.”

Combat feels like the next evolution of the turn-based system seen in the Persona games. Most encounters begin with a strike on the enemies in the field before entering the game’s primary turn-based combat mechanics. If you stealthily strike the enemies, you can land multiple hits on them, knocking their health down considerably; you can even kill weaker monsters without entering the turn-based battle.

Metaphor also incorporates a line-based formation system where you can choose which of your party is in the frontline and who hangs back. Those in the back take less damage, but their melee attacks are weaker, while those in the front often receive the brunt of the attack but can also land their own attacks at full strength. During my time with this system, I found the best results came from putting magic wielders and healers in the back, and the more physical warriors up front.

In battle, characters can also summon Persona-like creatures called Archetypes. These powerful entities leverage magic based on traditional fantasy tropes like Knight, Warrior, and Seeker. While they’re strong on their own, Archetypes can also perform team-up attacks called Synthesis. These moves allow one character to lend their strength to another to perform more powerful and affecting attacks. During a boss battle I played, I used Synthesis attacks to great effect, with some applying different elemental effects and others spreading out the damage to all enemies instead of just one.

Speaking of elements carried over from Persona, Metaphor uses a UI similar to Persona 5 Royal’s, giving it an unmistakable style. Flashy menus and gorgeous art accentuate the trademark Soejima style, while Meguro’s music replaces the hip-hop and jazzy rock-inspired soundtracks of the Persona franchise with tracks that feel more inspired by war chants.

“When I first approached the design for this game, I thought, ‘I personally love fantasy. I will do my best to throw away everything I’ve done to date and just design a fantasy character and challenge myself with a new style,’” Soejima says. “What kind of ended up happening was that it felt really fun – I had a lot of fun doing it – but I was coming up with something that was kind of an imitation of styles that I had seen. I was thinking, ‘Well, what can I bring to the fantasy genre? How can I add to it and use what I know, use my own style, and bring my own riff on it?’ So, that was part of what helped inform my design for it. A lot of the time with Persona and the other games, we are making games set in the real world, but it’s not to try to make something that’s cool in a game; with my art, I was trying to make something that’s cool in the real world that people like and enjoy, and then bring it in to the medium of the game. For this one, as well, I didn’t want to just go, ‘Okay, what do people like about the fantasy genre? Let’s just make more of that.’ Instead, I tried to bring out more of what people think are cool from other areas and then put that into the game.”

Though I’ve always appreciated Persona’s real-world setting, Metaphor’s fantasy kingdom, narrative threads, and appropriately grotesque beasts pulled me in and made me excited to experience the next evolution of this team’s work. I’m excited to get my hands on the final version when it launches this October.

Making climate models relevant for local decision-makers

Climate models are a key technology in predicting the impacts of climate change. By running simulations of the Earth’s climate, scientists and policymakers can estimate conditions like sea level rise, flooding, and rising temperatures, and make decisions about how to appropriately respond. But current climate models struggle to provide this information quickly or affordably enough to be useful on smaller scales, such as the size of a city. 

Now, authors of a new open-access paper published in the Journal of Advances in Modeling Earth Systems have found a method that leverages machine learning to retain the benefits of current climate models while reducing the computational costs needed to run them. 

“It turns the traditional wisdom on its head,” says Sai Ravela, a principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) who wrote the paper with EAPS postdoc Anamitra Saha. 

Traditional wisdom

In climate modeling, downscaling is the process of using a global climate model with coarse resolution to generate finer details over smaller regions. Imagine a digital picture: A global model is a large picture of the world with a low number of pixels. To downscale, you zoom in on just the section of the photo you want to look at — for example, Boston. But because the original picture was low resolution, the new version is blurry; it doesn’t give enough detail to be particularly useful. 

“If you go from coarse resolution to fine resolution, you have to add information somehow,” explains Saha. Downscaling attempts to add that information back in by filling in the missing pixels. “That addition of information can happen two ways: Either it can come from theory, or it can come from data.” 

Conventional downscaling often involves using models built on physics (such as the process of air rising, cooling, and condensing, or the landscape of the area) and supplementing them with statistical data taken from historical observations. But this method is computationally taxing: it takes a lot of time and computing power to run, which also makes it expensive. 

A little bit of both 

In their new paper, Saha and Ravela have figured out a way to add the missing information another way. They’ve employed a technique in machine learning called adversarial learning, which uses two machines: One generates data to fill in the missing detail of the photo, while the other judges the sample by comparing it to actual data. If the judge thinks the image is fake, the first machine has to try again until it convinces the judge otherwise. The end goal of the process is to create super-resolution data. 
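The adversarial back-and-forth can be caricatured in one dimension: a "generator" with a single parameter keeps proposing samples, a judge scores how realistic they look against real data, and the generator is nudged uphill until the judge is satisfied. Everything here — the fixed judge, the finite-difference update — is a toy illustration of the idea, not the authors' super-resolution model.

```python
# A 1-D caricature of adversarial training: the generator's lone
# parameter `mu` is pushed to produce samples the judge scores as real.
# This is an illustrative sketch, not the paper's actual method.
import random

def adversarial_fit(real_data, steps=2000, lr=0.05, seed=0):
    rng = random.Random(seed)
    mu = 0.0  # the generator's single parameter
    real_mean = sum(real_data) / len(real_data)

    def judge(x):
        # scores how "real" a sample looks (closer to the real data = higher)
        return -abs(x - real_mean)

    for _ in range(steps):
        fake = mu + rng.gauss(0, 0.1)  # generator proposes a sample
        # generator ascends the judge's score via a finite-difference gradient
        grad = (judge(fake + 1e-3) - judge(fake - 1e-3)) / 2e-3
        mu += lr * grad
    return mu
```

In the real method, both sides are deep networks and the judge is trained too, but the dynamic is the same: the generator only stops adjusting when its output is statistically indistinguishable from real fine-scale data.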

Using machine learning techniques like adversarial learning is not a new idea in climate modeling; where it currently struggles is in handling large amounts of basic physics, like conservation laws. The researchers discovered that simplifying the physics going in and supplementing it with statistics from historical data was enough to generate the results they needed. 

“If you augment machine learning with some information from the statistics and simplified physics both, then suddenly, it’s magical,” says Ravela. He and Saha started with estimating extreme rainfall amounts by removing more complex physics equations and focusing on water vapor and land topography. They then generated general rainfall patterns for mountainous Denver and flat Chicago alike, applying historical accounts to correct the output. “It’s giving us extremes, like the physics does, at a much lower cost. And it’s giving us similar speeds to statistics, but at much higher resolution.” 

Another unexpected benefit of the results was how little training data was needed. “The fact that only a little bit of physics and a little bit of statistics was enough to improve the performance of the ML [machine learning] model … was actually not obvious from the beginning,” says Saha. The model takes only a few hours to train and can produce results in minutes, an improvement over the months other models take to run. 

Quantifying risk quickly

Being able to run the models quickly and often is a key requirement for stakeholders such as insurance companies and local policymakers. Ravela gives the example of Bangladesh: By seeing how extreme weather events will impact the country, decisions about which crops to grow or where populations should migrate can be made as soon as possible, taking a very broad range of conditions and uncertainties into account.

“We can’t wait months or years to be able to quantify this risk,” he says. “You need to look out way into the future and at a large number of uncertainties to be able to say what might be a good decision.”

While the current model only looks at extreme precipitation, training it to examine other critical events, such as tropical storms, winds, and temperature, is the next step of the project. With a more robust model, Ravela is hoping to apply it to other places like Boston and Puerto Rico as part of a Climate Grand Challenges project.

“We’re very excited both by the methodology that we put together, as well as the potential applications that it could lead to,” he says. 

New algorithm discovers language just by watching videos

Mark Hamilton, an MIT PhD student in electrical engineering and computer science and affiliate of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), wants to use machines to understand how animals communicate. To do that, he set out first to create a system that can learn human language “from scratch.”

“Funny enough, the key moment of inspiration came from the movie ‘March of the Penguins.’ There’s a scene where a penguin falls while crossing the ice, and lets out a little belabored groan while getting up. When you watch it, it’s almost obvious that this groan is standing in for a four-letter word. This was the moment where we thought, maybe we need to use audio and video to learn language,” says Hamilton. “Is there a way we could let an algorithm watch TV all day and from this figure out what we’re talking about?”

“Our model, ‘DenseAV,’ aims to learn language by predicting what it’s seeing from what it’s hearing, and vice-versa. For example, if you hear the sound of someone saying ‘bake the cake at 350’ chances are you might be seeing a cake or an oven. To succeed at this audio-video matching game across millions of videos, the model has to learn what people are talking about,” says Hamilton.

Once they trained DenseAV on this matching game, Hamilton and his colleagues looked at which pixels the model looked for when it heard a sound. For example, when someone says “dog,” the algorithm immediately starts looking for dogs in the video stream. By seeing which pixels are selected by the algorithm, one can discover what the algorithm thinks a word means.

Interestingly, a similar search process happens when DenseAV listens to a dog barking: It searches for a dog in the video stream. “This piqued our interest. We wanted to see if the algorithm knew the difference between the word ‘dog’ and a dog’s bark,” says Hamilton. The team explored this by giving DenseAV a “two-sided brain.” They found that one side of DenseAV’s brain naturally focused on language, like the word “dog,” and the other side focused on sounds like barking. This showed that DenseAV not only learned the meaning of words and the locations of sounds, but also learned to distinguish between these types of cross-modal connections, all without human intervention or any knowledge of written language.

One branch of applications is learning from the massive amount of video published to the internet each day: “We want systems that can learn from massive amounts of video content, such as instructional videos,” says Hamilton. “Another exciting application is understanding new languages, like dolphin or whale communication, which don’t have a written form of communication. Our hope is that DenseAV can help us understand these languages that have evaded human translation efforts since the beginning. Finally, we hope that this method can be used to discover patterns between other pairs of signals, like the seismic sounds the earth makes and its geology.” 

A formidable challenge lay ahead of the team: learning language without any text input. Their objective was to rediscover the meaning of language from a blank slate, avoiding using pre-trained language models. This approach is inspired by how children learn by observing and listening to their environment to understand language.

To achieve this feat, DenseAV uses two main components to process audio and visual data separately. This separation made it impossible for the algorithm to cheat by letting the visual side look at the audio, and vice versa. It forced the algorithm to recognize objects, and it created detailed and meaningful features for both audio and visual signals. DenseAV learns by comparing pairs of audio and visual signals to find which signals match and which do not. This method, called contrastive learning, doesn’t require labeled examples, and allows DenseAV to figure out the important predictive patterns of language itself.
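The contrastive objective can be sketched as follows: within a batch, each audio clip should score highest against its own paired visual features, and lower against everyone else's. The toy dot-product features and the InfoNCE-style form are illustrative assumptions; DenseAV's real encoders are deep networks trained on millions of videos.

```python
# A minimal sketch of a contrastive (InfoNCE-style) loss over matched
# audio/visual feature pairs. Features are plain lists of floats here;
# real encoders would produce them from raw audio and video.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def contrastive_loss(audio_feats, visual_feats, temperature=0.1):
    """Each audio clip i should match visual clip i, not the others."""
    n = len(audio_feats)
    loss = 0.0
    for i in range(n):
        scores = [dot(audio_feats[i], visual_feats[j]) / temperature
                  for j in range(n)]
        log_denom = math.log(sum(math.exp(s) for s in scores))
        loss += log_denom - scores[i]  # -log softmax of the true pair
    return loss / n
```

Minimizing this loss pulls matched audio-visual pairs together and pushes mismatched ones apart, without ever needing a labeled example.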

One major difference between DenseAV and previous algorithms is that prior works focused on a single notion of similarity between sound and images. An entire audio clip like someone saying “the dog sat on the grass” was matched to an entire image of a dog. This didn’t allow previous methods to discover fine-grained details, like the connection between the word “grass” and the grass underneath the dog. The team’s algorithm searches for and aggregates all the possible matches between an audio clip and an image’s pixels. This not only improved performance, but allowed the team to precisely localize sounds in a way that previous algorithms could not. “Conventional methods use a single class token, but our approach compares every pixel and every second of sound. This fine-grained method lets DenseAV make more detailed connections for better localization,” says Hamilton.
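The fine-grained matching idea can be sketched like this: instead of one clip-level score, compare every audio time step with every image patch and aggregate the best matches. The max-over-patches, mean-over-time rule below is one plausible aggregation for illustration, not necessarily the paper's exact choice.

```python
# A sketch of dense audio-visual matching: each moment of audio finds
# its best-matching image patch, and the per-step scores are averaged.
# The aggregation rule here is an illustrative assumption.

def dense_similarity(audio_steps, image_patches):
    """audio_steps: feature vectors, one per audio time step;
    image_patches: feature vectors, one per image patch."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    per_step = []
    for a in audio_steps:
        # e.g. the moment "grass" is spoken should match the grass patch
        per_step.append(max(dot(a, p) for p in image_patches))
    return sum(per_step) / len(per_step)
```

Because the score is built from per-step, per-patch comparisons, reading off which patch each audio step matched gives the localization ability described in the quote above.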

The researchers trained DenseAV on AudioSet, which includes 2 million YouTube videos. They also created new datasets to test how well the model can link sounds and images. In these tests, DenseAV outperformed other top models in tasks like identifying objects from their names and sounds, proving its effectiveness. “Previous datasets only supported coarse evaluations, so we created a dataset using semantic segmentation datasets. This helps with pixel-perfect annotations for precise evaluation of our model’s performance. We can prompt the algorithm with specific sounds or images and get those detailed localizations,” says Hamilton.

Due to the massive amount of data involved, the project took about a year to complete. The team says that transitioning to a large transformer architecture presented challenges, as these models can easily overlook fine-grained details. Encouraging the model to focus on these details was a significant hurdle.

Looking ahead, the team aims to create systems that can learn from massive amounts of video-only or audio-only data. This is crucial for new domains where one modality is plentiful but the other is not. They also aim to scale this up using larger backbones and possibly integrate knowledge from language models to improve performance.

“Recognizing and segmenting visual objects in images, as well as environmental sounds and spoken words in audio recordings, are each difficult problems in their own right. Historically researchers have relied upon expensive, human-provided annotations in order to train machine learning models to accomplish these tasks,” says David Harwath, assistant professor in computer science at the University of Texas at Austin who was not involved in the work. “DenseAV makes significant progress towards developing methods that can learn to solve these tasks simultaneously by simply observing the world through sight and sound — based on the insight that the things we see and interact with often make sound, and we also use spoken language to talk about them. This model also makes no assumptions about the specific language that is being spoken, and could therefore in principle learn from data in any language. It would be exciting to see what DenseAV could learn by scaling it up to thousands or millions of hours of video data across a multitude of languages.”

Additional authors on a paper describing the work are Andrew Zisserman, professor of computer vision engineering at the University of Oxford; John R. Hershey, Google AI Perception researcher; and William T. Freeman, MIT electrical engineering and computer science professor and CSAIL principal investigator. Their research was supported, in part, by the U.S. National Science Foundation, a Royal Society Research Professorship, and an EPSRC Programme Grant Visual AI. This work will be presented at the IEEE/CVF Computer Vision and Pattern Recognition Conference this month.

The Biggest Announcements Of Summer Game Fest And Neighboring Shows

E3 is no more, but in its place a new, different kind of summer video game show has emerged. It’s Summer Game Fest… and friends? Geoff Keighley’s show has become an anchor of sorts for the whole thing, but there is plenty more happening just generally nearby. Whatever we want to call it, it has created a lot of content for our website (and the coming issues of the magazine), and below you will find a nearly comprehensive list of everything we wrote this weekend. Enjoy!

ROUND-UPS

Every Game Showcased At Day Of The Devs SGF Edition 2024
From mutant giraffes to house-based romance to hand-drawn voyeurism, the future of the indie games space looks promising.

Everything Announced During The 2024 Summer Devolver Direct Presentation
From Heart Machine’s new game to a new roguelite from the designer behind Dead Cells, here’s everything we learned from Devolver’s latest showcase.

Thirteen Of Our Favorite Games From The Wholesome Direct 2024
If cozy games are your thing, these are 13 games you should keep an eye on.

PREVIEWS

Assassin’s Creed Shadows Preview – Katanas And Kunai
While in Los Angeles for Summer Game Fest, we got to see a behind-closed-doors demo for the next Assassin’s Creed.

Marvel Rivals Hands-On Preview – Putting The Hero In Hero Shooter
Marvel Rivals brings to the table much of what initially drew me to Overwatch.

Metal Slug Tactics Preview – A Promising And Challenging Boot Camp
We played a full region of the upcoming spin-off and see a lot of potential in this neat reimagining.

Plotting Explosive Moves In Metal Slug Tactics | New Gameplay Today
Watch us enlist in this promising reimagining of the long-running series.

Wolfhound Is Inspired By Metroid And NES Metal Gear | New Gameplay Today
Join us for a look at some gameplay from the upcoming Metroidvania platformer.

Donkey Kong Country Fans Should Take Note Of Nikoderiko: The Magical World | New Gameplay Today
Crash Bandicoot fans should pay attention, as well.

UBISOFT FORWARD

Assassin’s Creed Shadows Highlights Character Differences
A new gameplay trailer shows off the distinct playstyles of the leads in the Japan-set release.

Star Wars Outlaws Gets Lengthy Gameplay Demonstration
Get a look at many of the game’s systems and locations in this guided tour.

Prince Of Persia: The Sands Of Time Remake Is Coming In 2026
Sands Of Time exists again… but we won’t be playing it for a long time.

Prince Of Persia: The Lost Crown ‘Divine Trials’ DLC Available Now, Story DLC Out This September
The new Divine Trials DLC includes revisited bosses, new puzzle and platforming challenges, new amulets, and more.

Assassin’s Creed Shadows Gameplay Reveal Shows Off The Disparate Talents Of Yasuke And Naoe
See how each of these very different assassins plays in this extended gameplay showcase.

XBOX SHOWCASE

Call Of Duty: Black Ops 6 Gameplay Reveal Highlights Post-Cold War Action And Conspiracy In The 1990s
It’s our first look at the story for Call of Duty: Black Ops 6.

Doom: The Dark Ages Revealed, And It Hits PlayStation, Xbox, And PC Next Year
Doom goes medieval for the threequel in id Software’s modern reboot of the series.

State of Decay 3 Trailer Revealed, Coming To Game Pass Day One
Undead Labs showed off the third installment of the franchise at today’s Xbox event.

Starfield Shattered Space DLC Coming This Year, Additional Content Coming Today
Starfield is getting a big expansion later this year, but there’s new content to play right now, too.

Dragon Age: The Veilguard Trailer Introduces The Cast And Reveals A Fall Release Window
See who you’ll be fighting alongside in the game’s first trailer.

Latest Metal Gear Solid Delta: Snake Eater Trailer Shows All Gameplay
We finally have a proper look at Metal Gear Solid Delta: Snake Eater’s gameplay, and it looks great.

Playground Games’ Fable Gets 2025 Release Window In First Gameplay Trailer
Though the trailer is mostly cinematic, there are a few glimpses of gameplay within it.

Flintlock: The Siege Of Dawn Sets July Launch
A44 Games’ title is a little over a month away from release, according to a new trailer.

Perfect Dark Gets First Impressive Gameplay Trailer
The long-lost reboot shows signs of life.

Life is Strange: Double Exposure Is A Proper Follow-up To The First Game
It’s a murder mystery that plays out across two parallel timelines.

Diablo IV’s Vessel Of Hatred Expansion Gets October Release Date In New Trailer
The battle of Hatred has only just begun.

New Indiana Jones And The Great Circle Footage Shows Extended Cutscene And Teases Classic Boulder Run
The latest footage of MachineGames’ Indiana Jones game showed an extended cutscene, some gameplay clips, and a tease of the first film’s boulder run.

Get Another Look At Avowed’s Fantasy RPG Action In New Trailer
Avowed hits Xbox Series X/S and PC this year.

Stalker 2: Heart Of Chornobyl’s Latest Trailer Shows Plenty Of Gameplay
Stalker 2’s Xbox Showcase trailer was fully focused on gameplay.

Wuchang: Fallen Feathers Is A Fantastical Souls-like Action RPG Set During China’s Ming Dynasty
Battle demons as a pirate warrior with demonic powers.

Gears Of War: E-Day Is A Prequel Set 14 Years Before The First Game Starring Marcus Fenix
Gears of War goes back to the start of its war.

Three New Xbox Series X/S Models Are Coming This Holiday
If you’re looking to buy your first Xbox, Microsoft has expanded your range of options.

Call Of Duty: Black Ops 6 Release Date Set For October
The latest entry in Treyarch’s long-running subseries arrives on consoles and PC this fall.

Fallout 76’s Skyline Valley Launches Next Week, Play As A Ghoul Starting In 2025
Head southward to the Shenandoah region.

Southern Gothic Action Game South Of Midnight Gets First Gameplay Trailer And 2025 Launch Window
The Bayou-flavored romp comes from the makers of We Happy Few.

Live Like A Mouse With The Reveal Of Winter Burrow
Pine Creek Games is crafting a new survival game all about making it through winter as a rodent.

Mixtape Is A Sharp-Looking Coming Of Age Story With Music From Devo, Smashing Pumpkins, And More
The next game from the creators of The Artful Escape is all about getting into innocuous trouble as a teenager.

Clair Obscur: Expedition 33 Is A Slick-Looking Fantasy RPG Coming Next Year
Can you break a yearly cycle of death?

Microsoft Flight Simulator 2024 Will Let You Live Out Your Aviation Career Dreams This November
Become an ambulance pilot, aerial advertiser, VIP charter captain, and more later this year.

FragPunk Is A 5v5 Hero Shooter Built Around Cards And First-Person Action
It’s due out sometime next year.

SUMMER GAME FEST

Battle Suit Aces Is A Card-Based Mecha RPG From The Makers Of Battle Chef Brigade
Play your cards right, and you can build powerful mech suits and personal relationships.

Eriksholm: The Stolen Dream Is A New Stealth Game From Former Mirror’s Edge, Battlefield Developers
It’s due out on PlayStation 5, Xbox Series X/S, and PC sometime next year.

Narrative Action Road Trip Game Dustborn Gets New Trailer And Demo Next Week
Transport a package cross-country disguised as a punk band with a crew of superpowered misfits.

The Star Named EOS Is A Wholesome Photography-Based Puzzle Game Releasing Next Month
It’s coming to PlayStation 5, Xbox Series X/S, Switch, and PC.

The Stanley Parable Creator’s Next Game Is About A Cozy Tea Shop (We Think)
Wanderstop tasks players with running a tea shop – but you can’t trust Davey Wreden.

Possessor(s) Is A Side-Scrolling Action Game From The Devs Behind Hyper Light Drifter And Solar Ash
It’s coming to consoles and PC next year.

Tenjutsu Is A Martial Arts Roguelite From The Designer Of Dead Cells
Developer Deepnight Games is calling it a “rogue-jutsu.”

Cult Of The Lamb’s Unholy Alliance Update Adds Local Campaign Co-Op This August
Player 2 will control a new character known as the Goat.

Amazon MMO New World Coming To Consoles With Major Updates
New World: Aeternum launches in October.

Monster Hunter Wilds Latest Gameplay Trailer Shows Off Thrilling Desert Battle
Watch an action-packed chase in this new gameplay video.

Get A New Look At Phantom Blade Zero In New Gameplay Trailer
There’s still no release date for the game.

Killer Bean Is A Humorous Open-World Action Roguelite Hitting Early Access This Summer
Play as a rough and tough bean assassin in an ever-changing open world.

Skate: ‘Pre-Pre-Alpha’ Gameplay Revealed In New Trailer, Console Playtesting This Fall
Though there’s no release date in sight, players will at least be able to try the game later this year.

Mighty Morphin Power Rangers: Rita’s Rewind Announced
The new retro-themed game looks focused on side-scrolling brawling, but it also has some other surprises in the mix.

Supernatural Action Strategy Game Kunitsu-Gami: Path Of The Goddess Launches In July
A new trailer shows off its colorful and bloody blend of action and strategy.

Valorant For PlayStation And Xbox Coming Later This Year
The popular hero shooter from League of Legends developer Riot Games is making the jump to consoles.

Alan Wake 2 Night Springs Expansion Releases Tomorrow, Physical Version Of The Game Coming Soon
The DLC will put you in the shoes of three familiar characters.

Battle Aces Is A Far-Future ‘Action Real-Time Strategy’ Game That Aims To Make The Genre More Approachable
Check out the reveal trailer for a look at its sci-fi action.

Sonic X Shadow Generations Gets October Release Date In New Gameplay Trailer
It’s been over 13 years since the original game’s release.

The First Descendant Is Coming In July
The highly anticipated free-to-play shooter is releasing this summer.

Cairn Is A New Climbing Adventure-Survival Game From The Developers Behind Haven
And it might be the first rope-lite.

Kingdom Come: Deliverance II Trailer Details The Story And Shows Its Sense Of Humor
Summer Game Fest offered us our first substantial look at the game and its story.

Silent Hill Creator’s Slitterhead Gets First Gameplay Look In New Trailer, Out This November
Slitterhead is the first game from Keiichiro Toyama’s Bokeh Game Studio.

Street Fighter 6’s Year 2 Fighters Include M. Bison And Fatal Fury Guest Fighters
Four new combatants are ready to hit the streets.

Dragon Ball: Sparking Zero Arrives In October
The next big Dragon Ball fighting game arrives this fall.

Metaphor: ReFantazio Archetypes Detailed
The creators behind the recent Persona games took to the stage to reveal new details on the exciting new RPG.

New Batman: Arkham Shadow VR Pre-Rendered Trailer Teases The Story
We still haven’t seen gameplay, but the latest Arkham Shadow trailer teases the story and characters you will meet.

Neva Gameplay Trailer Shows Off Its Beautiful World And Graceful Combat
The creators of Gris have shown new footage of their next project.

Sid Meier’s Civilization 7 Announced, Coming To Consoles And PC Next Year
This reveal follows Geoff Keighley teasing that 2K had plans to reveal “the next iteration” of one of its biggest and most beloved franchises during SGF.

PlayStation Reveals Lego Horizon Adventures, Coming To PS5, Switch, And PC This Holiday
Aloy’s brickified adventure can be played solo or in co-op with a friend.

New Harry Potter Quidditch Video Game Is Coming In September
Harry Potter Quidditch Champions is coming later this year.

NEWS


Polaris Is A Co-Op PvE Shooter Coming To PC This Year With Fully Destructible Environments
You can sign up for a beta playtest right now.

Hotel Galactic Is A Sci-Fi Management Sim With Studio Ghibli-Inspired Visuals Coming To Consoles And PC
Developer-publisher Ancient Forge is launching a Kickstarter for the game next month.

Get Another Look At Into The Dead: Our Darkest Days’ Texan Zombie Action In New Gameplay Teaser
The game is getting a Steam demo this October.

Streets Of Rogue 2 Gets August Early Access Launch Date
The wacky procedurally generated sandbox welcomes players to its chaotic world.

Phantom Line Is A Co-Op Shooter Set In A Post-Nuclear Europe From Former BioShock, Cyberpunk 2077 Devs
Its reveal trailer, which you can watch here, is quite spooky, too.


What was your favorite announcement from the weekend?