Body Duplicating Sci-Fi Adventure The Alters Gets First Gameplay Trailer

The Alters is a fascinating new sci-fi survival game from 11 Bit Studios, makers of Frostpunk and This War of Mine. First revealed last October, the game stars a man named Jan shipwrecked on a desolate planet with no one to rely on but himself. And by “himself,” I mean multiple versions of himself.

Jan is a miner who crash-lands on a hostile planet closely orbiting a star, resulting in deadly hot sunrises. His only means of survival is a high-tech mobile base that requires multiple people with various skills to operate. Thankfully, he harnesses a resource called Rapidium that allows him to create several copies of himself, each a sentient being of its own, with specialties and personalities reflecting the “what if” scenarios of Jan’s life. These copies, or Alters, also have their own ambitions and fears, which will lead to conflicts as you try to work together to survive and, hopefully, escape.

[embedded content]

The Alters is launching sometime this year for PlayStation 5, Xbox Series X/S, and PC. Today’s Xbox Partner Preview also revealed it’s coming to Game Pass for console and PC.

Final Fantasy XIV Launches Onto Xbox Series X/S Later This Month

Last July, Square Enix revealed that Final Fantasy XIV, the critically acclaimed and long-running MMORPG, would be coming to Xbox Series X/S this spring. After a quick but seemingly successful open beta period on the platform, the company has revealed Final Fantasy XIV will fully launch on Xbox Series X/S on March 21. 

This news was revealed during the Xbox Partner Preview, with a reminder that players can partake in the open beta right now. Square Enix also revealed that the starter edition of the game, which includes the base campaign A Realm Reborn and the Heavensward and Stormblood expansions, will be available through Xbox Game Pass Ultimate. However, you must claim it as a perk between March 21 and April 19. 

[embedded content]

The Xbox Series X/S launch of Final Fantasy XIV is well timed, as the game’s next expansion, Dawntrail, is set to launch this summer. Here’s a breakdown of everything you need to know about Final Fantasy XIV: Dawntrail.

Before Dawntrail, though, you’ll need to play up through the latest expansion, Endwalker, to be fully caught up on the entire story. Getting caught up should be a lot of fun, and you’ll get to find out why Final Fantasy XIV places so high in Game Informer’s ranking of every mainline Final Fantasy game.


Are you going to play Final Fantasy XIV on Xbox later this month? Let us know in the comments below!

Capcom’s Kunitsu-Gami: Path Of The Goddess Gets First Extended Gameplay Showcase

About eight months ago, Capcom and Microsoft revealed Kunitsu-Gami: Path of the Goddess in somewhat vague terms. It is an action game set in and inspired by historical Japan, but that was about all we knew. Today, Capcom showed a lot more gameplay and offered additional insight into the single-player game. Plus, it detailed exactly what the game is on its Xbox Wire page.

The action is described as “dancing swords,” but you will also decide which villagers to take with you on your journey, adding some strategic elements. You will apparently spend the day preparing defenses to protect your Maiden and the village before night falls, which is when battle takes place.

[embedded content]

Capcom and Microsoft do not have a release date for the game yet, but it is apparently coming this year and will be included with Game Pass subscriptions.

Persona 3 Reload Expansion Pass Adds Episode Aigis: The Answer FES Content

Atlus has confirmed that an epilogue to the Japanese version of the original Persona 3 (that first appeared in the West in Persona 3 FES) will come to Persona 3 Reload later this year. More specifically, Episode Aigis – The Answer will hit the recently released Persona 3 remake as part of its expansion pass this September. 

This news was revealed during today’s Xbox Partner Preview alongside the contents of Wave 1 and Wave 2 of Persona 3 Reload’s expansion pass.

[embedded content]

Here’s what to expect: 

Wave 1 – March 12

  • Persona 5 Royal EX background music set
  • Persona 4 Golden EX background music set

Wave 2 – May

  • Velvet Costume and background music set

Wave 3 – September

  • Episode Aigis – The Answer

The expansion pass will be available for free to all Xbox Game Pass Ultimate subscribers through January 31, 2025. You can also purchase the expansion pass on digital storefronts on platforms where the game is playable for $34.99. 

For more about the game, read Game Informer’s Persona 3 Reload review, and then check out where it sits on Game Informer’s list of the top-scoring reviews of 2024.   


Are you going to check out Episode Aigis later this year? Let us know in the comments below!

Frostpunk 2 Gets Chilling New Trailer And July Release Date

Today’s Xbox Partner Preview gave us a new look at Frostpunk 2, including a release date. First announced in 2021, 11 Bit Studios’ tense city-management sim arrives on July 25, and it’s also coming to PC Game Pass.

A new trailer shows off some of the game’s bleak, choice-driven gameplay. You’re charged with leading a city set within an inhospitable frozen wasteland, making decisions about how it’s governed to keep mouths fed and, hopefully, happy. That includes managing labor, politics, and even the food supply, which can ingratiate you to citizens or, worst case, cause them to revolt.  

[embedded content]

Frostpunk 2 will first launch on PC but will come to consoles at a later date (including Xbox Game Pass). You can read our review of the first Frostpunk here.

Researchers enhance peripheral vision in AI models

Peripheral vision enables humans to see shapes that aren’t directly in our line of sight, albeit with less detail. This ability expands our field of vision and can be helpful in many situations, such as detecting a vehicle approaching our car from the side.

Unlike humans, AI does not have peripheral vision. Equipping computer vision models with this ability could help them detect approaching hazards more effectively or predict whether a human driver would notice an oncoming object.

Taking a step in this direction, MIT researchers developed an image dataset that allows them to simulate peripheral vision in machine learning models. They found that training models with this dataset improved the models’ ability to detect objects in the visual periphery, although the models still performed worse than humans.

Their results also revealed that, unlike with humans, neither the size of objects nor the amount of visual clutter in a scene had a strong impact on the AI’s performance.

“There is something fundamental going on here. We tested so many different models, and even when we train them, they get a little bit better but they are not quite like humans. So, the question is: What is missing in these models?” says Vasha DuTell, a postdoc and co-author of a paper detailing this study.

Answering that question may help researchers build machine learning models that can see the world more like humans do. In addition to improving driver safety, such models could be used to develop displays that are easier for people to view.

Plus, a deeper understanding of peripheral vision in AI models could help researchers better predict human behavior, adds lead author Anne Harrington MEng ’23.

“Modeling peripheral vision, if we can really capture the essence of what is represented in the periphery, can help us understand the features in a visual scene that make our eyes move to collect more information,” she explains.

Their co-authors include Mark Hamilton, an electrical engineering and computer science graduate student; Ayush Tewari, a postdoc; Simon Stent, research manager at the Toyota Research Institute; and senior authors William T. Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Ruth Rosenholtz, principal research scientist in the Department of Brain and Cognitive Sciences and a member of CSAIL. The research will be presented at the International Conference on Learning Representations.

“Any time you have a human interacting with a machine — a car, a robot, a user interface — it is hugely important to understand what the person can see. Peripheral vision plays a critical role in that understanding,” Rosenholtz says.

Simulating peripheral vision

Extend your arm in front of you and put your thumb up — the small area around your thumbnail is seen by your fovea, the small depression in the middle of your retina that provides the sharpest vision. Everything else you can see is in your visual periphery. Your visual cortex represents a scene with less detail and reliability as it moves farther from that sharp point of focus.

Many existing approaches to model peripheral vision in AI represent this deteriorating detail by blurring the edges of images, but the information loss that occurs in the optic nerve and visual cortex is far more complex.

For a more accurate approach, the MIT researchers started with a technique used to model peripheral vision in humans. Known as the texture tiling model, this method transforms images to represent a human’s visual information loss.  

They modified this model so it could transform images similarly, but in a more flexible way that doesn’t require knowing in advance where the person or AI will point their eyes.

“That let us faithfully model peripheral vision the same way it is being done in human vision research,” says Harrington.

The researchers used this modified technique to generate a huge dataset of transformed images that appear more textural in certain areas, to represent the loss of detail that occurs when a human looks further into the periphery.
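
To make the general idea concrete, here is a minimal sketch of eccentricity-dependent image degradation in Python. It is only a crude stand-in for the texture tiling model described above, which synthesizes texture statistics in pooling regions rather than simply blurring, and the function names and parameters are illustrative assumptions, not the researchers’ actual pipeline.

```python
# A crude proxy for peripheral detail loss: blend progressively stronger blurs
# into an RGB image based on each pixel's distance from a fixation point.
import numpy as np
from PIL import Image, ImageFilter

def degrade_with_eccentricity(img, fixation=(0.5, 0.5), levels=4):
    """`img` is an RGB PIL image; `fixation` is given as fractions of width/height."""
    w, h = img.size
    fx, fy = fixation[0] * w, fixation[1] * h
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xs - fx) ** 2 + (ys - fy) ** 2)
    dist /= dist.max()  # normalized eccentricity in [0, 1]

    out = np.asarray(img, dtype=np.float32)
    for i in range(1, levels + 1):
        blurred = np.asarray(img.filter(ImageFilter.GaussianBlur(radius=2 * i)), dtype=np.float32)
        # pixels farther into the periphery take more of each successive blur level
        mask = np.clip(dist * levels - (i - 1), 0.0, 1.0)[..., None]
        out = out * (1 - mask) + blurred * mask
    return Image.fromarray(out.astype(np.uint8))

# Example (hypothetical file name):
# degraded = degrade_with_eccentricity(Image.open("scene.jpg").convert("RGB"))
```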

Then they used the dataset to train several computer vision models and compared their performance with that of humans on an object detection task.

“We had to be very clever in how we set up the experiment so we could also test it in the machine learning models. We didn’t want to have to retrain the models on a toy task that they weren’t meant to be doing,” she says.

Peculiar performance

Humans and models were shown pairs of transformed images that were identical except that one image had a target object located in the periphery. Each participant was then asked to pick the image with the target object.
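
As a rough illustration of that two-alternative forced-choice setup, the sketch below scores each image in a pair and counts how often the image containing the target receives the higher score. The function name and the assumption that the model outputs a single target-presence score per image are hypothetical, not taken from the study’s code.

```python
import torch

def two_afc_accuracy(model, image_pairs, device="cpu"):
    """`image_pairs` yields (with_target, without_target) tensors of shape (3, H, W);
    `model` is assumed to return one target-presence score per image.
    Returns the fraction of pairs where the target image gets the higher score."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for with_target, without_target in image_pairs:
            batch = torch.stack([with_target, without_target]).to(device)
            scores = model(batch).view(-1)  # higher score = more likely to contain the target
            correct += int(scores[0] > scores[1])
            total += 1
    return correct / max(total, 1)
```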

“One thing that really surprised us was how good people were at detecting objects in their periphery. We went through at least 10 different sets of images that were just too easy. We kept needing to use smaller and smaller objects,” Harrington adds.

The researchers found that training models from scratch with their dataset led to the greatest performance boosts, improving their ability to detect and recognize objects. Fine-tuning a model with their dataset, a process that involves tweaking a pretrained model so it can perform a new task, resulted in smaller performance gains.
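
For readers less familiar with the distinction, the snippet below sketches it in generic PyTorch/torchvision terms: the same architecture can either be trained from random weights or start from pretrained weights and be tweaked on the new dataset. The architecture and class count are placeholders, not the models used in the paper.

```python
import torch
import torchvision

def make_model(pretrained: bool, num_classes: int) -> torch.nn.Module:
    # From scratch: random initialization. Fine-tuning: start from weights
    # pretrained on another dataset, then continue training on the new one.
    weights = torchvision.models.ResNet18_Weights.DEFAULT if pretrained else None
    model = torchvision.models.resnet18(weights=weights)
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)  # new task head
    return model

scratch_model = make_model(pretrained=False, num_classes=2)   # train all weights from scratch
finetuned_model = make_model(pretrained=True, num_classes=2)  # tweak pretrained weights
```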

But in every case, the machines weren’t as good as humans, and they were especially bad at detecting objects in the far periphery. Their performance also didn’t follow the same patterns as humans.

“That might suggest that the models aren’t using context in the same way as humans are to do these detection tasks. The strategy of the models might be different,” Harrington says.

The researchers plan to continue exploring these differences, with a goal of finding a model that can predict human performance in the visual periphery. This could enable AI systems that alert drivers to hazards they might not see, for instance. They also hope to inspire other researchers to conduct additional computer vision studies with their publicly available dataset.

“This work is important because it contributes to our understanding that human vision in the periphery should not be considered just impoverished vision due to limits in the number of photoreceptors we have, but rather, a representation that is optimized for us to perform tasks of real-world consequence,” says Justin Gardner, an associate professor in the Department of Psychology at Stanford University who was not involved with this work. “Moreover, the work shows that neural network models, despite their advancement in recent years, are unable to match human performance in this regard, which should lead to more AI research to learn from the neuroscience of human vision. This future research will be aided significantly by the database of images provided by the authors to mimic peripheral human vision.”

This work is supported, in part, by the Toyota Research Institute and the MIT CSAIL METEOR Fellowship.

How sensory gamma rhythm stimulation clears amyloid in Alzheimer’s mice

Studies at MIT and elsewhere are producing mounting evidence that light flickering and sound clicking at the gamma brain rhythm frequency of 40 hertz (Hz) can reduce Alzheimer’s disease (AD) progression and treat symptoms in human volunteers as well as lab mice. In a new open-access study in Nature using a mouse model of the disease, MIT researchers reveal a key mechanism that may contribute to these beneficial effects: clearance of amyloid proteins, a hallmark of AD pathology, via the brain’s glymphatic system, a recently discovered “plumbing” network parallel to the brain’s blood vessels.

“Ever since we published our first results in 2016, people have asked me how does it work? Why 40Hz? Why not some other frequency?” says study senior author Li-Huei Tsai, Picower Professor of Neuroscience and director of The Picower Institute for Learning and Memory of MIT and MIT’s Aging Brain Initiative. “These are indeed very important questions we have worked very hard in the lab to address.”

The new paper describes a series of experiments, led by Mitch Murdock PhD ’23 when he was a brain and cognitive sciences doctoral student at MIT, showing that when sensory gamma stimulation increases 40Hz power and synchrony in the brains of mice, that prompts a particular type of neuron to release peptides. The study results further suggest that those short protein signals then drive specific processes that promote increased amyloid clearance via the glymphatic system.

“We do not yet have a linear map of the exact sequence of events that occurs,” says Murdock, who was jointly supervised by Tsai and co-author and collaborator Ed Boyden, Y. Eva Tan Professor of Neurotechnology at MIT, a member of the McGovern Institute for Brain Research and an affiliate member of the Picower Institute. “But the findings in our experiments support this clearance pathway through the major glymphatic routes.”

Video: The Picower Institute

From gamma to glymphatics

Because prior research has shown that the glymphatic system is a key conduit for brain waste clearance and may be regulated by brain rhythms, Tsai and Murdock’s team hypothesized that it might help explain the lab’s prior observations that gamma sensory stimulation reduces amyloid levels in Alzheimer’s model mice.

Working with “5XFAD” mice, which genetically model Alzheimer’s, Murdock and co-authors first replicated the lab’s prior results that 40Hz sensory stimulation increases 40Hz neuronal activity in the brain and reduces amyloid levels. Then they set out to measure whether there was any correlated change in the fluids that flow through the glymphatic system to carry away wastes. Indeed, they measured increases in cerebrospinal fluid in the brain tissue of mice treated with sensory gamma stimulation compared to untreated controls, as well as an increase in the rate of interstitial fluid leaving the brain. Moreover, in the gamma-treated mice, they measured increased diameter of the lymphatic vessels that drain away the fluids and increased accumulation of amyloid in cervical lymph nodes, the drainage site for that flow.

To investigate how this increased fluid flow might be happening, the team focused on the aquaporin 4 (AQP4) water channel of astrocyte cells, which enables the cells to facilitate glymphatic fluid exchange. When they blocked AQP4 function with a chemical, that prevented sensory gamma stimulation from reducing amyloid levels and from improving mouse learning and memory. And when, as an added test, they used a genetic technique to disrupt AQP4, that also interfered with gamma-driven amyloid clearance.

In addition to the fluid exchange promoted by AQP4 activity in astrocytes, another mechanism by which gamma waves promote glymphatic flow is by increasing the pulsation of neighboring blood vessels. Several measurements showed stronger arterial pulsatility in mice subjected to sensory gamma stimulation compared to untreated controls.

One of the best new techniques for tracking how a condition, such as sensory gamma stimulation, affects different cell types is to sequence their RNA to track changes in how they express their genes. Using this method, Tsai and Murdock’s team saw that gamma sensory stimulation indeed promoted changes consistent with increased astrocyte AQP4 activity.

Prompted by peptides

The RNA sequencing data also revealed that upon gamma sensory stimulation a subset of neurons, called “interneurons,” experienced a notable uptick in the production of several peptides. This was not surprising in the sense that peptide release is known to be dependent on brain rhythm frequencies, but it was still notable because one peptide in particular, VIP, is associated with Alzheimer’s-fighting benefits and helps to regulate vascular cells, blood flow, and glymphatic clearance.

Seizing on this intriguing result, the team ran tests that revealed increased VIP in the brains of gamma-treated mice. The researchers also used a sensor of peptide release and observed that sensory gamma stimulation resulted in an increase in peptide release from VIP-expressing interneurons.

But did this gamma-stimulated peptide release mediate the glymphatic clearance of amyloid? To find out, the team ran another experiment: They chemically shut down the VIP neurons. When they did so, and then exposed mice to sensory gamma stimulation, they found that there was no longer an increase in arterial pulsatility and there was no more gamma-stimulated amyloid clearance.

“We think that many neuropeptides are involved,” Murdock says. Tsai added that a major new direction for the lab’s research will be determining what other peptides or other molecular factors may be driven by sensory gamma stimulation.

Tsai and Murdock add that while this paper focuses on what is likely an important mechanism — glymphatic clearance of amyloid — by which sensory gamma stimulation helps the brain, it’s probably not the only underlying mechanism that matters. The clearance effects shown in this study occurred rather rapidly, but in lab experiments and clinical studies weeks or months of chronic sensory gamma stimulation have been needed to have sustained effects on cognition.

With each new study, however, scientists learn more about how sensory stimulation of brain rhythms may help treat neurological disorders.

In addition to Tsai, Murdock, and Boyden, the paper’s other authors are Cheng-Yi Yang, Na Sun, Ping-Chieh Pao, Cristina Blanco-Duque, Martin C. Kahn, Nicolas S. Lavoie, Matheus B. Victor, Md Rezaul Islam, Fabiola Galiana, Noelle Leary, Sidney Wang, Adele Bubnys, Emily Ma, Leyla A. Akay, TaeHyun Kim, Madison Sneve, Yong Qian, Cuixin Lai, Michelle M. McCarthy, Nancy Kopell, Manolis Kellis, and Kiryl D. Piatkevich.

Support for the study came from Robert A. and Renee E. Belfer, the Halis Family Foundation, Eduardo Eurnekian, the Dolby family, Barbara J. Weedon, Henry E. Singleton, the Hubolow family, the Ko Hahn family, Carol and Gene Ludwig Family Foundation, Lester A. Gimpelson, Lawrence and Debra Hilibrand, Glenda and Donald Mattes, Kathleen and Miguel Octavio, David B. Emmes, the Marc Haas Foundation, Thomas Stocky and Avni Shah, the JPB Foundation, the Picower Institute, and the National Institutes of Health.

Is this the future of fashion?

Until recently, bespoke tailoring — clothing made to a customer’s individual specifications — was the only way to have garments that provided the perfect fit for your physique. For most people, the cost of custom tailoring is prohibitive. But the invention of active fibers and innovative knitting processes is changing the textile industry.

“We all wear clothes and shoes,” says Sasha McKinlay MArch ’23, a recent graduate of the MIT Department of Architecture. “It’s a human need. But there’s also the human need to express oneself. I like the idea of customizing clothes in a sustainable way. This dress promises to be more sustainable than traditional fashion to both the consumer and the producer.”

McKinlay is a textile designer and researcher at the Self-Assembly Lab who designed the 4D Knit Dress with Ministry of Supply, a fashion company specializing in high-tech apparel. The dress combines several technologies to create a personalized fit and style. Heat-activated yarns, computerized knitting, and robotic activation around each garment generate the sculpted fit. A team at Ministry of Supply led the decisions on the stable yarns, color, original size, and overall design.

“Everyone’s body is different,” says Skylar Tibbits, associate professor in the Department of Architecture and founder of the Self-Assembly Lab. “Even if you wear the same size as another person, you’re not actually the same.”

4D Knit Dress: Transforming Style
Video: Self-Assembly Lab

Active textiles

Students in the Self-Assembly Lab have been working with dynamic textiles for several years. The yarns they create can change shape, change property, change insulation, or become breathable. Previous applications to tailor garments include making sweaters and face masks. Tibbits says the 4D Knit Dress is a culmination of everything the students have learned from working with active textiles.

McKinlay helped produce the active yarns, created the concept design, developed the knitting technique, and programmed the lab’s industrial knitting machine. Once the garment design is programmed into the machine, it can quickly produce multiple dresses. The placement of the active yarns in the design allows the dress to take on a variety of styles, such as pintucks, pleats, an empire waist, or a cinched waist.

“The styling is important,” McKinlay says. “Most people focus on the size, but I think styling is what sets clothes apart. We’re all evolving as people, and I think our style evolves as well. After fit, people focus on personal expression.”

Danny Griffin MArch ’22, a current graduate student in architectural design, doesn’t have a background in garment making or the fashion industry. Tibbits asked Griffin to join the team due to his experience with robotics projects in construction. Griffin translated the heat activation process into a programmable robotic procedure that would precisely control its application.

“When we apply heat, the fibers shorten, causing the textile to bunch up in a specific zone, effectively tightening the shape as if we’re tailoring the garment,” says Griffin. “There was a lot of trial and error to figure out how to orient the robot and the heat gun. The heat needs to be applied in precise locations to activate the fibers on each garment. Another challenge was setting the temperature and the timing for the heat to be applied.”

It took a while to determine how the robot could reach all areas of the dress.

“We couldn’t use a commercial heat gun — which is like a handheld hair dryer — because they’re too large,” says Griffin. “We needed a more compact design. Once we figured it out, it was a lot of fun to write the script for the robot to follow.”
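
Purely as an illustration of what such a scripted heat-activation pass might look like, here is a small sketch; the robot and heat-gun interfaces, coordinates, temperatures, and dwell times are all hypothetical stand-ins rather than the Self-Assembly Lab’s actual code.

```python
from dataclasses import dataclass

@dataclass
class ActivationZone:
    x_mm: float    # target position on the garment, in the robot's frame (hypothetical units)
    y_mm: float
    temp_c: float  # heat-gun temperature for this zone
    dwell_s: float # how long heat is applied before moving on

def run_activation_pass(robot, heat_gun, zones):
    """Visit each zone in order, apply heat for the configured time, then retract.
    `robot` and `heat_gun` are assumed to expose simple motion/heating methods."""
    for zone in zones:
        robot.move_to(zone.x_mm, zone.y_mm)     # hypothetical motion API
        heat_gun.set_temperature(zone.temp_c)   # hypothetical heat-gun API
        heat_gun.apply(duration_s=zone.dwell_s)
    robot.retract()

# Example: pintucks across the chest might be two closely spaced zones.
# run_activation_pass(robot, heat_gun,
#     [ActivationZone(120, 340, 180, 4.0), ActivationZone(150, 360, 180, 3.5)])
```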

A dress can begin with one design — pintucks across the chest, for example — and be worn for months before having heat re-applied to alter its look. Subsequent applications of heat can tailor the dress further.

Beyond fit and fashion

Efficiently producing garments is a “big challenge” in the fashion industry, according to Gihan Amarasiriwardena ’11, the co-founder and president of Ministry of Supply.

“A lot of times you’ll be guessing what a season’s style is,” he says. “Sometimes the style doesn’t do well, or some sizes don’t sell out. They may get discounted very heavily or eventually they end up going to a landfill.”

“Fast fashion” is a term that describes clothes that are inexpensive, trendy, and easily disposed of by the consumer. They are designed and produced quickly to keep pace with current trends. The 4D Knit Dress, says Tibbits, is the opposite of fast fashion. Unlike the traditional “cut-and-sew” process in the fashion industry, the 4D Knit Dress is made entirely in one piece, which virtually eliminates waste.

“From a global standpoint, you don’t have tons of excess inventory because the dress is customized to your size,” says Tibbits.

McKinlay says she hopes use of this new technology will reduce the amount of waste in inventory that retailers usually have at the end of each season.

“The dress could be tailored in order to adapt to these changes in styles and tastes,” she says. “It may also be able to absorb some of the size variations that retailers need to stock. Instead of extra-small, small, medium, large, and extra-large sizes, retailers may be able to have one dress for the smaller sizes and one for the larger sizes. Of course, these are the same sustainability points that would benefit the consumer.”

The Self-Assembly Lab has collaborated with Ministry of Supply on projects with active textiles for several years. Late last year, the team debuted the 4D Knit Dress at the company’s flagship store in Boston, complete with a robotic arm working its way around a dress as customers watched. For Amarasiriwardena, it was an opportunity to gauge interest and receive feedback from customers interested in trying the dress on.

“If the demand is there, this is something we can create quickly,” says Amarasiriwardena, in contrast to the usual design and manufacturing process, which can take years.

Griffin and McKinlay were on hand for the demonstration and were pleased with the results. With the “technical barriers” overcome, Griffin sees many different avenues for the project.

“This experience leaves me wanting to try more,” he says.

McKinlay too would love to work on more styles.

“I hope this research project helps people rethink or reevaluate their relationship with clothes,” says McKinlay. “Right now when people purchase a piece of clothing it has only one ‘look.’ But, how exciting would it be to purchase one garment and reinvent it to change and evolve as you change or as the seasons or styles change? I’m hoping that’s the takeaway that people will have.”

Princess Peach: Showtime Demo Now Available On Switch

Princess Peach: Showtime hits Switch exclusively later this month on March 22, and ahead of its launch, Nintendo has released a demo for the game. In it, you can play as two of Peach’s transformations: Swordfighter Peach and Patissiere Peach. The demo is available to download on Switch right now.

“Swing, strike, dodge, and counterattack as Swordfighter Peach and cut across an action-packed stage,” a press release reads. “Then, turn into Patissiere Peach and get ready to whip up an array of delectable desserts to prevent the Sweet Festival from experiencing a serious sugar crash.” 

As you can see in the gameplay overview trailer above, each of Peach’s transformations grants her distinct abilities that she’ll need to save the plays at Sparkle Theater. The trailer also demonstrates some of the different customization options players have at their disposal to add flair to Peach’s dress and Stella’s ribbon.

Princess Peach: Showtime hits Switch on March 22. 

While waiting for its launch, read Game Informer’s Princess Peach: Showtime impressions after going hands-on with the game, and then check out these pink Nintendo Switch Joy-Con launching alongside Princess Peach: Showtime.


Are you going to check out the Princess Peach: Showtime demo? Let us know in the comments below!

Penny’s Big Breakaway Review – A Swinging Pendulum – Game Informer

Coming off the success of Sonic Mania, the development team behind one of the best games in Sega’s storied series is back with an all-new franchise. Much like how that previous effort from the studio now known as Evening Star was a love letter to a bygone era of platforming, Penny’s Big Breakaway is a fond tribute to the 3D platformers of the late ’90s. Evening Star clearly knows how to design a fantastic new entry in this well-worn genre, but some important issues drag down an otherwise strong game.

As Penny, a street performer whose yo-yo is transformed by a cosmic entity, you must leap, swing, spin, and dash through more than 11 themed worlds of stages. Each world is more colorful than the last, complemented by an upbeat soundtrack full of catchy tracks to push the action forward. In moving through these levels, Penny’s Big Breakaway steps into the spotlight in a big way. With the help of her enhanced yo-yo, Penny can pull off satisfying movement-based combos. Once you master the basics, jumping into the air, swinging from her yo-yo, landing in a roll, and smoothly launching into another combo with a twirl feels fantastic. In combat, however, I struggled with accidentally sending Penny flying off a cliff since double-tapping the attack button also initiates a dash.

[embedded content]

Thankfully, combat is only a small piece of the overall pie, and from the moment the movement mechanics clicked with me to the moment I watched the credits roll, I adored building momentum as I sped through the stages. Those terrific moves are accentuated by top-tier level design. Evening Star provides players with a ton of expertly designed courses that play into Penny’s abilities. Penny’s Big Breakaway is at its best when you’re moving quickly through obstacle courses, and the levels give you plenty of opportunities to do so; even the optional side objectives often require you to complete the given task within a time limit.

I relished every twisting path that let me quickly roll through, but I also enjoyed exploring every corner I could to find the collectibles used to purchase extra-challenging bonus stages. Levels typically offer branching pathways, and I loved trying to find the best route through the stages, though the fixed camera sometimes discouraged me from poking around too much. I was also disappointed by how many times I clipped through a stage element and had to restart from a checkpoint.

Sadly, the entire experience is brought down by a problem many early 3D platformers struggled with: depth perception. By the time I beat the story mode, I had lost count of the number of times I missed a seemingly easy jump because I couldn’t tell where Penny was in relation to the platform I was trying to land on. While the obvious answer is to look at her shadow’s position on the platform, my brain constantly needed to perform the calculus of whether Penny was where she looked like she was or where the game said she was. Unfortunately, this permeates the entire experience, poisoning the well of the overall gameplay.

A smaller issue that often rears its ugly head is that of screen-crowding. One of the key elements Penny’s Big Breakaway uses to propel the player forward is a group of penguins that swarm you in a capture attempt. Each time this happens, it immediately raises the level of on-screen chaos, but it sometimes goes too far as the penguins obscure everything happening in the level. Add to that an intrusive UI element that pops up when you’re near a side mission, and on multiple occasions, I had to blindly perform a leap of faith and hope for the best.

It’s a shame so many problems weigh on this otherwise enjoyable adventure. Even with the screen-crowding, bugs, and depth-perception troubles, I still look back fondly on the superb level design and movement mechanics. But because of those important detractors, Penny’s Big Breakaway lands as a solid 3D platformer unable to swing to the great heights it felt destined for.