Best Games of 2024 (So Far) And Anger Foot Review | GI Show

In this week’s episode of The Game Informer Show, the crew attempts to highlight the best games of 2024 that have launched between January and July. This is not all-encompassing. Rather, it’s more a conversation about this year’s early standouts and other releases we plan to visit before our official Game of the Year discussions in December. Before we properly dive in, Marcus breaks down his review of Anger Foot, developer Free Lives’ new first-person shooter. Afterwards, Alex highlights a small desktop game (literally) called Rusty’s Retirement. We hope you enjoy this episode and find new games to play!

Follow us on social media: Alex Van Aken (@itsVanAken), Kyle Hilliard (@KyleMHilliard), Marcus Stewart (@MarcusStewart7)

The Game Informer Show is a weekly gaming podcast covering the latest video game news, industry topics, exclusive reveals, and reviews. Join us every Thursday to chat about your favorite games – past and present – with Game Informer staff, developers, and special guests from around the industry. Listen on Apple Podcasts, Spotify, or your favorite podcast app.

The Game Informer Show – Podcast Timestamps:

00:00:00 – Intro

00:03:29 – Anger Foot Review

00:18:36 – Rusty’s Retirement

00:30:06 – Best Games of 2024 (So Far)

01:34:51 – Housekeeping

Reasoning skills of large language models are often overestimated

When it comes to artificial intelligence, appearances can be deceiving. The mystery surrounding the inner workings of large language models (LLMs) stems from their vast size, complex training methods, hard-to-predict behaviors, and elusive interpretability.

MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers recently peered through the proverbial magnifying glass to examine how LLMs fare with variations of different tasks, revealing intriguing insights into the interplay between memorization and reasoning skills. It turns out that their reasoning abilities are often overestimated.

The study compared “default tasks,” the common tasks a model is trained and tested on, with “counterfactual scenarios,” hypothetical situations deviating from default conditions — which models like GPT-4 and Claude can usually be expected to cope with. The researchers developed some tests outside the models’ comfort zones by tweaking existing tasks instead of creating entirely new ones. They used a variety of datasets and benchmarks specifically tailored to different aspects of the models’ capabilities for things like arithmetic, chess, evaluating code, answering logical questions, etc.

When users interact with language models, any arithmetic is usually in base-10, the number base most familiar to the models. But observing that they do well on base-10 addition could give a false impression of strong general competency in addition. Logically, if they truly possess good addition skills, you’d expect reliably high performance across all number bases, similar to calculators or computers. Indeed, the research showed that these models are not as robust as many initially think. Their high performance is limited to common task variants, and they suffer consistent and severe performance drops in the unfamiliar counterfactual scenarios, indicating a lack of generalizable addition ability.
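To make the base-switch probe concrete, here is a minimal Python sketch of how such a counterfactual arithmetic test could be generated: the same addition problem is rendered in an arbitrary base, along with a ground-truth answer to score a model’s reply against. The function names and prompt wording are illustrative assumptions, not the paper’s actual evaluation harness.

```python
# Sketch of a counterfactual arithmetic probe: render an addition problem
# in an arbitrary base and compute the ground-truth answer to score a
# model against. (Illustrative only; not the study's actual harness.)

def to_base(n: int, base: int) -> str:
    """Render a non-negative integer in the given base (2-16)."""
    digits = "0123456789abcdef"
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(digits[r])
    return "".join(reversed(out))

def addition_probe(a: int, b: int, base: int) -> tuple[str, str]:
    """Return (prompt, expected_answer) for a base-`base` addition problem."""
    prompt = f"In base {base}, what is {to_base(a, base)} + {to_base(b, base)}?"
    return prompt, to_base(a + b, base)

# The same numbers yield different surface forms in base 10 vs. base 9:
print(addition_probe(17, 25, 10))  # ('In base 10, what is 17 + 25?', '42')
print(addition_probe(17, 25, 9))   # ('In base 9, what is 18 + 27?', '46')
```

A model with a genuinely general addition routine should score equally well on both prompts; a model that has memorized base-10 patterns will not.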

The pattern held true for many other tasks like musical chord fingering, spatial reasoning, and even chess problems where the starting positions of pieces were slightly altered. While human players are expected to still be able to determine the legality of moves in altered scenarios (given enough time), the models struggled and couldn’t perform better than random guessing, meaning they have limited ability to generalize to unfamiliar situations. And much of their performance on the standard tasks is likely not due to general task abilities, but overfitting to, or directly memorizing from, what they have seen in their training data.

“We’ve uncovered a fascinating aspect of large language models: they excel in familiar scenarios, almost like a well-worn path, but struggle when the terrain gets unfamiliar. This insight is crucial as we strive to enhance these models’ adaptability and broaden their application horizons,” says Zhaofeng Wu, an MIT PhD student in electrical engineering and computer science, CSAIL affiliate, and the lead author on a new paper about the research. “As AI is becoming increasingly ubiquitous in our society, it must reliably handle diverse scenarios, whether familiar or not. We hope these insights will one day inform the design of future LLMs with improved robustness.”

Despite the insights gained, there are, of course, limitations. The study’s focus on specific tasks and settings didn’t capture the full range of challenges the models could potentially encounter in real-world applications, signaling the need for more diverse testing environments. Future work could involve expanding the range of tasks and counterfactual conditions to uncover more potential weaknesses. This could mean looking at more complex and less common scenarios. The team also wants to improve interpretability by creating methods to better comprehend the rationale behind the models’ decision-making processes.

“As language models scale up, understanding their training data becomes increasingly challenging even for open models, let alone proprietary ones,” says Hao Peng, assistant professor at the University of Illinois at Urbana-Champaign. “The community remains puzzled about whether these models genuinely generalize to unseen tasks, or seemingly succeed by memorizing the training data. This paper makes important strides in addressing this question. It constructs a suite of carefully designed counterfactual evaluations, providing fresh insights into the capabilities of state-of-the-art LLMs. It reveals that their ability to solve unseen tasks is perhaps far more limited than anticipated by many. It has the potential to inspire future research towards identifying the failure modes of today’s models and developing better ones.”

Additional authors include Najoung Kim, who is a Boston University assistant professor and Google visiting researcher, and seven CSAIL affiliates: MIT electrical engineering and computer science (EECS) PhD students Linlu Qiu, Alexis Ross, Ekin Akyürek SM ’21, and Boyuan Chen; former postdoc and Apple AI/ML researcher Bailin Wang; and EECS assistant professors Jacob Andreas and Yoon Kim. 

The team’s study was supported, in part, by the MIT–IBM Watson AI Lab, the MIT Quest for Intelligence, and the National Science Foundation. The team presented the work at the North American Chapter of the Association for Computational Linguistics (NAACL) last month.

Infostealers: What are they & far-reaching effects on data security – CyberTalk

By Hendrik De Bruin, Security Engineer, Check Point Software Technologies.

Infostealers…ransomware’s lesser-known cousin

When it comes to malware, ransomware usually steals the limelight, largely because of the direct, devastating impact that ransomware often causes. However, ransomware’s lesser-known cousin, the “infostealer,” is slowly but surely gaining ever-more attention.

Over the last few years, we have noticed a massive increase in the usage of infostealers. In fact, some research suggests as much as 5,900% growth since 2018. Statistics also indicate that during 2023, over 10 million devices were compromised by info-stealing malware, reflecting an increase of 643% over the past three years.

An infostealer is a type of malware designed to infiltrate computer systems, not for purposes of data encryption like ransomware or data deletion like “wipers”, but specifically designed to steal sensitive information.

These malicious programs exfiltrate various data, including login credentials, session cookies, financial information, and personally identifiable information (PII). After harvesting and capturing the sensitive information, the infostealer sends it back to remote servers controlled by cyber criminals.

Once cyber criminals obtain the sensitive information, it is sold on the dark web to various nefarious actors, such as “Initial Access Brokers” who use the info to facilitate larger attacks, like ransomware attacks.

Infostealers…And their real-life impact

To showcase the impact that infostealers can have and to reinforce that infostealers deserve more attention, we can look at two recent incidents: a breach reported at Ticketmaster and at a major European bank.

In both cases, malicious actors gained access to information stored at a third-party service provider called Snowflake. Snowflake offers a cloud-based data storage and analytics service, often referred to as “data-as-a-service”.

During these breaches, attackers simply used credentials — which were most likely obtained through infostealers — to access associated Snowflake accounts, leading to the sale of information belonging to more than 550 million Ticketmaster customers on the dark web.

The info was sold by a group known as “ShinyHunters”, a known player in the infostealer business that’s notorious for using legitimate credentials to obtain initial access.

The ShinyHunters group also claims to have information related to 30 million customers and 28 million credit card numbers associated with the breached banking institution.

Although we focus on these two instances here, they reflect two of at least 165 Snowflake customer accounts that were accessed by this specific threat actor using credentials harvested through infostealers.

How can organisations protect themselves?

Although there may have been various security oversights involved with the two aforementioned breaches, I believe the following three factors played the biggest role:

1. Lack of end-user email and browser protection – Among cyber criminals, the most popular means of malware delivery are email and internet downloads. Not having adequate email and browser security allowed for the initial delivery of the malware.

2. Lack of endpoint protection – Endpoint devices were not properly secured against malware such as infostealers, allowing the malware to be deployed on devices.

3. Lack of SaaS security – The absence of additional security controls, such as multi-factor authentication, allowed for easy access using stolen credentials.

Another factor that often plays a role in SaaS security is the popular misconception that the cloud service provider is responsible for your data in the cloud. In reality, YOU as the customer remain responsible and accountable for the security of, and access control to, data in the cloud.

Let’s unpack the items listed above to get a better understanding of how each played a role in the mentioned breaches.

Email and browser protection

Infostealers are typically delivered through internet downloads, phishing emails, or other social engineering attacks.

Your first line of defense for the delivery of infostealers lies in the deployment of email security and anti-phishing solutions such as Harmony Email and Collaboration, which will prevent the delivery of phishing emails and emails containing malware.

Further, should a malicious email be delivered containing a malicious link, having adequate browser protection should prevent the browser from accessing the link and malware from being downloaded.

Internet access control and browser security solutions, such as Harmony SASE Internet Access, will prevent the download of malicious files and restrict corporate password re-use on non-corporate websites.

Corporate password re-use and other password best practices

Although passwords should NEVER be used as the only means of authentication, we still often find this to be the case across various organisations and applications. NIST and similar institutions provide various guidelines and best practices related to passwords. However, it is important to note that, other than restricting corporate password re-use, none of these password recommendations would have offered real protection from infostealers, mainly because infostealers exfiltrate cleartext passwords.

If you still rely on passwords, the following guidelines from NIST may assist you:

  • Increase password length – Password length matters more than complexity.
  • Avoid corporate password re-use – Ensuring that corporate passwords aren’t re-used for other platforms, such as social media, will keep your corporate credentials and systems protected from external credential breaches.
  • Breached password protection – Ensure that attempted password updates do not contain known breached passwords.
  • Password rotation – Contrary to popular belief, NIST advises against rotating passwords too often and regards 30 to 60 days as too often. Ninety days may be a fair compromise.

Endpoint protection and response

From an endpoint perspective, Endpoint Detection and Response (EDR) remains one of the primary defenses against malware such as infostealers. EDR solutions typically include both signature-based and behaviour-based detection mechanisms, the latter analysing data to detect suspicious activity, such as indicators of compromise (IOCs).

A solution like Check Point’s Harmony Endpoint leverages Check Point’s ThreatCloud: a dynamically updated service based on an innovative global network of threat sensors and organisations that share threat data. It collaboratively fights against modern malware by aggregating and analysing big data telemetry and millions of Indicators of Compromise (IoCs).

Over 50 AI-based engines analyze this data. These engines detect and neutralize novel threats, ensuring that both known and unknown threats are addressed and prevented.

Multi-factor authentication

Most Software as a Service (SaaS) offerings have multi-factor authentication available as a configurable option. If your organisation is making use of SaaS offerings, it is critical that multi-factor authentication is configured. Password authentication alone is NOT adequate and should never be used, especially not on publicly exposed SaaS applications.

Although multi-factor authentication may not have completely eliminated the chances of these breaches occurring, it would have at the very least forced far greater costs and efforts onto the attackers. These efforts would also have to involve additional threat vectors, thereby increasing the probability of detection.

The adoption of cloud services, in combination with the “hybrid workforce” has significantly increased organisations’ attack surfaces, leading to greater exposure, risk and complexities. To overcome this, organisations are looking at adopting solutions such as Zero-Trust and SASE.

Zero-Trust

Zero-Trust, at its core, revolves around the idea of NO ACCESS or ZERO ACCESS, unless we can explicitly identify the device, the individual using the device and the security posture associated with both the device and the user. Zero Trust also enforces further concepts such as “least privilege.”

Zero-Trust Network Access (ZTNA) is still often perceived as a very costly, time-consuming, and difficult exercise. However, modern solutions, such as Secure Access Service Edge (SASE), greatly simplify the implementation of Zero Trust.

In this specific instance, SASE with Secure Internet Browsing would have prevented the download of malware or infostealers from the internet.

The deployment of SASE would also allow organisations to further secure their SaaS applications by enforcing IP address based access restrictions on the SaaS application itself.

This will ensure access to the SaaS application ONLY if the device adheres to corporate security posture restrictions and the identity has the appropriate permissions.

In Conclusion

The threat posed by infostealers deserves the same attention as that posed by ransomware, and perhaps even more so, as infostealers often serve as enablers for much larger cyber attacks and breaches.

In the past, we have observed credentials obtained from infostealers being used for initial access during other malicious activities. These stolen credentials open a broader exploitation landscape, which could include personal accounts, corporate accounts, and even infrastructure access through VPNs and cloud management interfaces.

Protection from the risks posed by infostealers requires a holistic approach, bringing us back to “good ole” defense-in-depth.

First, prevent the initial delivery of infostealers by protecting end users from malicious emails, websites and malware via email and internet access security controls.

Secondly, should email and internet access security controls fail, having an endpoint detection and response solution deployed should prevent the infostealer from being installed on devices and/or prevent credentials from being exfiltrated.

Other controls, such as Zero-Trust frameworks and SASE, further support defense in depth by denying access, even with valid credentials, should other factors such as geo-location and device posture not check out.

Professional services, such as penetration testing, external attack surface assessments and continuous threat exposure management can also assist in reducing the risk posed by infostealers, as they can highlight weak security controls, such as password-only authentication.

For more insights from Hendrik de Bruin, please see CyberTalk.org’s past coverage. Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.

MIT SHASS announces appointment of new heads for 2024-25

The MIT School of Humanities, Arts, and Social Sciences (SHASS) has announced several changes to the leadership of its academic units for the 2024-25 academic year.

“I’m confident these outstanding members of the SHASS community will provide exceptional leadership. I’m excited to see each implement their vision for the future of their unit,” says Agustin Rayo, the Kenan Sahin Dean of MIT SHASS.

  • Christine Walley will serve as head of the Anthropology Section. Walley is the SHASS Dean’s Distinguished Professor of Anthropology. She received a PhD in anthropology from New York University in 1999. Her first ethnography, “Rough Waters: Nature and Development in an East African Marine Park,” explored environmental conflict in rural Tanzania.

  • Seth Mnookin will serve as head of the Comparative Media Studies Program/Writing. Mnookin is a longtime journalist and science writer and was a 2019-20 Guggenheim Fellow. He graduated from Harvard College in 1994 with a degree in history and science, and was a 2004 Joan Shorenstein Fellow at Harvard’s Kennedy School of Government. Mnookin will continue in his role as director of the Graduate Program in Science Writing.

  • Kieran Setiya will serve as head of the Department of Linguistics and Philosophy. Setiya is a professor of philosophy and is head of the philosophy section. He works mainly in ethics, epistemology, and the philosophy of mind. He received his PhD in philosophy from Princeton University in 2002.

  • In the Literature Section, associate professors Sandy Alexandre and Stephanie Frampton will serve as co-heads. Alexandre’s research spans the late 19th century to present-day Black American literature and culture. She received a PhD in English language and literature from the University of Virginia in 2006. Frampton is also co-chair of the Program in Ancient and Medieval Studies. She received a PhD from Harvard University in comparative literature in 2011.

  • Jay Scheib will serve as head of the Music and Theater Arts Section. Scheib is Class of 1949 Professor of Music and Theater Arts. He received an MFA in theater directing from the Columbia University School of the Arts. He is a recipient of the MIT Edgerton Award, the Richard Sherwood Award, a National Endowment for the Arts/TCG fellowship, an OBIE Award for Best Direction, and the prestigious Guggenheim Fellowship.

  • In the Program in Science, Technology, and Society, Kate Brown will serve as head. Brown is the Thomas M. Siebel Distinguished Professor in History of Science. Her research interests illuminate the point where history, science, technology and bio-politics converge to create large-scale disasters and modernist wastelands. Brown will publish “Tiny Gardens Everywhere: A Kaleidoscopic History of the Food Sovereignty Frontier” in 2025 with W.W. Norton & Co. Brown has held fellowships from the Guggenheim Foundation, the Carnegie Foundation, the European University Institute, The Kennan Institute, Harvard’s Davis Center for Russian and Eurasian Studies, and the U.S. Holocaust Museum. She received her PhD in history from the University of Washington at Seattle.

  • In the Program in Women’s and Gender Studies, Sana Aiyar will serve as interim head. Aiyar is an associate professor of history, and is a historian of modern South Asia. She received her PhD from Harvard University in 2009 and held an Andrew Mellon postdoctoral fellowship at Johns Hopkins University in 2009-10.

When to trust an AI model

Because machine-learning models can give false predictions, researchers often equip them with the ability to tell a user how confident they are about a certain decision. This is especially important in high-stakes settings, such as when models are used to help identify disease in medical images or filter job applications.

But a model’s uncertainty quantifications are only useful if they are accurate. If a model says it is 49 percent confident that a medical image shows a pleural effusion, then 49 percent of the time, the model should be right.
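That calibration property can be checked directly. The sketch below computes expected calibration error (ECE), a standard diagnostic that bins predictions by stated confidence and compares each bin’s average confidence with its empirical accuracy. This is a generic illustration of calibration measurement, not the method the MIT team introduces.

```python
# Expected calibration error (ECE): bin predictions by confidence and
# compare each bin's average confidence to its empirical accuracy.
# A well-calibrated model has ECE near zero.

def expected_calibration_error(confidences, correct, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / total) * abs(avg_conf - accuracy)
    return ece

# Perfectly calibrated toy data: 0.5-confidence predictions, right half the time.
confs = [0.5, 0.5, 0.5, 0.5]
hits = [True, False, True, False]
print(expected_calibration_error(confs, hits))  # 0.0
```

In the article’s example, a model saying “49 percent confident” about pleural effusions should land in a bin whose accuracy is also about 49 percent; large gaps inflate the ECE.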

MIT researchers have introduced a new approach that can improve uncertainty estimates in machine-learning models. Their method not only generates more accurate uncertainty estimates than other techniques, but does so more efficiently.

In addition, because the technique is scalable, it can be applied to huge deep-learning models that are increasingly being deployed in health care and other safety-critical situations.

This technique could give end users, many of whom lack machine-learning expertise, better information they can use to determine whether to trust a model’s predictions or if the model should be deployed for a particular task.

“It is easy to see these models perform really well in scenarios where they are very good, and then assume they will be just as good in other scenarios. This makes it especially important to push this kind of work that seeks to better calibrate the uncertainty of these models to make sure they align with human notions of uncertainty,” says lead author Nathan Ng, a graduate student at the University of Toronto who is a visiting student at MIT.

Ng wrote the paper with Roger Grosse, an assistant professor of computer science at the University of Toronto; and senior author Marzyeh Ghassemi, an associate professor in the Department of Electrical Engineering and Computer Science and a member of the Institute of Medical Engineering Sciences and the Laboratory for Information and Decision Systems. The research will be presented at the International Conference on Machine Learning.

Quantifying uncertainty

Uncertainty quantification methods often require complex statistical calculations that don’t scale well to machine-learning models with millions of parameters. These methods also require users to make assumptions about the model and data used to train it.

The MIT researchers took a different approach. They use what is known as the minimum description length principle (MDL), which does not require the assumptions that can hamper the accuracy of other methods. MDL is used to better quantify and calibrate uncertainty for test points the model has been asked to label.

The technique the researchers developed, known as IF-COMP, makes MDL fast enough to use with the kinds of large deep-learning models deployed in many real-world settings.

MDL involves considering all possible labels a model could give a test point. If there are many alternative labels for this point that fit well, its confidence in the label it chose should decrease accordingly.

“One way to understand how confident a model is would be to tell it some counterfactual information and see how likely it is to believe you,” Ng says.

For example, consider a model that says a medical image shows a pleural effusion. If the researchers tell the model this image shows an edema, and it is willing to update its belief, then the model should be less confident in its original decision.

With MDL, if a model is confident when it labels a datapoint, it should use a very short code to describe that point. If it is uncertain about its decision because the point could have many other labels, it uses a longer code to capture these possibilities.

The amount of code used to label a datapoint is known as stochastic data complexity. If the researchers ask the model how willing it is to update its belief about a datapoint given contrary evidence, the stochastic data complexity should decrease if the model is confident.
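The code-length idea above can be stated in one line: under an optimal code, a label the model assigns probability p costs -log2(p) bits, so a confident prediction is a short code and spread-out probability means longer ones. The snippet below illustrates this MDL intuition with made-up probability values; it is not the IF-COMP estimator itself.

```python
import math

# Code length under an optimal code: a label with probability p costs
# -log2(p) bits. Confident predictions are short codes; spread-out
# probability means longer ones. (Illustration of the MDL intuition,
# with made-up probabilities; not the IF-COMP estimator.)

def code_length_bits(p: float) -> float:
    return -math.log2(p)

confident = [0.98, 0.01, 0.01]   # model nearly sure of label 0
uncertain = [0.40, 0.35, 0.25]   # many labels fit the point

print(round(code_length_bits(confident[0]), 3))  # ~0.029 bits
print(round(code_length_bits(uncertain[0]), 3))  # ~1.322 bits
```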

But testing each datapoint using MDL would require an enormous amount of computation.

Speeding up the process

With IF-COMP, the researchers developed an approximation technique that can accurately estimate stochastic data complexity using a special function, known as an influence function. They also employed a statistical technique called temperature-scaling, which improves the calibration of the model’s outputs. This combination of influence functions and temperature-scaling enables high-quality approximations of the stochastic data complexity.
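Temperature scaling itself is a simple, widely used post-hoc step: divide the model’s logits by a scalar T (fit on held-out data) before the softmax, so that T > 1 softens overconfident outputs and T < 1 sharpens them. The sketch below shows the generic technique with example logits; how IF-COMP combines it with influence functions is specific to the paper.

```python
import math

# Temperature scaling in its simplest form: divide logits by a scalar T
# before the softmax. T > 1 softens overconfident outputs; T < 1 sharpens
# them. (Generic calibration technique with example logits; the pairing
# with influence functions is specific to IF-COMP.)

def softmax_with_temperature(logits, T=1.0):
    scaled = [z / T for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 1.0, 0.0]
print([round(p, 3) for p in softmax_with_temperature(logits, T=1.0)])
print([round(p, 3) for p in softmax_with_temperature(logits, T=2.0)])  # softer
```

In practice T is fit by minimizing negative log-likelihood on a held-out validation set, leaving the model’s predicted class unchanged while adjusting its confidence.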

In the end, IF-COMP can efficiently produce well-calibrated uncertainty quantifications that reflect a model’s true confidence. The technique can also determine whether the model has mislabeled certain data points or reveal which data points are outliers.

The researchers tested their system on these three tasks (producing calibrated uncertainty estimates, detecting mislabeled data points, and identifying outliers) and found that it was faster and more accurate than other methods.

“It is really important to have some certainty that a model is well-calibrated, and there is a growing need to detect when a specific prediction doesn’t look quite right. Auditing tools are becoming more necessary in machine-learning problems as we use large amounts of unexamined data to make models that will be applied to human-facing problems,” Ghassemi says.

IF-COMP is model-agnostic, so it can provide accurate uncertainty quantifications for many types of machine-learning models. This could enable it to be deployed in a wider range of real-world settings, ultimately helping more practitioners make better decisions.

“People need to understand that these systems are very fallible and can make things up as they go. A model may look like it is highly confident, but there are a ton of different things it is willing to believe given evidence to the contrary,” Ng says.

In the future, the researchers are interested in applying their approach to large language models and studying other potential use cases for the minimum description length principle. 

MIT ARCLab announces winners of inaugural Prize for AI Innovation in Space

Satellite density in Earth’s orbit has increased exponentially in recent years, with lower costs of small satellites allowing governments, researchers, and private companies to launch some 2,877 satellites into orbit in 2023 alone. This includes increased geostationary Earth orbit (GEO) satellite activity, which brings technologies with global-scale impact, from broadband internet to climate surveillance. Along with the manifold benefits of these satellite-enabled technologies, however, come increased safety and security risks, as well as environmental concerns. More accurate and efficient methods of monitoring and modeling satellite behavior are urgently needed to prevent collisions and other disasters.

To address this challenge, the MIT Astrodynamics, Space Robotics, and Controls Laboratory (ARCLab) launched the MIT ARCLab Prize for AI Innovation in Space: a first-of-its-kind competition asking contestants to harness AI to characterize satellites’ patterns of life (PoLs) — the long-term behavioral narrative of a satellite in orbit — using purely passively collected information. Following the call for participants last fall, 126 teams used machine learning to create algorithms to label and time-stamp the behavioral modes of GEO satellites over a six-month period, competing for accuracy and efficiency.

With support from the U.S. Department of the Air Force-MIT AI Accelerator, the challenge offers a total of $25,000. A team of judges from ARCLab and MIT Lincoln Laboratory evaluated the submissions based on clarity, novelty, technical depth, and reproducibility, assigning each entry a score out of 100 points. Now the judges have announced the winners and runners-up:

First prize: David Baldsiefen — Team Hawaii2024

With a winning score of 96, Baldsiefen will be awarded $10,000 and is invited to join the ARCLab team in presenting at a poster session at the Advanced Maui Optical and Space Surveillance Technologies (AMOS) Conference in Hawaii this fall. One evaluator noted, “Clear and concise report, with very good ideas such as the label encoding of the localizer. Decisions on the architectures and the feature engineering are well reasoned. The code provided is also well documented and structured, allowing an easy reproducibility of the experimentation.”

Second prize: Binh Tran, Christopher Yeung, Kurtis Johnson, Nathan Metzger — Team Millennial-IUP

With a score of 94.2, Millennial-IUP will be awarded $5,000 and will also join the ARCLab team at the AMOS conference. One evaluator said, “The models chosen were sensible and justified, they made impressive efforts in efficiency gains… They used physics to inform their models and this appeared to be reproducible. Overall it was an easy to follow, concise report without much jargon.”

Third Prize: Isaac Haik and Francois Porcher — Team QR_Is

With a score of 94, Haik and Porcher will share the third prize of $3,000 and will also be invited to the AMOS conference with the ARCLab team. One evaluator noted, “This informative and interesting report describes the combination of ML and signal processing techniques in a compelling way, assisted by informative plots, tables, and sequence diagrams. The author identifies and describes a modular approach to class detection and their assessment of feature utility, which they correctly identify is not evenly useful across classes… Any lack of mission expertise is made up for by a clear and detailed discussion of the benefits and pitfalls of the methods they used and discussion of what they learned.”

The fourth- through seventh-place scoring teams will each receive $1,000 and a certificate of excellence.

“The goal of this competition was to foster an interdisciplinary approach to problem-solving in the space domain by inviting AI development experts to apply their skills in this new context of orbital capacity. And all of our winning teams really delivered — they brought technical skill, novel approaches, and expertise to a very impressive round of submissions,” says Professor Richard Linares, who heads ARCLab.

Active modeling with passive data

Throughout a GEO satellite’s time in orbit, operators issue commands to place it in various behavioral modes — station-keeping, longitudinal shifts, end-of-life behaviors, and so on. Satellite patterns of life (PoLs) describe on-orbit behavior composed of sequences of both natural and non-natural behavior modes.

ARCLab has developed a groundbreaking benchmarking tool for geosynchronous satellite pattern-of-life characterization and created the Satellite Pattern-of-Life Identification Dataset (SPLID), comprising real and synthetic space object data. The challenge participants used this tool to create algorithms that use AI to map out the on-orbit behaviors of a satellite.

The goal of the MIT ARCLab Prize for AI Innovation in Space is to encourage technologists and enthusiasts to bring innovation and new skill sets to well-established challenges in aerospace. The team aims to hold the competition again in 2025 and 2026 to explore other topics and invite experts in AI to apply their skills to new challenges.

Buildots Secures $15M Investment from Intel Capital to Drive Strategic Growth

Buildots, an award-winning AI construction software company, has announced a $15 million investment led by Intel Capital, with participation from OG Tech Partners and previous investors. This funding round, announced on July 11, 2024, also brings Lisa Cohen, Investment Director at Intel Capital, to the Buildots…

Fighting Fire with Fire: The Role of AI in Fighting Instant Payments Fraud

The rapid evolution and global adoption of real-time payment schemes marks a pivotal shift in the global financial ecosystem, improving economies and financial inclusivity…and introducing new opportunities for crime. One unintended benefit of legacy systems that take days or weeks to process transactions is additional time…

Zeb Evans, Founder & CEO of ClickUp – Interview Series

Zeb Evans is a serial entrepreneur and the CEO and Founder of ClickUp, an all-in-one productivity platform that works as an ideal place for teams to come together, brainstorm, plan, and collaborate on everything from process docs to product designs. You’ve stated that since you were…

Anger Foot Review – An Adrenaline-Packed Foot Race – Game Informer

Anger Foot exemplifies a simple idea executed to the nth degree. As a furious sneakerhead possessing seemingly the deadliest legs in the world, you must retrieve your prized collection of stolen footwear by kicking everything in sight. The bombast accompanying this wacky premise – fast-paced, split-second action, satisfying gunplay, and delectable destructibility – turns Anger Foot from a one-kick pony into one of the year’s most exciting, challenging, and tough-to-put-down adrenaline rushes.

Taking place on the seedy streets of Crime City, where crime is not only encouraged but is a way of life, you’ll plow through four gangs and their leaders across dozens of levels to retrieve your pilfered sneakers. Initially, your bare foot is your best and only weapon, as kicking sends the litany of armed goons flying, showcasing the satisfying (and, sometimes, hilariously broken) ragdoll physics. This first-person action game’s frantic yet thoughtful pace is delightfully reminiscent of Hotline Miami and Doom. At best, you can complete the small, densely packed stages in under a minute, and success means quickly and strategically taking out deviously placed foes before they can off you. 

Since only one or two hits kill players, fast reaction timing and, for better or worse, trial-and-error win the day. Levels can border on being labyrinthine with enemies hiding in blind spots or lurking behind doors, and you won’t discover their presence until their bullet enters your skull. Some deaths feel cheap due to sometimes questionable enemy placement that makes taking damage seem unavoidable in spots. Other times, you’re a victim of physics; a grenade that misses the first time may bounce off something and unexpectedly land at your feet the second time. Dying means starting the stage anew, and while that stings after a good run, instant respawns hasten the process of repeatedly running through levels and absorbing their layouts. 

Kicking foes feels great, but Anger Foot also encourages strategic use of the environment and your opponents, such as kicking doors into distant targets or sending exploding enemies careening into their allies. Wielding firearms, such as handguns and shotguns, plus more exotic fare like crossbows that impale multiple foes and flamethrowers, adds a complementary ranged aspect to the melee-focused action. Gunplay feels awesome, and you can even throw empty weapons to stun targets, providing perfect setups for a kick. I also enjoy how the various enemy types encourage me to change tactics on the fly, such as shield-bearing foes blocking gunfire or speedy, knife-wielding mice focusing on relentless swarming. The multi-stage boss fights are enjoyable (and absurd) but don’t compare to the thrill of blasting through the standard levels. 

When Anger Foot is firing on all cylinders, which is often, it’s a gleefully chaotic execution of skill and resourcefulness. I love slipping into the flow state of running into rooms, rapidly taking out adversaries, grabbing their guns, lobbing depleted firearms to stun other targets, and kicking everything in sight. A mindless approach can work, but more often, it pays to have an ideal order of operations for eliminating threats and pinpointing every environmental advantage. Copious destructibility means encounters often devolve into a parade of exploding rubble, splintered wood, and shattered glass that leaves rooms looking like a tornado plowed through them. This element can be advantageous; why pick off goons perched atop scaffolding when shooting an explosive barrel sends the entire structure tumbling down? Though the framerate occasionally dips when the action overindulges in explosions and enemy mobs, it runs smooth as butter otherwise.

Anger Foot regularly introduces new ideas and mechanics to keep the gameplay and challenge fresh. Highlights include hopping across and dodging trains in a subway and kicking across rooftops while avoiding a sniper’s laser sight. I always looked forward to seeing what a level had in store and was often surprised and enthusiastic to tackle whatever obstacle developer Free Lives concocted. 

Completing stages and optional objectives, such as finishing a level under a time limit or taking no damage, rewards up to three stars, which are spent toward unlocking ability-granting sneakers. You can only wear one pair of these special shoes at a time, and they add fun wrinkles to the action. Some provide helpful perks, like a shoe that grants an extra life or one that causes doors to explode when kicked. Other shoes function like silly cheat codes, like a pair that reduces gravity, meaning everything, yourself included, floats. One useful shoe gives enemies comedically large heads, making them easier targets for headshots. Shoes can be potent game changers, providing a strong hook to replay stages and complete supplementary tasks to unlock them all.

Defeat can be a bitter pill in Anger Foot, but I was amazed at how eager I remained to jump back in time after time. Firefights remained an exciting challenge even when I’d played a level numerous times. Thwarting foes milliseconds before they pull the trigger, either by brute force or cleverly utilizing my surroundings, never ceased to feel cool. You should definitely walk a mile in these shoes.