Ace Attorney Investigations Collection Announced, Launches September

In recent years, Capcom has really shown its dedication to the Ace Attorney series by bringing the games to modern consoles. The Great Ace Attorney Chronicles brought two games to the West for the first time, while the Apollo Justice: Ace Attorney Trilogy gave the newest games in the series a facelift. Today, with the announcement of Ace Attorney Investigations Collection, Capcom is not just bringing another Japan-only release to Western gamers; it’s also making the entire library of Ace Attorney games available on modern consoles.

The Investigations games follow Miles Edgeworth, an antagonist turned reluctant ally from the original trilogy of Phoenix Wright games. Instead of defending suspects in court, Edgeworth is a prosecutor, so it’s up to him to build a case against the main suspects. By collecting evidence and completing logic puzzles, he and the lovable Detective Gumshoe solve mysteries and serve justice. While the first game launched in the States as a DS title, the second never officially made it over, so this is many players’ first chance to see its story.

Like other Ace Attorney collections, the Ace Attorney Investigations Collection will have a number of modes and settings for fans to appreciate the art and music of the original games. While it includes new “hand-drawn character visuals” for a more modern look, it also allows players to switch to classic pixel sprites if they prefer the original style. Meanwhile, the gallery includes character art, a photo album, and music from the games, including orchestral arrangements.

While the announcement was made during the Nintendo Direct, Ace Attorney Investigations Collection is far from a console exclusive – the two-game collection will launch on PlayStation 4, Xbox One, Switch, and PC on September 6.

Elden Ring: Shadow of the Erdtree Review – An Emphatic Exclamation Point – Game Informer

Following up Elden Ring is a gargantuan task. It’s one of my favorite games of all time, and the base adventure isn’t lacking for content, intrigue, or surprises. Shadow of the Erdtree doesn’t outclass the primary campaign but expands it, adding a fun and fascinating new zone in the Realm of Shadow. With entertaining new dungeons, a challenging fresh slate of bosses, and a smart new form of progression, Shadow of the Erdtree gives Elden Ring fans more of everything that worked in the main game and is a fantastic excuse to endure its many dangers once more. 

From Software expansions are notorious for being considerably more difficult than the base game. Shadow of the Erdtree is harder overall, though how much harder will, of course, vary based on the character you bring into it. Since defeating Radahn and Mohg is the only prerequisite for beginning the expansion, and because Shadow of the Erdtree requires owning the base game, players are likely using late-game or New Game + characters. For context, I began the expansion using my endgame (level 165), single-playthrough character, who proved to be more than ready to handle the new threats – at least for a while.

Because of these circumstances, your character likely requires an exorbitant amount of runes to level up. From Software clearly considered this and introduced smart new progression items called Scadutree Fragments and Revered Ash Fragments. Scattered all over the map, these items can be spent at checkpoints to improve overall damage output and resistance: Scadutree for yourself and Revered Ash for your Spirit Ashes (though the effect only applies in the expansion). This is a great, streamlined method of strengthening your character, and I love not relying solely on grinding to gather tens of thousands of runes just to level up once. It’s also great for bolstering maxed-out Spirit Ashes, letting me roll with my favorite(s) after they peaked in the base game. These fragments won’t suddenly turn your Tarnished into an unstoppable juggernaut, but they make a noticeable, if small, difference that doesn’t throw off the game’s balancing.

Without getting too specific, Shadow of the Erdtree also goes out of its way to provide a surplus of smithing stones to upgrade the expansion’s plethora of new weapons (which you can use in the base game). This offers a strong argument to retire old favorites in favor of something new. During the early hours, I stubbornly clung to the loadout that brought me success in the main game. Eventually, I discovered numerous cool and powerful weapons, armor sets, spells, enchantments, and charms that compelled me to finally create new, potent loadouts. Shadow of the Erdtree encourages experimentation as much as the main game, if not more so, thanks to its roster of intimidating, grotesque, and, in some cases, outright bizarre new enemies.

Needless to say, Shadow of the Erdtree isn’t a walk in the park. An imposing new class of armored adversaries that would probably be considered mini-bosses in the base game now roam the map as normal enemy types. They’re tough enough that I was shocked to see them respawn after I spent a good amount of time and effort defeating them once. Basket-like fire giants stomping around the map may as well be wearing signs saying “Mess around and find out” due to how obscenely powerful and sturdy they are. Creative new boss encounters offer fresh – and infuriating – trials that had me yelling in agony at defeat and jumping for joy upon victory. I won’t spoil any of them, but a couple of particular foes may rival Malenia in difficulty. They’re all fun to topple, and, like the main game, the sting of defeat can often be remedied by simply moving on to someplace else.

The Realm of Shadow may be smaller overall, but it’s still huge and sports several postcard-worthy locales, several of which are tricky to even reach. Don’t be surprised if you spend dozens of hours before fully un-fogging the map, given how well From Software uses the Realm of Shadow’s verticality to hide layers of crucial routes and openings. I appreciate how this layered-cake approach to world design makes exploring the Realm of Shadow feel distinctly different from roaming The Lands Between. Trekking up or down is the answer to most navigational conundrums, with the former often offering gorgeous views of the landscape and the latter taking players through underground pathways, revealing hidden ruins, villages, and more. Despite the increased challenge of finding where to go next, the thrill of discovery remains a powerful motivator after 40-plus hours of play, and my curiosity was usually rewarded with a cool location, a useful item, or a terrifying foe.

The new dungeons, including repeatable ones like smelting forges and underground gaols, beg to be thoroughly explored thanks to some clever and devious secrets, presenting more great examples of From’s exceptional level design. While it’s tough to beat mind-boggling discoveries like the underground cities in the main game, a few points of interest have unique visual identities and gave me pause to admire them. Meeting the strange and questionably trustworthy faces occupying these zones is its own treat. Even if you don’t totally understand (or care) what’s going on with Miquella and his followers, characters like a shady sorcerer soliciting favors or weirdly charitable bug warriors contribute to the expansion’s head-tilting but alluring charm.

The boring but ultimately correct shorthand to summarize Shadow of the Erdtree is that it’s more Elden Ring. The incredible sense of discovery, fantastic dungeon design, entertainingly deep combat, and intriguing lore and characters that defined From Software’s 2022 masterpiece all apply to this expansion. From Software didn’t drop the ball and make Elden Ring worse, nor do I believe it wholly topped what it had achieved before. Shadow of the Erdtree maintains a sky-high status quo, even if it loses a little magic from being a known quantity this time instead of a complete surprise. Still, Shadow of the Erdtree is one hell of a mic drop that further cements this adventure as one of the finest ever crafted.

Dragon Quest III: HD-2D Remake Preview – Returning To The Roots – Game Informer

The long-awaited Dragon Quest III HD-2D Remake appeared during today’s highly anticipated Nintendo Direct. We learned much more about the upcoming game, including its release date, which falls in November. On top of that, during Summer Game Fest 2024, we spent about 45 minutes checking out the upcoming retro remake of one of the most beloved and important Dragon Quest games of all time.

In Dragon Quest III, which takes place years prior to the first two games in the series, you step into the shoes of the only child of Ortega, a great hero who failed to defeat Baramos, an Archfiend who threatens the world’s safety. On the child’s 16th birthday, they’re summoned by the king of Aliahan and told to take on their father’s unfinished quest to defeat Baramos. The 16-year-old Hero must assemble their party, explore a massive world full of towns and dungeons, and defeat monsters in turn-based battles.

In Dragon Quest III HD-2D Remake, players enjoy gorgeous HD-2D visuals, which blend 2D sprites with 3D graphics and effects, a style popularized by other Square Enix games like Octopath Traveler and Triangle Strategy. The remake also includes modernized UI and various quality-of-life improvements. Dragon Quest III HD-2D Remake is pitched as faithful to the story of the original game, but developers Team Asano and Artdink also expanded the core narrative under the supervision of series creator Yuji Horii.

This new version still relies on the traditional turn-based combat present in the NES original that came to the US in 1992. However, the team has also expanded on that, giving players new animations, adjustable battle speed, and even an auto-battle setting. During my hands-on time, these improvements were the most impactful. Yes, the visuals are beautiful, and the UI improvements help, but being able to speed up the traditionally slower-paced turn-based fights and even set them to auto-battle made the grind so much more enjoyable. Pushing through endless waves of Antnibblers, Stark Ravens, Slimes, and Bunicorns using these settings helped me level my characters as they explored the large overworld map.

However, it is worth noting that this is clearly a game that was created in the late ’80s. Various modernizations and updates have been made to the formula, but you might be left wanting if you’re expecting something in line with modern game design and gameplay standards. That said, if you’re a fan of the original or are just curious about going back and experiencing this beloved classic, this seems like it could be the best way to enjoy the story of Dragon Quest III in 2024.

If this remake sounds appealing, we also received a release date during the Nintendo Direct: Dragon Quest III HD-2D Remake will be available on PS5, Xbox Series X/S, Switch, and PC on November 14.

Nintendo Direct Reveals The Legend of Zelda: Echoes of Wisdom, Out September

For years, fans of the Legend of Zelda have clamored for the titular princess to star in her own game, but even as she’s become a more prominent character in recent entries, a Zelda-led Zelda game has yet to appear on store shelves. In today’s Nintendo Direct, that wish was finally granted; in The Legend of Zelda: Echoes of Wisdom, it’s up to Princess Zelda to save Hyrule when Link is captured.

While the game takes its art style from 2019’s Link’s Awakening remake, this title is not a remake of any kind, and there’s no clear indication that it is connected to Link’s Awakening. In this adventure, Zelda uses a new magic item called the Tri Rod to journey across Hyrule. The Tri Rod can create “echoes” of items, like tables, beds, or boxes, to climb and explore the overworld and its dungeons, but it doesn’t stop there. Echoes of water blocks can be used to swim up and over certain obstacles, while trampolines allow players to easily leap across gaps.

Throughout the gameplay demonstration, series producer Eiji Aonuma explains that players can also make echoes of enemies and that these echoes will fight on your side in combat. Zelda captures a moblin to fight some slimes, then uses meat to lure in some bird enemies and summons a deku baba to snap them up. Aonuma goes on to say that there are so many echoes in the game that he hasn’t even counted them all – we’ll have to learn what the limits of echoes are, if any, some other time.

As the trailer continues, we get more glimpses of who Zelda will be interacting with throughout the game, including two kinds of Zoras, some Deku Scrubs, a Sheikah (potentially Impa), and the Great Deku Tree. It also features some 2D platforming and underwater sections, as well as Zelda using birds and plants with helicopter-like leaves to glide.

The game launches alongside a golden Hyrule-themed Switch Lite. Luckily, you won’t have to wait long for either of them: the handheld and The Legend of Zelda: Echoes of Wisdom will be available later this year, on September 26.

New Firmware Features for the Move 4K 20x PTZ Camera – Videoguys

Great news! The PTZOptics Move 4K firmware is now available for our 20x models, enhancing the performance and features of your PTZ cameras.

Currently available for 20x models ONLY; firmware update for 12x and 30x models coming soon!

Move 4K 20x Firmware Feature Highlights:

Subject Selection
A unique number is now provided for each subject shown in the live video area of the web interface. This new feature allows you to quickly switch between subjects that you would like to track using the web interface or the IR remote control. Plus, our intuitive bounding box system only appears on the web interface, keeping your SDI, HDMI, USB, and NDI feeds clean and distraction-free. Simplify your video experience with our smart auto-tracking technology!

Auto-Tracking Composition Modes
Introducing composition framing for auto-tracking in the Move 4K. Whether you prefer your subject centered, to the left, or to the right, these innovative modes offer the flexibility to frame your shots just as you envision: the camera seamlessly tracks your subject while keeping them in the chosen spot within the scene.

Auto-Tracking Zoom Modes
Take control of your visuals with the Move 4K’s advanced Auto-Tracking Zoom Modes! We’re excited to introduce four new zoom modes: Dynamic, Close-Up, Medium, and Long Shot. Dynamic Zoom Mode puts you in the director’s seat, giving you full command over optical zoom during auto-tracking. Prefer consistent framing? Choose from Close-Up, Medium, or Long Shot modes for steady, uniform zoom levels. This feature is great for tracking subjects with one close-up view and a second wide-angle shot.

Track Now
Introducing the ‘Track Now’ feature in our Move SE and 4K cameras! With ‘Track Now’, the camera begins tracking from its current position, offering immediate and seamless subject following. This mode is a perfect addition to our standard auto-tracking, which starts from a pre-defined PTZ preset location such as a teacher’s desk or presenter podium.

Auto-Tracking Sensitivity
Enhance your camera operations with the Move 4K’s new Auto-Tracking Sensitivity feature! This customizable sensitivity setting allows you to adjust camera movements to your specific needs, ensuring smooth, responsive tracking in every scene. Whether it’s a subtle pan or a quick zoom, your Move 4K can adapt to your creative vision. Experience seamless, professional-grade video with just a few adjustments.

Auto-Framing
Auto-Framing is now available in the PTZOptics Move 4K cameras! Auto-Framing seamlessly includes multiple subjects in every shot, intelligently adjusting pan, tilt, and zoom to frame your composition perfectly. Whether it’s a group discussion, a team presentation, or a special event, the camera can intelligently adjust to frame all participants in a single, harmonious scene.

Note: Auto-Framing operates independently and cannot be used simultaneously with auto-tracking.

PTZOptics API G3
Exciting news for developers! We’re thrilled to introduce the PTZOptics API G3, a significant leap forward in camera control technology. Building on the success of our previous versions, the new API expands your capabilities with over five times more options. Now, harness the power of advanced features like auto-tracking, auto-framing, and subject selection, all through our robust API. PTZOptics API G3 is designed to empower our developer community, enabling you to create more dynamic, responsive, and customized camera experiences. Dive into a world of possibilities with PTZOptics API G3!
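
To make the scope of such an API concrete, here is a minimal Python sketch of controlling a PTZOptics camera over HTTP. The ptzctrl.cgi command form follows PTZOptics’ long-standing HTTP-CGI interface, but the auto-tracking endpoint below is purely a placeholder assumption, not a documented API G3 route:

```python
# Hypothetical sketch of driving a PTZOptics camera from Python over HTTP.
# The ptzctrl.cgi command form follows PTZOptics' published HTTP-CGI
# interface; the auto-tracking endpoint is a PLACEHOLDER assumption,
# not a documented API G3 route.
import requests

CAMERA_IP = "192.168.1.100"  # placeholder address for the camera

def send_ptz_command(action: str, pan_speed: int = 10, tilt_speed: int = 10) -> None:
    """Send a classic pan/tilt movement command to the camera."""
    url = (f"http://{CAMERA_IP}/cgi-bin/ptzctrl.cgi"
           f"?ptzcmd&{action}&{pan_speed}&{tilt_speed}")
    requests.get(url, timeout=2)

def set_auto_tracking(enabled: bool) -> None:
    """Toggle auto-tracking. NOTE: illustrative endpoint only; consult
    the official API G3 documentation for the real route and parameters."""
    state = "on" if enabled else "off"
    requests.get(f"http://{CAMERA_IP}/autotracking?state={state}", timeout=2)

send_ptz_command("left")     # start panning left
send_ptz_command("ptzstop")  # stop movement
set_auto_tracking(True)      # hand framing over to the camera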

Preset Tour
Designed for automation and precision, the Preset Tour feature allows users to cycle through a series of customized, saved pan, tilt, and zoom presets effortlessly. Once activated, the Preset Tour navigates through your pre-determined settings at a chosen speed, ensuring smooth transitions and consistent framing. It’s ideal for controlled environments like studios, conferences, or theaters, where precise camera positioning is key.
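
For illustration, the same behavior can be approximated client-side. The sketch below assumes the documented PTZOptics preset-recall CGI command (ptzcmd&poscall&<n>); the real Preset Tour runs on the camera itself, so treat this only as a rough functional sketch:

```python
# A client-side approximation of a preset tour, assuming the PTZOptics
# preset-recall CGI command (ptzcmd&poscall&<n>). The built-in Preset
# Tour runs on-device; this merely mimics the behavior for illustration.
import time
import requests

CAMERA_IP = "192.168.1.100"  # placeholder address

def recall_preset(preset: int) -> None:
    """Ask the camera to move to a saved pan/tilt/zoom preset."""
    url = f"http://{CAMERA_IP}/cgi-bin/ptzctrl.cgi?ptzcmd&poscall&{preset}"
    requests.get(url, timeout=2)

def run_tour(presets: list[int], dwell_seconds: float = 5.0, loops: int = 3) -> None:
    """Cycle through presets, holding each shot for a fixed dwell time."""
    for _ in range(loops):
        for preset in presets:
            recall_preset(preset)
            time.sleep(dwell_seconds)

run_tour([1, 2, 3])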

Privacy Mode
Introducing the Privacy Mode on PTZOptics cameras, a feature designed for your peace of mind. In today’s world, privacy and security are paramount, especially in sensitive environments. Privacy Mode goes beyond just turning off the camera. When activated, it places the camera in a standby state, closes the lens iris, and physically turns the lens to face the wall, ensuring complete privacy. Unlike powering down, this mode keeps the camera ready to activate instantly when needed. This blend of convenience and security makes Privacy Mode ideal for confidential settings where quick reactivation is essential.

FreeD
Step into the future of video production with FreeD support now available on PTZOptics Move 4K cameras. FreeD, a camera tracking data protocol, unlocks new creative possibilities. By integrating PTZ camera position data with FreeD systems such as those found in Unreal Engine, you can craft immersive virtual environments and enhance your live video projects. This feature is perfect for creators seeking to blend physical and digital realms, offering a dynamic and engaging viewer experience. Dive into a world where your imagination is your only limit with PTZOptics Move 4K and FreeD.

Business readiness for the impending deepfake superstorm – CyberTalk

EXECUTIVE SUMMARY:

Deepfake technologies, as powered by artificial intelligence (AI), are rapidly proliferating, affecting businesses both large and small, worldwide. Between last year and this year, AI-driven deepfake attacks have increased by an astonishing 3,000%. Although deepfake technologies do have legitimate applications, the risk that they pose to businesses is non-trivial. The following serves as a brief overview of what to keep track of:

Business risk

1. Deepfakes impersonating executives. At this point, deepfakes can mimic the voices and appearances of high-ranking individuals so effectively that cyber criminals are manipulating financial transactions, obtaining authorization for fraudulent payments, and weaponizing videos to gain access to information.

The financial losses caused by deepfakes can prove substantial. Think $25 million or more, as exemplified in this incident. Losses in the millions can dent a company’s gross revenue and jeopardize its future.

What’s more, impersonation of an executive, even if it occurs only once, can send stakeholders into a tailspin as they wonder whom to trust, when to trust them, and whether to trust only in-person interactions. This can disrupt the fluidity of day-to-day operations, causing internal instability and turmoil.

2. Reputational damage. If deepfakes are used publicly against an organization – for example, if a CEO is depicted on stage sharing a falsehood – the business’s image may rapidly deteriorate.

The situation could unravel further in the event that a high-level individual is depicted as participating in unethical or illegal behavior. Rebuilding trust and credibility after such incidents can be challenging and time-consuming (or all-out impossible).

3. Erosion of public trust. Deepfakes can potentially deceive customers, clients and partners.

For example, a cyber criminal could deepfake a customer service representative and pretend to assist a client, stealing personal details in the process. Or, a partner organization could be misled by deepfake impersonators on a video call.

These types of events can erode trust, lead to lost business and result in public reputational harm. When clients or partners report deepfake issues, news headlines emerge quickly, and prospective clients or partners are liable to back out of otherwise value-add deals.

Credit risk warning

Cyber security experts aren’t the only people who are concerned about the impending “deepfake superstorm” that threatens to imperil businesses. In May, credit ratings firm Moody’s warned that deepfakes could pose credit risks. The corresponding report points to a series of deepfake scams that have impacted the financial sector.

These scams have frequently involved fake video calls. Preventing deepfake scams – through stronger cyber security and related measures – can present businesses with greater opportunities to maintain good credit, acquire new capital, and obtain lower insurance rates, among other things.

Cyber security solutions

Deepfake detection tools can help. These tools typically combine a variety of identification techniques – deep learning algorithms, machine learning models, and more – to prevent and mitigate the threats.

Check Point Research (CPR) actively investigates emerging threats, including deepfakes, and the research informs Check Point’s robust security solutions, which are designed to combat deepfake-related risks.

To see how a Check Point expert views and prevents deepfakes, please see CyberTalk.org’s past coverage. Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.

Researchers leverage shadows to model 3D scenes, including objects blocked from view

Imagine driving through a tunnel in an autonomous vehicle, but unbeknownst to you, a crash has stopped traffic up ahead. Normally, you’d need to rely on the car in front of you to know you should start braking. But what if your vehicle could see around the car ahead and apply the brakes even sooner?

Researchers from MIT and Meta have developed a computer vision technique that could someday enable an autonomous vehicle to do just that.

They have introduced a method that creates physically accurate 3D models of an entire scene, including areas blocked from view, using images from a single camera position. Their technique uses shadows to determine what lies in obstructed portions of the scene.

They call their approach PlatoNeRF, based on Plato’s allegory of the cave, a passage from the Greek philosopher’s “Republic” in which prisoners chained in a cave discern the reality of the outside world based on shadows cast on the cave wall.

By combining lidar (light detection and ranging) technology with machine learning, PlatoNeRF can generate more accurate reconstructions of 3D geometry than some existing AI techniques. Additionally, PlatoNeRF is better at smoothly reconstructing scenes where shadows are hard to see, such as those with high ambient light or dark backgrounds.

In addition to improving the safety of autonomous vehicles, PlatoNeRF could make AR/VR headsets more efficient by enabling a user to model the geometry of a room without the need to walk around taking measurements. It could also help warehouse robots find items in cluttered environments faster.

“Our key idea was taking these two things that have been done in different disciplines before and pulling them together — multibounce lidar and machine learning. It turns out that when you bring these two together, that is when you find a lot of new opportunities to explore and get the best of both worlds,” says Tzofi Klinghoffer, an MIT graduate student in media arts and sciences, affiliate of the MIT Media Lab, and lead author of a paper on PlatoNeRF.

Klinghoffer wrote the paper with his advisor, Ramesh Raskar, associate professor of media arts and sciences and leader of the Camera Culture Group at MIT; senior author Rakesh Ranjan, a director of AI research at Meta Reality Labs; as well as Siddharth Somasundaram at MIT, and Xiaoyu Xiang, Yuchen Fan, and Christian Richardt at Meta. The research will be presented at the Conference on Computer Vision and Pattern Recognition.

Shedding light on the problem

Reconstructing a full 3D scene from one camera viewpoint is a complex problem.

Some machine-learning approaches employ generative AI models that try to guess what lies in the occluded regions, but these models can hallucinate objects that aren’t really there. Other approaches attempt to infer the shapes of hidden objects using shadows in a color image, but these methods can struggle when shadows are hard to see.

For PlatoNeRF, the MIT researchers built off these approaches using a new sensing modality called single-photon lidar. Lidars map a 3D scene by emitting pulses of light and measuring the time it takes that light to bounce back to the sensor. Because single-photon lidars can detect individual photons, they provide higher-resolution data.

The researchers use a single-photon lidar to illuminate a target point in the scene. Some light bounces off that point and returns directly to the sensor. However, most of the light scatters and bounces off other objects before returning to the sensor. PlatoNeRF relies on these second bounces of light.

By calculating how long it takes light to bounce twice and then return to the lidar sensor, PlatoNeRF captures additional information about the scene, including depth. The second bounce of light also contains information about shadows.
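
To make the arithmetic concrete, here is a toy Python sketch (not the authors’ code) of the two-bounce time-of-flight geometry: the measured arrival time fixes the total path length, and subtracting the known first leg constrains the secondary point to an ellipsoid with foci at the illuminated point and the sensor:

```python
# Toy illustration of two-bounce time-of-flight: light travels
# laser -> target point A -> secondary point B -> sensor. The measured
# arrival time gives the total path length; subtracting the known first
# leg pins B to an ellipsoid with foci at A and the sensor.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

laser = sensor = np.array([0.0, 0.0, 0.0])  # co-located laser and sensor
A = np.array([2.0, 0.0, 0.0])               # illuminated target point
B = np.array([2.0, 1.5, 0.0])               # some secondary surface point

d_LA = np.linalg.norm(A - laser)            # laser to target
d_AB = np.linalg.norm(B - A)                # target to secondary point
d_BS = np.linalg.norm(sensor - B)           # secondary point back to sensor

t_total = (d_LA + d_AB + d_BS) / C          # measured two-bounce arrival time

# Inverting the measurement: the leftover path length constrains B.
remaining = C * t_total - d_LA              # equals d_AB + d_BS
print(f"two-bounce arrival: {t_total * 1e9:.2f} ns, "
      f"d_AB + d_BS = {remaining:.3f} m")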

The system traces the secondary rays of light — those that bounce off the target point to other points in the scene — to determine which points lie in shadow (due to an absence of light). Based on the location of these shadows, PlatoNeRF can infer the geometry of hidden objects.
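
A simplified way to picture this shadow-based inference is voxel carving: every second-bounce return proves an unobstructed line of sight, so voxels along that line can be marked as free space, and voxels never cleared remain candidates for hidden geometry. The sketch below illustrates the idea on a coarse grid; PlatoNeRF itself learns the geometry with a neural radiance field rather than an explicit voxel grid:

```python
# Toy sketch of shadow carving: for each illuminated point, any scene
# point that received second-bounce light has an unobstructed line to
# it, so every voxel along that line is marked free. Uncleared voxels
# remain candidates for occluded geometry.
import numpy as np

GRID = 32
free = np.zeros((GRID, GRID, GRID), dtype=bool)  # voxels proven empty

def carve_ray(src: np.ndarray, dst: np.ndarray, steps: int = 64) -> None:
    """Mark voxels along the segment src -> dst as free space."""
    for s in np.linspace(0.0, 1.0, steps):
        p = src + s * (dst - src)
        i, j, k = np.clip((p * GRID).astype(int), 0, GRID - 1)
        free[i, j, k] = True

illuminated_point = np.array([0.5, 0.5, 0.0])
lit_points = [np.array([0.2, 0.8, 0.3]),   # points that saw 2nd-bounce light
              np.array([0.7, 0.1, 0.6])]

for p in lit_points:
    carve_ray(illuminated_point, p)

occluded_candidates = (~free).sum()
print(f"{occluded_candidates} voxels remain candidates for hidden geometry")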

The lidar sequentially illuminates 16 points, capturing multiple images that are used to reconstruct the entire 3D scene.

“Every time we illuminate a point in the scene, we are creating new shadows. Because we have all these different illumination sources, we have a lot of light rays shooting around, so we are carving out the region that is occluded and lies beyond the visible eye,” Klinghoffer says.

A winning combination

Key to PlatoNeRF is the combination of multibounce lidar with a special type of machine-learning model known as a neural radiance field (NeRF). A NeRF encodes the geometry of a scene into the weights of a neural network, which gives the model a strong ability to interpolate, or estimate, novel views of a scene.
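
As a bare-bones illustration of that encoding (not PlatoNeRF’s actual architecture), the snippet below uses a tiny randomly initialized MLP whose weights implicitly define a density field that can be queried at any continuous 3D point; training, omitted here, would fit those weights so rendered views match the sensor data:

```python
# Bare-bones illustration of the NeRF idea: a tiny MLP maps a continuous
# 3D position to a density value, so the scene geometry lives entirely
# in the network weights. Training (omitted) would fit these weights.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 64)), np.zeros(64)  # input layer: xyz -> hidden
W2, b2 = rng.normal(size=(64, 1)), np.zeros(1)   # output layer: hidden -> density

def density(xyz: np.ndarray) -> float:
    """Query the density field encoded in the weights at a 3D point."""
    hidden = np.maximum(0.0, xyz @ W1 + b1)            # ReLU hidden layer
    return np.logaddexp(0.0, hidden @ W2 + b2).item()  # softplus keeps it >= 0

print(density(np.array([0.1, 0.2, 0.3])))  # density at an arbitrary point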

This ability to interpolate also leads to highly accurate scene reconstructions when combined with multibounce lidar, Klinghoffer says.

“The biggest challenge was figuring out how to combine these two things. We really had to think about the physics of how light is transporting with multibounce lidar and how to model that with machine learning,” he says.

They compared PlatoNeRF to two common alternative methods, one that only uses lidar and the other that only uses a NeRF with a color image.

They found that their method was able to outperform both techniques, especially when the lidar sensor had lower resolution. This would make their approach more practical to deploy in the real world, where lower resolution sensors are common in commercial devices.

“About 15 years ago, our group invented the first camera to ‘see’ around corners, that works by exploiting multiple bounces of light, or ‘echoes of light.’ Those techniques used special lasers and sensors, and used three bounces of light. Since then, lidar technology has become more mainstream, that led to our research on cameras that can see through fog. This new work uses only two bounces of light, which means the signal to noise ratio is very high, and 3D reconstruction quality is impressive,” Raskar says.

In the future, the researchers want to try tracking more than two bounces of light to see how that could improve scene reconstructions. In addition, they are interested in applying more deep learning techniques and combining PlatoNeRF with color image measurements to capture texture information.

“While camera images of shadows have long been studied as a means to 3D reconstruction, this work revisits the problem in the context of lidar, demonstrating significant improvements in the accuracy of reconstructed hidden geometry. The work shows how clever algorithms can enable extraordinary capabilities when combined with ordinary sensors — including the lidar systems that many of us now carry in our pocket,” says David Lindell, an assistant professor in the Department of Computer Science at the University of Toronto, who was not involved with this work.

Forza Horizon 5 Adds Iconic Cars From Back To The Future, Jurassic Park, And Knight Rider

Developer Playground Games has announced its next wave of Forza Horizon 5 content will be focused on newer innovations in the realm of automobiles. This means that players will have the opportunity to drive around Mexico in a 2024 Ford Mustang Dark Horse, 2023 Hyundai IONIQ 5 N, 2021 Toyota GR Yaris, 2023 Porsche Taycan Cross Turismo Turbo S, and 2023 Kia EV6 GT. However, for many, those are unlikely to be the highlight of this batch of releases. Available tomorrow, a new car pack brings several iconic vehicles from Universal Studios films and TV shows to Forza Horizon 5.

The Universal Icons Car Pack adds KITT from Knight Rider, Jurassic Park‘s 1992 Jeep Wrangler Sahara, and a distinct version of the DeLorean Time Machine from each Back to the Future film in the trilogy. Each Time Machine features a unique look and visual effects once you hit 88 miles per hour. The Jurassic Park Wrangler features the park’s logo as well as the iconic color scheme from the film. Finally, KITT comes with a body kit featuring Super Pursuit Mode.

This isn’t the first time Forza Horizon 5 has introduced licensed vehicles from pop-culture properties. Players can already purchase the Forza Horizon 5: Fast X Car Pack as well as the Forza Horizon 5: Hot Wheels expansion. In addition to the aforementioned vehicles, this season adds 17 EventLab props to help players build modern highways and a robot collectible out in the world.

We absolutely loved Forza Horizon 5 when it came out in 2021, giving it a 9.5 out of 10, naming it one of the top 10 games of 2021, and awarding it both Best Microsoft Exclusive and Best Racing Game for that year. The Universal Icons Car Pack will be available tomorrow, June 18. The Mustang arrives on June 20, while the IONIQ and EV6 land in-game on June 27. The Yaris comes out on July 4 and the Taycan on July 11.

Technologies enable 3D imaging of whole human brain hemispheres at subcellular resolution

Observing anything and everything within the human brain, no matter how large or small, while it is fully intact has been an out-of-reach dream of neuroscience for decades. But in a new study in Science, an MIT-based team describes a technology pipeline that enabled them to finely process, richly label, and sharply image full hemispheres of the brains of two donors — one with Alzheimer’s disease and one without — at high resolution and speed.

“We performed holistic imaging of human brain tissues at multiple resolutions, from single synapses to whole brain hemispheres, and we have made that data available,” says senior and corresponding author Kwanghun Chung, associate professor in the MIT departments of Chemical Engineering and Brain and Cognitive Sciences and member of The Picower Institute for Learning and Memory and the Institute for Medical Engineering and Science. “This technology pipeline really enables us to analyze the human brain at multiple scales. Potentially this pipeline can be used for fully mapping human brains.”

The new study does not present a comprehensive map or atlas of the entire brain, in which every cell, circuit, and protein is identified and analyzed. But with full hemispheric imaging, it demonstrates an integrated suite of three technologies to enable that and other long-sought neuroscience investigations. The research provides a “proof of concept” by showing numerous examples of what the pipeline makes possible, including sweeping landscapes of thousands of neurons within whole brain regions; diverse forests of cells, each in individual detail; and tufts of subcellular structures nestled among extracellular molecules. The researchers also present a rich variety of quantitative analytical comparisons focused on a chosen region within the Alzheimer’s and non-Alzheimer’s hemispheres.

The importance of being able to image whole hemispheres of human brains intact and down to the resolution of individual synapses (the teeny connections that neurons forge to make circuits) is two-fold for understanding the human brain in health and disease, Chung says.

One brain is better than two

On one hand, it will enable scientists to conduct integrated explorations of questions using the same brain, rather than having to (for example) observe different phenomena in different brains, which can vary significantly, and then try to construct a composite picture of the whole system. A key feature of the new technology pipeline is that analysis doesn’t degrade the tissue. On the contrary, it makes the tissues extremely durable and repeatedly re-labelable to highlight different cells or molecules as needed for new studies for potentially years on end. In the paper, Chung’s team demonstrates using 20 different antibody labels to highlight different cells and proteins, but they are already expanding that to a hundred or more.

“We need to be able to see all these different functional components — cells, their morphology and their connectivity, subcellular architectures, and their individual synaptic connections — ideally within the same brain, considering the high individual variabilities in the human brain and considering the precious nature of human brain samples,” Chung says. “This technology pipeline really enables us to extract all these important features from the same brain in a fully integrated manner.”

On the other hand, the pipeline’s relatively high scalability and throughput (imaging a whole brain hemisphere once it is prepared takes 100 hours, rather than many months) means that it is possible to create many samples to represent different sexes, ages, disease states, and other factors that can enable robust comparisons with increased statistical power. Chung says he envisions creating a brain bank of fully imaged brains that researchers could analyze and re-label as needed for new studies to make more of the kinds of comparisons he and co-authors made with the Alzheimer’s and non-Alzheimer’s hemispheres in the new paper.

Three key innovations

Chung says the biggest challenge he faced in achieving the advances described in the paper was building a team at MIT that included three especially talented young scientists, each a co-lead author of the paper because of their key roles in producing the three major innovations. Ji Wang, a mechanical engineer and former postdoc, developed the “Megatome,” a device for slicing intact human brain hemispheres so finely that there is no damage to them. Juhyuk Park, a materials engineer and former postdoc, developed the chemistry that makes each brain slice clear, flexible, durable, expandable, and quickly, evenly, and repeatedly labelable — a technology called “mELAST.” Webster Guan, a former MIT chemical engineering graduate student with a knack for software development, created a computational system called “UNSLICE” that can seamlessly reunify the slabs to reconstruct each hemisphere in full 3D, down to the precise alignment of individual blood vessels and neural axons (the long strands they extend to forge connections with other neurons).

No technology allows for imaging whole human brain anatomy at subcellular resolution without first slicing it, because it is very thick (it’s 3,000 times the volume of a mouse brain) and opaque. But in the Megatome, tissue remains undamaged because Wang, who is now at a company Chung founded called LifeCanvas Technologies, engineered its blade to vibrate side-to-side faster, and yet sweep wider, than previous vibratome slicers. Meanwhile, she also crafted the instrument to stay perfectly within its plane, Chung says. The result is slices that don’t lose anatomical information at their separation or anywhere else. And because the vibratome cuts relatively quickly and can cut thicker (and therefore fewer) slabs of tissue, a whole hemisphere can be sliced in a day, rather than months.

A major reason why slabs in the pipeline can be thicker comes from mELAST. Park engineered the hydrogel that infuses the brain sample to make it optically clear, virtually indestructible, and compressible and expandable. Combined with other chemical engineering technologies developed in recent years in Chung’s lab, the samples can then be evenly and quickly infused with the antibody labels that highlight cells and proteins of interest. Using a light sheet microscope the lab customized, a whole hemisphere can be imaged down to individual synapses in about 100 hours, the authors report in the study. Park is now an assistant professor at Seoul National University in South Korea.

“This advanced polymeric network, which fine-tunes the physicochemical properties of tissues, enabled multiplexed multiscale imaging of the intact human brains,” Park says.

After each slab has been imaged, the task is then to restore an intact picture of the whole hemisphere computationally. Guan’s UNSLICE does this at multiple scales. For instance, at the middle, or “meso” scale, it algorithmically traces blood vessels coming into one layer from adjacent layers and matches them. But it also takes an even finer approach. To further register the slabs, the team purposely labeled neighboring neural axons in different colors (like the wires in an electrical fixture). That enabled UNSLICE to match layers up based on tracing the axons, Chung says. Guan is also now at LifeCanvas.
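
As a toy illustration of that matching idea (not the actual UNSLICE algorithm), the sketch below pairs simulated vessel endpoints on the facing surfaces of two adjacent slabs by nearest neighbor and recovers the rigid offset between them from the median of the matched displacements:

```python
# Toy sketch of the slab-matching idea behind UNSLICE (not the actual
# algorithm): vessel endpoints on the bottom face of one slab are
# matched to endpoints on the top face of the next by nearest neighbor,
# and the median offset gives a rigid translation that re-aligns them.
import numpy as np

rng = np.random.default_rng(1)
top_face = rng.uniform(0, 100, size=(50, 2))   # vessel endpoints, slab i
true_shift = np.array([1.8, -0.7])             # unknown misalignment
bottom_face = top_face + true_shift + rng.normal(0, 0.05, size=(50, 2))

# Match each endpoint to its nearest neighbor on the facing surface.
dists = np.linalg.norm(top_face[:, None, :] - bottom_face[None, :, :], axis=2)
nearest = dists.argmin(axis=1)

# Robustly estimate the rigid offset from the matched pairs.
offsets = bottom_face[nearest] - top_face
estimated_shift = np.median(offsets, axis=0)
print("estimated shift:", estimated_shift)  # approximately [1.8, -0.7]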

In the study, the researchers present a litany of examples of what the pipeline can do. The very first figure demonstrates that the imaging allows one to richly label a whole hemisphere and then zoom in from the wide scale of brainwide structures to the level of circuits, then individual cells, and then subcellular components, such as synapses. Other images and videos demonstrate how diverse the labeling can be, revealing long axonal connections and the abundance and shape of different cell types including not only neurons but also astrocytes and microglia.

Exploring Alzheimer’s

For years, Chung has collaborated with co-author Matthew Frosch, an Alzheimer’s researcher and director of the brain bank at Massachusetts General Hospital, to image and understand Alzheimer’s disease brains. With the new pipeline established they began an open-ended exploration, first noticing where within a slab of tissue they saw the greatest loss of neurons in the disease sample compared to the control. From there, they followed their curiosity — as the technology allowed them to do — ultimately producing a series of detailed investigations described in the paper.

“We didn’t lay out all these experiments in advance,” Chung says. “We just started by saying, ‘OK, let’s image this slab and see what we see.’ We identified brain regions with substantial neuronal loss, so let’s see what’s happening there. ‘Let’s dive deeper.’ So we used many different markers to characterize and see the relationships between pathogenic factors and different cell types.

“This pipeline allows us to have almost unlimited access to the tissue,” Chung says. “We can always go back and look at something new.”

They focused most of their analysis in the orbitofrontal cortex within each hemisphere. One of the many observations they made was that synapse loss was concentrated in areas where there was direct overlap with amyloid plaques. Outside of areas of plaques, the synapse density was as high in the brain with Alzheimer’s as in the one without the disease.

With just two samples, Chung says, the team is not offering any conclusions about the nature of Alzheimer’s disease, of course, but the point of the study is that the capability now exists to fully image and deeply analyze whole human brain hemispheres to enable exactly that kind of research.

Notably, the technology applies equally well to many other tissues in the body, not just brains.

“We envision that this scalable technology platform will advance our understanding of the human organ functions and disease mechanisms to spur development of new therapies,” the authors conclude.

In addition to Park, Wang, Guan, Chung, and Frosch, the paper’s other authors are Lars A. Gjesteby, Dylan Pollack, Lee Kamentsky, Nicholas B. Evans, Jeff Stirman, Xinyi Gu, Chuanxi Zhao, Slayton Marx, Minyoung E. Kim, Seo Woo Choi, Michael Snyder, David Chavez, Clover Su-Arcaro, Yuxuan Tian, Chang Sin Park, Qiangge Zhang, Dae Hee Yun, Mira Moukheiber, Guoping Feng, X. William Yang, C. Dirk Keene, Patrick R. Hof, Satrajit S. Ghosh, and Laura J. Brattain.

The main funding for the work came from the National Institutes of Health, The Picower Institute for Learning and Memory, The JPB Foundation, and the NCSOFT Cultural Foundation.

Navigating the Ethics of Digital Humans

With the emergence of any new technology, ethical challenges arise. The rise of digital humans is no exception.   Gartner predicts that by 2035, the digital human economy will become a $125-billion market that will continue to grow further. When deployed at such scale, the digital human…