The latest cyber kidnapping victim, U.S. exchange student – CyberTalk

EXECUTIVE SUMMARY:

In the U.S. state of Utah, police discovered a teenage Chinese exchange student alone in a freezing cold tent, after he had become the target in a “cyber kidnapping,” an elaborate online ransom scheme.

Online scammers had connected with 17-year-old Kai Zhuang, telling him that his family was in jeopardy. The only way to prevent grave harm, they claimed, was for him to comply with their demands.

At the same time, scammers had connected with Kai’s parents, informing them that Kai would remain in serious peril until they paid a ransom fee. Kai’s parents ultimately paid $80,000.

Cyber kidnapping

“The technology has reached a point where even loving parents who know their kids really well can be tricked,” said cyber security expert Joseph Steinberg.

In most cyber kidnapping cases, cyber criminals call or message a family or individual to deceive them into believing that a loved one has been kidnapped, even though the person is safe.

Victims have reported hearing screams on the phone, in their loved one’s voice, while the perpetrator claimed that the loved one was in dire straits – all in the interest of securing a monetary payment.

How it works

Effectively, all parties involved – the loved one and their family members – are manipulated into thinking that the other is in danger. In most cases, the loved one who’s ‘being held captive’ is not in any actual physical distress.

Cyber kidnappers will say anything to keep victims on the phone. In the event that a suspicious victim attempts to hang up or contact anyone, the scammers will make terrifying remarks that would spur the most level-headed of people to make rushed decisions around payment.

How common is it?

According to the U.S. Federal Bureau of Investigation, this is not an isolated incident. Other foreign exchange students, particularly of Chinese origin, have been targeted in similar terror-and-ransom scams in the U.S., Canada, and Australia.

At present, no data exists around the frequency of virtual kidnappings. These events largely go unreported and unaccounted for, according to experts.

Kai Zhuang’s cyber kidnapping case

When Kai was found on a mountainside in a tent, he appeared to be relieved to see the police.
He requested to speak with his family over the phone to ensure their safety and asked for a warm cheeseburger; both requests were granted on the way back to the police station.

“I want foreign exchange students to know they can trust police to protect them and to work with police to ensure their safety as well as their family’s safety abroad,” noted Riverdale Police Chief Casey Warren.

Investigators are working with American authorities and the Chinese Embassy in order to find the kidnappers. The Chinese Embassy in the U.S. has since warned its citizens to watch out for cyber kidnappings and other forms of online fraud.

Cyber kidnapping prevention

Anyone can fall victim to a virtual kidnapping scam. Nonetheless, people can take steps to protect themselves.

  • First and foremost, people need to know about the problem.
  • People also need to remain aware of what personal information about them is available on the internet, as cyber kidnappers typically collect details about victims before making threats.
  • Those who receive a suspicious call or message that could indicate a cyber kidnapping should try to independently reach the loved one in order to verify their location.
  • Ahead of traveling, set up specific keywords or phrases to use in emergency situations involving family; a code that cyber kidnappers would not be aware of.
  • Telecommunications companies can play a role in preventing these types of crimes by making improvements in call authentication and in tracing the source of calls.

Related resources

Hogwarts Legacy Has Sold More Than 22 Million Copies In Less Than A Year

Since its release on PlayStation 5, Xbox Series X/S, and PC last February, developer Avalanche’s Hogwarts Legacy has continued to sell well, holding the top position among best-selling games in the U.S. almost every month. Thanks to a new interview at Variety with Warner Bros. Interactive Entertainment president David Haddad, we now know Hogwarts Legacy has surpassed 22 million copies sold, and it did so in less than a year.

Haddad told Variety that Hogwarts Legacy crossed the 22 million mark at the end of 2023, picking up 2 million alone in December. He says it’s the best-selling game of the year “in the entire industry worldwide.” Typically, that title goes to a Call of Duty or other triple-A release. 

“But it’s not just the unit sold that I’m so proud of, it’s just that it delighted the fans so much,” Haddad tells the publication. “It brought Harry Potter to life in a new way for gamers where they could be themselves in this world, in this story. And that’s what the team at Avalanche set out to do when they were developing the game and I think that’s really why it resonated so well and remains the best-selling game of the year in the entire industry worldwide.” 

Hogwarts Legacy was also the top trending search for games on Google in 2023. While many players were likely searching for new trailers, gameplay, and guides for the game, there’s no doubt some of the Google searching for the game stemmed from Harry Potter author J.K. Rowling’s inflammatory statements about the trans community. Publisher Warner Bros. Interactive says Rowling was not involved with the creation of Hogwarts Legacy, which was developed by Avalanche, but given it’s based on the Harry Potter franchise she created, she undoubtedly made money off of it.

Elsewhere in Variety’s report, Haddad touches on something Warner Bros. Discovery CEO David Zaslav told shareholders during the company’s Q3 earnings call in November of last year. During it, Zaslav said the company’s gaming focus is on transforming its biggest franchises from largely console and PC with 3-to-4-year release schedules to include “more always-on gameplay through live services, multi-platform, and free-to-play extensions with the goal to have more players spending more time on more platforms,” according to Variety.

Haddad told Variety that to achieve that, the company needs to accumulate “live games launched with new content, keeping our mobile games services vibrant, and large launching new content.” Notably, Hogwarts Legacy is not a live-service game, nor does it feature any multiplayer components. It is a single-player game, and the best-selling game of 2023, according to its publisher.

However, Warner Bros. will soon dive into the world of live-service games when Batman Arkham trilogy developer Rocksteady releases its multiplayer third-person shooter Suicide Squad: Kill The Justice League on February 2. 

For more about Hogwarts Legacy and Warner Bros.’s wider gaming plans, be sure to read Variety’s full report here.


Were you one of the 22 million people who bought Hogwarts Legacy last year? Let us know what you think of the game in the comments below!

The Making Of Final Fantasy VII Remake

The ground beneath Square Enix’s Tetsuya Nomura’s feet trembled. In the time since he served as character designer and visual director on Final Fantasy VII, his legend has grown substantially. In addition to working on nearly every acclaimed Final Fantasy game since, Nomura also helped create the Kingdom Hearts series and has become a figurehead and luminary within the stacked ranks of Square Enix’s stable of developers. But this 2015 trip to Los Angeles, California, was different.

PlayStation’s E3 2015 livestream had just revealed a teaser trailer featuring the iconic Final Fantasy VII protagonist, Cloud Strife, walking through Midgar in glorious, modern, HD graphics. The dream of so many – a remake of the classic RPG – was finally realized. The fans weren’t the only ones feeling the weight of the moment, though, and it was no longer just the ground that was shaking; it was Nomura’s entire body.

“There were no staff members around, so I was kind of just off to the side, standing there alone,” Nomura says. “When I heard the cheers from the crowd and the passion, I became overwhelmed and I started shivering. I was walking like a fawn, just overwhelmed by the intensity of the crowd. I thought, ‘This has become such a big deal,’ and I wanted to cry.”

Meanwhile, series producer Yoshinori Kitase was at his home in Tokyo watching it on YouTube. “It still comes up on my ‘Videos You Should Watch,’” he says with a laugh. “Someone should have taken a video of you, like a reaction video, and uploaded it to YouTube!”

“I don’t think we had that culture of reaction vids back then,” Nomura says. “If I knew, I would have taken it, but I might have been shaking mid-way through!”

The road to this moment was long and arduous but something Nomura had dreamt of for years. Operating as a team of one, Nomura had spent part of the 2000s imagining what a modern remake of Final Fantasy VII could look like. Unfortunately, not much progress was made since the rest of the team members were tied up with other projects.

Around this time, fans started clamoring for a modernized remake of Final Fantasy VII, and the developers began hearing about it from media members. Kitase, who has worked at Square Enix since 1990, serving as director on beloved games like Final Fantasy VI, Chrono Trigger, the original Final Fantasy VII, and Final Fantasy X, was inundated with questions during a series of 2009 interviews.

“We were on the U.S. media tour for Final Fantasy XIII, and we took on a bunch of interviews, and we got a ton of questions from reporters asking, ‘When are we going to make a Final Fantasy VII remake?'” Kitase recalls. “Just hearing that so many times, I did think that we would do it one day, that’s for certain.”

Kitase returned to Tokyo and approached Nomura about making it a reality. As two of the creators of the original Final Fantasy VII, they saw the writing on the wall; fan and media demand was at a fever pitch, and Square Enix was beginning to embrace the idea of modern remakes for classic games more than ever before. They knew they had to act.

“Within Square Enix, gradually, remakes were being made, and these ideas for remakes were coming up in other departments,” Nomura says. “If we weren’t going to do Final Fantasy VII, others were going to do it, so we had to rise up and do it! We had the sense that we had to guard Final Fantasy VII and have to be the ones taking this on, or someone else is going to do it. I thought it may be a bit troublesome if other teams without us took on the project.”

Final Fantasy VII

Nomura and Kitase are a pair of legends within the Final Fantasy and Square Enix fandom, but they needed help to make it a reality. To create the team, the duo tapped into Square Enix Creative Business Unit I, the group historically responsible for many of the most beloved Final Fantasy titles. Kazushige Nojima, who joined Square Enix in 1994, working on games like Final Fantasy VII, VIII, and X, as well as the Kingdom Hearts series, and Motomu Toriyama, who joined Square Enix in 1995 and worked on the original Final Fantasy VII in addition to Final Fantasy X, XIII, and more, signed on to co-write the remake.

“I had always hoped to be a part of the title if and when a remake was to be made,” Toriyama says. “I was very happy when hearing the news [that we were making one].” 

But the development team behind this project couldn’t just be members of the original dev team; most had left the company or were working on other projects. “I would say the majority of the dev staff and production members are those who were players of the original, not creators,” Toriyama says.

Motomu Toriyama and Teruki Endo

Two of those developers who started as fans of the Final Fantasy series before joining Square Enix are Naoki Hamaguchi and Teruki Endo. Hamaguchi joined Square Enix in 2003, working on titles like Final Fantasy XII and the XIII trilogy. After serving as project manager on the mobile title Mobius Final Fantasy, he joined the Remake team as a co-director. Endo got his start in the late 2000s at Capcom, working primarily on the Monster Hunter series, but when he heard about a remake for Final Fantasy VII, as a fan of the original, he couldn’t resist joining the team as battle director.

“I was working for another gaming company when I heard they were looking for members to be involved on the battle side of creating this game and felt like this was a great opportunity in which I could utilize the skills that I had gained thus far working in the industry,” Endo recalls.

With the core team assembled, Final Fantasy VII Remake was underway.

Naoki Hamaguchi

When a game is as beloved as Final Fantasy VII, modernizing it without alienating fans of the original can be a tricky proposition; if you keep things too close to the original, you don’t keep up with the latest trends and squander the opportunity to create something distinct. Conversely, if you stray too far from the source material, you risk alienating those who made Final Fantasy VII so famous in the first place.

According to Toriyama, the members of the team who experienced VII as fans, like Hamaguchi and Endo, are more protective of the source material than those who worked on the original title. Kitase worried those younger staff members would be too loyal to the original title, but his concerns eased once the team started working together. “This concern was all for naught because this was clearly not true,” Kitase says. “We were able to work together very well and realize all of our visions and a game that can be accepted and enjoyed by contemporary users, so that was wonderful.”

“The top consideration, I believe, is that for both players who may not know Final Fantasy VII and those who do know or have played it, for both of these types of users to be able to play [Remake] and enjoy it,” Nomura adds.

The team worked together to balance the old with the new, to create something that definitively retells the story of Final Fantasy VII with modern conventions while not going too far in either direction. “For me, it really comes down to considering what it was that the players enjoyed and loved in the original title,” Endo says. “Of course, we expect a variance in each player’s depth and span of what they enjoy and the things that they love, but at the end of the day, I do have to trust my instincts and thoughts on what I loved and enjoyed playing the game.”

Teruki Endo

For Endo’s part – the battle system – he opted to blend action with the more traditional Active Time Battle (ATB) mechanics from the original game, in which characters can act once a meter fills. The result appeals to both new and longtime players. “Seeing that the Final Fantasy series has a strong focus on its characters, I believe the action enhances this and lets the players be further immersed into the characters they play,” Endo says. “Along with the strategic battles that I believe are key to Final Fantasy VII, I wanted to see how best I could mix these two elements of the command and strategy-based battle with the action that allows for instant immersion.” 
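
For readers unfamiliar with ATB, the core timing rule Endo describes can be reduced to a simple gauge: it fills over time during real-time action, and only a full gauge unlocks a command. The Python snippet below is a generic, hypothetical illustration of that rule, not Square Enix’s actual implementation; the class and method names are assumptions for clarity.

```python
# Generic ATB-style gauge: real-time action continues, but commands
# (spells, abilities, items) only unlock once the gauge fills.
class ATBGauge:
    def __init__(self, fill_rate: float = 0.25):
        self.charge = 0.0           # 0.0 (empty) to 1.0 (full)
        self.fill_rate = fill_rate  # charge gained per second

    def tick(self, dt: float) -> None:
        self.charge = min(1.0, self.charge + self.fill_rate * dt)

    def can_issue_command(self) -> bool:
        return self.charge >= 1.0

    def spend(self) -> None:
        if self.can_issue_command():
            self.charge = 0.0

gauge = ATBGauge()
for _ in range(5):  # simulate five one-second ticks of real-time combat
    gauge.tick(1.0)
    if gauge.can_issue_command():
        print("Command menu available")
        gauge.spend()
```

The point of the blend Endo describes is that the player keeps acting in real time while this gauge fills, so strategy (choosing the command) and action (moment-to-moment play) coexist.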

Though Endo wanted to introduce action, his desire to balance it with the traditional ATB elements struck a chord with Nomura. “I do have this idea of how Final Fantasy battles should be and should feel,” Nomura says. “We want to still keep this strategy element, in which the player will consider the elemental weaknesses of enemies during battle while using these action moves and being engaged, intact. That was always my core belief in how we should approach Final Fantasy battles. […] I thought this was truly vital to this game; I didn’t want it to be a game where it’s a reflex-type action or reflex-based battle; we wanted to combine all of these elements.”

For Hamaguchi, it was more about removing barriers that exist for players when trying to feel as though they’re a part of the world. “I do believe that, not just for RPGs, but for other fantasy-type titles as well, the trend will be such that it’ll be moving towards incorporating more action elements and that will be the trajectory of games overall,” Hamaguchi says. “It’s very much favored by contemporary players in that it creates a sense of immersion because players are able to receive this immediate response to the input from the controls. There’s this immediacy that brings about further immersion into the gameplay. Instead of viewing this fantasy world from the outside perspective as a player, you’re able to be fully immersed as if you are inside that world.”

“In that sense, I believe the Final Fantasy VII Remake series has this wonderful balance of all these elements,” Nomura adds. “It’s not quite completely action-leaning or action-focused, but it very skillfully combines these elements into a balanced and enjoyable, immersive experience.”

The battle system of Final Fantasy VII Remake garnered acclaim, but it’s not the only piece of the title that changed. The visual leap forward is immediately recognizable, and the story received numerous upgrades. Instead of retelling the entire Final Fantasy VII arc in one game, Square Enix opted to release the remake in the form of three games. The first title, Final Fantasy VII Remake, retold the party’s initial push through Midgar – a section of the original that takes about 6 hours to complete – across a 30 to 40-hour title. 

This decision came from Nomura, who identified early on that fully capturing the events of Final Fantasy VII in a modern way and with enough depth to do the story justice wouldn’t be possible in its original one-game form, not to mention the drastically different format the game takes following the party’s emergence from Midgar. “To recreate the world of Final Fantasy VII as it was in the original today in its full volume, the only way for us to realize this was to divide the titles or else it simply was not possible,” Nomura says. “We had to divide it, or we can’t do it right.”

Final Fantasy VII Remake’s extended stay in Midgar fully fleshed out characters previously relegated to minor roles like Biggs, Wedge, and Jessie and further developed the personalities and relationships of the main characters like Cloud, Tifa, Barret, and Aerith. “When the remake project was first decided, at that point, we had already felt that if we are going to take on this series, it’s imperative that we depict the characters much deeper,” Nomura says.

Final Fantasy VII Remake was released on PlayStation 4 on April 10, 2020, earning an 87 out of 100 on reviews aggregator Metacritic, including an 8.75 out of 10 from Game Informer. And now, with the quality bar set high and fan expectations even higher, that same team sets out to push the well-known story forward as Cloud and his friends step out of Midgar and venture into a massive world full of adventure and intrigue in the second act of the Remake series, Final Fantasy VII Rebirth.

Final Fantasy VII Rebirth arrives on PlayStation 5 on February 29. To learn more about Final Fantasy VII Rebirth, visit our exclusive coverage hub through the banner below.


Parts of this article originally appeared in Issue 362 of Game Informer.

Is AI the Future of Green Energy?

Green energy is essential in the fight against climate change. The world needs to use less power and switch to less harmful sources, but that’s more complicated than it initially seems. AI could prove to be the missing part of the puzzle. Experts have identified over…

Writesonic Review: Can AI Get My Article to #1 on Google?

As a freelance writer and SEO specialist, I’ve been fascinated with AI writing generators. After testing and reviewing multiple, I’ve seen first-hand how much they can streamline the writing process and improve content quality. An AI writing generator that I’ve recently come across is Writesonic. Some…

Multiple AI models help robots execute complex plans more transparently

Your daily to-do list is likely pretty straightforward: wash the dishes, buy groceries, and other minutiae. It’s unlikely you wrote out “pick up the first dirty dish,” or “wash that plate with a sponge,” because each of these miniature steps within the chore feels intuitive. While we can routinely complete each step without much thought, a robot requires a complex plan that involves more detailed outlines.

MIT’s Improbable AI Lab, a group within the Computer Science and Artificial Intelligence Laboratory (CSAIL), has offered these machines a helping hand with a new multimodal framework: Compositional Foundation Models for Hierarchical Planning (HiP), which develops detailed, feasible plans with the expertise of three different foundation models. Like OpenAI’s GPT-4, the foundation model that ChatGPT and Bing Chat were built upon, these foundation models are trained on massive quantities of data for applications like generating images, translating text, and robotics.

Unlike RT2 and other multimodal models that are trained on paired vision, language, and action data, HiP uses three different foundation models, each trained on a different data modality. Each foundation model captures a different part of the decision-making process, and the models then work together when it’s time to make decisions. HiP removes the need for access to paired vision, language, and action data, which is difficult to obtain. HiP also makes the reasoning process more transparent.

What’s considered a daily chore for a human can be a robot’s “long-horizon goal” — an overarching objective that involves completing many smaller steps first — requiring sufficient data to plan, understand, and execute objectives. While computer vision researchers have attempted to build monolithic foundation models for this problem, pairing language, visual, and action data is expensive. Instead, HiP represents a different, multimodal recipe: a trio that cheaply incorporates linguistic, physical, and environmental intelligence into a robot.
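
To make the compositional idea concrete, here is a minimal, hypothetical Python sketch: three independently trained components (a language reasoner, a video-based world model, and an action model) are wired together behind a single planning interface, so no paired vision-language-action dataset is ever required. The class, function names, and toy stand-ins below are illustrative assumptions, not the authors’ actual API.

```python
from dataclasses import dataclass
from typing import Callable, List

# Each component is an independently trained model, represented here as a
# plain callable so the sketch stays self-contained.
@dataclass
class CompositionalPlanner:
    language_reasoner: Callable[[str], List[str]]    # goal -> sub-goals
    video_world_model: Callable[[str], List[str]]    # sub-goal -> predicted observation frames
    action_model: Callable[[List[str]], List[str]]   # observation frames -> low-level actions

    def plan(self, goal: str) -> List[str]:
        actions: List[str] = []
        for sub_goal in self.language_reasoner(goal):
            observations = self.video_world_model(sub_goal)
            actions.extend(self.action_model(observations))
        return actions

# Toy stand-ins for the three foundation models.
planner = CompositionalPlanner(
    language_reasoner=lambda goal: [f"step 1 of {goal}", f"step 2 of {goal}"],
    video_world_model=lambda sub_goal: [f"frame showing {sub_goal}"],
    action_model=lambda frames: [f"execute action for {frame}" for frame in frames],
)

print(planner.plan("make a cup of tea"))
```

Because each component only ever consumes its own modality, each can be trained (or simply reused off the shelf) on its own data, which is the cost advantage the article describes.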

“Foundation models do not have to be monolithic,” says NVIDIA AI researcher Jim Fan, who was not involved in the paper. “This work decomposes the complex task of embodied agent planning into three constituent models: a language reasoner, a visual world model, and an action planner. It makes a difficult decision-making problem more tractable and transparent.”

The team believes that their system could help these machines accomplish household chores, such as putting away a book or placing a bowl in the dishwasher. Additionally, HiP could assist with multistep construction and manufacturing tasks, like stacking and placing different materials in specific sequences.

Evaluating HiP

The CSAIL team tested HiP’s acuity on three manipulation tasks, outperforming comparable frameworks. The system reasoned by developing intelligent plans that adapt to new information.

First, the researchers requested that it stack different-colored blocks on each other and then place others nearby. The catch: Some of the correct colors weren’t present, so the robot had to place white blocks in a color bowl to paint them. HiP often adjusted to these changes accurately, especially compared to state-of-the-art task planning systems like Transformer BC and Action Diffuser, by adjusting its plans to stack and place each square as needed.

Another test: arranging objects such as candy and a hammer in a brown box while ignoring other items. Some of the objects it needed to move were dirty, so HiP adjusted its plans to place them in a cleaning box, and then into the brown container. In a third demonstration, the bot was able to ignore unnecessary objects to complete kitchen sub-goals such as opening a microwave, clearing a kettle out of the way, and turning on a light. Some of the prompted steps had already been completed, so the robot adapted by skipping those directions.

A three-pronged hierarchy

HiP’s three-pronged planning process operates as a hierarchy, with the ability to pre-train each of its components on different sets of data, including information outside of robotics. At the bottom of that order is a large language model (LLM), which starts to ideate by capturing all the symbolic information needed and developing an abstract task plan. Applying the common sense knowledge it finds on the internet, the model breaks its objective into sub-goals. For example, “making a cup of tea” turns into “filling a pot with water,” “boiling the pot,” and the subsequent actions required.
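
As a rough illustration of that bottom layer, the snippet below sketches how a long-horizon goal might be decomposed into sub-goals by prompting a language model. `query_llm` is a placeholder for whatever LLM interface is actually used, and the prompt and parsing are assumptions for illustration only, not HiP’s implementation.

```python
from typing import List

def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned decomposition here."""
    return "1. Fill a pot with water\n2. Boil the pot\n3. Steep the tea\n4. Pour into a cup"

def decompose_goal(goal: str) -> List[str]:
    prompt = (
        f"Break the task '{goal}' into an ordered list of short, concrete sub-goals, "
        "one per line, numbered."
    )
    response = query_llm(prompt)
    # Strip the leading "N." numbering from each line to get plain sub-goal strings.
    return [line.split(".", 1)[1].strip() for line in response.splitlines() if "." in line]

print(decompose_goal("making a cup of tea"))
# ['Fill a pot with water', 'Boil the pot', 'Steep the tea', 'Pour into a cup']
```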

“All we want to do is take existing pre-trained models and have them successfully interface with each other,” says Anurag Ajay, a PhD student in the MIT Department of Electrical Engineering and Computer Science (EECS) and a CSAIL affiliate. “Instead of pushing for one model to do everything, we combine multiple ones that leverage different modalities of internet data. When used in tandem, they help with robotic decision-making and can potentially aid with tasks in homes, factories, and construction sites.”

These models also need some form of “eyes” to understand the environment they’re operating in and correctly execute each sub-goal. The team used a large video diffusion model to augment the initial planning completed by the LLM, which collects geometric and physical information about the world from footage on the internet. In turn, the video model generates an observation trajectory plan, refining the LLM’s outline to incorporate new physical knowledge.

This process, known as iterative refinement, allows HiP to reason about its ideas, taking in feedback at each stage to generate a more practical outline. The flow of feedback is similar to writing an article, where an author may send their draft to an editor, and with those revisions incorporated in, the publisher reviews for any last changes and finalizes.
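
The feedback loop described here can be pictured as a simple propose-score-revise cycle. The sketch below is a schematic approximation under stated assumptions, not the paper’s algorithm: `propose_plan`, `physical_consistency_score`, and `revise_plan` stand in for the language model’s proposal, the video world model’s feedback signal, and the refinement step, respectively.

```python
from typing import List, Tuple

def iterative_refinement(
    propose_plan,                # () -> List[str]: initial plan from the language model
    physical_consistency_score,  # List[str] -> float: feedback from the video world model
    revise_plan,                 # (List[str], float) -> List[str]: incorporate the feedback
    max_rounds: int = 5,
    good_enough: float = 0.9,
) -> Tuple[List[str], float]:
    plan = propose_plan()
    score = physical_consistency_score(plan)
    for _ in range(max_rounds):
        if score >= good_enough:
            break
        plan = revise_plan(plan, score)
        score = physical_consistency_score(plan)
    return plan, score

# Toy usage: each revision appends a step and improves the consistency score.
plan, score = iterative_refinement(
    propose_plan=lambda: ["fill pot", "boil pot"],
    physical_consistency_score=lambda p: min(1.0, 0.4 + 0.2 * len(p)),
    revise_plan=lambda p, s: p + ["add refined step"],
)
print(plan, score)
```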

In this case, the top of the hierarchy is an egocentric action model, or a sequence of first-person images that infer which actions should take place based on its surroundings. During this stage, the observation plan from the video model is mapped over the space visible to the robot, helping the machine decide how to execute each task within the long-horizon goal. If a robot uses HiP to make tea, this means it will have mapped out exactly where the pot, sink, and other key visual elements are, and begin completing each sub-goal.
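
Rounding out the hierarchy, this last step can be thought of as translating consecutive predicted observations into motor commands. The sketch below assumes a hypothetical `action_between` model that infers the action linking two first-person frames; it mirrors the description above conceptually but is not the authors’ implementation.

```python
from typing import List

def action_between(frame_before: str, frame_after: str) -> str:
    """Placeholder egocentric action model: infer the action linking two observations."""
    return f"move so that '{frame_before}' becomes '{frame_after}'"

def actions_from_observation_plan(observation_plan: List[str]) -> List[str]:
    # Each consecutive pair of predicted frames yields one low-level action.
    return [
        action_between(before, after)
        for before, after in zip(observation_plan, observation_plan[1:])
    ]

frames = ["robot faces counter", "pot under tap", "pot on stove"]
print(actions_from_observation_plan(frames))
```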

Still, the multimodal work is limited by the lack of high-quality video foundation models. Once available, they could interface with HiP’s small-scale video models to further enhance visual sequence prediction and robot action generation. A higher-quality version would also reduce the current data requirements of the video models.

That being said, the CSAIL team’s approach only used a tiny bit of data overall. Moreover, HiP was cheap to train and demonstrated the potential of using readily available foundation models to complete long-horizon tasks. “What Anurag has demonstrated is proof-of-concept of how we can take models trained on separate tasks and data modalities and combine them into models for robotic planning. In the future, HiP could be augmented with pre-trained models that can process touch and sound to make better plans,” says senior author Pulkit Agrawal, MIT assistant professor in EECS and director of the Improbable AI Lab. The group is also considering applying HiP to solving real-world long-horizon tasks in robotics.

Ajay and Agrawal are lead authors on a paper describing the work. They are joined by MIT professors and CSAIL principal investigators Tommi Jaakkola, Joshua Tenenbaum, and Leslie Pack Kaelbling; CSAIL research affiliate and MIT-IBM AI Lab research manager Akash Srivastava; graduate students Seungwook Han and Yilun Du ’19; former postdoc Abhishek Gupta, who is now assistant professor at University of Washington; and former graduate student Shuang Li PhD ’23.

The team’s work was supported, in part, by the National Science Foundation, the U.S. Defense Advanced Research Projects Agency, the U.S. Army Research Office, the U.S. Office of Naval Research Multidisciplinary University Research Initiatives, and the MIT-IBM Watson AI Lab. Their findings were presented at the 2023 Conference on Neural Information Processing Systems (NeurIPS).

YoloLiv YoloBox Ultra Tutorial: Activating NDI – Videoguys

Learn how to activate NDI on your YoloBox Ultra with this comprehensive guide. Seamlessly integrate your device for enhanced live streaming and video production. Follow the step-by-step process, including payment options and submission details, to unlock the full potential of NDI on your YoloBox Ultra. The blog post titled “Activating NDI on YoloBox Ultra: A Step-by-Step Guide” by Meredith Jia for YoloLiv provides a comprehensive guide to unlocking the NDI (Network Device Interface) feature on the YoloBox Ultra device. Here’s a summary of the key points:

  1. Understanding the Activation Process:

    • To activate NDI on YoloBox Ultra, a license must be purchased from YoloLiv at a cost of $99.
    • Two payment options are available: PayPal transfer or bank transfer.
  2. Payment Options:

    • PayPal Transfer: Make a $99 payment to contact@yololiv.com through PayPal.
    • Bank Transfer: Choose the bank transfer option using the provided details.
  3. Submitting Details for Activation:

    • After payment, gather necessary information: a screenshot of the transaction and the YoloBox Ultra serial number.
    • Send an email to contact@yololiv.com with the collected information for validation.
  4. Verification and Activation:

    • The YoloLiv support team verifies the transaction and device serial number.
    • Upon validation, they activate the full version of NDI for the YoloBox Ultra.
  5. Enjoy NDI Functionality:

    • Once activated, the YoloBox Ultra gains enhanced capabilities, allowing seamless integration into various video production setups.
  6. Conclusion:

    • Activating NDI on YoloBox Ultra is described as a straightforward process that enhances live streaming and video production capabilities.
    • By following the provided steps and submitting the required information, users can fully utilize the potential of their YoloBox Ultra.
  7. Additional Update:

    • At present, NDI support is exclusive to the YoloBox Ultra model.
    • There are plans to extend NDI support to future models like YoloBox Pro and YoloBox Mini, indicating potential expanded functionalities across the YoloBox product line.
    • Exciting developments are anticipated, promising more impressive capabilities in future YoloBox iterations.

In summary, the blog post guides users through the activation process, from payment to verification, enabling them to leverage the NDI feature on their YoloBox Ultra for enhanced video production and streaming experiences.

Read the full blog post by Meredith Jia for YoloLiv HERE

Co-creating climate futures with real-time data and spatial storytelling

Virtual story worlds and game engines aren’t just for video games anymore. They are now tools for scientists and storytellers to digitally twin existing physical spaces and then turn them into vessels to dream up speculative climate stories and build collective designs of the future. That’s the theory and practice behind the MIT WORLDING initiative.

Twice this year, WORLDING matched world-class climate story teams working in XR (extended reality) with relevant labs and researchers across MIT. One global group returned for a virtual gathering online in partnership with Unity for Humanity, while another met for one weekend in person, hosted at the MIT Media Lab.

“We are witnessing the birth of an emergent field that fuses climate science, urban planning, real-time 3D engines, nonfiction storytelling, and speculative fiction, and it is all fueled by the urgency of the climate crises,” says Katerina Cizek, lead designer of the WORLDING initiative at the Co-Creation Studio of MIT Open Documentary Lab. “Interdisciplinary teams are forming and blossoming around the planet to collectively imagine and tell stories of healthy, livable worlds in virtual 3D spaces and then finding direct ways to translate that back to earth, literally.”

At this year’s virtual version of WORLDING, five multidisciplinary teams were selected from an open call. In a week-long series of research and development gatherings, the teams met with MIT scientists, staff, fellows, students, and graduates, as well as other leading figures in the field. Guests ranged from curators at film festivals such as Sundance and Venice, climate policy specialists, and award-winning media creators to software engineers and renowned Earth and atmosphere scientists. The teams heard from MIT scholars in diverse domains, including geomorphology, urban planning as acts of democracy, and climate researchers at MIT Media Lab.

Mapping climate data

“We are measuring the Earth’s environment in increasingly data-driven ways. Hundreds of terabytes of data are taken every day about our planet in order to study the Earth as a holistic system, so we can address key questions about global climate change,” explains Rachel Connolly, an MIT Media Lab research scientist focused in the “Future Worlds” research theme, in a talk to the group. “Why is this important for your work and storytelling in general? Having the capacity to understand and leverage this data is critical for those who wish to design for and successfully operate in the dynamic Earth environment.”

Making sense of billions of data points was a key theme during this year’s sessions. In another talk, Taylor Perron, an MIT professor of Earth, atmospheric and planetary sciences, shared how his team uses computational modeling combined with many other scientific processes to better understand how geology, climate, and life intertwine to shape the surfaces of Earth and other planets. His work resonated with one WORLDING team in particular, one aiming to digitally reconstruct the pre-Hispanic Lake Texcoco — where current day Mexico City is now situated — as a way to contrast and examine the region’s current water crisis.

Democratizing the future

While WORLDING approaches rely on rigorous science and the interrogation of large datasets, they are also founded on democratizing community-led approaches.

MIT Department of Urban Studies and Planning graduate Lafayette Cruise MCP ’19 met with the teams to discuss how he moved his own practice as a trained urban planner to include a futurist component involving participatory methods. “I felt we were asking the same limited questions in regards to the future we were wanting to produce. We’re very limited, very constrained, as to whose values and comforts are being centered. There are so many possibilities for how the future could be.”

Scaling to reach billions

This work scales from the very local to massive global populations. Climate policymakers are concerned with reaching billions of people in the line of fire. “We have a goal to reach 1 billion people with climate resilience solutions,” says Nidhi Upadhyaya, deputy director at Atlantic Council’s Adrienne Arsht-Rockefeller Foundation Resilience Center. To get that reach, Upadhyaya is turning to games. “There are 3.3 billion-plus people playing video games across the world. Half of these players are women. This industry is worth $300 billion. Africa is currently among the fastest-growing gaming markets in the world, and 55 percent of the global players are in the Asia Pacific region.” She reminded the group that this conversation is about policy and how formats of mass communication can be used for policymaking, bringing about change, changing behavior, and creating empathy within audiences.

Socially engaged game development is also connected to education at Unity Technologies, a game engine company. “We brought together our education and social impact work because we really see it as a critical flywheel for our business,” said Jessica Lindl, vice president and global head of social impact/education at Unity Technologies, in the opening talk of WORLDING. “We upscale about 900,000 students, in university and high school programs around the world, and about 800,000 adults who are actively learning and reskilling and upskilling in Unity. Ultimately resulting in our mission of the ‘world is a better place with more creators in it,’ millions of creators who reach billions of consumers — telling the world stories, and fostering a more inclusive, sustainable, and equitable world.”

Access to these technologies is key, especially the hardware. “Accessibility has been missing in XR,” explains Reginé Gilbert, who studies and teaches accessibility and disability in user experience design at New York University. “XR is being used in artificial intelligence, assistive technology, business, retail, communications, education, empathy, entertainment, recreation, events, gaming, health, rehabilitation meetings, navigation, therapy, training, video programming, virtual assistance wayfinding, and so many other uses. This is a fun fact for folks: 97.8 percent of the world hasn’t tried VR [virtual reality] yet, actually.”

Meanwhile, new hardware is on its way. The WORLDING group got early insights into the highly anticipated Apple Vision Pro headset, which promises to integrate many forms of XR and personal computing in one device. “They’re really pushing this kind of pass-through or mixed reality,” said Dan Miller, a Unity engineer on the poly spatial team, collaborating with Apple, who described the experience of the device as “You are viewing the real world. You’re pulling up windows, you’re interacting with content. It’s a kind of spatial computing device where you have multiple apps open, whether it’s your email client next to your messaging client with a 3D game in the middle. You’re interacting with all these things in the same space and at different times.”

“WORLDING combines our passion for social-impact storytelling and incredible innovative storytelling,” said Paisley Smith of the Unity for Humanity Program at Unity Technologies. She added, “This is an opportunity for creators to incubate their game-changing projects and connect with experts across climate, story, and technology.”

Meeting at MIT

In a new in-person iteration of WORLDING this year, organizers collaborated closely with Connolly at the MIT Media Lab to co-design an in-person weekend conference Oct. 25 – Nov. 7 with 45 scholars and professionals who visualize climate data at NASA, the National Oceanic and Atmospheric Administration, planetariums, and museums across the United States.

A participant said of the event, “An incredible workshop that had a profound effect on my understanding of climate data storytelling and how to combine different components together for a more [holistic] solution.”

“With this gathering under our new Future Worlds banner,” says Dava Newman, director of the MIT Media Lab and Apollo Program Professor of Astronautics chair, “the Media Lab seeks to affect human behavior and help societies everywhere to improve life here on Earth and in worlds beyond, so that all — the sentient, natural, and cosmic — worlds may flourish.” 

“WORLDING’s virtual-only component has been our biggest strength because it has enabled a true, international cohort to gather, build, and create together. But this year, an in-person version showed broader opportunities that spatial interactivity generates — informal Q&As, physical worksheets, and larger-scale ideation, all leading to deeper trust-building,” says WORLDING producer Srushti Kamat SM ’23.

The future and potential of WORLDING lies in the ongoing dialogue between the virtual and physical, both in the work itself and in the format of the workshops.