The Friday Roundup – Resolve VFX Tips & Using Markers

The #1 PROBLEM with VFX – DaVinci Resolve Fusion Tips

Regular readers will know that I usually include something each week from Casey Faris somehow related to the subject of DaVinci Resolve. What you may have also noticed is that over the past few months Casey…

A creation story told through immersive technology

In the beginning, as one version of the Haudenosaunee creation story has it, there was only water and sky. According to oral tradition, when the Sky Woman became pregnant, she dropped through a hole in the clouds. Many animals guided her descent, and she eventually found a place on the turtle’s back. They worked together, with the aid of other water creatures, to lift the land from the depths of these primordial waters and create what we now know as our earth.

The new immersive experience, “Ne:Kahwistará:ken Kanónhsa’kówa í:se Onkwehonwe,” is a vivid retelling of this creation story by multimedia artist Jackson 2bears, also known as Tékeniyáhsen Ohkwá:ri (Kanien’kehà:ka), the 2022–24 Ida Ely Rubin Artist in Residence at the MIT Center for Art, Science and Technology. “A lot of what drives my work is finding new ways to keep Haudenosaunee teachings and stories alive in our communities, finding new ways to tell them, but also helping with the transmission and transformation of those stories as they are for us, a living part of our cultural practice,” he says.

 

[Video: An Immersive Multimedia Experience of the Haudenosaunee Creation Story by Jackson 2bears]

A virtual recreation of the traditional longhouse

2bears was first inspired to create a virtual reality version of a longhouse, a traditional Haudenosaunee structure, in collaboration with Thru the RedDoor, an Indigenous-owned media company in Six Nations of the Grand River, the community 2bears calls home. The longhouse is not only a “functional dwelling,” says 2bears, but an important spiritual and cultural center where creation myths are shared. “While we were developing the project, we were told by one of our knowledge keepers in the community that longhouses aren’t structures, they’re not the materials they’re made out of,” 2bears recalls. “They’re about the people, the Haudenosaunee people. And it’s about our creative cultural practices in that space that make it a sacred place.”

The virtual recreation of the longhouse connects storytelling to the physical landscape, while also offering a shared space for community members to gather. In the Haudenosaunee worldview, says 2bears, “stories are both durational, but they’re also dimensional.” With “Ne:Kahwistará:ken Kanónhsa’kówa í:se Onkwehonwe,” the longhouse was brought to life with drumming, dancing, knowledge-sharing, and storytelling. The immersive experience was designed to be communal. “We wanted to develop a story that we could work on with a bunch of other people rather than just having a story writer or director,” 2bears says. “We didn’t want to do headsets. We wanted to do something where we could be together, which is part of the longhouse mentality.”

The power of collaboration

2bears produced the project with the support of Co-Creation Studio at MIT’s Open Documentary Lab. “We think of co-creation as a dance, as a way of working that challenges the notion of the singular author, the single one point of view,” says documentarian Kat Cizek, the artistic director and co-founder of the studio, who began her work at MIT as a CAST visiting artist. “And Jackson does that. He does that within the community at Six Nations, but also with other communities and other Indigenous artists.”

In an individualist society that so often centers the idea of the singular author, 2bears’s practice offers a powerful example of what it means to work as a collective, says Cizek. “It’s very hard to operate, I think, in any discipline without some level of collaboration,” she says. “What’s different about co-creation for us is that people enter the room with no set agenda. You come into the room and you come with questions and curiosity about what you might make together.”

2bears at MIT

At first, 2bears thought his time at MIT would help with the technical side of his work. But over time, he discovered a rich community at MIT, a place to explore the larger philosophical questions relating to technology, Indigenous knowledge, and artificial intelligence. “We think very often about not only human intelligence, but animal intelligence and the spirit of the sky and the trees and the grass and the living earth,” says 2bears, “and I’m seeing that kind of reflected here at the school.”

In 2023, 2bears participated in the Co-Creation Studio Indigenous Immersive Incubator at MIT, a historic gathering of 10 Indigenous artists who toured MIT labs and met with Indigenous leaders from MIT and beyond. As part of the summit, he shared “Ne:Kahwistará:ken Kanónhsa’kówa í:se Onkwehonwe” as a work in progress. This spring, he presented the latest iteration of the work at MIT in smaller settings with groups of students, and in a large public lecture presented by CAST and the Art, Culture and Technology Program. His “experimental method of storytelling and communication really conveys the power of what it means to be a community as an Indigenous person, and the unique beauty of all of our people,” says Nicole McGaa, Oglala Lakota, co-president of MIT’s Native American Indigenous Association.

Storytelling in 360 degrees

2bears’s virtual recreation became even more important when, midway through the process, the longhouse in the community unexpectedly burned down. Because the team had already created 3D scans of the structure, the work could continue; with no building to project onto, they used ingenuity and creativity to pivot to the project’s current iteration.

The immersive experience was remarkable in its sheer size: 8-foot-tall images played on a canvas screen 34 feet in diameter. With video mapping using multiple projectors and 14-channel surround sound, the story of Sky Woman coming down to Turtle Island was given an immense form. It premiered at the 2RO MEDIA Festival, where it was met with an enthusiastic response from the Six Nations community. “It was so beautiful. You can look in any direction, and there was something happening,” says Gary Joseph, director of Thru the RedDoor. “It affects you in a way that you didn’t think you could be affected because you’re seeing the things that are sacred to you being expressed in a way that you’ve never imagined.”

In the future, 2bears hopes to make the installation more interactive, so participants can engage with the experience in their own ways, creating multiple versions of the creation story. “I’ve been thinking about it as creating a living installation,” he says. “It really was a project made in community, and I couldn’t have been happier about how it turned out. And I’m really excited about where I see this project going in the future.”

Technique improves the reasoning capabilities of large language models

Large language models like those that power ChatGPT have shown impressive performance on tasks like drafting legal briefs, analyzing the sentiment of customer reviews, or translating documents into different languages.

These machine-learning models typically use only natural language to process information and answer queries, which can make it difficult for them to perform tasks that require numerical or symbolic reasoning.

For instance, a large language model might be able to memorize and recite a list of recent U.S. presidents and their birthdays, but that same model could fail if asked the question “Which U.S. presidents elected after 1950 were born on a Wednesday?” (The answer is Jimmy Carter.)
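
The date arithmetic that trips up a purely textual model becomes trivial once it is expressed as code. As a minimal illustration (ours, not a program from the paper), Python’s standard library can check the weekday directly:

```python
from datetime import date

# Jimmy Carter was born October 1, 1924.
# weekday() numbers the days from Monday = 0, so Wednesday is 2.
print(date(1924, 10, 1).weekday() == 2)  # True
```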

Researchers from MIT and elsewhere have proposed a new technique that enables large language models to solve natural language, math and data analysis, and symbolic reasoning tasks by generating programs.

Their approach, called natural language embedded programs (NLEPs), involves prompting a language model to create and execute a Python program to solve a user’s query, and then output the solution as natural language.

They found that NLEPs enabled large language models to achieve higher accuracy on a wide range of reasoning tasks. The approach is also generalizable, which means one NLEP prompt can be reused for multiple tasks.

NLEPs also improve transparency, since a user could check the program to see exactly how the model reasoned about the query and fix the program if the model gave a wrong answer.

“We want AI to perform complex reasoning in a way that is transparent and trustworthy. There is still a long way to go, but we have shown that combining the capabilities of programming and natural language in large language models is a very good potential first step toward a future where people can fully understand and trust what is going on inside their AI model,” says Hongyin Luo PhD ’22, an MIT postdoc and co-lead author of a paper on NLEPs.

Luo is joined on the paper by co-lead authors Tianhua Zhang, a graduate student at the Chinese University of Hong Kong, and Jiaxin Ge, an undergraduate at Peking University; Yoon Kim, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); senior author James Glass, senior research scientist and head of the Spoken Language Systems Group in CSAIL; and others. The research will be presented at the Annual Conference of the North American Chapter of the Association for Computational Linguistics.

Problem-solving with programs

Many popular large language models work by predicting the next word, or token, given some natural language input. While models like GPT-4 can be used to write programs, they embed those programs within natural language, which can lead to errors in the program reasoning or results.

With NLEPs, the MIT researchers took the opposite approach. They prompt the model to generate a step-by-step program entirely in Python code, and then embed the necessary natural language inside the program.

An NLEP is a problem-solving template with four steps. First, the model calls the necessary packages, or functions, it will need to solve the task. Step two involves importing natural language representations of the knowledge the task requires (like a list of U.S. presidents’ birthdays). For step three, the model implements a function that calculates the answer. And for the final step, the model outputs the result as a line of natural language with an automatic data visualization, if needed.
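
Based on that description, an NLEP for the presidents question might look something like the sketch below. The four-step structure follows the template above, but the article doesn’t reproduce the authors’ generated programs, so the specifics here (names, data layout) are illustrative:

```python
# Step 1: call the necessary packages.
from datetime import date

# Step 2: natural-language knowledge as structured data -- U.S. presidents
# first elected after 1950, with their birth dates (Gerald Ford is omitted
# because he was never elected).
BIRTHDAYS = {
    "Dwight D. Eisenhower": date(1890, 10, 14),
    "John F. Kennedy": date(1917, 5, 29),
    "Lyndon B. Johnson": date(1908, 8, 27),
    "Richard Nixon": date(1913, 1, 9),
    "Jimmy Carter": date(1924, 10, 1),
    "Ronald Reagan": date(1911, 2, 6),
    "George H. W. Bush": date(1924, 6, 12),
    "Bill Clinton": date(1946, 8, 19),
    "George W. Bush": date(1946, 7, 6),
    "Barack Obama": date(1961, 8, 4),
    "Donald Trump": date(1946, 6, 14),
    "Joe Biden": date(1942, 11, 20),
}

# Step 3: implement a function that calculates the answer.
def born_on(weekday_index: int) -> list[str]:
    """Return the presidents born on the given weekday (Monday = 0)."""
    return [name for name, born in BIRTHDAYS.items()
            if born.weekday() == weekday_index]

# Step 4: output the result as a line of natural language.
answer = born_on(2)  # 2 = Wednesday
print("U.S. presidents elected after 1950 who were born on a Wednesday: "
      + ", ".join(answer))  # -> Jimmy Carter
```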

“It is like a digital calculator that always gives you the correct computation result as long as the program is correct,” Luo says.

The user can easily investigate the program and fix any errors in the code directly rather than needing to rerun the entire model to troubleshoot.

The approach also offers greater efficiency than some other methods. If a user has many similar questions, they can generate one core program and then replace certain variables without needing to run the model repeatedly.
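
Continuing the sketch above: answering a follow-up such as “born on a Friday?” only means swapping one variable, with no new model call:

```python
# Same core program, different variable -- no second query to the model.
print(born_on(4))  # 4 = Friday -> ['Barack Obama', 'Donald Trump', 'Joe Biden']
```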

To prompt the model to generate an NLEP, the researchers give it an overall instruction to write a Python program, two NLEP examples (one with math and one with natural language), and one test question.
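
The article doesn’t reproduce the authors’ actual prompt text, but the recipe it describes could be assembled along these lines (all strings below are placeholders, not the published prompt):

```python
INSTRUCTION = (
    "Write a complete Python program that solves the question step by step, "
    "then print the answer as a natural-language sentence."
)
MATH_EXAMPLE = "..."      # a worked NLEP for a math question
LANGUAGE_EXAMPLE = "..."  # a worked NLEP for a natural-language question

def build_nlep_prompt(test_question: str) -> str:
    # One fixed prompt: instruction + two worked examples + the new question.
    return "\n\n".join([INSTRUCTION, MATH_EXAMPLE, LANGUAGE_EXAMPLE, test_question])

prompt = build_nlep_prompt(
    "Which U.S. presidents elected after 1950 were born on a Wednesday?"
)
```

Because the instruction and the two worked examples stay fixed, the same prompt skeleton carries over to new tasks, which is the generalizability described earlier.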

“Usually, when people do this kind of few-shot prompting, they still have to design prompts for every task. We found that we can have one prompt for many tasks because it is not a prompt that teaches LLMs to solve one problem, but a prompt that teaches LLMs to solve many problems by writing a program,” says Luo.

“Having language models reason with code unlocks many opportunities for tool use, output validation, more structured understanding into model’s capabilities and way of thinking, and more,” says Leonid Karlinsky, principal scientist at the MIT-IBM Watson AI Lab.

“No magic here”

NLEPs achieved greater than 90 percent accuracy when prompting GPT-4 to solve a range of symbolic reasoning tasks, like tracking shuffled objects or playing a game of 24, as well as instruction-following and text classification tasks. The researchers found that NLEPs even exhibited 30 percent greater accuracy than task-specific prompting methods. The method also showed improvements over open-source LLMs. 

Along with boosting the accuracy of large language models, NLEPs could also improve data privacy. Since NLEP programs are run locally, sensitive user data do not need to be sent to a company like OpenAI or Google to be processed by a model.
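
The article doesn’t detail how the generated programs are executed; one simple local setup (a sketch with hypothetical helper names, not the authors’ tooling) writes the program to a temporary file and runs it with the local interpreter, so sensitive data never leaves the machine:

```python
import subprocess
import tempfile

def run_nlep_locally(program_text: str, timeout_s: int = 30) -> str:
    """Run a model-generated NLEP with the local Python interpreter.

    The program -- and any private data it loads -- stays on this machine;
    only the prompt was ever sent to the model provider.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program_text)
        path = f.name
    result = subprocess.run(
        ["python", path], capture_output=True, text=True, timeout=timeout_s
    )
    return result.stdout.strip()  # the natural-language answer
```

In practice you would want to sandbox this step, since the program being executed comes from a model.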

In addition, NLEPs can enable small language models to perform better without the need to retrain a model for a certain task, which can be a costly process.

“There is no magic here. We do not have a more expensive or fancy language model. All we do is use program generation instead of natural language generation, and we can make it perform significantly better,” Luo says.

However, an NLEP relies on the program generation capability of the model, so the technique does not work as well for smaller models that have been trained on limited datasets. In the future, the researchers plan to study methods that could make smaller language models generate more effective NLEPs. In addition, they want to investigate the impact of prompt variations on NLEPs to enhance the robustness of the model’s reasoning processes.

This research was supported, in part, by the Center for Perceptual and Interactive Intelligence of Hong Kong. 

Latest Metal Gear Solid Delta: Snake Eater Trailer Shows All Gameplay

The 2024 Summer Xbox Showcase offered our first substantial look at gameplay from the Metal Gear Solid 3: Snake Eater remake (renamed Metal Gear Solid Delta: Snake Eater), and the game looks great. The footage thankfully didn’t spoil any major story beats for Metal Gear Solid 3 newcomers, but we saw lots of sneaking, eating, and CQC.

[embedded content]

Other details, like whether series creator Hideo Kojima is involved in any way (unlikely) and a release date, are still unknown. But we’re still excited to commence the Virtuous Mission again, hopefully soon.

Playground Games’ Fable Gets 2025 Release Window In First Gameplay Trailer

Fable will launch sometime next year, developer Playground Games has revealed. It did so during today’s Xbox Games Showcase with a new trailer that features quick snippets of gameplay starring our heroine. 

In the trailer, we see our heroine waltzing through a large medieval town while a narrator discusses an old threat returning to the world. Our heroine wants to save Albion, and to do so, she needs to enlist the help of Humphry, who narrates the trailer.

While there appears to be gameplay in the trailer, there’s no UI – it’s mostly our heroine walking and running through various fantastical locations. We do see some glimpses of combat, but they look cinematic, so it’s hard to tell whether they represent actual combat gameplay.

Check it out for yourself in the Fable gameplay trailer below:

[embedded content]

“What does it mean to be a hero?” the trailer’s description reads. “Humphry, once one of the greatest, will be forced out of retirement when a mysterious figure from his past threatens Albion’s very existence.”

Microsoft revealed in 2020 that Fable was coming back, with a reveal trailer from Forza Horizon developer Playground Games. We then got our next look at the game with a more narrative-focused trailer starring Richard Ayoade last year, and today’s new trailer is our first look at the game since then.

Fable launches on Xbox Series X/S (and presumably PC) in 2025. 


What do you think of this new trailer? Let us know in the comments below!

Perfect Dark Gets First Impressive Gameplay Trailer

The long-lost reboot of Perfect Dark made its grand return during today’s Xbox Games Showcase. First announced at The Game Awards in 2020, the reboot resurfaced with a trailer consisting almost entirely of gameplay, showing off a reimagined vision of the Nintendo 64 classic.

The trailer, which runs over three minutes, shows off first-person gameplay as Joanna Dark airdrops into a lush sci-fi city. Pursuing a target, she utilizes gadgets that allow her to hack systems to open doors and eavesdrop on nearby conversations. Getting around the city involves first-person parkour as she leaps and swings across balconies. Joanna is eventually greeted by goons that she drops by blasting them with bullets, stunning them with electrical rounds, or humbling them with CQC melee takedowns. She also has a scanner that reveals enemies behind barriers.

[embedded content]

Perfect Dark is being developed by The Initiative and Crystal Dynamics. It is coming to Xbox Series X/S, but it still has no release window. 

Diablo IV’s Vessel Of Hatred Expansion Gets October Release Date In New Trailer

Blizzard Entertainment has released a new trailer for Diablo IV’s upcoming Vessel of Hatred expansion, and it looks gorgeously horrific. This trailer also reveals the expansion hits Xbox, PlayStation, and PC on October 8.

In the cinematic trailer, we get another look at the terrifying and dark world of Diablo IV by way of Vessel of Hatred’s opening cinematic. If you’re curious about what to expect, give it a watch; it’s likely the cinematic that will play at the start of the expansion when Vessel of Hatred hits PlayStation 5, Xbox Series X/S, PlayStation 4, Xbox One, and PC later this year.

Check it out for yourself in the Diablo IV: Vessel of Hatred cinematic below: 

[embedded content]

Pre-purchasing the expansion will give players instant rewards in the game. 

For more, read Game Informer’s Diablo IV review, and then read about how the game hit Xbox Game Pass back in March.


Are you going to check out Vessel of Hatred? Let us know in the comments below!

New Indiana Jones And The Great Circle Footage Shows Extended Cutscene And Teases Classic Boulder Run

The latest footage of MachineGames’ Indiana Jones game showed an extended cutscene, some gameplay clips, and a tease of the first film’s boulder run. In the footage, we see Jones and a companion coming across a battleship somehow perched atop an icy mountain. An extended cutscene follows, in which Jones comes face to face with a verbose Nazi who really wants the stone Jones discovered. A punch-out ensues, and the ship begins to fall before the footage switches to a series of gameplay snippets. Perhaps most exciting, however, was a tease of Indiana Jones outrunning a boulder like he did in the first film. Maybe The Great Circle will feature more classic moments from the films?

[embedded content]

Indiana Jones and the Great Circle is planned for release this year, but a specific date hasn’t been announced.

Get Another Look At Avowed’s Fantasy RPG Action In New Trailer

Microsoft has released a new trailer for developer Obsidian Entertainment’s Avowed, and though there’s still no release date for the game, this trailer confirms, once more, it’s still due out on Xbox Series X/S and PC sometime this year. Revealed during today’s Xbox Games Showcase, the new Avowed trailer highlights more of the game’s first-person fantasy RPG action, and it continues to look great. 

“Explore the Living Lands, a mysterious island filled with adventure and danger,” the trailer’s description reads. “As an envoy of Aedyr, you are sent to investigate rumors of a spreading plague with a secret that threatens to destroy everything. Can you save the island and your soul from the forces threatening to tear them apart?” 

Check it out for yourself in the Avowed gameplay trailer below:

[embedded content]

For more, watch the Avowed reveal trailer from 2020, and then check out the first look at Avowed’s combat here. After that, watch this deep-dive into the game’s combat from January. 

Avowed hits Xbox Series X/S and PC sometime this year. 


What do you think of this latest look at Avowed? Let us know in the comments below!

PlayStation Rolling Out Update To Allow Players To Join Discord Chat Directly From PS5

PlayStation is rolling out an update in the coming weeks that will finally allow players to join a Discord voice chat directly from their PS5 consoles. Currently, you can only use Discord on PS5 after first joining a voice chat via a PC or phone – a cumbersome extra step just to talk with friends while gaming.

The update will gradually roll out to PS5 players in the coming weeks, starting first with Japan and Asia, followed by Europe, Australia and New Zealand, and the Middle East, and finally, North and South America. You will need to update your console to the latest system software and link your PlayStation Network and Discord accounts in order to take advantage of this upcoming feature. 

[embedded content]

To start a Discord voice chat directly from a PS5, players need to select the Discord tab in the Control Center’s Game Base. Here, choose a Discord server or DM group you’d like to join, select your preferred voice channel, and you’re set. You will receive a notification when another Discord user calls you, too, allowing you to join immediately. 

This upcoming update will arrive more than two years after PlayStation gave players the ability to link their PSN accounts to Discord back in 2021. 


Are you excited about this update? Let us know in the comments below!