Now corporate boards have responsibility for cybersecurity, too

A new rule from the U.S. Securities and Exchange Commission (SEC), known as Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure, went into effect last fall. The rule requires public companies to disclose whether their boards of directors have members with cybersecurity expertise. Specifically, registrants must disclose whether the entire board, a specific board member, or a board committee is responsible for the oversight of cyber risks; the processes by which the board is informed about cyber risks, and the frequency of its discussions on this topic; and whether and how the board or specified board committee considers cyber risks as part of its business strategy, risk management, and financial oversight.

“In simplest terms, boards are on the hook for management, governance, and disclosure reporting,” explains Keri Pearlson, executive director of the Cybersecurity at MIT Sloan Research Consortium (CAMS). “While there is a lot of interpretation left to do, this we know for sure.”

Also well understood is the increasing likelihood of hacking events and the exponential cost to companies. Despite recent efforts to beef up cybersecurity by companies and governments worldwide, data breaches continue to increase year over year. Data show a 20 percent increase in data breaches from 2022 to 2023. Given the rapid proliferation of digital work and digitization in general, this should come as no surprise. As noted by the SEC in a fact sheet accompanying the recent rulings, “Cybersecurity risks have increased alongside the digitalization of registrants’ operations, the growth of remote work, the ability of criminals to monetize cybersecurity incidents, the use of digital payments, and the increasing reliance on third-party service providers for information technology services, including cloud computing technology.”

Cyber resilience: respond and recover

Pearlson’s ongoing research includes organizational, strategic, management, and leadership issues in cybersecurity. Her current focus is on the board’s role in cybersecurity. In a January 2023 MIT Sloan Management Review article, “An Action Plan for Cyber Resilience,” Pearlson and her co-authors suggest that board members must assume that cyberattacks are likely and exercise their oversight role to ensure that executives and managers have made the proper preparations to respond and recover.

“After all, if we assume every organization has a likely risk of being breached or attacked, and it’s not possible to be 100 percent protected from every attack, the most rational approach is to make sure the organization can recover with little or no damage to operations, to the financial bottom line, and to the organization’s reputation,” says Pearlson. To properly mitigate cyber risk, company leaders must have rock-solid plans in place to respond and recover quickly so that the company can continue to operate. They need to be cyber resilient.

Pearlson compares cyber resilience to Covid resilience practices. “We did things like stay home, wear masks, and get vaccines to both reduce the chances we got Covid, but also to reduce the consequences of getting sick.”

In other words, the current, protection-oriented approach most companies take to cyber is not enough. Protection only helps us mitigate issues we know about. But cyber criminals are innovative, and we don’t know what we don’t know. They seem to continually find new ways to break into our systems. Pearlson talks about the need to be resilient and how that kind of thinking comes from the top. “While boards have been getting reports on cybersecurity for a long time, these are typically once a year and not focused on the data that boards need to ensure their companies are resilient,” says Pearlson.

In their May 2023 Harvard Business Review article, “Boards Are Having the Wrong Conversations About Cybersecurity,” Pearlson and co-author Lucia Milică comment on the inadequacy of typical cybersecurity presentations during board meetings, which usually cover threats and the actions or technologies the company is implementing to protect against them. “To us, that is the wrong perspective for board oversight. We know we cannot be completely protected, no matter how much money we invest in technologies or programs to stop cyberattacks. While spending resources to protect our assets is critical, limiting discussions to protection sets us up for disaster.”

Instead, the conversation needs to focus on resilience. For example, instead of going into detail in a board meeting on how an organization is set up to respond to an incident, members must focus on what the biggest risk might be and how the organization is prepared to quickly recover from the damage should that situation happen.

Assessing risk using a Balanced Scorecard approach

To that end, Pearlson developed the Board Level Balanced Scorecard for Cyber Resilience (BSCR), designed to help boards and management have more productive discussions and understand the organization’s biggest risks to cyber resilience. Inspired by Kaplan and Norton’s Balanced Scorecard, a well-known tool for measuring organizational performance, Pearlson’s BSCR maps these key risk areas into four quadrants: performance, technology, organizational activities (such as people and compliance requirements), and supply chain. Each quadrant includes three components:

  • A quantitative progress indicator (red-yellow-green stoplight) based on the organization’s existing framework for cybersecurity controls such as CISA Cybersecurity Performance Goals (CPG), NIST SP 800-53, ISO 27001, CIS Controls or other controls assessments;
  • The biggest risk factor to organizational resilience according to C-level leaders; and
  • A qualitative action plan, where C-level leaders share their plan to address this risk.
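As a rough illustration of how the four quadrants and their three components fit together, the scorecard could be modeled as a simple data structure. This is a hypothetical sketch, not code from Pearlson's published tool; all class and field names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Quadrant:
    """One BSCR quadrant: a quantitative indicator plus two qualitative entries."""
    name: str
    control_status: str   # stoplight indicator from the controls framework: "red", "yellow", "green"
    biggest_risk: str     # top resilience risk, as identified by C-level leaders
    action_plan: str      # the leaders' plan to address that risk

@dataclass
class CyberResilienceScorecard:
    quadrants: dict = field(default_factory=dict)

    def add(self, q: Quadrant) -> None:
        self.quadrants[q.name] = q

    def red_quadrants(self) -> list:
        """Quadrants with a failing controls assessment: likely board discussion items."""
        return [q.name for q in self.quadrants.values() if q.control_status == "red"]

scorecard = CyberResilienceScorecard()
scorecard.add(Quadrant("performance", "green",
                       "ransomware halting billing", "tested recovery runbook"))
scorecard.add(Quadrant("supply chain", "red",
                       "dependence on a single logistics provider", "contract a second provider"))
print(scorecard.red_quadrants())  # ['supply chain']
```

A board packet built from such a structure would surface, per quadrant, both the control measurement and the narrative risk context that Pearlson argues is usually missing.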

The scorecard orients board reporting and conversation around the focus areas the organization should be concerned about in the event of a cyberattack: the technology, the financial side of the business, the organizational side, and the supply chain. While some companies may require other quadrants, the idea is that each focus area should have quantitative measures. By looking at these indicators together in a single framework, leaders can draw conclusions that might otherwise be missed.

“Having controls is nothing new, particularly for publicly traded companies that have a program for measuring and managing their cybersecurity investments,” says Pearlson. “However, there is a qualitative risk that often doesn’t come across in those measurements. While a typical control may measure how many people failed the phishing exercise, which is an important component of cybersecurity, the scorecard encourages businesses to also understand what is at risk and what is being done about it.” You can read more about the scorecard in this recent Harvard Business Review article.

Providing boards the information they need

The vast majority of leaders understand they are in jeopardy of an attack — they just don’t know how to talk about it or what to do about it. While it’s easiest for cyber executives to report on technology metrics or organizational metrics, this information does not help the board with their job of ensuring cyber resilience. “It’s the wrong information, at least initially, for conversations with the board,” says Pearlson.

Throughout Pearlson’s research, cybersecurity leaders, board directors, and other subject matter experts expressed their interest in key information about system assets, proactive capabilities, and how quickly they could recover. Some wanted to better understand what data types their company maintained, where they were maintained, the likelihood of compromise, and the impact that compromise would have on business operations. More than half of the participants wanted to know the financial dollar value involved with breaches or cyberattacks on their organization.

Pearlson’s BSCR helps to put these risks in the context of specific areas or processes that are core to the business and to address nuances, such as: Is this an immediate risk or a long-term one? Would a compromise in this area have a minimal impact or a huge one?

“A Balanced Scorecard for Cyber Resilience is the starting place for the discussions about how the business will continue operations when an event occurs,” says Pearlson. “It is not enough to invest only in protection today. We need to focus on business resilience to cyber vulnerabilities and threats. To do that, we need a balanced, qualitative assessment from the operational leaders who know.”

Pearlson teaches in two MIT Sloan Executive Education courses that help individuals and their organizations be more resilient. Designed for non-cyber professionals, Cybersecurity Leadership for Non-Technical Executives helps participants become knowledgeable in the discussion. Cybersecurity Governance for the Board of Directors assists board members, C-suite leaders, and other senior executives in quickly gathering essential language and perspectives for cybersecurity strategy and risk management to better carry out their oversight and leadership responsibilities.

Say goodbye to standard security for smartphones (you need this instead) – CyberTalk

By Zahier Madhar, Lead Security Engineer and Office of the CTO, Check Point.

Smartphones play a pivotal role in all of our lives. In a way, the smartphone today is a sort of diary, storing pictures, organizing them, and displaying them in a storytelling mode. It is much more than a piece of technology for making phone calls and sending text messages.

Many people have their smartphones in hand before they go to bed, getting the latest updates, finishing some work, or watching a movie or short videos. And once they wake up, the first activity of the day is picking up the smartphone, checking the time, and seeing whether they have missed any updates.

Smartphones: dual uses

That very same smartphone is often used for business purposes as well, such as attending or hosting meetings, handling email, and managing an agenda. This dual-purpose dimension also applies to a laptop (used for both private and business purposes). The biggest difference between the two is that a smartphone is always turned on and connected to the internet, while a laptop is not.

A second difference is that a laptop is managed and runs a threat prevention application. In contrast, smartphones are, in many cases, managed by the organization but not secured by a threat prevention application. Yet the smartphone contains the same mix of private and business-related data as the laptop. See the problem?

The bakery next door

In a previous Cyber Talk thought leadership article, I talked about the bakery next door. The bakery next door can use a smartphone to get the status of the ovens, but also to control the ovens. Imagine if the baker’s smartphone were hacked and hackers took control over the most important ovens. This would impact the bakery’s output immediately. This is not just a theory; this could happen. Although this example is small-scale, the implications are immense; lack of smartphone security can jeopardize a business.

History of mobile threats

Malware targeting smartphones is not new. The difference today compared with 20 years ago is that the smartphone holds sensitive data at both the private and the business level.

The question is why organizations fail to install mobile anti-malware on smartphones. I believe it comes down to awareness, costs, user experience, or the belief that it is not needed (especially among iOS users).

iOS cyber security

Despite popular belief, it is possible to install malware on iOS devices, and since the EU’s Digital Markets Act of 2022 came about, Apple has been forced to allow apps from outside the App Store on its phones.

But managing smartphones through unified endpoint management and mobile device management alone is not enough. The reason is simple: these tools contain no security controls for inspecting apps, network connections, and interfaces for malicious behavior.

Malware prevention

Let’s get back to the bakery next door. The baker uses his smartphone for daily business (baking bread-related tasks) and also for personal use. To avoid getting infected by malware, the baker does not install apps from outside the App Store, does not scan QR codes, and does not connect to public Wi-Fi.

As with his laptop, he makes sure that the smartphone and his apps are always updated with the latest software releases. Still, this is not enough. The baker won’t successfully avoid SMS phishing, malicious websites and network related attacks by taking those steps. To truly advance his security, the baker needs to install a mobile security solution that protects the smartphone from mobile security risks.

The baker is lucky because he relies on a cyber security vendor partner to deliver a platform and he can simply apply mobile security, in addition to the other security controls that have been delivered, through the platform.

In other words, what the baker has is a consolidated cyber security platform with threat prevention, ensuring that his business won’t be disrupted by opportunistic hackers.

Key takeaways

As I mentioned earlier, smartphones have become day-to-day essentials, shaping our social interactions and business operations. However, they also present security risks, as they contain sensitive personal and business information. Here are some tips to enhance smartphone security:

1. Stick to official app stores for downloading apps.

2. Avoid connecting to public Wi-Fi networks.

3. Consider installing a mobile threat prevention application.

As a Chief Information Security Officer (CISO), it’s crucial to treat smartphones with the same level of security awareness as laptops. Incorporate them into your awareness campaigns and ensure they are regularly updated with the latest patches.

Implement mobile threat prevention solutions like Harmony Mobile from Check Point to serve as a security enforcement point for your Unified Endpoint Management (UEM) or Mobile Device Management (MDM) system.

These measures will enhance security maturity and provide visibility into potential malicious activities on mobile devices within your organization.

Remedy Shares Development Updates For Control 2, Max Payne 1 & 2 Remake, And Codename Condor

Remedy has provided updates on its portfolio of upcoming titles via a quarterly business review. In addition to painting the overall financial health of the studio, the report shares details on the current development state of Control 2, Max Payne 1&2 Remake, the Control multiplayer spin-off Codename Condor, and more.

According to the report, the Control 2 team is “focused on finalizing the proof-of-concept stage, in which the game world, game mechanics and visual targets are proven.” The project is expected to move into the production readiness stage in Q2 of this year. Control 2 was first announced in 2022. 

Max Payne 1&2 Remake, first announced in 2022 as a rebuilt version that combines both games into a single title, remains in the production readiness stage, as first announced last October. However, the game is expected to enter full production in Q2 2024.

Codename Condor has entered full production. Remedy describes its core loop as “engaging” and says “the game brings a unique Remedy angle to the genre.” Codename Kestrel, Remedy’s other multiplayer title, described as a “premium game with a strong, cooperative multiplayer component,” remains in the concept stage. 

Remedy also shares that Alan Wake 2 has sold 1.3 million units as of early February, recouping a “significant” chunk of its development and marketing expenses in the process. The development team continues to work on its upcoming expansions, Night Springs and Lake House. 

Following Remedy’s acquisition of the rights to Control from publisher 505 Games in February, the studio is now free to do what it wants with the IP. CEO Tero Virtala states, “We are currently weighing self-publishing and related business models. Simultaneously, we are actively looking into different partner publishing models and evaluating potential partners.” Remedy also expects to have two projects in full production simultaneously, stating “We are confident that the good progress of the beginning of the year carries over to the full production stages.”

You can read our review of Alan Wake 2 here and our review of Control here.

Teenage Mutant Ninja Turtles Arcade: Wrath of the Mutants Review – Better Left In The Sewers – Game Informer

The Teenage Mutant Ninja Turtles were synonymous with gaming in the late ‘80s and early ‘90s, largely thanks to their influence over arcade brawlers. Games like 1989’s Teenage Mutant Ninja Turtles (also known as ’89 Arcade) and 1991’s Turtles in Time are time-honored classics that shaped the side-scrolling beat-‘em-up genre, and 2022’s Shredder’s Revenge demonstrated that the style is still viable in the modern landscape. Teenage Mutant Ninja Turtles Arcade: Wrath of the Mutants clearly takes inspiration from those beloved games, but it falls spectacularly short of those acclaimed titles.

Originally released to arcades in 2017, Wrath of the Mutants takes a similar approach to gameplay as the original TMNT arcade games: You choose from Leonardo, Donatello, Michelangelo, and Raphael, each with distinct moves, as you slash and brawl through stages full of baddies. Based on the 2012 Nickelodeon cartoon, Wrath of the Mutants includes a ton of enemies for the Turtles to beat up in various locales; this home port adds three all-new stages and six new bosses. Unfortunately, no amount of Easter eggs and fanservice can compensate for its uninteresting gameplay.

Though the core concept is the same as in the most beloved entries in the series, I never felt anything more than listlessness as I fought through the six extremely linear stages on offer. Each Turtle brandishes their signature weapon and a unique Turtle Power that clears the screen of enemies. These moves should feel empowering, but instead, they grind the action to a halt while a drawn-out animation plays; Leo spins to form a tornado that sucks up all the minions, while Raph drums on the ground, sending enemies flying.

But it all feels so routine as you fight through waves of the exact same enemies in tedious stages that require no strategy – you just go right and spam the attack button. You can also pick up power-ups that cause your character to spin on their shell or summon side characters to dispatch enemies, but with the base combat so uninteresting, I only enjoyed deploying these frequent special moves because they provided a quicker path through the long levels.

Brawling the seemingly endless screens of Foot and Krang minions found in each stage wouldn’t be so bad if the signature arcade unfairness wasn’t ever-present. TMNT Arcade: Wrath of the Mutants isn’t a tough game by any measure, but there are moments where you simply cannot avoid being hit. At nearly every phase, enemies attack you from off-screen, where you can’t see or reach them, and they frustratingly won’t stop attacking you nor come into view unless you go to the other side of the screen. Additionally, enemy projectiles are deadly accurate, and with the Turtles’ sluggish movement and no way to effectively dodge, you’re all but guaranteed to take hits.

These enemies don’t do a ton of damage, but it’s often death by a thousand papercuts, and since each hit briefly stuns you, your combos are constantly getting interrupted. The bosses, who often just repeat the same attacks over and over, are trials of patience rather than engaging challenges. These boss encounters typically bring slight variations on the same move sets, causing them to all play out similarly. Even the final fight against Shredder does little to differentiate itself; he just lumbers around the screen while you wail on him with little strategy required other than jumping when the game tells you to jump – another repeated convention in nearly every boss battle.

Stage elements meant to break up the monotony add more frustration than variety. Trains speed past, Krang’s Android body shoots electricity at you, and explosive barrels litter the levels, but they add so little. In one instance, where a giant eyeball continually blasts lasers at you while you fight waves of enemies, your character is too slow to avoid getting zapped unless you’re just standing around waiting for it to broadcast where it’s firing. I should be excited to see these new challenges and twists emerge, but I met most of them with a shrug and others with annoyance.

Though seeing the 2012 animated series get some attention in 2024 is fun, the presentation also disappoints. The visuals are nothing special, and I’m not a fan of some of the character designs of this era, but they fit the show’s look well enough. It’s the audio that most irritates, as the Turtles obnoxiously scream the entire time and enemies repeatedly spout the same lines while generic action-oriented music loops in the background. After the first few levels, I was relieved to crank the volume down and listen to something else instead.

Beating the entire game takes less than two hours, but it still somehow manages to drag. You can return to the game’s six stages to try and get higher scores, but I had zero interest in doing so. The arcade games of yesteryear sometimes lacked depth, but they at least had a hook that stuck with you and kept you itching to return to pump more quarters into the cabinet. Teenage Mutant Ninja Turtles Arcade: Wrath of the Mutants strives for the greatness of the influential arcade hits of the past but falls well short. Thanks to uninteresting and annoying gameplay, repetitive enemy and boss encounters, and grating audio design, Wrath of the Mutants is little more than a shell of the series’ glory years.

SanDisk Professional G-DRIVE PROJECT Wins Best Storage – Videoguys

Discover why the SanDisk Professional G-DRIVE PROJECT was crowned Best Storage at NAB 2024. With Thunderbolt 3 connectivity, up to 24 TB storage capacity, and sleek design, it’s the ideal choice for video producers seeking fast data transfers and efficient post-production workflows.

At NAB 2024, SanDisk emerged triumphant with its SanDisk Professional G-DRIVE PROJECT, clinching the coveted title of Best Storage. This accolade recognizes the exceptional features and performance of the G-DRIVE PROJECT, tailored to meet the demanding storage needs of video producers.

Nicole LaJeunesse, in her insightful article for Videomaker, provides an in-depth look at the SanDisk Professional G-DRIVE PROJECT and why it’s the ultimate storage solution for video production.

Designed for desktop use, the G-DRIVE PROJECT boasts an impressive storage capacity of up to 24 TB, making it the perfect solution for managing large volumes of data generated in video production. Its Thunderbolt 3 connectivity ensures lightning-fast data transfers, with speeds of up to 260 MB/s, significantly reducing wait times during post-production tasks.

One standout feature of the G-DRIVE PROJECT is its inclusion of the SanDisk Professional PRO-BLADE SSD Mag Slot, offering blazing-fast transfer speeds of up to 10 Gbps. This feature streamlines the process of offloading footage from SSD capture media directly onto the G-DRIVE PROJECT, enhancing efficiency in data management.

In addition to its high-performance capabilities, the G-DRIVE PROJECT also prioritizes data security with its reliable 7,200 RPM Ultrastar enterprise-class hard drive, ensuring the safety of valuable content.

The sleek and professional design of the G-DRIVE PROJECT, featuring an anodized aluminum housing, seamlessly integrates into professional computer setups. Its compatibility with iPads via USB-C further enhances its versatility, catering to a wider range of users. Setting up the G-DRIVE PROJECT is a breeze, thanks to its color-coded cable system, which simplifies the connection process by matching port colors with cable colors.

Available in various configurations ranging from 6 TB to the top-of-the-line 24 TB option, the SanDisk Professional G-DRIVE PROJECT offers flexibility to suit different storage needs. Whether you’re a freelance videographer or a production studio, there’s a G-DRIVE PROJECT model to meet your requirements.

In conclusion, Nicole LaJeunesse emphasizes the SanDisk Professional G-DRIVE PROJECT’s status as the ultimate storage solution for video producers, combining high-capacity storage, fast data transfers, and sleek design. Experience the power and efficiency of the G-DRIVE PROJECT and take your post-production workflow to the next level.

Read the full article by Nicole LaJeunesse for Videomaker HERE

An AI dataset carves new paths to tornado detection

The return of spring in the Northern Hemisphere touches off tornado season. A tornado’s twisting funnel of dust and debris seems an unmistakable sight. But that sight can be obscured to radar, the tool of meteorologists. It’s hard to know exactly when a tornado has formed, or even why.

A new dataset could hold answers. It contains radar returns from thousands of tornadoes that have hit the United States in the past 10 years. Alongside the storms that spawned tornadoes are other severe storms, some with nearly identical conditions, that never did. MIT Lincoln Laboratory researchers who curated the dataset, called TorNet, have now released it open source. They hope to enable breakthroughs in detecting one of nature’s most mysterious and violent phenomena.

“A lot of progress is driven by easily available, benchmark datasets. We hope TorNet will lay a foundation for machine learning algorithms to both detect and predict tornadoes,” says Mark Veillette, the project’s co-principal investigator with James Kurdzo. Both researchers work in the Air Traffic Control Systems Group. 

Along with the dataset, the team is releasing models trained on it. The models show promise for machine learning’s ability to spot a twister. Building on this work could open new frontiers for forecasters, helping them provide more accurate warnings that might save lives. 

Swirling uncertainty

About 1,200 tornadoes occur in the United States every year, causing millions to billions of dollars in economic damage and claiming 71 lives on average. Last year, one unusually long-lasting tornado killed 17 people and injured at least 165 others along a 59-mile path in Mississippi.  

Yet tornadoes are notoriously difficult to forecast because scientists don’t have a clear picture of why they form. “We can see two storms that look identical, and one will produce a tornado and one won’t. We don’t fully understand it,” Kurdzo says.

A tornado’s basic ingredients are thunderstorms with instability caused by rapidly rising warm air and wind shear that causes rotation. Weather radar is the primary tool used to monitor these conditions. But tornadoes lie too low to be detected, even when moderately close to the radar. As the radar beam with a given tilt angle travels further from the antenna, it gets higher above the ground, mostly seeing reflections from rain and hail carried in the “mesocyclone,” the storm’s broad, rotating updraft. A mesocyclone doesn’t always produce a tornado.

With this limited view, forecasters must decide whether or not to issue a tornado warning. They often err on the side of caution. As a result, the rate of false alarms for tornado warnings is more than 70 percent. “That can lead to boy-who-cried-wolf syndrome,” Kurdzo says.  
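The false-alarm figure above is just the fraction of issued warnings that never verified as tornadoes. A minimal sketch of that metric, with made-up counts (not official NWS statistics) chosen to land near the 70 percent mark:

```python
def false_alarm_ratio(warnings_issued: int, verified_tornadoes: int) -> float:
    """Fraction of issued tornado warnings that did not verify as an actual tornado."""
    if warnings_issued == 0:
        raise ValueError("no warnings issued")
    return (warnings_issued - verified_tornadoes) / warnings_issued

# Illustrative numbers only: 1,000 warnings issued, 280 verified.
print(round(false_alarm_ratio(1000, 280), 2))  # 0.72
```

Lowering this ratio without missing real tornadoes is exactly the trade-off a better detector would ease.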

In recent years, researchers have turned to machine learning to better detect and predict tornadoes. However, raw datasets and models have not always been accessible to the broader community, stifling progress. TorNet is filling this gap.

The dataset contains more than 200,000 radar images, 13,587 of which depict tornadoes. The rest of the images are non-tornadic, taken from storms in one of two categories: randomly selected severe storms or false-alarm storms (those that led a forecaster to issue a warning but that didn’t produce a tornado).

Each sample of a storm or tornado comprises two sets of six radar images. The two sets correspond to different radar sweep angles. The six images portray different radar data products, such as reflectivity (showing precipitation intensity) or radial velocity (indicating if winds are moving toward or away from the radar).
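Concretely, that layout suggests a four-dimensional array per sample: two sweep angles by six products by the image height and width. The sketch below is hypothetical; the resolution, the product ordering, and the loader itself are assumptions for illustration, not the actual TorNet API.

```python
import numpy as np

# Assumed dimensions: 2 sweep angles x 6 radar products; image size is invented.
N_SWEEPS, N_PRODUCTS, H, W = 2, 6, 120, 240
PRODUCTS = ["reflectivity", "radial_velocity", "spectrum_width",
            "differential_reflectivity", "correlation_coefficient", "kdp"]  # assumed ordering

def make_sample(rng: np.random.Generator) -> dict:
    """One storm sample: stacked radar imagery plus a tornado/no-tornado label.
    Random data stands in for real radar returns."""
    return {
        "radar": rng.standard_normal((N_SWEEPS, N_PRODUCTS, H, W)).astype(np.float32),
        "label": 0,  # 1 = tornadic, 0 = non-tornadic
    }

sample = make_sample(np.random.default_rng(0))
print(sample["radar"].shape)  # (2, 6, 120, 240)
```

Keeping the sweep and product axes separate lets a model treat them as channels or process each sweep independently.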

A challenge in curating the dataset was first finding tornadoes. Within the corpus of weather radar data, tornadoes are extremely rare events. The team then had to balance those tornado samples with difficult non-tornado samples. If the dataset were too easy, say by comparing tornadoes to snowstorms, an algorithm trained on the data would likely over-classify storms as tornadic.

“What’s beautiful about a true benchmark dataset is that we’re all working with the same data, with the same level of difficulty, and can compare results,” Veillette says. “It also makes meteorology more accessible to data scientists, and vice versa. It becomes easier for these two parties to work on a common problem.”

Both researchers represent the progress that can come from cross-collaboration. Veillette is a mathematician and algorithm developer who has long been fascinated by tornadoes. Kurdzo is a meteorologist by training and a signal processing expert. In grad school, he chased tornadoes with custom-built mobile radars, collecting data to analyze in new ways.

“This dataset also means that a grad student doesn’t have to spend a year or two building a dataset. They can jump right into their research,” Kurdzo says.

This project was funded by Lincoln Laboratory’s Climate Change Initiative, which aims to leverage the laboratory’s diverse technical strengths to help address climate problems threatening human health and global security.

Chasing answers with deep learning

Using the dataset, the researchers developed baseline artificial intelligence (AI) models. They were particularly eager to apply deep learning, a form of machine learning that excels at processing visual data. On its own, deep learning can extract features (key observations that an algorithm uses to make a decision) from images across a dataset. Other machine learning approaches require humans to first manually label features. 

“We wanted to see if deep learning could rediscover what people normally look for in tornadoes and even identify new things that typically aren’t searched for by forecasters,” Veillette says.

The results are promising. Their deep learning model performed similarly to or better than all tornado-detecting algorithms known in the literature. The trained algorithm correctly classified 50 percent of weaker EF-1 tornadoes and over 85 percent of tornadoes rated EF-2 or higher, which cause the most devastating and costly damage from these storms.
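Those percentages are per-category recall: of the true tornadoes in each intensity bracket, how many did the model flag? A small sketch with toy data (the event list is invented to mirror the reported pattern, not drawn from the paper):

```python
def recall_by_category(events):
    """Per-category recall over (category, detected) pairs."""
    totals, hits = {}, {}
    for category, detected in events:
        totals[category] = totals.get(category, 0) + 1
        hits[category] = hits.get(category, 0) + int(detected)
    return {c: hits[c] / totals[c] for c in totals}

# Toy data echoing the trend above: weaker tornadoes are harder to spot.
events = ([("EF1", True)] * 5 + [("EF1", False)] * 5 +
          [("EF2+", True)] * 9 + [("EF2+", False)])
print(recall_by_category(events))  # {'EF1': 0.5, 'EF2+': 0.9}
```

Breaking recall out by intensity matters because a single aggregate number would hide poor performance on exactly the weak, hard-to-see events.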

They also evaluated two other types of machine-learning models, and one traditional model to compare against. The source code and parameters of all these models are freely available. The models and dataset are also described in a paper submitted to a journal of the American Meteorological Society (AMS). Veillette presented this work at the AMS Annual Meeting in January.

“The biggest reason for putting our models out there is for the community to improve upon them and do other great things,” Kurdzo says. “The best solution could be a deep learning model, or someone might find that a non-deep learning model is actually better.”

TorNet could serve other uses in the weather community too, such as conducting large-scale case studies on storms. It could also be augmented with other data sources, like satellite imagery or lightning maps. Fusing multiple types of data could improve the accuracy of machine learning models.

Taking steps toward operations

On top of detecting tornadoes, Kurdzo hopes that models might help unravel the science of why they form.

“As scientists, we see all these precursors to tornadoes — an increase in low-level rotation, a hook echo in reflectivity data, specific differential phase (KDP) foot and differential reflectivity (ZDR) arcs. But how do they all go together? And are there physical manifestations we don’t know about?” he asks.

Teasing out those answers might be possible with explainable AI. Explainable AI refers to methods that allow a model to explain, in a format understandable to humans, why it came to a certain decision. In this case, these explanations might reveal physical processes that happen before tornadoes form. This knowledge could help train forecasters, and models, to recognize the signs sooner.
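Occlusion sensitivity is one common explainable-AI method: mask each part of the input and see how much the model's score drops. A toy sketch, where the "model" is an invented stand-in rather than the team's detector:

```python
# Toy sketch of occlusion sensitivity, one common explainable-AI method:
# mask each part of the input and measure how much the model's score drops.
# Cells whose removal hurts the score most are the ones the model relied on.
# The "model" below is an invented stand-in, not the team's detector.

def toy_model(field):
    """Stand-in detector: scores the strongest horizontal velocity shear."""
    return max(abs(field[i][j] - field[i][j + 1])
               for i in range(len(field))
               for j in range(len(field[0]) - 1))

def occlusion_map(field, model):
    base = model(field)
    importance = []
    for i in range(len(field)):
        row = []
        for j in range(len(field[0])):
            masked = [r[:] for r in field]    # copy, then occlude one cell
            masked[i][j] = 0
            row.append(base - model(masked))  # score drop = importance
        importance.append(row)
    return importance

field = [
    [1, -1, 0],
    [0,  0, 0],
]
print(occlusion_map(field, toy_model))  # [[1, 1, 0], [0, 0, 0]]
```

Applied to a real detector, a map like this highlights which radar pixels drove a tornado call, which is the kind of evidence that could point at previously unrecognized precursors.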

“None of this technology is ever meant to replace a forecaster. But perhaps someday it could guide forecasters’ eyes in complex situations, and give a visual warning to an area predicted to have tornadic activity,” Kurdzo says.

Such assistance could be especially useful as radar technology improves and future networks potentially grow denser. Tornadoes can form and disappear in minutes, and data refresh rates in a next-generation radar network are expected to increase from every five minutes to approximately one minute, perhaps faster than forecasters can interpret the new information. Because deep learning can process huge amounts of data quickly, it could be well-suited for monitoring radar returns in real time, alongside humans.

But the path to an operational algorithm is a long road, especially in safety-critical situations, Veillette says. “I think the forecaster community is still, understandably, skeptical of machine learning. One way to establish trust and transparency is to have public benchmark datasets like this one. It’s a first step.”

The next steps, the team hopes, will be taken by researchers across the world who are inspired by the dataset and energized to build their own algorithms. Those algorithms will in turn go into test beds, where they’ll eventually be shown to forecasters, to start a process of transitioning into operations.

In the end, the path could circle back to trust.

“We may never get more than a 10- to 15-minute tornado warning using these tools. But if we could lower the false-alarm rate, we could start to make headway with public perception,” Kurdzo says. “People are going to use those warnings to take the action they need to save their lives.”

Julie Shah named head of the Department of Aeronautics and Astronautics

Julie Shah ’04, SM ’06, PhD ’11, the H.N. Slater Professor in Aeronautics and Astronautics, has been named the new head of the Department of Aeronautics and Astronautics (AeroAstro), effective May 1.

“Julie brings an exceptional record of visionary and interdisciplinary leadership to this role. She has made substantial technical contributions in the field of robotics and AI, particularly as it relates to the future of work, and has bridged important gaps in the social, ethical, and economic implications of AI and computing,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

In addition to her role as a faculty member in AeroAstro, Shah served as associate dean of Social and Ethical Responsibilities of Computing in the MIT Schwarzman College of Computing from 2019 to 2022, helping launch a coordinated curriculum that engages more than 2,000 students a year at the Institute. She currently directs the Interactive Robotics Group in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and MIT’s Industrial Performance Center.

Shah and her team at the Interactive Robotics Group conduct research that aims to imagine the future of work by designing collaborative robot teammates that enhance human capability. She is expanding the use of human cognitive models for artificial intelligence and has translated her work to manufacturing assembly lines, health-care applications, transportation, and defense. In 2020, Shah co-authored the popular book “What to Expect When You’re Expecting Robots,” which explores the future of human-robot collaboration.

As an expert on how humans and robots interact in the workforce, Shah was named co-director of the Work of the Future Initiative, a successor group of MIT’s Task Force on the Work of the Future, alongside Ben Armstrong, executive director and research scientist at MIT’s Industrial Performance Center. In March of this year, Shah was named a co-leader of the Working Group on Generative AI and the Work of the Future, alongside Armstrong and Kate Kellogg, the David J. McGrath Jr. Professor of Management and Innovation. The group is examining how generative AI tools can contribute to higher-quality jobs and inclusive access to the latest technologies across sectors.

Shah’s contributions as both a researcher and educator have been recognized with many awards and honors throughout her career. She was named an associate fellow of the American Institute of Aeronautics and Astronautics (AIAA) in 2017, and in 2018 she was the recipient of the IEEE Robotics and Automation Society Academic Early Career Award. Shah was also named a Bisplinghoff Faculty Fellow, was named to MIT Technology Review’s TR35 List, and received an NSF Faculty Early Career Development Award. In 2013, her work on human-robot collaboration was included on MIT Technology Review’s list of 10 Breakthrough Technologies.

In January 2024, she was appointed to the first-ever AIAA Aerospace Artificial Intelligence Advisory Group, which was founded “to advance the appropriate use of AI technology particularly in aeronautics, aerospace R&D, and space.” Shah currently serves as editor-in-chief of Foundations and Trends in Robotics, as an editorial board member of the AIAA Progress Series, and as an executive council member of the Association for the Advancement of Artificial Intelligence.

A dedicated educator, Shah has been recognized for her collaborative and supportive approach as a mentor. She was honored by graduate students as “Committed to Caring” (C2C) in 2019. For the past 10 years, she has served as an advocate, community steward, and mentor for students in her role as head of house of the Sidney Pacific Graduate Community.

Shah received her bachelor’s and master’s degrees in aeronautical and astronautical engineering, and her PhD in autonomous systems, all from MIT. After receiving her doctoral degree, she joined Boeing as a postdoc, before returning to MIT in 2011 as a faculty member.

Shah succeeds Professor Steven Barrett, who has led AeroAstro as both interim department head and then department head since May 2023.

MIT faculty, instructors, students experiment with generative AI in teaching and learning

How can MIT’s community leverage generative AI to support learning and work on campus and beyond?

At MIT’s Festival of Learning 2024, faculty and instructors, students, staff, and alumni exchanged perspectives about the digital tools and innovations they’re experimenting with in the classroom. Panelists agreed that generative AI should be used to scaffold — not replace — learning experiences.

This annual event, co-sponsored by MIT Open Learning and the Office of the Vice Chancellor, celebrates teaching and learning innovations. When introducing new teaching and learning technologies, panelists stressed the importance of iteration and teaching students how to develop critical thinking skills while leveraging technologies like generative AI.

“The Festival of Learning brings the MIT community together to explore and celebrate what we do every day in the classroom,” said Christopher Capozzola, senior associate dean for open learning. “This year’s deep dive into generative AI was reflective and practical — yet another remarkable instance of ‘mind and hand’ here at the Institute.”  

[Video: 2024 Festival of Learning: Highlights]

Incorporating generative AI into learning experiences 

MIT faculty and instructors aren’t just willing to experiment with generative AI — some believe it’s a necessary tool to prepare students to be competitive in the workforce. “In a future state, we will know how to teach skills with generative AI, but we need to be making iterative steps to get there instead of waiting around,” said Melissa Webster, lecturer in managerial communication at MIT Sloan School of Management. 

Some educators are revisiting their courses’ learning goals and redesigning assignments so students can achieve the desired outcomes in a world with AI. Webster, for example, previously paired written and oral assignments so students would develop ways of thinking. But, she saw an opportunity for teaching experimentation with generative AI. If students are using tools such as ChatGPT to help produce writing, Webster asked, “how do we still get the thinking part in there?”

One of the new assignments Webster developed asked students to generate cover letters through ChatGPT and critique the results from the perspective of future hiring managers. Beyond learning how to refine generative AI prompts to produce better outputs, Webster shared that “students are thinking more about their thinking.” Reviewing their ChatGPT-generated cover letter helped students determine what to say and how to say it, supporting their development of higher-level strategic skills like persuasion and understanding audiences.

Takako Aikawa, senior lecturer at the MIT Global Studies and Languages Section, redesigned a vocabulary exercise to ensure students developed a deeper understanding of the Japanese language, rather than just producing right or wrong answers. Students compared short sentences written by themselves and by ChatGPT, and developed broader vocabulary and grammar patterns beyond the textbook. “This type of activity enhances not only their linguistic skills but stimulates their metacognitive or analytical thinking,” said Aikawa. “They have to think in Japanese for these exercises.”

While these panelists and other Institute faculty and instructors are redesigning their assignments, many MIT undergraduate and graduate students across different academic departments are leveraging generative AI for efficiency: creating presentations, summarizing notes, and quickly retrieving specific ideas from long documents. But this technology can also creatively personalize learning experiences. Its ability to communicate information in different ways allows students with different backgrounds and abilities to adapt course material in a way that’s specific to their particular context. 

Generative AI, for example, can help with student-centered learning at the K-12 level. Joe Diaz, program manager and STEAM educator for MIT pK-12 at Open Learning, encouraged educators to foster learning experiences where the student can take ownership. “Take something that kids care about and they’re passionate about, and they can discern where [generative AI] might not be correct or trustworthy,” said Diaz.

Panelists encouraged educators to think about generative AI in ways that move beyond a course policy statement. When incorporating generative AI into assignments, the key is to be clear about learning goals and open to sharing examples of how generative AI could be used in ways that align with those goals. 

The importance of critical thinking

Although generative AI can have positive impacts on educational experiences, users need to understand why large language models might produce incorrect or biased results. Faculty, instructors, and student panelists emphasized that it’s critical to contextualize how generative AI works. “[Instructors] try to explain what goes on in the back end and that really does help my understanding when reading the answers that I’m getting from ChatGPT or Copilot,” said Joyce Yuan, a senior in computer science. 

Jesse Thaler, professor of physics and director of the National Science Foundation Institute for Artificial Intelligence and Fundamental Interactions, warned about trusting a probabilistic tool to give definitive answers without uncertainty bands. “The interface and the output needs to be of a form that there are these pieces that you can verify or things that you can cross-check,” Thaler said.

When introducing tools like calculators or generative AI, the faculty and instructors on the panel said it’s essential for students to develop critical thinking skills in those particular academic and professional contexts. Computer science courses, for example, could permit students to use ChatGPT for help with their homework if the problem sets are broad enough that generative AI tools wouldn’t capture the full answer. However, introductory students who haven’t yet developed an understanding of programming concepts need to be able to discern whether the information ChatGPT generates is accurate.

Ana Bell, senior lecturer in the Department of Electrical Engineering and Computer Science and MITx digital learning scientist, dedicated one class toward the end of the semester of Course 6.100L (Introduction to Computer Science and Programming Using Python) to teaching students how to use ChatGPT for programming questions. She wanted students to understand that providing generative AI tools with the full context of a programming problem, with as many details as possible, helps achieve the best results. “Even after it gives you a response back, you have to be critical about that response,” said Bell. Because ChatGPT was introduced only at this stage, students were able to look at generative AI’s answers critically; they had spent the semester developing the skills to identify whether a solution was incorrect or might not work for every case.

A scaffold for learning experiences

The bottom line from the panelists during the Festival of Learning was that generative AI should provide scaffolding for engaging learning experiences where students can still achieve desired learning goals. The MIT undergraduate and graduate student panelists found it invaluable when educators set expectations for the course about when and how it’s appropriate to use AI tools. Informing students of the learning goals allows them to understand whether generative AI will help or hinder their learning. Student panelists asked for trust that they would use generative AI as a starting point, or treat it like a brainstorming session with a friend for a group project. Faculty and instructor panelists said they will continue iterating their lesson plans to best support student learning and critical thinking. 

Panelists from both sides of the classroom discussed the importance of generative AI users being responsible for the content they produce and avoiding automation bias — trusting the technology’s response implicitly without thinking critically about why it produced that answer and whether it’s accurate. But since generative AI is built by people making design decisions, Thaler told students, “You have power to change the behavior of those tools.”

Remedy Shares Development Updates For Control 2, Max Payne 1+2 Remake, And Codename Condor

Remedy has provided updates on its portfolio of upcoming titles via a quarterly business review. In addition to painting a picture of the studio’s overall financial health, the report shares details on the current development state of Control 2, Max Payne 1+2 Remake, the Control multiplayer spin-off Codename Condor, and more.

According to the report, the Control 2 team is “focused on finalizing the proof-of-concept stage, in which the game world, game mechanics and visual targets are proven.” The project is expected to move into the production readiness stage in Q2 of this year. Control 2 was first announced in 2022. 

Max Payne 1+2 Remake, first announced in 2022 as a single title that rebuilds and combines both original games, remains in the production readiness stage it entered last October. However, the game is expected to move into full production in Q2 2024.

Codename Condor has entered full production. Remedy describes its core loop as “engaging” and says “the game brings a unique Remedy angle to the genre.” Codename Kestrel, Remedy’s other multiplayer title, described as a “premium game with a strong, cooperative multiplayer component,” remains in the concept stage.


Remedy also shares that Alan Wake 2 has sold 1.3 million units as of early February, recouping a “significant” chunk of its development and marketing expenses in the process. The development team continues to work on its upcoming expansions, Night Springs and Lake House. 

Following Remedy’s acquisition of the rights to Control from publisher 505 Games in February, the studio is now free to do what it wants with the IP. CEO Tero Virtala states, “We are currently weighing self-publishing and related business models. Simultaneously, we are actively looking into different partner publishing models and evaluating potential partners.” Remedy also expects to have two projects in full production simultaneously, stating “We are confident that the good progress of the beginning of the year carries over to the full production stages.”

You can read our review of Alan Wake 2 here, and our review of Control here.