CD Projekt Red Not Interested In Being Acquired, Says CEO

CD Projekt Red is one of the biggest developers in Europe, but the studio behind The Witcher and Cyberpunk 2077 has no interest in being acquired. That comes from a new interview with the studio’s CEO, Adam Kiciński, who discusses acquisition possibilities while providing small updates on the studio’s litany of upcoming projects.

Speaking to the Polish outlet Parkiet, Kiciński was asked about rumors regarding the studio being a target of a takeover. He states via translation, “We are not interested in being included in any larger entity. We have worked our whole lives to get to the position we have now. And we believe that in a few years we will be even bigger and stronger. We have ambitious plans and we are passionate about what we do. We value independence.”

Kiciński also states CD Projekt Red is currently uninterested in buying another studio purely to, in his words, “consolidate their financial results.” In October 2021, the studio acquired The Molasses Flood, the developer of The Flame in the Flood and Drake Hollow. That studio is currently working on a game set in the Witcher universe, codenamed “Sirius.”

Kiciński also provides small updates on the studio’s portfolio of upcoming Witcher and Cyberpunk games, as well as its new IP. He states that work on the next mainline Witcher game, codenamed “Polaris,” is in “full swing,” with around 330 developers on the project, a figure expected to rise to 400 later this year.

“Orion,” the codename for the next Cyberpunk game, has been in the conceptual stage for some time, and the development team is still being assembled. It will be developed primarily at CD Projekt Red’s Boston studio, eventually expanding to Vancouver, with support from the main Polish headquarters.

Kiciński then briefly touches on project “Hadar,” a new IP currently in the conceptual phase. When asked what Hadar is, Kiciński says only, “I assure you that it will be an interesting pop culture concept, fitting both The Witcher and Cyberpunk.”

CD Projekt Red will spend 2024 chipping away at these games and is riding a wave of positive momentum following the successful launch of Cyberpunk 2077’s Phantom Liberty expansion and other big updates the game received. Be sure to read the full interview to learn more about the studio’s financial situation, its broader media strategy outside of video games, and its thoughts on the launch of Phantom Liberty. 

CISO of Fortune 35 company talks 55 million alerts

EXECUTIVE SUMMARY:

Thomas Dager is the CISO at Archer Daniels Midland Company (ADM). He develops, implements and monitors a strategic, comprehensive enterprise information security and IT risk management program to ensure the integrity, confidentiality and availability of information owned, controlled or processed by the organization. Previously, he served as an audit committee member at Delta Community Credit Union.

In this edited interview excerpt from the CISO’s Secrets podcast, CISO Thomas Dager shares insights into managing alerts, IoT, growing cyber security programs and artificial intelligence.

Take us through your journey. Talk to us a bit about how you got to where you are.

I had the great fortune, after I left the military and went into law enforcement, of working in white collar crime or computer crime during the early days.

And that piqued my interest, as I’d always been a bit of a computer geek. The two just gelled really well. At a certain point in my law enforcement career, I looked ahead and asked myself, ‘What do I really want to do?’

I made the leap from law enforcement back into IT. After about a year or two of doing a traditional networking job, I landed at Secureworks – before it was Dell Secureworks.

While there, I had the opportunity to delve into all aspects of security. I eventually became the Director of Security for a security company, which is a unique position to be in.

A lot of that helped formulate my vision and viewpoint as I grew in my career and eventually landed at ADM…

In two consecutive years during your tenure at ADM, you were recognized as a top global CISO by Cyber Defense Magazine. That’s pretty impressive.

Well, I appreciate that. The honor really goes to my team. I am just their humble servant and leader. I believe that my accomplishments couldn’t have been done without them. So, it’s really a recognition of my direct reports and the leadership that they have driven across the greater cyber organization at ADM. But I appreciate that, thank you.

You joined Secureworks at an early stage of its development (it later became Dell Secureworks). You were also ahead of the curve when it came to Security Operations Center (SOC) development. Talk a bit about the dawn of the SOC and the subsequent market saturation.

When I first joined Secureworks, it was literally four folding tables in a square. We used to ‘hot-swap’ seats on computers in our SOC. In the early days, it was a startup…We were still manually creating and collating tickets…

If you weren’t there in those early days, the manual processes are probably just ‘unthinkable’ to anyone who runs a SOC today. The sheer volume of information today…I mean…

It really fostered a deep knowledge of how those things work, of how attacks worked at the time, of what matters and what doesn’t…You know, some of this has changed over time. We’ve gotten smarter and faster…

But having had that formative experience, and then continuing my journey in cyber security has really helped me gain an appreciation of and insight into how important a Security Operations Center is.

I am sitting here and kind of pondering and thinking about alerts, and being able to really have the time to delve into them…

Here at ADM, just last quarter, we had 55 million alerts. They’re, of course, run through a series of filters, both manual and (mostly) automatic, to get down to an actionable set of incidents that we can investigate.
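To picture that funnel, here is a toy sketch of the kind of staged triage Dager describes, with mostly automatic filters narrowing raw alerts down to an actionable set. The stage names, fields, and rules are invented for illustration; they are not ADM’s actual pipeline.

```python
# Toy sketch of a staged alert-triage funnel: automatic filters do most
# of the narrowing, and manual review confirms what becomes an incident.
# Field names and rules are invented for illustration only; this is not
# ADM's actual pipeline.

RAW_ALERTS = [
    {"id": 1, "severity": "low", "known_benign": True, "correlated": False},
    {"id": 2, "severity": "high", "known_benign": False, "correlated": True},
    {"id": 3, "severity": "medium", "known_benign": False, "correlated": False},
]

def automatic_filters(alerts):
    """Drop known-benign noise, then keep high-severity or correlated alerts."""
    not_benign = [a for a in alerts if not a["known_benign"]]
    return [a for a in not_benign if a["correlated"] or a["severity"] == "high"]

def manual_review(alerts):
    """Analysts confirm which surviving alerts become actionable incidents."""
    return [a for a in alerts if a["severity"] in ("medium", "high")]

incidents = manual_review(automatic_filters(RAW_ALERTS))
print(f"{len(RAW_ALERTS)} alerts -> {len(incidents)} actionable incident(s)")
```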

But again, that visibility – just the alerts, they’re growing in volume. And as we bring more tooling online – internet of things, manufacturing companies adopting smart tools – one of my internal mantras is ‘you can’t protect what you cannot see’.

If I can’t see it, we’ve got a problem. You just get that sheer volume of information that comes in. And it takes expertise and dedication to really build those use-cases…

I have a great director over what we call Global Cyber Defense Operations, and he uses an internal threat intelligence team that helps inform what our best use-cases should be…he’s evaluating that on a literally constant basis.

It just boggles my mind when I talk to the team, and say ‘what are you working on today?’ and get a sense of what they’re investigating…Internally as a CISO, I’m only seeing the tip of the iceberg. But that’s where you trust your people.

What a transformation you’ve been at the center of, in terms of the IoT phenomenon…

Absolutely. Not just IoT in the sense that we talk about today, but regular old OT.

When you really think about a manufacturing plant, you have traditional OT. I’m talking about your PLCs and your SCADA system environments, but increasingly, there’s also IoT that’s layered on top of that…Or there’s a value-add to using certain tools that we wouldn’t have thought of previously, and that are internet-connected today. The explosion on both the IoT and OT sides of things has been dramatic.

For those readers who don’t know who we are, we’re a Fortune 35 company. We’re one of the largest companies in the world. About 70% of what the average person eats or drinks contains something in it from us. While we’re not the traditional target of a Microsoft, Amazon or Walmart, we’re part of critical infrastructure when it comes to food.

If we have a major incident, the downstream impacts to the supply chain, as it relates to food (human and animal nutrition) could be substantial. It means that we have to monitor, across the globe, all of these cyber threats that seem to just come out of the woodwork, all the time!

One of the things that I’d love to hear about is your view on AI. What’s your position on AI? What do you think of it?

…We’re looking at how AI can assist analysts; helping them action an incident faster, and things of that nature. We’re looking at ways to leverage AI in order to help people gain efficiency within their jobs, and access information more quickly…but we do want to contain that within certain guardrails, because we don’t want sensitive information to be out there in the public domain.

But when I think beyond that…

For the full conversation, listen here.


Final Fantasy VII Rebirth Preview – Square Enix Hints At Zack’s Expanded Role

Final Fantasy VII Remake gave fans of the original game quite a shock as Cloud and the party exited Midgar. Zack, the protagonist of Crisis Core: Final Fantasy VII and a key figure in Cloud, Aerith, and Sephiroth’s past, appears to be alive, and helped an injured Cloud reach Midgar. This stands in stark contrast to his fate in the original continuity, where he was killed and his Buster Sword was handed over to Cloud. 

Zack’s changed fate is emblematic of how the Final Fantasy VII Remake and Rebirth teams approached the source material. “If we trace the original and stay exactly loyal to it as is, I think that is lacking in the gaming experience itself,” director Naoki Hamaguchi says. “With elements like the Whispers or Zack – these new elements introduced in Remake or Rebirth – this really gives players the feeling that based on these, perhaps the ending is going to be different from what we know from the original and have that sort of wonder and anticipation building. The mystery building is something we truly wanted players to feel in Rebirth.”


In Final Fantasy VII Rebirth, Zack appears to have a more defined role. “Through him, players will be able to experience and understand more of the Final Fantasy VII worldview and it will deepen their understanding of the Final Fantasy VII world,” Hamaguchi says. “We have used the character Zack to depict the combined view of [story and scenario writer Kazushige Nojima, creative director Tetsuya Nomura, and producer Yoshinori Kitase] – the original creators – intents of how this world of Final Fantasy VII came to be and its policies and rules governing this world. This is going to be depicted through the character of Zack. As much as the Whispers within the story, Zack is an equally, immensely important, crucial, key character to this story that I believe fans will enjoy within Rebirth.”

While Hamaguchi and the team didn’t go into too many details, it certainly whets my appetite and makes me want to go back to Crisis Core: Final Fantasy VII Reunion, even if that’s not part of the current continuity. Final Fantasy VII Rebirth arrives on PlayStation 5 on February 29. For more Final Fantasy VII Rebirth coverage, head to our exclusive hub.

MIT community members elected to the National Academy of Inventors for 2023

The National Academy of Inventors (NAI) recently announced the election of more than 160 individuals to its 2023 class of fellows. Among them are two members of the MIT Koch Institute for Integrative Cancer Research, Professor Daniel G. Anderson and Principal Research Scientist Ana Jaklenec. Eleven MIT alumni were also recognized.

The highest professional distinction accorded solely to academic inventors, election to the NAI recognizes individuals who have created or facilitated outstanding inventions that have made a tangible impact on quality of life, economic development, and the welfare of society.  

“Daniel and Ana embody some of the Koch Institute’s core values of interdisciplinary innovation and drive to translate their discoveries into real impact for patients,” says Matthew Vander Heiden, director of the Koch Institute. “Their election to the academy is very well-deserved, and we are honored to count them both among the Koch Institute’s and MIT’s research community.”

Daniel Anderson is the Joseph R. Mares (1924) Professor of Chemical Engineering, and a core member of the Institute for Medical Engineering and Science. He is a leading researcher in the fields of nanotherapeutics and biomaterials. Anderson’s work has led to advances in a range of areas, including medical devices, cell therapy, drug delivery, gene therapy, and materials science, and has resulted in the publication of more than 500 papers, patents, and patent applications. He has founded several companies, including Living Proof, Olivo Labs, Crispr Therapeutics (CRSP), Sigilon Therapeutics, Verseau Therapeutics, oRNA, and VasoRx. He is a member of the National Academy of Medicine and the Harvard-MIT Division of Health Science and Technology, and is an affiliate of the Broad Institute of MIT and Harvard and the Ragon Institute of MGH, MIT and Harvard.

Ana Jaklenec, a principal research scientist and principal investigator at the Koch Institute, is a leader in the fields of bioengineering and materials science, focused on controlled delivery and stability of therapeutics for global health. She is an inventor of several drug delivery technologies that have the potential to enable equitable access to medical care globally. Her lab is developing new manufacturing techniques for the design of materials at the nano- and micro-scale for self-boosting vaccines, 3D printed on-demand microneedles, heat-stable polymer-based carriers for oral delivery of micronutrients and probiotics, and long-term drug delivery systems for cancer immunotherapy. She has published over 100 manuscripts, patents, and patent applications and has founded three companies: Particles for Humanity, VitaKey, and OmniPulse Biosciences.

The 11 MIT alumni who were elected to the NAI for 2023 include:

  • Michel Barsoum PhD ’85 (Materials Science and Engineering);
  • Eric Burger ’84 (Electrical Engineering and Computer Science);
  • Kevin Kelly SM ’88, PhD ’91 (Mechanical Engineering);
  • Ali Khademhosseini PhD ’05 (Biological Engineering);
  • Joshua Makower ’85 (Mechanical Engineering);
  • Marcela Maus ’97 (Biology);
  • Milos Popovic SM ’02, PhD ’08 (Electrical Engineering and Computer Science);
  • Milica Radisic PhD ’04 (Chemical Engineering);
  • David Reinkensmeyer ’88 (Electrical Engineering);
  • Boris Rubinsky PhD ’81 (Mechanical Engineering); and
  • Paul S. Weiss ’80, SM ’80 (Chemistry).

Since its inception in 2012, the NAI Fellows program has grown to include 1,898 exceptional researchers and innovators, who hold over 63,000 U.S. patents and 13,000 licensed technologies. NAI Fellows are known for the societal and economic impact of their inventions, contributing to major advancements in science and consumer technologies. Their innovations have generated over $3 trillion in revenue and created 1 million jobs.

“This year’s class of NAI Fellows showcases the caliber of researchers that are found within the innovation ecosystem. Each of these individuals are making significant contributions to both science and society through their work,” says Paul R. Sanberg, president of the NAI. “This new class, in conjunction with our existing fellows, are creating innovations that are driving crucial advancements across a variety of disciplines and are stimulating the global and national economy in immeasurable ways as they move these technologies from lab to marketplace.” 

AI agents help explain other AI systems

Explaining the behavior of trained neural networks remains a compelling puzzle, especially as these models grow in size and sophistication. Like other scientific challenges throughout history, reverse-engineering how artificial intelligence systems work requires a substantial amount of experimentation: making hypotheses, intervening on behavior, and even dissecting large networks to examine individual neurons. To date, most successful experiments have involved large amounts of human oversight. Explaining every computation inside models the size of GPT-4 and larger will almost certainly require more automation — perhaps even using AI models themselves. 

Facilitating this timely endeavor, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a novel approach that uses AI models to conduct experiments on other systems and explain their behavior. Their method uses agents built from pretrained language models to produce intuitive explanations of computations inside trained networks.

Central to this strategy is the “automated interpretability agent” (AIA), designed to mimic a scientist’s experimental processes. Interpretability agents plan and perform tests on other computational systems, which can range in scale from individual neurons to entire models, in order to produce explanations of these systems in a variety of forms: language descriptions of what a system does and where it fails, and code that reproduces the system’s behavior. Unlike existing interpretability procedures that passively classify or summarize examples, the AIA actively participates in hypothesis formation, experimental testing, and iterative learning, thereby refining its understanding of other systems in real time. 
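As a rough illustration, the agent’s experimental loop might be organized like the sketch below. All names here are hypothetical, and in practice the propose and update steps would be calls to a pretrained language model rather than hand-written functions; this is a sketch of the idea, not the paper’s released code.

```python
# A minimal sketch of an automated interpretability agent (AIA) loop.
# All names (BlackBoxSystem, propose_inputs, update_hypothesis) are
# hypothetical illustrations, not the API from the paper's code.

class BlackBoxSystem:
    """Wraps the system under study; the agent only sees input -> output."""
    def __init__(self, fn):
        self._fn = fn

    def query(self, x):
        return self._fn(x)

def run_agent(system, propose_inputs, update_hypothesis, rounds=5):
    """Iteratively probe the system, refining a natural-language hypothesis."""
    hypothesis = "no hypothesis yet"
    evidence = []
    for _ in range(rounds):
        # 1. Plan: choose probe inputs given the current hypothesis.
        probes = propose_inputs(hypothesis, evidence)
        # 2. Experiment: observe the system's behavior on those inputs.
        evidence.extend((x, system.query(x)) for x in probes)
        # 3. Revise: update the explanation in light of the new evidence.
        hypothesis = update_hypothesis(hypothesis, evidence)
    return hypothesis, evidence
```

In the paper’s setting, both `propose_inputs` and `update_hypothesis` would be backed by a language model prompted with the evidence gathered so far.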

Complementing the AIA method is the new “function interpretation and description” (FIND) benchmark, a test bed of functions resembling computations inside trained networks, and accompanying descriptions of their behavior. One key challenge in evaluating the quality of descriptions of real-world network components is that descriptions are only as good as their explanatory power: Researchers don’t have access to ground-truth labels of units or descriptions of learned computations. FIND addresses this long-standing issue in the field by providing a reliable standard for evaluating interpretability procedures: explanations of functions (e.g., produced by an AIA) can be evaluated against function descriptions in the benchmark.  

For example, FIND contains synthetic neurons designed to mimic the behavior of real neurons inside language models, some of which are selective for individual concepts such as “ground transportation.” AIAs are given black-box access to synthetic neurons and design inputs (such as “tree,” “happiness,” and “car”) to test a neuron’s response. After noticing that a synthetic neuron produces higher response values for “car” than other inputs, an AIA might design more fine-grained tests to distinguish the neuron’s selectivity for cars from other forms of transportation, such as planes and boats. When the AIA produces a description such as “this neuron is selective for road transportation, and not air or sea travel,” this description is evaluated against the ground-truth description of the synthetic neuron (“selective for ground transportation”) in FIND. The benchmark can then be used to compare the capabilities of AIAs to other methods in the literature. 
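The “ground transportation” example can be mimicked with a toy synthetic neuron like the one below. The word list and activation values are invented for illustration and are not drawn from the actual benchmark.

```python
# Toy stand-in for a FIND-style synthetic neuron that fires strongly on
# ground-transportation words. Words and scores are invented examples.

GROUND_TRANSPORT = {"car", "truck", "bus", "train", "bicycle"}

def synthetic_neuron(word: str) -> float:
    """Returns a high activation for ground-transportation concepts."""
    return 0.95 if word.lower() in GROUND_TRANSPORT else 0.05

# Coarse first probe, as in the article's example:
for probe in ["tree", "happiness", "car"]:
    print(probe, synthetic_neuron(probe))

# "car" stands out, so a finer-grained probe separates road vehicles
# from other transport modes:
for probe in ["truck", "train", "plane", "boat"]:
    print(probe, synthetic_neuron(probe))
# plane/boat score low, supporting the description "selective for
# ground transportation, not air or sea travel."
```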

Sarah Schwettmann PhD ’21, co-lead author of a paper on the new work and a research scientist at CSAIL, emphasizes the advantages of this approach. “The AIAs’ capacity for autonomous hypothesis generation and testing may be able to surface behaviors that would otherwise be difficult for scientists to detect. It’s remarkable that language models, when equipped with tools for probing other systems, are capable of this type of experimental design,” says Schwettmann. “Clean, simple benchmarks with ground-truth answers have been a major driver of more general capabilities in language models, and we hope that FIND can play a similar role in interpretability research.”

Automating interpretability 

Large language models are still holding their status as the in-demand celebrities of the tech world. The recent advancements in LLMs have highlighted their ability to perform complex reasoning tasks across diverse domains. The team at CSAIL recognized that given these capabilities, language models may be able to serve as backbones of generalized agents for automated interpretability. “Interpretability has historically been a very multifaceted field,” says Schwettmann. “There is no one-size-fits-all approach; most procedures are very specific to individual questions we might have about a system, and to individual modalities like vision or language. Existing approaches to labeling individual neurons inside vision models have required training specialized models on human data, where these models perform only this single task. Interpretability agents built from language models could provide a general interface for explaining other systems — synthesizing results across experiments, integrating over different modalities, even discovering new experimental techniques at a very fundamental level.” 

As we enter a regime where the models doing the explaining are black boxes themselves, external evaluations of interpretability methods are becoming increasingly vital. The team’s new benchmark addresses this need with a suite of functions with known structure that are modeled after behaviors observed in the wild. The functions inside FIND span a diversity of domains, from mathematical reasoning to symbolic operations on strings to synthetic neurons built from word-level tasks. The dataset of interactive functions is procedurally constructed; real-world complexity is introduced to simple functions by adding noise, composing functions, and simulating biases. This allows for comparison of interpretability methods in a setting that translates to real-world performance.
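Based on that description, the construction might look something like the following sketch, in which simple base functions are made harder to describe by adding noise and composing them. The specifics are assumptions for illustration, not the FIND recipe.

```python
# Sketch of layering "real-world complexity" onto simple base functions
# by adding noise and composing them, per the article's description.
# Details here are assumptions, not the actual FIND construction.

import math
import random

def with_noise(fn, sigma=0.1):
    """Corrupt a function's output with Gaussian noise."""
    return lambda x: fn(x) + random.gauss(0.0, sigma)

def compose(f, g):
    """Build a harder-to-describe function from two simpler ones."""
    return lambda x: f(g(x))

base = math.sin                               # simple, easy to describe
target = with_noise(compose(abs, base), 0.05)  # noisy |sin(x)|
print(target(1.3))                             # what an AIA gets to query
```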

In addition to the dataset of functions, the researchers introduced an innovative evaluation protocol to assess the effectiveness of AIAs and existing automated interpretability methods. This protocol involves two approaches. For tasks that require replicating the function in code, the evaluation directly compares the AI-generated estimations and the original, ground-truth functions. The evaluation becomes more intricate for tasks involving natural language descriptions of functions. In these cases, accurately gauging the quality of these descriptions requires an automated understanding of their semantic content. To tackle this challenge, the researchers developed a specialized “third-party” language model. This model is specifically trained to evaluate the accuracy and coherence of the natural language descriptions provided by the AI systems, comparing them to the ground-truth function behavior.
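For the code-replication tasks, the comparison could in principle be as simple as the sketch below, which scores agreement between a ground-truth function and an AIA’s code estimate on sampled inputs. The sampling range and tolerance are assumptions, not the paper’s protocol.

```python
# Sketch of the code-replication half of the evaluation: compare an
# AIA's code estimate to the ground-truth function on sampled inputs.
# The range, sample count, and tolerance are illustrative assumptions.

import math
import random

def agreement(ground_truth, candidate, n=1000, lo=-10.0, hi=10.0, tol=1e-2):
    """Fraction of sampled inputs where the candidate matches the target."""
    xs = [random.uniform(lo, hi) for _ in range(n)]
    hits = sum(abs(ground_truth(x) - candidate(x)) <= tol for x in xs)
    return hits / n

# E.g., the agent recovered sin(x) but missed a scaling factor:
print(agreement(math.sin, lambda x: 0.9 * math.sin(x)))
```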

Evaluation with FIND reveals that we are still far from fully automating interpretability; although AIAs outperform existing interpretability approaches, they still fail to accurately describe almost half of the functions in the benchmark. Tamar Rott Shaham, co-lead author of the study and a postdoc at CSAIL, notes that “while this generation of AIAs is effective in describing high-level functionality, they still often overlook finer-grained details, particularly in function subdomains with noise or irregular behavior. This likely stems from insufficient sampling in these areas. One issue is that the AIAs’ effectiveness may be hampered by their initial exploratory data. To counter this, we tried guiding the AIAs’ exploration by initializing their search with specific, relevant inputs, which significantly enhanced interpretation accuracy.” This approach combines new AIA methods with previous techniques that use precomputed examples to initiate the interpretation process.

The researchers are also developing a toolkit to augment the AIAs’ ability to conduct more precise experiments on neural networks, both in black-box and white-box settings. This toolkit aims to equip AIAs with better tools for selecting inputs and refining hypothesis-testing capabilities for more nuanced and accurate neural network analysis. The team is also tackling practical challenges in AI interpretability, focusing on determining the right questions to ask when analyzing models in real-world scenarios. Their goal is to develop automated interpretability procedures that could eventually help people audit systems — e.g., for autonomous driving or face recognition — to diagnose potential failure modes, hidden biases, or surprising behaviors before deployment. 

Watching the watchers

The team envisions one day developing nearly autonomous AIAs that can audit other systems, with human scientists providing oversight and guidance. Advanced AIAs could develop new kinds of experiments and questions, potentially beyond human scientists’ initial considerations. The focus is on expanding AI interpretability to include more complex behaviors, such as entire neural circuits or subnetworks, and predicting inputs that might lead to undesired behaviors. This development represents a significant step forward in AI research, aiming to make AI systems more understandable and reliable.

“A good benchmark is a power tool for tackling difficult challenges,” says Martin Wattenberg, a computer science professor at Harvard University who was not involved in the study. “It’s wonderful to see this sophisticated benchmark for interpretability, one of the most important challenges in machine learning today. I’m particularly impressed with the automated interpretability agent the authors created. It’s a kind of interpretability jiu-jitsu, turning AI back on itself in order to help human understanding.”

Schwettmann, Rott Shaham, and their colleagues presented their work at NeurIPS 2023 in December. Additional MIT coauthors, all affiliates of CSAIL and the Department of Electrical Engineering and Computer Science (EECS), include graduate student Joanna Materzynska, undergraduate student Neil Chowdhury, Shuang Li PhD ’23, Assistant Professor Jacob Andreas, and Professor Antonio Torralba. Northeastern University Assistant Professor David Bau is an additional coauthor.

The work was supported, in part, by the MIT-IBM Watson AI Lab, Open Philanthropy, an Amazon Research Award, Hyundai NGV, the U.S. Army Research Laboratory, the U.S. National Science Foundation, the Zuckerman STEM Leadership Program, and a Viterbi Fellowship.

The Case for Decentralizing Your AI Tech Stack

So much of the conversation on AI development has become dominated by a futuristic and philosophical debate – should we approach general artificial intelligence, where AI will become advanced enough to perform any task the way a human could? Is that even possible? While the acceleration…

ChatGPT Meets Its Match: The Rise of Anthropic Claude Language Model

Over the past year, generative AI has exploded in popularity, thanks largely to OpenAI’s release of ChatGPT in November 2022. ChatGPT is an impressively capable conversational AI system that can understand natural language prompts and generate thoughtful, human-like responses on a wide range of topics. However,…

Someone Is Already Making A Steamboat Willie Mickey Mouse Horror Game

On January 1, Steamboat Willie, the cartoon that introduced the world to Mickey Mouse, entered the public domain, meaning the first version of Disney’s iconic mascot is (mostly) free for anyone to use. And just like when the film Winnie the Pooh: Blood and Honey took advantage of that whimsical character entering the public domain, someone has already started making a twisted, horrifying version of Mickey. In this case, it’s a video game.

It’s called Infestation: Origins, and it’s the debut title by Nightmare Forge Games. The game is an episodic four-player co-op survival adventure in which players control exterminators tasked with killing mutated vermin – and one of them happens to be the terrifying, person-sized monster modeled after Mickey. Similar to games like Phasmophobia and Lethal Company, you’ll rely on various tools and surveillance equipment to track the source of the infestations before neutralizing them. You’ll also have to maintain power for certain pieces of equipment while evading the giant, murderous rodents.  


The game was announced on New Year’s Day but has already courted some controversy. It was originally titled Infestation 88 (as seen in the trailer) until it received blowback online over perceived ties to Neo-Nazism; 88 is a common Neo-Nazi dogwhistle (shorthand for “Heil Hitler”), and critics flagged other references in the trailer as well. Nightmare Forge Games quickly denied any such intent, issuing a statement to IGN saying it was unaware of the reference, explaining that the title referred to the game’s late-1980s setting, and apologizing for any unintentional endorsement of Neo-Nazism. The team states:

“We want to apologize for our ignorance on this topic and appreciate that it was brought to our attention so we could address it. There is no intentional use of Nazi symbolism in our game nor studio, and we’ll continue to address any concerns as they arise. We strongly stand against Nazism and hate in any form.”

Outlets such as Motherboard have also pointed out red flags suggesting the game could be shovelware, such as its apparent reliance on flipped store-bought assets (via the Unity store) and AI-generated text-to-speech voice work. “As an indie studio, we do rely on some purchased assets from the Unreal and Unity stores,” Nightmare Forge Games told Motherboard. “However, there is a lot of work going into this project that we’re hopeful will be evident upon release.” The developer says the AI voiceovers in particular are placeholders and that it plans to hire real voice actors in the future.

Following the failure of similarly slapped-together games like The Day Before, curious players should perhaps approach Infestation: Origins with some degree of caution. Infestation: Origins is coming to PC via Steam and will first launch in Early Access sometime this year.