Dead By Daylight Gets Lara Croft Survivor Today, 2v8 Mode And Cross Progression On The Way

Dead by Daylight’s 8th-anniversary stream revealed a few big updates for the popular asymmetrical horror game. The most immediate is Lara Croft joining as the game’s latest crossover Survivor, with the much-requested 2v8 mode arriving soon after.

Tomb Raider Chapter

Lara Croft (the 2013 reboot version, specifically) joins DBD today, and though she lacks her pistols and bow, she sports three new perks highlighting her adventuring prowess. Here’s an explanation of each per DBD’s website:

  • Finesse: Your fast vaults are faster when healthy, with a cooldown after a successful use.
  • Hardened: When you open a Chest and cleanse or bless a Totem, Hardened activates for the duration of the Trial. From that point, every time you scream, you’ll instead reveal the Killer’s Aura.
  • Specialist: When you open or rummage through a Chest, gain 1 Token (up to 3). When you do a Great Skill Check, consume a Token to reduce the max required Generator progress.

2v8 Mode

On July 25, a new 2v8 multiplayer mode descends into the game. This pits two Killers against eight Survivors in expanded versions of five classic maps. The number of Generators has been doubled, and Survivors must now repair 8 out of 13 to escape. Captive Survivors are now sent directly into cages, and Survivor perks have been reworked into a new role system designed to encourage teamwork.

Cross-Progression

Cross-progression arrives on July 22, letting players sync their progress across multiple owned versions of the game. Everything can be shared across any platform except for Switch, which has a few restrictions, as you can see in the chart below.

[Chart: cross-progression support by platform]

Today’s stream also revealed a new trailer and the release date for The Casting of Frank Stone, the single-player DBD game developed by Supermassive Games (Until Dawn, The Quarry). You can read all about that here.

AI data security: 8 essential steps for CISOs in the age of generative AI – CyberTalk

EXECUTIVE SUMMARY:

Artificial intelligence and large language models are transforming how organizations operate. They’re also generating vast quantities of data – including synthetic text, code, conversational data and even multimedia content. This expands the attack surface, leaving organizations more exposed to hacking, data breaches and data theft.

This article outlines eight essential steps that cyber security stakeholders can take to strengthen AI data security in an age where AI usage is rapidly accelerating and the societal consensus on AI regulation remains elusive.

AI data security: 8 essential steps

1. Risk assessment. The foundation of any effective security strategy is, of course, a thorough risk assessment. CISOs should conduct a comprehensive evaluation of their organization’s AI systems, identifying vulnerabilities, threats, and their potential impact.

This assessment should encompass the entire AI lifecycle, from data acquisition and model development to deployment and monitoring. By understanding the specific risks associated with AI initiatives, cyber security teams can prioritize and implement targeted security and mitigation strategies.

2. Robust governance framework. Effective AI data security requires a strong governance structure. CISOs need to develop a comprehensive framework that outlines data ownership, access controls, usage policies, and retention guidelines. This framework should align with relevant regulations, while incorporating principles of data minimization and privacy-by-design. Clear governance not only minimizes the risk of data breaches, but also ensures compliance with legal and ethical codes.

3. Secure development and deployment practices. As AI systems and security features are developed, cyber security teams need to ensure secure coding practices, vulnerability testing and threat modeling (where possible). In addition, security controls need to be put in place to protect AI models and infrastructure from unauthorized access or data loss. Prioritizing cyber security from the outset reduces the probability that vulnerabilities will make their way into production systems.

4. Protect training data. Cyber security professionals need to implement stringent security measures to protect the integrity and confidentiality of training data. This includes data anonymization, encryption and access controls, regular integrity checks to detect unauthorized modifications, and monitoring of data for adversarial inputs.
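
As one hedged illustration of the integrity-check idea above, the Python sketch below records SHA-256 hashes for training-data files and later flags any file whose contents no longer match. The file paths and manifest format are hypothetical assumptions, not a prescribed tool.

```python
import hashlib
import json
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(data_dir: Path, manifest_path: Path) -> None:
    """Record a hash for every training-data file (run once at data sign-off)."""
    manifest = {str(p): hash_file(p)
                for p in sorted(data_dir.rglob("*")) if p.is_file()}
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify_manifest(manifest_path: Path) -> list[str]:
    """Return paths whose contents no longer match the recorded hashes."""
    manifest = json.loads(manifest_path.read_text())
    return [p for p, expected in manifest.items()
            if not Path(p).is_file() or hash_file(Path(p)) != expected]

# Hypothetical usage: build once at sign-off, verify on a schedule.
# build_manifest(Path("training_data"), Path("manifest.json"))
# tampered = verify_manifest(Path("manifest.json"))
```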

5. Enhanced network security. AI systems often require significant computational resources across distributed environments. CISOs must ensure that the network infrastructure supporting AI operations is highly secure. Key measures include implementing network segmentation to isolate AI systems, utilizing next-generation firewalls and intrusion detection/prevention systems, and ensuring regular patching and updates of all systems in the AI infrastructure.

6. Advanced authentication and access controls. Given the sensitive nature of AI systems and data, robust authentication and access control mechanisms are essential. Cyber security teams should implement multi-factor authentication, role-based access controls, just-in-time provisioning for sensitive AI operations, and privileged access management for AI administrators and developers. These measures help ensure that only authorized personnel can access AI systems and data, reducing the risk of insider threats and unauthorized data exposure.
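
To make the role-based access control point concrete, here is a minimal sketch in Python. The roles, permissions, and resource names are illustrative assumptions only; real deployments would rely on an identity provider rather than an in-code table.

```python
from dataclasses import dataclass

# Hypothetical role-to-permission mapping for AI assets.
ROLE_PERMISSIONS = {
    "ml_engineer":  {"read:training_data", "write:model_registry"},
    "data_labeler": {"read:training_data"},
    "ai_admin":     {"read:training_data", "write:model_registry", "manage:infrastructure"},
}

@dataclass
class User:
    name: str
    role: str
    mfa_verified: bool  # stand-in for a real multi-factor check

def is_authorized(user: User, permission: str) -> bool:
    """Allow an action only if MFA succeeded and the user's role grants it."""
    if not user.mfa_verified:
        return False
    return permission in ROLE_PERMISSIONS.get(user.role, set())

# Example: a labeler cannot push to the model registry.
print(is_authorized(User("dana", "data_labeler", True), "write:model_registry"))  # False
```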

7. AI-specific incident response and recovery plans. While prevention is crucial, organizations must also prepare for potential AI-related security incidents. Cyber security professionals should develop and regularly test incident response and recovery plans tailored to AI systems. These plans should address forensic analysis of compromised AI models or data, communication protocols for stakeholders and regulatory bodies, and business continuity measures for AI-dependent operations.

8. Continuous monitoring and adaptation. AI data security is an ongoing commitment that requires constant vigilance. Implementing robust monitoring systems and processes is essential to ensure the continued security and integrity of AI operations. This includes real-time monitoring of AI system behavior and performance, anomaly detection to identify potential security threats or breaches, continuous evaluation of AI model performance and potential drift, and monitoring of emerging threats in the AI landscape.
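
As a hedged sketch of the drift-monitoring idea, the snippet below compares a reference window of a model metric (here, prediction confidences) against a live window using a simple population stability index. The thresholds, window sizes, and toy distributions are illustrative assumptions.

```python
import numpy as np

def population_stability_index(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Rough drift score: compare binned distributions of a model metric
    between a reference window and a live window."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / max(len(reference), 1)
    live_pct = np.histogram(live, bins=edges)[0] / max(len(live), 1)
    # Clip to avoid division by zero and log of zero.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

# Illustrative thresholds: < 0.1 stable, 0.1-0.25 investigate, > 0.25 alert.
rng = np.random.default_rng(0)
reference = rng.beta(8, 2, size=5000)   # confidences observed at deployment time
live = rng.beta(5, 3, size=5000)        # confidences observed this week
psi = population_stability_index(reference, live)
if psi > 0.25:
    print(f"Drift alert: PSI={psi:.2f}")
```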

Further thoughts

As AI and large language models continue to advance, the security challenges they present will only grow more complex. The journey towards effective AI data security requires a holistic approach that encompasses technology, processes, and people. Stay ahead of the curve by putting the eight steps above into practice.

How to assess a general-purpose AI model’s reliability before it’s deployed

Foundation models are massive deep-learning models that have been pretrained on an enormous amount of general-purpose, unlabeled data. They can be applied to a variety of tasks, like generating images or answering customer questions.

But these models, which serve as the backbone for powerful artificial intelligence tools like ChatGPT and DALL-E, can offer up incorrect or misleading information. In a safety-critical situation, such as a pedestrian approaching a self-driving car, these mistakes could have serious consequences.

To help prevent such mistakes, researchers from MIT and the MIT-IBM Watson AI Lab developed a technique to estimate the reliability of foundation models before they are deployed to a specific task.

They do this by training a set of foundation models that are slightly different from one another. Then they use their algorithm to assess the consistency of the representations each model learns about the same test data point. If the representations are consistent, it means the model is reliable.

When they compared their technique to state-of-the-art baseline methods, it was better at capturing the reliability of foundation models on a variety of classification tasks.

Someone could use this technique to decide if a model should be applied in a certain setting, without the need to test it on a real-world dataset. This could be especially useful when datasets may not be accessible due to privacy concerns, like in health care settings. In addition, the technique could be used to rank models based on reliability scores, enabling a user to select the best one for their task.

“All models can be wrong, but models that know when they are wrong are more useful. The problem of quantifying uncertainty or reliability gets harder for these foundation models because their abstract representations are difficult to compare. Our method allows you to quantify how reliable a representation model is for any given input data,” says senior author Navid Azizan, the Esther and Harold E. Edgerton Assistant Professor in the MIT Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS), and a member of the Laboratory for Information and Decision Systems (LIDS).

He is joined on a paper about the work by lead author Young-Jin Park, a LIDS graduate student; Hao Wang, a research scientist at the MIT-IBM Watson AI Lab; and Shervin Ardeshir, a senior research scientist at Netflix. The paper will be presented at the Conference on Uncertainty in Artificial Intelligence.

Counting the consensus

Traditional machine-learning models are trained to perform a specific task. These models typically make a concrete prediction based on an input. For instance, the model might tell you whether a certain image contains a cat or a dog. In this case, assessing reliability could simply be a matter of looking at the final prediction to see if the model is right.

But foundation models are different. The model is pretrained using general data, in a setting where its creators don’t know all downstream tasks it will be applied to. Users adapt it to their specific tasks after it has already been trained.

Unlike traditional machine-learning models, foundation models don’t give concrete outputs like “cat” or “dog” labels. Instead, they generate an abstract representation based on an input data point.

To assess the reliability of a foundation model, the researchers used an ensemble approach by training several models which share many properties but are slightly different from one another.

“Our idea is like counting the consensus. If all those foundation models are giving consistent representations for any data in our dataset, then we can say this model is reliable,” Park says.

But they ran into a problem: How could they compare abstract representations?

“These models just output a vector, comprised of some numbers, so we can’t compare them easily,” he adds.

They solved this problem using an idea called neighborhood consistency.

For their approach, the researchers prepare a set of reliable reference points to test on the ensemble of models. Then, for each model, they investigate the reference points located near that model’s representation of the test point.

By looking at the consistency of neighboring points, they can estimate the reliability of the models.
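
The paper's exact algorithm isn't spelled out here, so the following is only a simplified Python sketch of the neighborhood-consistency idea as described above: each model embeds the same reference points and a test point, and reliability is scored by how much the test point's nearest reference neighbors overlap across models. The toy "models," embedding dimensions, and choice of k are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def nearest_reference_ids(embed, test_point, reference_points, k=10):
    """Indices of the k reference points closest to the test point,
    measured in a given model's representation space."""
    test_vec = embed(test_point)
    ref_vecs = np.stack([embed(r) for r in reference_points])
    dists = np.linalg.norm(ref_vecs - test_vec, axis=1)
    return set(np.argsort(dists)[:k])

def neighborhood_consistency(models, test_point, reference_points, k=10):
    """Score in [0, 1]: average pairwise overlap of the neighbor sets
    that different models assign to the same test point."""
    neighbor_sets = [nearest_reference_ids(m, test_point, reference_points, k)
                     for m in models]
    overlaps = []
    for i in range(len(neighbor_sets)):
        for j in range(i + 1, len(neighbor_sets)):
            overlaps.append(len(neighbor_sets[i] & neighbor_sets[j]) / k)
    return float(np.mean(overlaps))

# Toy example: each "model" is a random linear embedding of 32-dim inputs.
rng = np.random.default_rng(1)
models = [lambda x, W=rng.normal(size=(32, 16)): x @ W for _ in range(4)]
reference_points = rng.normal(size=(200, 32))
test_point = rng.normal(size=32)
print(neighborhood_consistency(models, test_point, reference_points, k=10))
```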

Aligning the representations

Foundation models map data points in what is known as a representation space. One way to think about this space is as a sphere. Each model maps similar data points to the same part of its sphere, so images of cats go in one place and images of dogs go in another.

But each model would map animals differently in its own sphere, so while cats may be grouped near the South Pole of one sphere, another model could map cats somewhere in the Northern Hemisphere.

The researchers use the neighboring points like anchors to align those spheres so they can make the representations comparable. If a data point’s neighbors are consistent across multiple representations, then one should be confident about the reliability of the model’s output for that point.

When they tested this approach on a wide range of classification tasks, they found that it was much more consistent than baselines. Plus, it wasn’t tripped up by challenging test points that caused other methods to fail.

Moreover, their approach can be used to assess reliability for any input data, so one could evaluate how well a model works for a particular type of individual, such as a patient with certain characteristics.

“Even if the models all have average performance overall, from an individual point of view, you’d prefer the one that works best for that individual,” Wang says.

However, one limitation comes from the fact that they must train an ensemble of large foundation models, which is computationally expensive. In the future, they plan to find more efficient ways to build multiple models, perhaps by using small perturbations of a single model.

This work is funded, in part, by the MIT-IBM Watson AI Lab, MathWorks, and Amazon.

Professor Emeritus John Vander Sande, microscopist, entrepreneur, and admired mentor, dies at 80

MIT Professor Emeritus John B. Vander Sande, a pioneer in electron microscopy and beloved educator and advisor known for his warmth and empathetic instruction, died June 28 in Newbury, Massachusetts. He was 80.

The Cecil and Ida Green Distinguished Professor in the Department of Materials Science and Engineering (DMSE), Vander Sande was a physical metallurgist, studying the physical properties and structure of metals and alloys. His long career included a major entrepreneurial pursuit, launching American Superconductor; forming international academic partnerships; and serving in numerous administrative roles at MIT and, after his retirement, one in Iceland.

Vander Sande’s interests encompassed more than science and technology; a self-taught scholar of 17th- and 18th-century furniture, he earned a production credit in the 1996 film “The Crucible.”

He is perhaps best remembered for bringing the first scanning transmission electron microscope (STEM) into the United States. This powerful microscope uses a beam of electrons to scan material samples and investigate their structure and composition.

“John was the person who really built up what became MIT’s modern microscopy expertise,” says Samuel M. Allen, the POSCO Professor Emeritus of Physical Metallurgy. Vander Sande studied electron microscopy during a postdoctoral fellowship at Oxford University in England with luminaries Sir Peter Hirsch and Colin Humphreys. “The people who wrote the first book on transmission electron microscopy were all there at Oxford, and John basically brought that expertise to MIT in his teaching and mentoring.”

Born in Baltimore, Maryland, in 1944, Vander Sande grew up in Westwood, New Jersey. He studied mechanical engineering at Stevens Institute of Technology, earning a bachelor’s degree in 1966, and switched to materials science and engineering at Northwestern University, receiving a PhD in 1970. Following his time at Oxford, Vander Sande joined MIT as assistant professor in 1971.

A vision for advanced microscopy

At MIT, Vander Sande became known as a leading practitioner of weak-beam microscopy, a technique refined by Hirsch to improve images of dislocations, tiny imperfections in crystalline materials that help researchers determine why materials fail.

His procurement of the STEM instrument from the U.K. company Vacuum Generators in the mid-1970s was a substantial innovation, allowing researchers to visualize individual atoms and identify chemical elements in materials.

“He showed the capabilities of new techniques, like scanning transmission electron microscopy, in understanding the physics and chemistry of materials at the nanoscale,” says Yet-Ming Chiang, the Kyocera Professor of Ceramics at DMSE. Today, MIT.nano stands as one of the world’s foremost facilities for advanced microscopy techniques. “He paved the way, at MIT, certainly, and more broadly, to those state-of-the-art instruments that we have today.”

The director of a microscopy laboratory at MIT, Vander Sande used instruments like that early STEM and its successors to study how manufacturing processes affect material structure and properties.

One focus was rapid solidification, which involves cooling materials quickly to enhance their properties. Tom Kelly, a PhD student in the late 1970s, worked with Vander Sande to explore how fast-cooling molten metal as powder changes its internal structure. They discovered that “precipitates,” or small particles formed during the rapid cooling, made the metal stronger.

“It took me at least a year to finally get some success. But we did succeed,” says Kelly, CEO of STEAM Instruments, a startup that is developing mass spectrometry technology, which measures and analyzes atoms emitted by substances. “That was John who brought that project and the solution to the table.”

Using his deep expertise in metals and other materials, including superconducting oxides, which can conduct electricity when cooled to low temperatures, Vander Sande co-founded American Superconductor with fellow DMSE faculty member Greg Yurek in 1987. The company produced high-temperature superconducting wires now used in renewable energy technology.

“In the MIT entrepreneurial ecosystem, American Superconductor was a pioneer,” says Chiang, who was one of the startup’s co-founders. “It was one of the early companies that was formed on the basis of research at MIT, in which faculty spun out a company, as opposed to graduates starting companies.”

To teach them is to know them

While Yurek left MIT to lead American Superconductor full time as CEO, Vander Sande stayed on the faculty at DMSE, remaining a consultant to the company and a board member for many years.

That comes as no surprise to his students, who recall a passionate and devoted educator and mentor.

“He was a terrific teacher,” says Frank Gayle, a former PhD student of Vander Sande’s who recently retired from his job as director at the National Institute of Standards and Technology. “He would take the really complex subjects, super mathematical and complicated, and he would teach them in a way that you felt comfortable as a student learning them. He really had a terrific knack for that.”

Chiang says Vander Sande was an “exceptionally clear” lecturer who would use memorable imagery to get concepts across, like comparing heterogeneous nanoparticles, tiny particles that have a varied structure or composition, to a black-and-white Holstein cow. “Hard to forget,” Chiang says.

Powering Vander Sande’s teaching, Gayle said, was an aptitude for knowing the people he was teaching, for recognizing their backgrounds and what they knew and didn’t know. He likened Vander Sande to a dad on Take Your Kid to Work Day, demystifying an unfamiliar world. “He had some way of doing that, and then he figured out how to get the pieces together to make it comprehensible.”

He brought a similar talent to mentorship, with an emphasis on the individual rather than the project, Gayle says. “He really worked with people to encourage them to do creative things and encouraged their creativity.”

Kelly, who was a University of Wisconsin professor before becoming a repeat entrepreneur, says Vander Sande was an exceptional role model for young grad students.

“When you see these people who’ve accomplished a lot, you’re afraid to even talk to them,” he says. “But in reality, they’re regular people. One of the things I learned from John was that he’s just a regular person who does good work. I realized that, Hey, I can be a regular person and do good work, too.”

Another former grad student, Matt Libera, says he learned as much about life from Vander Sande as he did about materials science and engineering.

“Because he was not just a scientist-engineer, but really a well-rounded human being and shared a lot of experience and advice that went beyond just the science,” says Libera, a materials science and engineering professor at Stevens Institute of Technology, Vander Sande’s alma mater.

“A rare talent”

Vander Sande was equally dedicated to MIT and his department. In DMSE, he served on multiple committees, including those overseeing undergraduate matters and curriculum development, and in 1991 he was appointed associate dean of the School of Engineering. He served in the position until 1999, twice taking over as acting dean.

“I remember that that took up a huge amount of his time,” Chiang says. Vander Sande lived in Newbury, Massachusetts, and he and his wife, Marie-Teresa, who long worked for MIT’s Industrial Liaison Program, would travel together to Cambridge by car. “He once told me that he did a lot of the work related to his deanship during that long commute back and forth from Newbury.”

Gayle says Vander Sande’s remarkable communication and people skills are what made him a good fit for leadership roles. “He had a rare talent for those things.”

He also was a bridge from MIT to the rest of the world. Vander Sande played a leading role in establishing the Singapore-MIT Alliance for Research and Technology, a teaching partnership that set up Institute-modeled graduate programs at Singaporean universities. And he was the director of MIT’s half of the Cambridge-MIT Institute, a collaboration with the University of Cambridge in the U.K. that focused on student and faculty exchanges, integrated research, and professional development. Retiring from MIT in 2006, he pursued academic projects in Ecuador, Morocco, and Iceland, and served as acting provost of Reykjavik University from 2009 to 2010.

He had numerous interests outside work, including college football and sports cars, but his greatest passion was for antiques, mainly early American furniture.

A self-taught expert in antiquarian arts, he gave lectures on connoisseurship and attended auctions and antique shows. His interest extended to his home, built in 1697, which had low ceilings that were inconvenient for the 6-foot-1 Vander Sande.

So respected was he for his expertise that the production crew for 20th Century Fox’s “The Crucible” sought him out. The film, about the Salem, Massachusetts, witch trials, was set in 1692. The crew made copies of furniture from his collection, and Vander Sande consulted on set design and decoration to ensure historical accuracy.

His passion extended beyond just historical artifacts, says Professor Emeritus Allen. He was profoundly interested in learning about the people behind them.

“He liked to read firsthand accounts, letters and stuff,” he says. “His real interest was trying to understand how people two centuries ago or more thought, what their lives were like. It wasn’t just that he was an antiques collector.”

Vander Sande is survived by his wife, Marie-Teresa Vander Sande; his son, John Franklin VanderSande, and his wife, Melanie; his daughter, Rosse Marais VanderSande Ellis, and her husband, Zak Ellis; and grandchildren Gabriel Rhys Pelletier, Sophia Marais VanderSande, and John Christian VanderSande.