10 ways generative AI drives stronger security outcomes – CyberTalk


EXECUTIVE SUMMARY:

Eighty-seven percent of cyber security professionals recognize the potential inherent in leveraging AI for security purposes. The growing volume and sophistication of cyber attacks point to the critical need for new and innovative ways to protect businesses from cyber skullduggery.

However, despite widespread enthusiasm for generative AI, its adoption in the security space has remained somewhat constricted and slow. Why? The reality is that running mature, enterprise-ready generative AI is no easy feat.

Managing generative AI systems requires skilled professionals, comprehensive governance structures and powerful infrastructure, among other things. Nonetheless, if organizational maturity is accounted for and attended to, generative AI can present robust opportunities through which to drive stronger cyber security outcomes.

10 ways generative AI drives stronger cyber security outcomes

1. Customized threat scenarios. When presented with news articles detailing a never-seen-before threat scenario, generative AI can process the information in such a way as to create a customized tabletop exercise.

When also given organization-specific information, the technology can generate tabletop scenarios that closely align with an organization’s interests and general risk profile. Thus, the AI can strengthen organizational abilities to plan for and contend with emerging cyber threats.

2. Persona-based risk assessment. When joining a new organization, cyber security leaders commonly connect with stakeholders in order to understand department-specific cyber risks.

This effort has its benefits, but only to an extent. Cyber security personnel can only reach out to high-level stakeholders and departmental heads for input so many times before seriously detracting from their work.

To the advantage of cyber security professionals, generative AI can, when set up to do so, emulate various personas. If this sounds absurd, just hang in there: by adopting a persona, the AI can simulate different perspectives and evaluate risk scenarios accordingly.

For example, an AI model that emulates a cautious CFO may be able to provide security staff with insights into financial data security risks that would have otherwise remained overlooked. While new and still somewhat eerie, persona emulation can prompt businesses to examine more elusive risk types and to consider corresponding red teaming activities.
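Persona emulation of this kind typically comes down to how the model is prompted. The sketch below is purely illustrative: it assembles a hypothetical persona prompt (the fields, template, and scenario are made up for this example, not taken from any product) that could be sent to a language model.

```python
# Hypothetical sketch: assembling a persona prompt for LLM-based risk
# assessment. The persona fields and template are illustrative only.

def build_persona_prompt(persona: dict, scenario: str) -> str:
    """Compose a prompt asking the model to assess a risk scenario
    from a specific stakeholder's point of view."""
    return (
        f"You are a {persona['risk_tolerance']} {persona['role']} "
        f"responsible for {persona['focus']}. "
        f"Assess the following scenario and list the top risks you "
        f"would raise with the security team:\n\n{scenario}"
    )

cfo = {"role": "CFO", "risk_tolerance": "cautious",
       "focus": "financial data and regulatory exposure"}
prompt = build_persona_prompt(cfo, "A third-party payroll API was breached.")
print(prompt)
```

Swapping in a different persona dictionary (a skeptical plant manager, a harried help-desk lead) yields a different lens on the same scenario.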

3. Dynamic honeypots. Honeypots are decoy systems designed to strategically misdirect hackers who are looking for high-value data. In essence, they send the hackers hunting in the wrong direction (so that security pros can find them and send them packing).

Generative AI can enhance the effectiveness of honeypot traps by dynamically creating new and different fake environments. This can help protect a given organization’s resources, as it helps to continuously confound and redirect hackers.
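To make the "dynamically created fake environments" idea concrete, here is a minimal toy sketch. A real deployment would use a generative model to produce convincing file names and contents; in this stand-in, seeded random sampling plays that role, and all directory and file names are invented for illustration.

```python
# Illustrative sketch only: a honeypot that regenerates a fake
# filesystem layout on demand, so probing attackers never see the
# same decoy twice. A generative model could supply more convincing
# names and contents; random sampling stands in for it here.
import random

FAKE_DIRS = ["finance", "hr", "payroll", "contracts", "backups"]
FAKE_FILES = ["q3_forecast.xlsx", "salaries_2024.csv", "vpn_keys.txt",
              "board_minutes.docx", "customers.db"]

def generate_decoy_tree(seed: int, n_dirs: int = 3, n_files: int = 4) -> dict:
    """Return a fresh fake directory tree; a new seed yields a new decoy."""
    rng = random.Random(seed)
    return {d: rng.sample(FAKE_FILES, n_files)
            for d in rng.sample(FAKE_DIRS, n_dirs)}

decoy = generate_decoy_tree(seed=1)
print(decoy)
```

Rotating the seed (or re-prompting a generative model) on a schedule is what keeps the decoy environment a moving target.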

4. Policy development and optimization. Generative AI has the ability to analyze historical security incidents, regulations and organizational goals. As a result, it can recommend (or even autonomously develop) cyber security policies. Said policies can be tailored to align with business objectives, compliance requirements and a cyber security strategy.

(However, despite the utility of generative AI in this area, regular policy validation and human oversight are still critical.)

5. Malware detection. When it comes to malware detection, generative AI algorithms excel. They can closely monitor patterns, understand behaviors and zero in on anomalies.

Generative AI can detect new malware strains, including those that deploy unique self-evolving techniques and polymorphic code.
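One of the low-level signals such detectors combine with many others is byte entropy: packed or polymorphic payloads tend toward near-uniform byte distributions. The snippet below is a classical heuristic, not a generative model itself, shown only to illustrate the kind of anomaly signal these systems learn over.

```python
# Classical heuristic, shown as an illustration of one anomaly signal:
# high byte entropy often indicates packed or encrypted (possibly
# polymorphic) payloads. Real detectors combine many such signals.
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (max 8.0)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

plain = b"hello hello hello hello"      # repetitive, low entropy
packed = bytes(range(256))              # uniform bytes, maximum entropy
print(round(byte_entropy(plain), 2))
print(round(byte_entropy(packed), 2))   # 8.0
```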

6. Secure code generation. Generative AI can assist with writing secure code. Generative AI tools can review existing codebases, find vulnerabilities and recommend patches or improvements.

Refusing to use generative AI for secure code development would be like “asking an office worker to use a typewriter instead of a computer,” says Albert Ziegler, principal researcher and member of the GitHub Next research and development team.

For example, generative AI can automatically refactor code to eliminate common security flaws, like SQL injection or buffer overflows.
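Here is the kind of SQL-injection refactor a code-review assistant might suggest, shown with Python's built-in sqlite3 module (the table and data are made up for illustration): string interpolation is replaced with a parameterized query.

```python
# Before/after sketch of a SQL-injection fix. The table, column, and
# data are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable: attacker-controlled `name` is spliced into the SQL text.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Fixed: the driver binds `name` as data, never as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

# A classic injection payload returns every row through the unsafe path...
print(find_user_unsafe("' OR '1'='1"))   # leaks all rows
# ...but matches nothing once the query is parameterized.
print(find_user_safe("' OR '1'='1"))     # []
```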

7. Privacy-preserving data synthesis. According to research published on arXiv, the preprint repository operated by Cornell University, generative AI’s ability to create task-specific, synthetic training data has positive implications for privacy and cyber security.

For instance, generative AI can anonymize medical data, enabling researchers to study the material without the risk of accidentally exposing real data through insecure tools (or in some other way, compromising patient privacy).
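As a deliberately tiny stand-in for what real synthesis models do, the sketch below generates synthetic values that preserve one aggregate statistic of a (made-up) patient dataset without copying any real record. Production systems use far stronger techniques, such as differentially private generative models; this only illustrates the shape of the idea.

```python
# Minimal illustrative stand-in for privacy-preserving synthesis:
# draw synthetic values from a distribution fit to the real data,
# so no real record is ever released. Records are invented.
import random
import statistics

real = [{"age": 64, "bp": 142}, {"age": 58, "bp": 131}, {"age": 71, "bp": 150}]

def synthesize(records, field, n, seed=0):
    """Draw n synthetic values from a normal fit of one numeric field."""
    rng = random.Random(seed)
    vals = [r[field] for r in records]
    mu, sigma = statistics.mean(vals), statistics.pstdev(vals)
    return [round(rng.gauss(mu, sigma)) for _ in range(n)]

fake_ages = synthesize(real, "age", 5)
print(fake_ages)
```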

8. Vulnerability prediction and prioritization. Generative AI and machine learning tools can assist with vulnerability management by analyzing existing databases, software code patterns, network configurations and threat intelligence. Organizations can then predict potential vulnerabilities in software (or network configurations) ahead of when they would otherwise be discovered.
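The output of such analysis usually feeds a prioritization step. The scoring function below is a hypothetical sketch: the weights and the three factors (severity, exploit likelihood, asset exposure) are illustrative choices, not taken from any standard or product.

```python
# Hypothetical prioritization sketch: rank findings by a weighted
# blend of severity, exploit likelihood, and asset exposure. The
# weights and CVE labels are illustrative only.
def priority(finding: dict) -> float:
    return (0.5 * finding["cvss"] / 10          # normalized severity
            + 0.3 * finding["exploit_likelihood"]
            + 0.2 * finding["asset_exposure"])

findings = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_likelihood": 0.9, "asset_exposure": 1.0},
    {"id": "CVE-B", "cvss": 7.5, "exploit_likelihood": 0.2, "asset_exposure": 0.3},
]
ranked = sorted(findings, key=priority, reverse=True)
print([f["id"] for f in ranked])
```

An ML model's role in this pipeline would be estimating the exploit-likelihood and exposure inputs rather than hand-assigning them.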

9. Fraud detection. One novel application of generative AI is in fraud detection, as the technology can sift through massive datasets (nearly instantly). Thus, generative AI can flag and block suspicious online transactions as they pop up, preventing possible economic losses.

PayPal is known to have already applied generative AI and ML to enhance its fraud detection capabilities. Over a three-year period, this application of generative AI reduced the company’s loss rate by half.
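At its simplest, transaction flagging reduces to outlier scoring against a user's history. The toy check below (all amounts invented) stands in for the far richer models production systems use, just to show where the flag comes from.

```python
# Toy anomaly flag, standing in for the scoring a production fraud
# model performs: flag transactions far outside a user's typical
# range. The history amounts are invented.
import statistics

history = [23.5, 41.0, 18.2, 52.9, 30.1, 27.4]  # past transaction amounts

def is_suspicious(amount: float, past: list, threshold: float = 3.0) -> bool:
    """Flag if `amount` is more than `threshold` standard deviations
    from the historical mean."""
    mu = statistics.mean(past)
    sigma = statistics.pstdev(past)
    return abs(amount - mu) > threshold * sigma

print(is_suspicious(31.0, history))    # typical amount: not flagged
print(is_suspicious(4800.0, history))  # wildly atypical: flagged
```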

10. Social engineering countermeasures. The success of social engineering tactics, like phishing emails, depends on the manipulation of human emotions and the exploitation of trust. To combat phishing, generative AI can be used to develop realistic phishing simulations for the purpose of employee training.

Generative AI can also be used to develop deepfakes of known persons — for internal ethical use and training purposes only. Exposing employees to deepfakes in a controlled setting can help them become more adept at spotting deepfakes in the real world.

Explore how else generative AI can drive stronger cyber security outcomes for your organization. Read about how Check Point’s new generative AI-based technology can benefit your team. Click here.

To receive compelling cyber insights, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.

Hungry for Data: How Supply Chain AI Can Reach its Inflection Point

Artificial intelligence (AI) in supply chains is a chicken-or-the-egg thing. There are those who extol AI for its potential to create greater visibility into supply chain operations. In other words, AI first, visibility second. Which may have been true when pervasive, real-time supply chain visibility wasn’t…

Itamar Friedman, CEO & Co-Founder of CodiumAI – Interview Series

Itamar Friedman is the CEO and Co-Founder of CodiumAI. Codium focuses on the “code integrity” side of code generation — generating automated tests, code explanations, and reviews. They have released research on generating code solutions for competitive programming challenges that outperform Google DeepMind. When and how…

5 Best AI Research Paper Summarizers (May 2024)

In the fast-paced world of academic research, keeping up with the ever-growing body of literature can be a daunting task. Researchers and students often find themselves inundated with lengthy research papers, making it challenging to quickly grasp the core ideas and insights. AI-powered research paper summarizers have…

The power of App Inventor: Democratizing possibilities for mobile applications


In June 2007, Apple unveiled the first iPhone. But the company made a strategic decision about iPhone software: its new App Store would be a walled garden. An iPhone user wouldn’t be able to install applications that Apple itself hadn’t vetted, at least not without breaking Apple’s terms of service.

That business decision, however, left educators out in the cold. They had no way to bring mobile software development — about to become part of everyday life — into the classroom. How could a young student code, futz with, and share apps if they couldn’t get them into the App Store?

MIT professor Hal Abelson was on sabbatical at Google at the time, when the company was deciding how to respond to Apple’s gambit to corner the mobile hardware and software market. Abelson recognized the restrictions Apple was placing on young developers; Google recognized the market need for an open-source alternative operating system — what became Android. Both saw the opportunity that became App Inventor.

“Google started the Android project sort of in reaction to the iPhone,” Abelson says. “And I was there, looking at what we did at MIT with education-focused software like Logo and Scratch, and said ‘what a cool thing it would be if kids could make mobile apps also.’”

Google software engineer Mark Friedman volunteered to work with Abelson on what became “Young Android,” soon renamed Google App Inventor. Like Scratch, App Inventor is a block-based language, allowing programmers to visually snap together pre-made “blocks” of code rather than needing to learn specialized programming syntax.

Friedman describes it as novel for the time, particularly for mobile development, to make it as easy as possible to build simple mobile apps. “That meant a web-based app,” he says, “where everything was online and no external tools were required, with a simple programming model, drag-and-drop user interface designing, and blocks-based visual programming.” Thus an app someone programmed in a web interface could be installed on an Android device.

App Inventor scratched an itch. Boosted by the explosion in smartphone adoption and the fact App Inventor is free (and eventually open source), soon more than 70,000 teachers were using it with hundreds of thousands of students, with Google providing the backend infrastructure to keep it going.

“I remember answering a question from my manager at Google who asked how many users I thought we’d get in the first year,” Friedman says. “I thought it would be about 15,000 — and I remember thinking that might be too optimistic. I was ultimately off by a factor of 10–20.” Friedman was quick to credit more than their choices about the app. “I think that it’s fair to say that while some of that growth was due to the quality of the tool, I don’t think you can discount the effect of it being from Google and of the effect of Hal Abelson’s reputation and network.”

Some early apps took App Inventor in ambitious, unexpected directions, such as “Discardious,” developed by teenage girls in Nigeria. Discardious helped business owners and individuals dispose of waste in communities where disposal was unreliable or too cumbersome.

But even before apps like Discardious came along, the team knew Google’s support wouldn’t be open-ended. No one wanted to cut teachers off from a tool they were thriving with, so around 2010, Google and Abelson agreed to transfer App Inventor to MIT. The transition required major staff contributions to recreate App Inventor without Google’s proprietary software, with MIT working with Google to continue providing the network resources that keep App Inventor free for the world.

With such a large user base, however, that left Abelson “worried the whole thing was going to collapse” without Google’s direct participation.

Friedman agrees. “I would have to say that I had my fears. App Inventor has a pretty complicated technical implementation, involving multiple programming languages, libraries and frameworks, and by the end of its time at Google we had a team of about 10 people working on it.”

Yet not only did Google provide significant funding to aid the transfer, but, Friedman says of the transfer’s ultimate success, “Hal would be in charge and he had fairly extensive knowledge of the system and, of course, had great passion for the vision and the product.”

MIT enterprise architect Jeffrey Schiller, who built the Institute’s computer network and became its manager in 1984, was another key part in sustaining App Inventor after its transition, helping introduce technical features fundamental to its accessibility and long-term success. He led the integration of the platform into web browsers, the addition of WiFi support rather than needing to connect phones and computers via USB, and the laying of groundwork for technical support of older phones because, as Schiller says, “many of our users cannot rush out and purchase the latest and most expensive devices.”

These collaborations and contributions over time resulted in App Inventor’s greatest resource: its user base. As it grew, and with support from community managers, volunteer know-how grew with it. Now, more than a decade since its launch, App Inventor recently crossed several major milestones, the most remarkable being the creation of its 100 millionth project and registration of its 20 millionth user. Young developers continue to make incredible applications, boosted now by the advantages of AI. College students created “Brazilian XôDengue” as a way for users to use phone cameras to identify mosquito larvae that may be carrying the dengue virus. High school students recently developed “Calmify,” a journaling app that uses AI for emotion detection. And a mother in Kuwait wanted something to help manage the often-overwhelming experience of new motherhood when returning to work, so she built the chatbot “PAM (Personal Advisor to Mothers)” as a non-judgmental space to talk through the challenges.

App Inventor’s long-term sustainability now rests with the App Inventor Foundation, created in 2022 to grow its resources and further drive its adoption. It is led by executive director Natalie Lao.

In a letter to the App Inventor community, Lao highlighted the foundation’s commitment to equitable access to educational resources, which for App Inventor required a rapid shift toward AI education — but in a way that upholds App Inventor’s core values to be “a free, open-source, easy-to-use platform” for mobile devices. “Our mission is to not only democratize access to technology,” Lao wrote, “but also foster a culture of innovation and digital literacy.”

Within MIT, App Inventor today falls under the umbrella of the MIT RAISE Initiative — Responsible AI for Social Empowerment and Education, run by Dean for Digital Learning Cynthia Breazeal, Professor Eric Klopfer, and Abelson. Together they are able to integrate App Inventor into ever-broader communities, events, and funding streams, leading to opportunities like this summer’s inaugural AI and Education Summit on July 24-26. The summit will include awards for winners of a Global AI Hackathon, whose roughly 180 submissions used App Inventor to create AI tools in two tracks: Climate & Sustainability and Health & Wellness. Tying together another of RAISE’s major projects, participants were encouraged to draw from Day of AI curricula, including its newest courses on data science and climate change.

“Over the past year, there’s been an enormous mushrooming in the possibilities for mobile apps through the integration of AI,” says Abelson. “The opportunity for App Inventor and MIT is to democratize those new possibilities for young people — and for everyone — as an enhanced source of power and creativity.”

Messaging your AI pricing model

Dive into AI pricing with Ismail Madni. Explore customer-centric strategies and real-world examples from Intercom and GitHub. Learn to craft pricing models and narratives that showcase your product’s value – a win-win for companies and customers alike….

OpenAI set to unveil AI-driven challenger to Google Search

Google’s long-standing supremacy in the search engine arena may soon be challenged as OpenAI, boosted by its partnership with Microsoft, is reportedly stepping up to launch its own AI-driven search product. According to two sources familiar with the matter who spoke to Reuters, OpenAI is scheduled…

From steel engineering to ovarian tumor research


Ashutosh Kumar is a classically trained materials engineer. Having grown up with a passion for making things, he has explored steel design and studied stress fractures in alloys.

Throughout Kumar’s education, however, he was also drawn to biology and medicine. When he was accepted into an undergraduate metallurgical engineering and materials science program at Indian Institute of Technology (IIT) Bombay, the native of Jamshedpur was very excited — and “a little dissatisfied, since I couldn’t do biology anymore.”

Now a PhD candidate and a MathWorks Fellow in MIT’s Department of Materials Science and Engineering, Kumar can merge his wide-ranging interests. He studies the effect of certain bacteria that have been observed to encourage the spread of ovarian cancer and possibly reduce the effectiveness of chemotherapy and immunotherapy.

“Some microbes have an affinity toward infecting ovarian cancer cells, which can lead to changes in the cellular structure and reprogramming cells to survive in stressful conditions,” Kumar says. “This means that cells can migrate to different sites and may have a mechanism to develop chemoresistance. This opens an avenue to develop therapies to see if we can start to undo some of these changes.”

Kumar’s research combines microbiology, bioengineering, artificial intelligence, big data, and materials science. Using microbiome sequencing and AI, he aims to define microbiome changes that may correlate with poor patient outcomes. Ultimately, his goal is to engineer bacteriophage viruses to reprogram bacteria to work therapeutically.

Kumar started inching toward work in the health sciences just months into earning his bachelor’s degree at IIT Bombay.

“I realized engineering is so flexible that its applications extend to any field,” he says, adding that he started working with biomaterials “to respect both my degree program and my interests.”

“I loved it so much that I decided to go to graduate school,” he adds.

Starting his PhD program at MIT, he says, “was a fantastic opportunity to switch gears and work on more interdisciplinary or ‘MIT-type’ work.”

Kumar says he and Angela Belcher, the James Mason Crafts Professor of biological engineering and materials science, began discussing the impact of the microbiome on ovarian cancer when he first arrived at MIT.

“I shared my enthusiasm about human health and biology, and we started brainstorming,” he says. “We realized that there’s an unmet need to understand a lot of gynecological cancers. Ovarian cancer is an aggressive cancer, which is usually diagnosed when it’s too late and has already spread.”

In 2022, Kumar was awarded a MathWorks Fellowship. The fellowships are awarded to School of Engineering graduate students, preferably those who use MATLAB or Simulink — which were developed by the mathematical computer software company MathWorks — in their research. The philanthropic support fueled Kumar’s full transition into health science research.

“The work we are doing now was initially not funded by traditional sources, and the MathWorks Fellowship gave us the flexibility to pursue this field,” Kumar says. “It provided me with opportunities to learn new skills and ask questions about this topic. MathWorks gave me a chance to explore my interests and helped me navigate from being a steel engineer to a cancer scientist.”

Kumar’s work on the relationship between bacteria and ovarian cancer started with studying which bacteria are incorporated into tumors in mouse models.

“We started looking closely at changes in cell structure and how those changes impact cancer progression,” he says, adding that MATLAB image processing helps him and his collaborators track tumor metastasis.

The research team also uses RNA sequencing and MATLAB algorithms to construct a taxonomy of the bacteria.

“Once we have identified the microbiome composition,” Kumar says, “we want to see how the microbiome changes as cancer progresses and identify changes in, let’s say, patients who develop chemoresistance.”

He says recent findings that ovarian cancer may originate in the fallopian tubes are promising because detecting cancer-related biomarkers or lesions before cancer spreads to the ovaries could lead to better prognoses.

As he pursues his research, Kumar says he is extremely thankful to Belcher “for believing in me to work on this project.

“She trusted me and my passion for making an impact on human health — even though I come from a materials engineering background — and supported me throughout. It was her passion to take on new challenges that made it possible for me to work on this idea. She has been an amazing mentor and motivated me to continue moving forward.”

For her part, Belcher is equally enthralled.

“It has been amazing to work with Ashutosh on this ovarian cancer microbiome project,” she says. “He has been so passionate and dedicated to looking for less-conventional approaches to solve this debilitating disease. His innovations around looking for very early changes in the microenvironment of this disease could be critical in interception and prevention of ovarian cancer. We started this project with very little preliminary data, so his MathWorks fellowship was critical in the initiation of the project.”

Kumar, who has been very active in student government and community-building activities, believes it is very important for students to feel included and at home at their institutions so they can develop in ways outside of academics. He says that his own involvement helps him take time off from work.

“Science can never stop, and there will always be something to do,” he says, explaining that he deliberately schedules time off and that social engagement helps him to experience downtime. “Engaging with community members through events on campus or at the dorm helps set a mental boundary with work.”

Regarding his unusual route through materials science to cancer research, Kumar regards it as something that occurred organically.

“I have observed that life is very dynamic,” he says. “What we think we might do versus what we end up doing is never consistent. Five years back, I had no idea I would be at MIT working with such excellent scientific mentors around me.”

A better way to control shape-shifting soft robots

Imagine a slime-like robot that can seamlessly change its shape to squeeze through narrow spaces, which could be deployed inside the human body to remove an unwanted item.

While such a robot does not yet exist outside a laboratory, researchers are working to develop reconfigurable soft robots for applications in health care, wearable devices, and industrial systems.

But how can one control a squishy robot that doesn’t have joints, limbs, or fingers that can be manipulated, and instead can drastically alter its entire shape at will? MIT researchers are working to answer that question.

They developed a control algorithm that can autonomously learn how to move, stretch, and shape a reconfigurable robot to complete a specific task, even when that task requires the robot to change its morphology multiple times. The team also built a simulator to test control algorithms for deformable soft robots on a series of challenging, shape-changing tasks.

Their method completed each of the eight tasks they evaluated while outperforming other algorithms. The technique worked especially well on multifaceted tasks. For instance, in one test, the robot had to reduce its height while growing two tiny legs to squeeze through a narrow pipe, and then un-grow those legs and extend its torso to open the pipe’s lid.

While reconfigurable soft robots are still in their infancy, such a technique could someday enable general-purpose robots that can adapt their shapes to accomplish diverse tasks.

“When people think about soft robots, they tend to think about robots that are elastic, but return to their original shape. Our robot is like slime and can actually change its morphology. It is very striking that our method worked so well because we are dealing with something very new,” says Boyuan Chen, an electrical engineering and computer science (EECS) graduate student and co-author of a paper on this approach.

Chen’s co-authors include lead author Suning Huang, an undergraduate student at Tsinghua University in China who completed this work while a visiting student at MIT; Huazhe Xu, an assistant professor at Tsinghua University; and senior author Vincent Sitzmann, an assistant professor of EECS at MIT who leads the Scene Representation Group in the Computer Science and Artificial Intelligence Laboratory. The research will be presented at the International Conference on Learning Representations.

Controlling dynamic motion

Scientists often teach robots to complete tasks using a machine-learning approach known as reinforcement learning, which is a trial-and-error process in which the robot is rewarded for actions that move it closer to a goal.

This can be effective when the robot’s moving parts are consistent and well-defined, like a gripper with three fingers. With a robotic gripper, a reinforcement learning algorithm might move one finger slightly, learning by trial and error whether that motion earns it a reward. Then it would move on to the next finger, and so on.

But shape-shifting robots, which are controlled by magnetic fields, can dynamically squish, bend, or elongate their entire bodies.

An orange rectangular-like blob shifts and elongates itself out of a three-walled maze structure to reach a purple target.
The researchers built a simulator to test control algorithms for deformable soft robots on a series of challenging, shape-changing tasks. Here, a reconfigurable robot learns to elongate and curve its soft body to weave around obstacles and reach a target.

Image: Courtesy of the researchers

“Such a robot could have thousands of small pieces of muscle to control, so it is very hard to learn in a traditional way,” says Chen.

To solve this problem, he and his collaborators had to think about it differently. Rather than moving each tiny muscle individually, their reinforcement learning algorithm begins by learning to control groups of adjacent muscles that work together.

Then, after the algorithm has explored the space of possible actions by focusing on groups of muscles, it drills down into finer detail to optimize the policy, or action plan, it has learned. In this way, the control algorithm follows a coarse-to-fine methodology.

“Coarse-to-fine means that when you take a random action, that random action is likely to make a difference. The change in the outcome is likely very significant because you coarsely control several muscles at the same time,” Sitzmann says.

To enable this, the researchers treat a robot’s action space, or how it can move in a certain area, like an image.

Their machine-learning model uses images of the robot’s environment to generate a 2D action space, which includes the robot and the area around it. They simulate robot motion using what is known as the material point method, where the action space is covered by points, like image pixels, and overlaid with a grid.

In the same way that nearby pixels in an image are related (like the pixels that form a tree in a photo), they built their algorithm to understand that nearby action points have stronger correlations. Points around the robot’s “shoulder” will move similarly when it changes shape, while points on the robot’s “leg” will also move similarly, but in a different way than those on the “shoulder.”
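The coarse-to-fine idea can be pictured with a toy example (our own illustration, not the authors' code): a policy first outputs a low-resolution action grid, which is then upsampled so that neighboring "muscles" receive correlated commands before being refined at higher resolution.

```python
# Schematic toy illustration of coarse-to-fine action control: a
# low-resolution action grid is upsampled so adjacent action points
# share commands, mimicking correlated muscle groups. Values invented.

def upsample(coarse, factor):
    """Nearest-neighbor upsample of a 2D action grid: each coarse
    cell's action is copied to a factor x factor block of fine cells."""
    return [[coarse[i // factor][j // factor]
             for j in range(len(coarse[0]) * factor)]
            for i in range(len(coarse) * factor)]

# 2x2 coarse actions, each controlling a group of muscles...
coarse_actions = [[0.5, -0.2],
                  [0.1,  0.8]]
# ...expanded to a 4x4 grid where neighboring points move together.
fine_actions = upsample(coarse_actions, 2)
print(fine_actions[0])  # [0.5, 0.5, -0.2, -0.2]
```

A fine-grained policy would then adjust individual cells of the upsampled grid, which is the "drilling down into finer detail" step described above.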

In addition, the researchers use the same machine-learning model to look at the environment and predict the actions the robot should take, which makes it more efficient.

Building a simulator

After developing this approach, the researchers needed a way to test it, so they created a simulation environment called DittoGym.

DittoGym features eight tasks that evaluate a reconfigurable robot’s ability to dynamically change shape. In one, the robot must elongate and curve its body so it can weave around obstacles to reach a target point. In another, it must change its shape to mimic letters of the alphabet.

Animation of orange blob shifting into shapes such as a star, and the letters “M,” “I,” and “T.”
In this simulation, the reconfigurable soft robot, trained using the researchers’ control algorithm, must change its shape to mimic objects, like stars, and the letters M-I-T.

Image: Courtesy of the researchers

“Our task selection in DittoGym follows both generic reinforcement learning benchmark design principles and the specific needs of reconfigurable robots. Each task is designed to represent certain properties that we deem important, such as the capability to navigate through long-horizon explorations, the ability to analyze the environment, and interact with external objects,” Huang says. “We believe they together can give users a comprehensive understanding of the flexibility of reconfigurable robots and the effectiveness of our reinforcement learning scheme.”

Their algorithm outperformed baseline methods and was the only technique suitable for completing multistage tasks that required several shape changes.

“We have a stronger correlation between action points that are closer to each other, and I think that is key to making this work so well,” says Chen.

While it may be many years before shape-shifting robots are deployed in the real world, Chen and his collaborators hope their work inspires other scientists not only to study reconfigurable soft robots but also to think about leveraging 2D action spaces for other complex control problems.