3 current ransomware trends (and how to take action) – CyberTalk

EXECUTIVE SUMMARY:

Ransomware is one of the most disruptive and financially damaging cyber threats that modern organizations face. As a cyber security community, we’ve made great strides in combating ransomware attackers, data encryption and extortion. However, as expected, cyber criminals have responded by evolving their tactics.

This article explores the latest ransomware trends to remain aware of and mitigate. From emerging social engineering techniques to vicious voice-cloning schemes, discover the nascent strategies that criminals are employing to extort victims.

Forewarned is forearmed, as the saying goes. Are you keeping a finger on the pulse of adversary behavior? By understanding the current modus operandi of ransomware groups, organizations can elevate their cyber security posture and stay a step ahead of attackers.

3 current ransomware trends

1. Phishing is old, but this level of sophistication (and, by extension, cyber criminal success) is brand new. Experts are seeing that cyber criminals who collect breached data can use AI to parse through the information. Criminals can then organize it in such a way as to conduct highly targeted spear phishing attacks.

Instead of a single cyber criminal tricking a single individual into handing over private details through a targeted spear-phishing attack, as in days of old, a cyber criminal can now leverage AI to do it for them. What was once a manual process has been automated, multiplying the results exponentially.

2. Voice cloning technology has been around for some time, but improved AI technologies mean that a small clip from an online video enables cyber criminals to replicate a voice with chilling accuracy. This has already led to wire fraud incidents, other unauthorized financial transactions, and ransom situations. Deepfake voice attacks are becoming increasingly difficult to detect and represent a growing danger for organizations.

3. Ransomware-minded cyber criminals are constantly seeking out new software-based vulnerabilities to exploit. Cyber criminals are actively scanning for and exploiting bugs in exposed services, web applications, cloud environments and remote access solutions. While this reality isn’t new, some organizations have been slow to implement systems and processes that can close these types of security gaps. Be sure that your organization isn’t one of them.

Countering ransomware threats

Forward-thinking organizations are adopting AI-driven cyber security solutions. These advanced tools leverage machine learning and natural language processing to detect and mitigate sophisticated phishing attempts, contextualize potential threats, and proactively identify and remediate vulnerabilities, among other things.
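To make the idea of scoring message content concrete, here is a minimal sketch of text-based phishing scoring. The keyword list and weights are invented for illustration; the production tools described above use trained ML and NLP models, not a hard-coded heuristic like this.

```python
# Toy phishing-content scorer. The signal words and weights below are
# invented for illustration; real AI-driven defenses learn these
# patterns from data rather than hard-coding them.
PHISHING_SIGNALS = {
    "urgent": 2, "verify": 2, "password": 3, "suspended": 2,
    "click": 1, "wire": 2, "invoice": 1, "immediately": 1,
}

def phishing_score(message: str) -> int:
    """Sum the weights of suspicious terms found in the message."""
    words = message.lower().split()
    return sum(PHISHING_SIGNALS.get(w.strip(".,!?:"), 0) for w in words)

suspicious = "Urgent: verify your password immediately or your account is suspended"
benign = "Lunch meeting moved to noon on Thursday"
```

Even this crude scorer separates the two samples; an ML model does the same kind of ranking, but with features and weights learned from millions of labeled messages.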

Industry leaders are also emphasizing the utility of a multi-layered security approach. This refers to combining AI-driven defenses with robust incident response plans, network segmentation, data encryption and comprehensive vulnerability management programs.

A more severe ransomware onslaught

In light of recent law enforcement actions targeting affiliate networks, some ransomware operators may reduce the number of affiliates that they work with and replace them with AI-based models that can perform certain kinds of tasks. In turn, ransomware operators may multiply both the scale of their activities and the damage they cause.

While the full impact of such a transition may take months or years to manifest, it highlights the need for organizations to remain vigilant and focused on elevating cyber security strategies. As a cyber security leader, concentrate efforts around threat intelligence gathering, continuous monitoring and proactive vulnerability management.

As noted previously, embrace cutting-edge technologies to fortify prevention and defense mechanisms. To learn more about leveraging AI to make your organization more resilient, read our latest eBook on the subject.

Lastly, subscribe to the CyberTalk.org newsletter for timely insights, cutting-edge analyses and more, delivered straight to your inbox each week.

Unlocking mRNA’s cancer-fighting potential

What if training your immune system to attack cancer cells was as easy as training it to fight Covid-19? Many people believe the technology behind some Covid-19 vaccines, messenger RNA, holds great promise for stimulating immune responses to cancer.

But using messenger RNA, or mRNA, to get the immune system to mount a prolonged and aggressive attack on cancer cells — while leaving healthy cells alone — has been a major challenge.

The MIT spinout Strand Therapeutics is attempting to solve that problem with an advanced class of mRNA molecules that are designed to sense what type of cells they encounter in the body and to express therapeutic proteins only once they have entered diseased cells.

“It’s about finding ways to deal with the signal-to-noise ratio, the signal being expression in the target tissue and the noise being expression in the nontarget tissue,” Strand CEO Jacob Becraft PhD ’19 explains. “Our technology amplifies the signal to express more proteins for longer while at the same time effectively eliminating the mRNA’s off-target expression.”

Strand is set to begin its first clinical trial in April, which will test a proprietary, self-replicating mRNA molecule’s ability to express immune signals directly from a tumor, prompting the immune system to attack and kill the tumor cells. The molecule is also being tested as a possible improvement to existing treatments for a number of solid tumors.

As it works to commercialize these early innovations, Strand’s team is continuing to add capabilities to what it calls its “programmable medicines,” improving mRNA molecules’ ability to sense their environment and generate potent, targeted responses where they’re needed most.

“Self-replicating mRNA was the first thing that we pioneered when we were at MIT and in the first couple years at Strand,” Becraft says. “Now we’ve also moved into approaches like circular mRNAs, which allow each molecule of mRNA to express more of a protein for longer, potentially for weeks at a time. And the bigger our cell-type specific datasets become, the better we are at differentiating cell types, which makes these molecules so targeted we can have a higher level of safety at higher doses and create stronger treatments.”

Making mRNA smarter

Becraft got his first taste of MIT as an undergraduate at the University of Illinois when he secured a summer internship in the lab of MIT Institute Professor Bob Langer.

“That’s where I learned how lab research could be translated into spinout companies,” Becraft recalls.

The experience left enough of an impression on Becraft that he returned to MIT the next fall to earn his PhD, where he worked in the Synthetic Biology Center under professor of bioengineering and electrical engineering and computer science Ron Weiss. During that time, he collaborated with postdoc Tasuku Kitada to create genetic “switches” that could control protein expression in cells.

Becraft and Kitada realized their research could be the foundation of a company around 2017 and started spending time in the Martin Trust Center for MIT Entrepreneurship. They also received support from MIT Sandbox and eventually worked with the Technology Licensing Office to establish Strand’s early intellectual property.

“We started by asking, where is the highest unmet need that also allows us to prove out the thesis of this technology? And where will this approach have therapeutic relevance that is a quantum leap forward from what anyone else is doing?” Becraft says. “The first place we looked was oncology.”

People have been working on cancer immunotherapy, which turns a patient’s immune system against cancer cells, for decades. Scientists in the field have developed drugs that produce some remarkable results in patients with aggressive, late-stage cancers. But most next-generation cancer immunotherapies are based on recombinant (lab-made) proteins that are difficult to deliver to specific targets in the body and don’t remain active for long enough to consistently create a durable response.

More recently, companies like Moderna, whose founders also include MIT alumni, have pioneered the use of mRNAs to create proteins in cells. But to date, those mRNA molecules have not been able to change behavior based on the type of cells they enter, and don’t last for very long in the body.

“If you’re trying to engage the immune system with a tumor cell, the mRNA needs to be expressing from the tumor cell itself, and it needs to be expressing over a long period of time,” Becraft says. “Those challenges are hard to overcome with the first generation of mRNA technologies.”

Strand has developed what it calls the world’s first mRNA programming language that allows the company to specify the tissues its mRNAs express proteins in.

“We built a database that says, ‘Here are all of the different cells that the mRNA could be delivered to, and here are all of their microRNA signatures,’ and then we use computational tools and machine learning to differentiate the cells,” Becraft explains. “For instance, I need to make sure that the messenger RNA turns off when it’s in the liver cell, and I need to make sure that it turns on when it’s in a tumor cell or a T-cell.”
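The gating logic Becraft describes can be pictured with a small sketch. All the microRNA signatures and cell types below are invented stand-ins (except that high miR-122 in liver cells is well documented); Strand's actual database and machine learning pipeline are far more sophisticated.

```python
# Conceptual sketch of microRNA-gated expression. An mRNA engineered
# with target sites for a given microRNA is silenced in cells where
# that microRNA is abundant, so its protein is expressed only in
# cells lacking it. Profiles here are illustrative, not real data.
CELL_MICRORNAS = {
    "hepatocyte": {"miR-122"},   # liver cells: abundant miR-122
    "tumor_cell": {"miR-21"},    # hypothetical tumor profile
    "t_cell": {"miR-155"},       # hypothetical T-cell profile
}

def expresses_protein(cell_type: str, silencing_sites: set) -> bool:
    """The mRNA is silenced if the cell contains any microRNA that
    matches the target sites engineered into the mRNA."""
    return not (CELL_MICRORNAS[cell_type] & silencing_sites)

# An mRNA carrying miR-122 target sites: off in liver, on in tumors.
mrna_sites = {"miR-122"}
```

The design choice mirrors the quote: the engineered target sites act as the "off switch" for liver cells, while the absence of those microRNAs in tumor cells and T-cells leaves the "on switch" engaged.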

Strand also uses techniques like mRNA self-replication to create more durable protein expression and immune responses.

“The first versions of mRNA therapeutics, like the Covid-19 vaccines, just recapitulate how our body’s natural mRNAs work,” Becraft explains. “Natural mRNAs last for a few days, maybe less, and they express a single protein. They have no context-dependent actions. That means wherever the mRNA is delivered, it’s only going to express a molecule for a short period of time. That’s perfect for a vaccine, but it’s much more limiting when you want to create a protein that’s actually engaging in a biological process, like activating an immune response against a tumor that could take many days or weeks.”

Technology with broad potential

Strand’s first clinical trial is targeting solid tumors like melanoma and triple-negative breast cancer. The company is also actively developing mRNA therapies that could be used to treat blood cancers.

“We’ll be expanding into new areas as we continue to de-risk the translation of the science and create new technologies,” Becraft says.

Strand plans to partner with large pharmaceutical companies as well as investors to continue developing drugs. Further down the line, the founders believe future versions of its mRNA therapies could be used to treat a broad range of diseases.

“Our thesis is: amplified expression in specific, programmed target cells for long periods of time,” Becraft says. “That approach can be utilized for [immunotherapies like] CAR T-cell therapy, both in oncology and autoimmune conditions. There are also many diseases that require cell-type specific delivery and expression of proteins in treatment, everything from kidney disease to types of liver disease. We can envision our technology being used for all of that.”

New software enables blind and low-vision users to create interactive, accessible charts

A growing number of tools enable users to make online data representations, like charts, that are accessible for people who are blind or have low vision. However, most tools require an existing visual chart that can then be converted into an accessible format.

This creates barriers that prevent blind and low-vision users from building their own custom data representations, and it can limit their ability to explore and analyze important information.

A team of researchers from MIT and University College London (UCL) wants to change the way people think about accessible data representations.

They created a software system called Umwelt (which means “environment” in German) that can enable blind and low-vision users to build customized, multimodal data representations without needing an initial visual chart.

Umwelt, an authoring environment designed for screen-reader users, incorporates an editor that allows someone to upload a dataset and create a customized representation, such as a scatterplot, that can include three modalities: visualization, textual description, and sonification. Sonification involves converting data into nonspeech audio.

The system, which can represent a variety of data types, includes a viewer that enables a blind or low-vision user to interactively explore a data representation, seamlessly switching between each modality to interact with data in a different way.

The researchers conducted a study with five expert screen-reader users who found Umwelt to be useful and easy to learn. In addition to offering an interface that empowered them to create data representations — something they said was sorely lacking — the users said Umwelt could facilitate communication between people who rely on different senses.

“We have to remember that blind and low-vision people aren’t isolated. They exist in these contexts where they want to talk to other people about data,” says Jonathan Zong, an electrical engineering and computer science (EECS) graduate student and lead author of a paper introducing Umwelt. “I am hopeful that Umwelt helps shift the way that researchers think about accessible data analysis. Enabling the full participation of blind and low-vision people in data analysis involves seeing visualization as just one piece of this bigger, multisensory puzzle.”

Joining Zong on the paper are fellow EECS graduate students Isabella Pedraza Pineros and Mengzhu “Katie” Chen; Daniel Hajas, a UCL researcher who works with the Global Disability Innovation Hub; and senior author Arvind Satyanarayan, associate professor of computer science at MIT, who leads the Visualization Group in the Computer Science and Artificial Intelligence Laboratory. The paper will be presented at the ACM Conference on Human Factors in Computing Systems.

De-centering visualization

The researchers previously developed interactive interfaces that provide a richer experience for screen reader users as they explore accessible data representations. Through that work, they realized most tools for creating such representations involve converting existing visual charts.

Aiming to decenter visual representations in data analysis, Zong and Hajas, who lost his sight at age 16, began co-designing Umwelt more than a year ago.

At the outset, they realized they would need to rethink how to represent the same data using visual, auditory, and textual forms.

“We had to put a common denominator behind the three modalities. By creating this new language for representations, and making the output and input accessible, the whole is greater than the sum of its parts,” says Hajas.

To build Umwelt, they first considered what is unique about the way people use each sense.

For instance, a sighted user can see the overall pattern of a scatterplot and, at the same time, move their eyes to focus on different data points. But for someone listening to a sonification, the experience is linear since data are converted into tones that must be played back one at a time.

“If you are only thinking about directly translating visual features into nonvisual features, then you miss out on the unique strengths and weaknesses of each modality,” Zong adds.

They designed Umwelt to offer flexibility, enabling a user to switch between modalities easily when one would better suit their task at a given time.

To use the editor, one uploads a dataset to Umwelt, which employs heuristics to automatically create default representations in each modality.

If the dataset contains stock prices for companies, Umwelt might generate a multiseries line chart, a textual structure that groups data by ticker symbol and date, and a sonification that uses tone length to represent the price for each date, arranged by ticker symbol.

The default heuristics are intended to help the user get started.

“In any kind of creative tool, you have a blank-slate effect where it is hard to know how to begin. That is compounded in a multimodal tool because you have to specify things in three different representations,” Zong says.

The editor links interactions across modalities, so if a user changes the textual description, that information is adjusted in the corresponding sonification. Someone could utilize the editor to build a multimodal representation, switch to the viewer for an initial exploration, then return to the editor to make adjustments.

Helping users communicate about data

To test Umwelt, they created a diverse set of multimodal representations, from scatterplots to multiview charts, to ensure the system could effectively represent different data types. Then they put the tool in the hands of five expert screen reader users.

Study participants mostly found Umwelt to be useful for creating, exploring, and discussing data representations. One user said Umwelt was like an “enabler” that decreased the time it took them to analyze data. The users agreed that Umwelt could help them communicate about data more easily with sighted colleagues.

“What stands out about Umwelt is its core philosophy of de-emphasizing the visual in favor of a balanced, multisensory data experience. Often, nonvisual data representations are relegated to the status of secondary considerations, mere add-ons to their visual counterparts. However, visualization is merely one aspect of data representation. I appreciate their efforts in shifting this perception and embracing a more inclusive approach to data science,” says JooYoung Seo, an assistant professor in the School of Information Sciences at the University of Illinois at Urbana-Champaign, who was not involved with this work.

Moving forward, the researchers plan to create an open-source version of Umwelt that others can build upon. They also want to integrate tactile sensing into the software system as an additional modality, enabling the use of tools like refreshable tactile graphics displays.

“In addition to its impact on end users, I am hoping that Umwelt can be a platform for asking scientific questions around how people use and perceive multimodal representations, and how we can improve the design beyond this initial step,” says Zong.

This work was supported, in part, by the National Science Foundation and the MIT Morningside Academy for Design Fellowship.