Researchers at the University of California – San Diego have introduced CARMEN, a Cognitively Assistive Robot for Motivation and Neurorehabilitation, designed to address the growing challenge of Mild Cognitive Impairment (MCI) in older adults. MCI affects approximately 20% of individuals over 65, serving as a potential…
A new way to miniaturize cell production for cancer treatment
Researchers from the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, have developed a novel way to produce clinical doses of viable autologous chimeric antigen receptor (CAR) T-cells in an ultra-small automated closed-system microfluidic chip, roughly the size of a pack of cards.
This is the first time a microbioreactor has been used to produce autologous cell therapy products. Specifically, the new method was successfully used to manufacture and expand CAR T-cells that are as effective as cells produced with existing systems, but in a far smaller footprint and with fewer seeding cells and cell manufacturing reagents. This could lead to more efficient and affordable methods of scaling out autologous cell therapy manufacturing, and could even enable point-of-care manufacturing of CAR T-cells outside of a laboratory setting, such as in hospitals and wards.
CAR T-cell therapy manufacturing requires the isolation, activation, genetic modification, and expansion of a patient’s own T-cells so that they kill tumor cells upon reinfusion into the patient. Although cell therapies have revolutionized cancer immunotherapy, with some of the first patients who received autologous cell therapies still in remission more than 10 years later, the manufacturing process for CAR T-cells has remained inconsistent, costly, and time-consuming. It is prone to contamination, subject to human error, and requires seeding cell numbers that are impractical for smaller-scale CAR T-cell production. These challenges create bottlenecks that restrict both the availability and affordability of these therapies despite their effectiveness.
In a paper titled “A high-density microbioreactor process designed for automated point-of-care manufacturing of CAR T cells,” published in the journal Nature Biomedical Engineering, SMART researchers detailed their breakthrough: Human primary T-cells can be activated, transduced, and expanded to high densities in a 2-milliliter automated closed-system microfluidic chip to produce over 60 million CAR T-cells from donors with lymphoma, and over 200 million CAR T-cells from healthy donors. The CAR T-cells produced using the microbioreactor are as effective as those produced using conventional methods, but in a smaller footprint and with fewer resources. This translates to a lower cost of goods manufactured (COGM), and potentially to lower costs for patients.
The groundbreaking research was led by members of the Critical Analytics for Manufacturing Personalized-Medicine (CAMP) interdisciplinary research group at SMART. Collaborators include researchers from the Duke-NUS Medical School; the Institute of Molecular and Cell Biology at the Agency for Science, Technology and Research; KK Women’s and Children’s Hospital; and Singapore General Hospital.
“This advancement in cell therapy manufacturing could ultimately offer a point-of-care platform that could substantially increase the number of CAR T-cell production slots, reducing the wait times and cost of goods of these living medicines — making cell therapy more accessible to the masses. The use of scaled-down bioreactors could also aid process optimization studies, including for different cell therapy products,” says Michael Birnbaum, co-lead principal investigator at SMART CAMP, associate professor of biological engineering at MIT, and a co-senior author of the paper.
With high T-cell expansion rates, similar total T-cell numbers could be attained with a shorter culture period in the microbioreactor (seven to eight days) than in gas-permeable culture plates (12 days), potentially shortening production times by 30-40 percent. The CAR T-cells from the microfluidic bioreactor and the gas-permeable culture plates showed only subtle differences in cell quality, and the cells were equally functional in killing leukemia cells when tested in mice.
“This new method suggests that a dramatic miniaturization of current-generation autologous cell therapy production is feasible, with the potential of significantly alleviating manufacturing limitations of CAR T-cell therapy. Such a miniaturization would lay the foundation for point-of-care manufacturing of CAR T-cells and decrease the ‘good manufacturing practice’ (GMP) footprint required for producing cell therapies — which is one of the primary drivers of COGM,” says Wei-Xiang Sin, research scientist at SMART CAMP and first author of the paper.
Notably, the microbioreactor used in the research is a perfusion-based, automated, closed system with the smallest footprint per dose, smallest culture volume and seeding cell number, as well as the highest cell density and level of process control attainable. These microbioreactors — previously only used for microbial and mammalian cell cultures — were originally developed at MIT and have been advanced to commercial production by Millipore Sigma.
The small starting cell numbers required, compared with existing larger automated manufacturing platforms, mean that smaller amounts of isolation beads, activation reagents, and lentiviral vectors are needed per production run. In addition, smaller volumes of medium are required (at least tenfold lower than in larger automated culture systems) owing to the extremely small culture volume (2 milliliters, approximately 100-fold lower than in larger automated culture systems), which contributes to significant reductions in reagent cost. This could benefit patients, especially pediatric patients who have low or insufficient T-cell numbers for producing therapeutic doses of CAR T-cells.
Moving forward, SMART CAMP is working on further engineering sampling and/or analytical systems around the microbioreactor so that CAR-T production can be performed with reduced labor and out of a laboratory setting, potentially facilitating the decentralized bedside manufacturing of CAR T-cells. SMART CAMP is also looking to further optimize the process parameters and culture conditions to improve cell yield and quality for future clinical use.
The research was conducted by SMART and supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) program.
“If” CSS Gets Inline Conditionals
A few sirens went off a couple of weeks ago when the CSS Working Group (CSSWG) resolved to add an if() conditional to the CSS Values Module Level 5 specification. It was Lea Verou’s X post that same day that …
“If” CSS Gets Inline Conditionals originally…
Transforming network security | Three ways AI can boost infrastructure operations – CyberTalk
By Ulrica de Fort-Menares, Vice President of Product Management for Infrastructure Assurance at BlueCat Networks
Artificial intelligence (AI) has the power to reshape how you operate your network security infrastructure.
Firewalls have been a first line of defense in network security for many years and must always be operational. Maintaining five nines, or service availability 99.999% of the time, requires skilled network security practitioners. However, many enterprises have a limited number of security experts and struggle to find enough skilled expertise to manage their increasingly complex network infrastructure.
An AI-powered, knowledge-based expert system can expand team skills so that they’re available around the clock and can help your enterprise more easily manage highly complex network security infrastructure.
In this article, we’ll explore three ways that AI can boost your network security operations and augment limited resources. Specifically, we’ll look at how you can:
- Use a knowledge-based expert system to find hidden issues in your security infrastructure before they become bigger problems
- Combine that system with automation to automatically troubleshoot complex problems, much like a human would
- Utilize machine learning models to detect anomalies in an enterprise environment
Find hidden issues with a knowledge-based expert system
A knowledge-based system is a form of AI that encodes the collected knowledge of human experts to detect and solve difficult problems. Knowledge-based systems generally consist of a data repository or knowledge base, an inference or rules engine to locate and process data, and a user interface. Knowledge-based systems can assist with expert decision-making, easily process large quantities of data, and reveal insights or create new knowledge from existing information.
When applied to network security, a knowledge-based system contains in-depth knowledge, culled from human experts’ technical practices and experiences, of how security infrastructure should work and behave. Like a firewall engineer, it can analyze data, detect issues, and prioritize alerts, but with far greater speed and at a far larger scale than any human. A system based on the knowledge of human experts can help network security teams identify problems and troubleshoot technical issues, augmenting team skills and allowing teams to do more with less.
Let’s look at a specific example of a network security application:
A knowledge-based system can know how important a Border Gateway Protocol (BGP) peer is for routing traffic to the internet, and that detecting BGP issues takes more than just monitoring the peer state. It can also verify that the routing process learns routes from its BGP peer and passes that information to the secure gateway’s routing table. Further, it can alert you the moment it detects a hidden route condition.
Another benefit of a knowledge-based system is its sophisticated rule engine, which can detect complex problems. Building on the same BGP example, the system has knowledge about a clustered environment. If the passive member of the cluster does not have any active routes, it is OK. But if the active member of the cluster has zero active routes, it is not OK. The system operates on more than just a simple if-then-else construct.
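To make this more concrete, here is a minimal sketch of how such a rule could be encoded: a tiny knowledge base of conditions plus an inference step that evaluates them against observed device state. The data model, rule names, and fields below are invented for illustration and are not taken from any vendor’s product.

```python
# Minimal sketch of a knowledge-based rule engine for the BGP cluster example.
# All names (DeviceState, Rule, KNOWLEDGE_BASE) are hypothetical.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class DeviceState:
    """Snapshot of one cluster member as collected by the monitoring layer."""
    hostname: str
    cluster_role: str          # "active" or "passive"
    bgp_peer_up: bool
    active_route_count: int


@dataclass
class Rule:
    """One entry in the knowledge base: a condition plus the alert it raises."""
    name: str
    condition: Callable[[DeviceState], bool]
    alert: str


# Knowledge base: rules culled from how experts reason about BGP in a cluster.
KNOWLEDGE_BASE: List[Rule] = [
    Rule(
        name="bgp-peer-down",
        condition=lambda d: not d.bgp_peer_up,
        alert="BGP peer is down; internet routing may be affected.",
    ),
    Rule(
        # A passive member with zero routes is expected; only the active
        # member with zero routes indicates a hidden route condition.
        name="active-member-no-routes",
        condition=lambda d: d.cluster_role == "active" and d.active_route_count == 0,
        alert="Active cluster member has no active BGP routes.",
    ),
]


def evaluate(state: DeviceState) -> List[str]:
    """Inference step: run every rule against the observed state."""
    return [rule.alert for rule in KNOWLEDGE_BASE if rule.condition(state)]


if __name__ == "__main__":
    passive = DeviceState("fw-2", "passive", bgp_peer_up=True, active_route_count=0)
    active = DeviceState("fw-1", "active", bgp_peer_up=True, active_route_count=0)
    print(evaluate(passive))  # [] -- zero routes on the passive member is OK
    print(evaluate(active))   # alert -- zero routes on the active member is not
```

The point is that the “zero routes is fine on the passive member, but not on the active one” distinction lives in the knowledge base as data gathered from experts, rather than in ad hoc monitoring scripts.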
Auto-triage with a knowledge-based expert system
Perhaps one of the most important uses of AI is to help us automate tasks. By coupling a knowledge-based expert system with an automation engine, we can perform automated troubleshooting. The system applies a broad base of domain-specific expertise and makes intelligent decisions about the situation. Much like a human would, it walks down a decision tree to diagnose a complex problem.
Let’s explore this further using the example of a BGP peer going down. The system runs investigative steps. It follows a troubleshooting workflow with branches gathered from industry experts and fed into the system. Applying domain knowledge is key to determining what relevant information to analyze.
In this example, multiple conditions and scenarios are considered, as the troubleshooting steps branch differently depending on the configuration. The steps to troubleshoot a Layer 2 BGP connectivity issue are very different from those for a Layer 3 issue. As you can see from this workflow, troubleshooting a down BGP peer isn’t exactly a straightforward task.
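As a rough illustration of what walking such a decision tree looks like in code, the sketch below triages a “BGP peer down” alert with a couple of invented checks. The real workflow has many more branches; the checks, branch order, and diagnoses here are purely hypothetical.

```python
# A hedged sketch of auto-triage as a decision tree for a "BGP peer down"
# alert. The checks and branches below are illustrative placeholders.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Node:
    """One investigative step plus the branch to take for each outcome."""
    description: str
    check: Callable[[], bool]
    if_true: Optional["Node"] = None    # next step when the check passes
    if_false: Optional["Node"] = None   # next step when the check fails
    diagnosis: Optional[str] = None     # set only on leaf nodes


def triage(node: Node) -> str:
    """Walk the tree like a human engineer would, one check at a time."""
    while node.diagnosis is None:
        passed = node.check()
        print(f"step: {node.description} -> {'ok' if passed else 'fail'}")
        node = node.if_true if passed else node.if_false
    return node.diagnosis


# Illustrative checks; in practice these would query the firewall or gateway.
def interface_is_up() -> bool: return True
def peer_is_reachable() -> bool: return False


tree = Node(
    description="Is the Layer 2 interface toward the peer up?",
    check=interface_is_up,
    if_false=Node("", lambda: True, diagnosis="Layer 2 issue: investigate cabling or the interface."),
    if_true=Node(
        description="Can we reach the peer address at Layer 3?",
        check=peer_is_reachable,
        if_false=Node("", lambda: True, diagnosis="Layer 3 issue: check routing and ACLs toward the peer."),
        if_true=Node("", lambda: True, diagnosis="Transport looks fine: inspect the BGP session configuration."),
    ),
)

if __name__ == "__main__":
    print(triage(tree))
```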
Using a knowledge-based expert system to automatically diagnose a problem augments IT teams and improves mean time to resolution.
Detect anomalies with machine learning models
Even subtle changes in the operational conditions of security infrastructure can signal unacceptable levels of business risk. Anomaly detection is therefore an important tool for identifying rare events or outliers that may be significant.
For example, if a firewall is receiving a sudden increase in “non-syn-tcp” first packets, it may be indicative of an asymmetric routing issue in the network environment. The ability to detect these rare conditions or outliers can avoid bigger problems.
Machine learning is just one of many kinds of AI, but it is the kind most commonly used for detecting anomalies. One of the simplest and oldest ways to detect anomalies is to use statistical methods, such as standard deviation or z-score. However, these methods have limitations: they are sensitive to outliers, assume a fixed distribution, and cannot capture complex patterns in the data.
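As a baseline, a z-score detector is only a few lines. The sketch below flags points that sit more than a chosen number of standard deviations from the mean of a made-up connection-count series; it also hints at the weakness mentioned above, since a single large spike inflates the standard deviation and raises the bar for everything else.

```python
# Minimal z-score outlier detection over daily concurrent-connection counts.
# The numbers are invented; this only illustrates the statistical approach.

import statistics


def zscore_outliers(values, threshold=3.0):
    """Return indices whose z-score magnitude exceeds the threshold."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]


if __name__ == "__main__":
    # Hypothetical daily connection counts with one obvious spike.
    counts = [1020, 980, 1010, 995, 1005, 5200, 990, 1000]
    print(zscore_outliers(counts, threshold=2.0))  # -> [5]
```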
In this chart, we are looking at the number of concurrent connections over four months. A human can easily identify the three outliers, but a machine needs to be trained. The outliers are:
- A dramatic increase in connection counts in late October
- A similar increase in mid-November
- A dramatic decrease in connection counts over the Christmas holidays
Deep learning for anomaly detection can apply to security infrastructure in novel ways. For example, we can examine data relating connection counts with CPU usage to find common patterns. With deep learning methods, we can provide even higher fidelity alerts around anomalies.
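One way to act on that idea, sketched below under the assumption that CPU usage normally tracks connection count, is a small autoencoder: it learns to reconstruct “normal” (connection count, CPU usage) pairs, and pairs it reconstructs poorly are flagged as anomalous. The data, architecture, and training settings are synthetic and illustrative, not a production design.

```python
# Hedged sketch: autoencoder-based anomaly detection on paired
# (connection count, CPU usage) samples. All data here is synthetic.

import torch
from torch import nn

torch.manual_seed(0)

# Synthetic "normal" behaviour: CPU usage roughly tracks connection count.
conns = torch.rand(500, 1)                      # normalised connection counts
cpu = 0.8 * conns + 0.05 * torch.randn(500, 1)  # correlated CPU usage
data = torch.cat([conns, cpu], dim=1)

model = nn.Sequential(
    nn.Linear(2, 4), nn.ReLU(),
    nn.Linear(4, 1),               # bottleneck forces the model to learn the pattern
    nn.Linear(1, 4), nn.ReLU(),
    nn.Linear(4, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(500):               # short training loop for illustration
    optimizer.zero_grad()
    loss = loss_fn(model(data), data)
    loss.backward()
    optimizer.step()


def anomaly_score(sample: torch.Tensor) -> float:
    """Reconstruction error: higher when the pair doesn't fit the learned pattern."""
    with torch.no_grad():
        return loss_fn(model(sample), sample).item()


normal = torch.tensor([[0.5, 0.42]])   # CPU tracks connections -> expected low error
odd = torch.tensor([[0.9, 0.05]])      # many connections, idle CPU -> expected high error
print(anomaly_score(normal), anomaly_score(odd))
```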
The autoregressive integrated moving average (ARIMA) model is best known for forecasting stock market returns, but we can leverage the same algorithm and machine learning to make predictions about your security infrastructure based on historical data. For example, the system can determine at what point a device will need upgrading to support your number of concurrent connections, which can greatly simplify capacity planning.
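A hedged sketch of that kind of forecast, using the ARIMA implementation in statsmodels on a made-up weekly connection-count series, might look like the following; the series, the (1, 1, 1) order, and the capacity limit are all assumptions for illustration.

```python
# Illustrative ARIMA-based capacity forecast. In practice the model order
# would be chosen from the data (e.g. via ACF/PACF plots or AIC).

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical weekly peak concurrent-connection counts with upward drift.
rng = np.random.default_rng(0)
history = 50_000 + 800 * np.arange(52) + rng.normal(0, 1_500, 52)

model = ARIMA(history, order=(1, 1, 1))
fitted = model.fit()

forecast = fitted.forecast(steps=12)   # next 12 weeks
device_limit = 120_000                 # illustrative appliance capacity

# Flag the first forecast week expected to exceed the device's limit.
over = np.argwhere(forecast > device_limit)
if over.size:
    print(f"Capacity expected to be exceeded in forecast week {int(over[0][0]) + 1}")
else:
    print("No capacity breach expected in the next 12 weeks")
```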
Summary
Without automation, security teams would spend countless hours gathering diagnostics and data just to keep firewalls and other security infrastructure up and running. Still, a typical security engineer can spend a notable portion of their time identifying and remediating known errors. Security teams often have limited resources, resulting in an even greater need for automated diagnostics and issue detection.
With an AI-powered solution, you can leverage machine learning models and a knowledge-based expert system to detect potential issues before they become bigger problems and troubleshoot these anomalies in your environment like a human would. And it can serve up recommended remediations that security engineers would otherwise have to find and implement manually.
While its capabilities are relatively nascent, even today’s AI has the power to transform the way you operate your network security infrastructure.
Investigating the past to see technology’s future
The MIT Program in Science, Technology, and Society (STS) recently organized and hosted a two-day symposium, The History of Technology: Past, Present, and Future.
The symposium was held June 7-8 at MIT’s Wong Auditorium, and featured scholars from a variety of institutions with expertise in the history of technology. Each presented their ideas about the intersection of science, technology, and society, the field’s needs, and opportunities for its future development.
“We’re pleased to provide a venue in which these kinds of conversations can occur,” said Deborah Fitzgerald, STS program head and former dean of MIT’s School of Humanities, Arts, and Social Sciences.
The symposium opened with welcoming remarks from Fitzgerald and MIT Professor Merritt Roe Smith. Fitzgerald and Smith are both Leverett Howell and William King Cutten Professors of the History of Technology at MIT.
“These kinds of gatherings — of old friends and colleagues and several generations of students — create new opportunities to advance scholarship, create connections, and keep abreast of what’s happening in the field,” Smith said. “Seeing the future through the lens of our shared pasts adds an important perspective on current innovations.”
More than 20 scholars made presentations during the symposium. The topics and speakers included:
- David Lucsko PhD ’05, professor of history at Auburn University: “How Things Work and Why It Matters — or, Why Poring over Automotive Wiring Diagrams from the 1970s Isn’t Actually a Colossal Waste of Time;”
- Dave Unger, an independent public historian: “Tools for Imagining a Better World: Social Technology, Organizational Dark Matter, and Reading for Difference;”
- Gregory Clancey, associate professor at the National University of Singapore: “The History of Technology in an Age of Mass Extinction;” and
- Ruth Schwartz Cowan, professor emerita at the University of Pennsylvania: “Does the History of Technology have a Paradigm?”
How Wealth Managers Can Build Trust Through the Power of Automation and AI
Building trust between wealth managers and their clients has traditionally been credited to effective communication and understanding between the two parties. Over time, wealth managers have become increasingly spread thin as they take on larger portfolios of more demanding clients. The knock-on effect is that clients can feel…
MARKLLM: An Open-Source Toolkit for LLM Watermarking
LLM watermarking, which integrates imperceptible yet detectable signals within model outputs to identify text generated by LLMs, is vital for preventing the misuse of large language models. These watermarking techniques are mainly divided into two categories: the KGW Family and the Christ Family. The KGW Family…
Pioneering Open Models: Nvidia, Alibaba, and Stability AI Transforming the AI Landscape
Artificial intelligence (AI) is profoundly transforming the world, and innovative companies like Nvidia, Alibaba, and Stability AI are among the leaders of this transformation. These companies are making advanced models accessible to a broader audience, advancing innovation, promoting transparency, and enabling diverse applications across industries. This…
Vizrt CaptureCast Saves the Day – Videoguys
The article titled “Class of 2024: University of Utah, SJ Quinney College of Law” by Macy O’Hearn for AVNetwork discusses how the University of Utah’s SJ Quinney College of Law cost-effectively expanded the availability of its lecture capture (LC) service to more classroom locations. The college invested in an NDI-based LC solution from Vizrt called CaptureCast, which can consume the growing number of NDI-based sources added to the learning spaces using both software and hardware NDI solutions. The main challenge was gaining traction and support for the AVoIP (specifically NDI) solutions from central campus resources: as a smaller department, the college needed resources not available on staff, with AV system coding support being the most critical need in order to code the legacy matrix switch out of the equation and add support for the NDI-based solutions to the existing touch panel UI.
What were the main goals of this project at the University of Utah’s SJ Quinney College of Law?
The main goals of the project at the University of Utah’s SJ Quinney College of Law were to cost-effectively continue to support established lecture capture processes and expand the availability of the lecture capture service to more classroom locations.
What was the main challenge the college faced in implementing the new lecture capture solution?
The main challenge the college faced in implementing the new lecture capture solution was gaining traction and support for AVoIP (specifically NDI) solutions from central campus resources. As a smaller department, the college needed resources not available on staff, with AV system coding support being the most critical need. This support would enable coding the legacy matrix switch out of the equation and adding support for the NDI-based solutions into the existing touch panel UI.
What equipment was used in the new lecture capture solution?
The equipment used in the new lecture capture solution at the University of Utah’s SJ Quinney College of Law included Magewell Pro Convert HDMI Plus, NewTek NDI Tools screen capture and webcam tools, and the NewTek CaptureCast Server.
Read the full article by Macy O’Hearn for AVNetwork HERE
SenseTime SenseNova 5.5: China’s first real-time multimodal AI model
SenseTime has unveiled SenseNova 5.5, an enhanced version of its LLM that includes SenseNova 5o—touted as China’s first real-time multimodal model. SenseNova 5o represents a leap forward in AI interaction, providing capabilities on par with GPT-4o’s streaming interaction features. This advancement allows users to engage with…