MIT community members elected to the National Academy of Inventors for 2023

The National Academy of Inventors (NAI) recently announced the election of more than 160 individuals to its 2023 class of fellows. Among them are two members of the MIT Koch Institute for Integrative Cancer Research, Professor Daniel G. Anderson and Principal Research Scientist Ana Jaklenec. Eleven MIT alumni were also recognized.

The highest professional distinction accorded solely to academic inventors, election to the NAI recognizes individuals who have created or facilitated outstanding inventions that have made a tangible impact on quality of life, economic development, and the welfare of society.  

“Daniel and Ana embody some of the Koch Institute’s core values of interdisciplinary innovation and drive to translate their discoveries into real impact for patients,” says Matthew Vander Heiden, director of the Koch Institute. “Their election to the academy is very well-deserved, and we are honored to count them both among the Koch Institute’s and MIT’s research community.”

Daniel Anderson is the Joseph R. Mares (1924) Professor of Chemical Engineering and a core member of the Institute for Medical Engineering and Science. He is a leading researcher in the fields of nanotherapeutics and biomaterials. Anderson’s work has led to advances in a range of areas, including medical devices, cell therapy, drug delivery, gene therapy, and materials science, and has resulted in more than 500 papers, patents, and patent applications. He has founded several companies, including Living Proof, Olivo Labs, Crispr Therapeutics (CRSP), Sigilon Therapeutics, Verseau Therapeutics, oRNA, and VasoRx. He is a member of the National Academy of Medicine and the Harvard-MIT Division of Health Sciences and Technology, and is an affiliate of the Broad Institute of MIT and Harvard and the Ragon Institute of MGH, MIT and Harvard.

Ana Jaklenec, a principal research scientist and principal investigator at the Koch Institute, is a leader in the fields of bioengineering and materials science, focused on controlled delivery and stability of therapeutics for global health. She is an inventor of several drug delivery technologies that have the potential to enable equitable access to medical care globally. Her lab is developing new manufacturing techniques for the design of materials at the nano- and micro-scale for self-boosting vaccines, 3D printed on-demand microneedles, heat-stable polymer-based carriers for oral delivery of micronutrients and probiotics, and long-term drug delivery systems for cancer immunotherapy. She has published over 100 manuscripts, patents, and patent applications and has founded three companies: Particles for Humanity, VitaKey, and OmniPulse Biosciences.

The 11 MIT alumni elected to the NAI for 2023 are:

  • Michel Barsoum PhD ’85 (Materials Science and Engineering);
  • Eric Burger ’84 (Electrical Engineering and Computer Science);
  • Kevin Kelly SM ’88, PhD ’91 (Mechanical Engineering);
  • Ali Khademhosseini PhD ’05 (Biological Engineering);
  • Joshua Makower ’85 (Mechanical Engineering);
  • Marcela Maus ’97 (Biology);
  • Milos Popovic SM ’02, PhD ’08 (Electrical Engineering and Computer Science);
  • Milica Radisic PhD ’04 (Chemical Engineering);
  • David Reinkensmeyer ’88 (Electrical Engineering);
  • Boris Rubinsky PhD ’81 (Mechanical Engineering); and
  • Paul S. Weiss ’80, SM ’80 (Chemistry).

Since its inception in 2012, the NAI Fellows program has grown to include 1,898 exceptional researchers and innovators, who hold over 63,000 U.S. patents and 13,000 licensed technologies. NAI Fellows are known for the societal and economic impact of their inventions, contributing to major advancements in science and consumer technologies. Their innovations have generated over $3 trillion in revenue and created more than 1 million jobs.

“This year’s class of NAI Fellows showcases the caliber of researchers that are found within the innovation ecosystem. Each of these individuals are making significant contributions to both science and society through their work,” says Paul R. Sanberg, president of the NAI. “This new class, in conjunction with our existing fellows, are creating innovations that are driving crucial advancements across a variety of disciplines and are stimulating the global and national economy in immeasurable ways as they move these technologies from lab to marketplace.” 

AI agents help explain other AI systems

Explaining the behavior of trained neural networks remains a compelling puzzle, especially as these models grow in size and sophistication. Like other scientific challenges throughout history, reverse-engineering how artificial intelligence systems work requires a substantial amount of experimentation: making hypotheses, intervening on behavior, and even dissecting large networks to examine individual neurons. To date, most successful experiments have involved large amounts of human oversight. Explaining every computation inside models the size of GPT-4 and larger will almost certainly require more automation — perhaps even using AI models themselves. 

Facilitating this timely endeavor, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a novel approach that uses AI models to conduct experiments on other systems and explain their behavior. Their method uses agents built from pretrained language models to produce intuitive explanations of computations inside trained networks.

Central to this strategy is the “automated interpretability agent” (AIA), designed to mimic a scientist’s experimental processes. Interpretability agents plan and perform tests on other computational systems, which can range in scale from individual neurons to entire models, in order to produce explanations of these systems in a variety of forms: language descriptions of what a system does and where it fails, and code that reproduces the system’s behavior. Unlike existing interpretability procedures that passively classify or summarize examples, the AIA actively participates in hypothesis formation, experimental testing, and iterative learning, thereby refining its understanding of other systems in real time. 
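The article describes the agent only at this high level. As a rough sketch, an AIA’s interface might look like the following Python, where the class, method names, and selectivity threshold are illustrative assumptions rather than the authors’ actual implementation:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class InterpretabilityAgent:
    # The component under study, from a single neuron up to a whole model,
    # exposed only as a black box mapping an input to a response value.
    system: Callable[[str], float]
    history: List[Tuple[str, float]] = field(default_factory=list)

    def experiment(self, inputs: List[str]) -> None:
        """Probe the system and record (input, response) observations."""
        self.history.extend((x, self.system(x)) for x in inputs)

    def explain(self, threshold: float = 0.5) -> str:
        """Summarize observations; a real AIA delegates this step to an LM."""
        active = sorted(x for x, y in self.history if y > threshold)
        return f"responds strongly to: {active}" if active else "no clear selectivity found"
```

In the real method, both the design of the next experiment and the final explanation are produced by a pretrained language model rather than by hand-written rules like these.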

Complementing the AIA method is the new “function interpretation and description” (FIND) benchmark, a test bed of functions resembling computations inside trained networks, and accompanying descriptions of their behavior. One key challenge in evaluating the quality of descriptions of real-world network components is that descriptions are only as good as their explanatory power: Researchers don’t have access to ground-truth labels of units or descriptions of learned computations. FIND addresses this long-standing issue in the field by providing a reliable standard for evaluating interpretability procedures: explanations of functions (e.g., produced by an AIA) can be evaluated against function descriptions in the benchmark.  

For example, FIND contains synthetic neurons designed to mimic the behavior of real neurons inside language models, some of which are selective for individual concepts such as “ground transportation.” AIAs are given black-box access to synthetic neurons and design inputs (such as “tree,” “happiness,” and “car”) to test a neuron’s response. After noticing that a synthetic neuron produces higher response values for “car” than other inputs, an AIA might design more fine-grained tests to distinguish the neuron’s selectivity for cars from other forms of transportation, such as planes and boats. When the AIA produces a description such as “this neuron is selective for road transportation, and not air or sea travel,” this description is evaluated against the ground-truth description of the synthetic neuron (“selective for ground transportation”) in FIND. The benchmark can then be used to compare the capabilities of AIAs to other methods in the literature. 
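A toy rendering of this example may make the loop concrete. The synthetic neuron, probe words, and responses below are illustrative stand-ins, not FIND’s actual contents:

```python
def synthetic_neuron(word: str) -> float:
    """Ground truth (hidden from the agent): selective for ground transportation."""
    ground = {"car", "truck", "bus", "train", "motorcycle"}
    return 1.0 if word in ground else 0.05

# Round 1: broad probes; "car" stands out.
round_1 = {w: synthetic_neuron(w) for w in ["tree", "happiness", "car"]}

# Round 2: finer-grained probes separate road travel from air and sea travel.
round_2 = {w: synthetic_neuron(w) for w in ["bus", "truck", "plane", "boat"]}

observations = {**round_1, **round_2}
hypothesis = "selective for ground transportation, not air or sea travel"

print(observations)  # planes and boats score low; cars, buses, trucks score high
print(hypothesis)    # FIND would score this against the ground-truth description
```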

Sarah Schwettmann PhD ’21, co-lead author of a paper on the new work and a research scientist at CSAIL, emphasizes the advantages of this approach. “The AIAs’ capacity for autonomous hypothesis generation and testing may be able to surface behaviors that would otherwise be difficult for scientists to detect. It’s remarkable that language models, when equipped with tools for probing other systems, are capable of this type of experimental design,” says Schwettmann. “Clean, simple benchmarks with ground-truth answers have been a major driver of more general capabilities in language models, and we hope that FIND can play a similar role in interpretability research.”

Automating interpretability 

Large language models still hold their status as the in-demand celebrities of the tech world. Recent advances in LLMs have highlighted their ability to perform complex reasoning tasks across diverse domains. The team at CSAIL recognized that, given these capabilities, language models may be able to serve as backbones of generalized agents for automated interpretability. “Interpretability has historically been a very multifaceted field,” says Schwettmann. “There is no one-size-fits-all approach; most procedures are very specific to individual questions we might have about a system, and to individual modalities like vision or language. Existing approaches to labeling individual neurons inside vision models have required training specialized models on human data, where these models perform only this single task. Interpretability agents built from language models could provide a general interface for explaining other systems — synthesizing results across experiments, integrating over different modalities, even discovering new experimental techniques at a very fundamental level.”

As we enter a regime where the models doing the explaining are black boxes themselves, external evaluations of interpretability methods are becoming increasingly vital. The team’s new benchmark addresses this need with a suite of functions with known structure that are modeled after behaviors observed in the wild. The functions inside FIND span a diversity of domains, from mathematical reasoning to symbolic operations on strings to synthetic neurons built from word-level tasks. The dataset of interactive functions is procedurally constructed; real-world complexity is introduced to simple functions by adding noise, composing functions, and simulating biases. This allows for comparison of interpretability methods in a setting that translates to real-world performance.
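As a rough illustration of that procedural construction, the sketch below builds a FIND-like task from a simple function by composing it with another, adding Gaussian noise, and shifting outputs on one subdomain to simulate a bias. Every helper name and parameter here is an assumption made for exposition, not FIND’s actual code:

```python
import random

random.seed(0)  # reproducible noise

def base(x: float) -> float:
    """A simple mathematical function, the starting point of a task."""
    return 2 * x + 1

def with_noise(f, sigma: float = 0.1):
    """Corrupt a function's outputs with Gaussian noise."""
    return lambda x: f(x) + random.gauss(0, sigma)

def composed(f, g):
    """Compose two simple functions into a harder one."""
    return lambda x: f(g(x))

def with_bias(f, region=(0.0, 1.0), offset: float = 5.0):
    """Simulate a bias: shift outputs inside one subdomain."""
    lo, hi = region
    return lambda x: f(x) + (offset if lo <= x <= hi else 0.0)

# One procedurally built task: composed, noisy, and biased on [0, 1].
task = with_bias(with_noise(composed(base, abs)))
print([round(task(x), 2) for x in (-2.0, -0.5, 0.5, 2.0)])
```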

In addition to the dataset of functions, the researchers introduced an innovative evaluation protocol to assess the effectiveness of AIAs and existing automated interpretability methods. This protocol involves two approaches. For tasks that require replicating the function in code, the evaluation directly compares the AI-generated estimations and the original, ground-truth functions. The evaluation becomes more intricate for tasks involving natural language descriptions of functions. In these cases, accurately gauging the quality of these descriptions requires an automated understanding of their semantic content. To tackle this challenge, the researchers developed a specialized “third-party” language model. This model is specifically trained to evaluate the accuracy and coherence of the natural language descriptions provided by the AI systems and compares them to the ground-truth function behavior.
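A minimal sketch of that two-part protocol, under stated assumptions: the benchmark function, the agent’s code estimate, and the test inputs below are invented for illustration, and the keyword-overlap judge is a crude stand-in for the trained third-party language model.

```python
def ground_truth(x: float) -> float:
    """Benchmark function with a known description: doubles the absolute value."""
    return 2 * abs(x)

def agent_estimate(x: float) -> float:
    """Code the agent produced to replicate the function."""
    return 2 * x if x >= 0 else -2 * x

# Part 1: code-replication tasks are scored by direct functional comparison.
test_inputs = [-3.0, -0.5, 0.0, 1.5, 4.0]
code_score = sum(
    abs(ground_truth(x) - agent_estimate(x)) < 1e-9 for x in test_inputs
) / len(test_inputs)

# Part 2: language descriptions need a semantic judge; crude stand-in below.
def judge(description: str, reference: str) -> float:
    a, b = set(description.lower().split()), set(reference.lower().split())
    return len(a & b) / len(a | b)  # word overlap as a proxy for an LM judge

nl_score = judge("doubles the absolute value of the input",
                 "multiplies the absolute value of the input by two")
print(code_score, round(nl_score, 2))  # 1.0 and a partial-credit overlap score
```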

Evaluation with FIND reveals that we are still far from fully automating interpretability; although AIAs outperform existing interpretability approaches, they still fail to accurately describe almost half of the functions in the benchmark. Tamar Rott Shaham, co-lead author of the study and a postdoc in CSAIL, notes that “while this generation of AIAs is effective in describing high-level functionality, they still often overlook finer-grained details, particularly in function subdomains with noise or irregular behavior. This likely stems from insufficient sampling in these areas. One issue is that the AIAs’ effectiveness may be hampered by their initial exploratory data. To counter this, we tried guiding the AIAs’ exploration by initializing their search with specific, relevant inputs, which significantly enhanced interpretation accuracy.” This approach combines new AIA methods with previous techniques using pre-computed examples for initiating the interpretation process.

The researchers are also developing a toolkit to augment the AIAs’ ability to conduct more precise experiments on neural networks, both in black-box and white-box settings. This toolkit aims to equip AIAs with better tools for selecting inputs and refining hypothesis-testing capabilities for more nuanced and accurate neural network analysis. The team is also tackling practical challenges in AI interpretability, focusing on determining the right questions to ask when analyzing models in real-world scenarios. Their goal is to develop automated interpretability procedures that could eventually help people audit systems — e.g., for autonomous driving or face recognition — to diagnose potential failure modes, hidden biases, or surprising behaviors before deployment. 

Watching the watchers

The team envisions one day developing nearly autonomous AIAs that can audit other systems, with human scientists providing oversight and guidance. Advanced AIAs could develop new kinds of experiments and questions, potentially beyond human scientists’ initial considerations. The focus is on expanding AI interpretability to include more complex behaviors, such as entire neural circuits or subnetworks, and predicting inputs that might lead to undesired behaviors. This development represents a significant step forward in AI research, aiming to make AI systems more understandable and reliable.

“A good benchmark is a power tool for tackling difficult challenges,” says Martin Wattenberg, a computer science professor at Harvard University who was not involved in the study. “It’s wonderful to see this sophisticated benchmark for interpretability, one of the most important challenges in machine learning today. I’m particularly impressed with the automated interpretability agent the authors created. It’s a kind of interpretability jiu-jitsu, turning AI back on itself in order to help human understanding.”

Schwettmann, Rott Shaham, and their colleagues presented their work at NeurIPS 2023 in December. Additional MIT coauthors, all affiliates of CSAIL and the Department of Electrical Engineering and Computer Science (EECS), include graduate student Joanna Materzynska, undergraduate student Neil Chowdhury, Shuang Li PhD ’23, Assistant Professor Jacob Andreas, and Professor Antonio Torralba. Northeastern University Assistant Professor David Bau is an additional coauthor.

The work was supported, in part, by the MIT-IBM Watson AI Lab, Open Philanthropy, an Amazon Research Award, Hyundai NGV, the U.S. Army Research Laboratory, the U.S. National Science Foundation, the Zuckerman STEM Leadership Program, and a Viterbi Fellowship.

The Case for Decentralizing Your AI Tech Stack

So much of the conversation on AI development has become dominated by a futuristic and philosophical debate – should we pursue artificial general intelligence, where AI becomes advanced enough to perform any task the way a human could? Is that even possible? While the acceleration…

ChatGPT Meets Its Match: The Rise of Anthropic Claude Language Model

Over the past year, generative AI has exploded in popularity, thanks largely to OpenAI’s release of ChatGPT in November 2022. ChatGPT is an impressively capable conversational AI system that can understand natural language prompts and generate thoughtful, human-like responses on a wide range of topics. However,…

Someone Is Already Making A Steamboat Willie Mickey Mouse Horror Game

On January 1, Steamboat Willie, the cartoon that introduced the world to Mickey Mouse, entered the public domain, meaning the first version of Disney’s iconic mascot is (mostly) free to use by anyone. And just like when the film Winnie the Pooh: Blood and Honey took advantage of that whimsical character entering the public domain, someone has already started making a twisted, horrifying version of Mickey. In this case, it’s a video game.

It’s called Infestation: Origins, and it’s the debut title by Nightmare Forge Games. The game is an episodic four-player co-op survival adventure in which players control exterminators tasked with killing mutated vermin – and one of them happens to be the terrifying, person-sized monster modeled after Mickey. Similar to games like Phasmophobia and Lethal Company, you’ll rely on various tools and surveillance equipment to track the source of the infestations before neutralizing them. You’ll also have to maintain power for certain pieces of equipment while evading the giant, murderous rodents.  

The game was announced on New Year’s Day but has already courted some controversy. It was originally titled Infestation 88 (as seen in the trailer) until it received blowback online over perceived Neo-Nazi references, including the number 88, a common dogwhistle. Nightmare Forge Games quickly denied this and issued a statement to IGN saying it was unaware of the reference, apologizing for any unintentional endorsement of Neo-Nazism, and explaining that the title referred to the game’s late-1980s setting. The team states:

“We want to apologize for our ignorance on this topic and appreciate that it was brought to our attention so we could address it. There is no intentional use of Nazi symbolism in our game nor studio, and we’ll continue to address any concerns as they arise. We strongly stand against Nazism and hate in any form.”

Outlets such as Motherboard have also pointed out red flags indicating the game could be shovelware, such as its apparent use of asset flipping (via the Unity store) and AI-generated text-to-speech voice work. “As an indie studio, we do rely on some purchased assets from the Unreal and Unity stores,” Nightmare Forge Games told Motherboard. “However, there is a lot of work going into this project that we’re hopeful will be evident upon release.” The developer states the AI voiceovers in particular are placeholders and plans to hire real voice actors in the future. 

Following the failure of similarly slapped-together games like The Day Before, curious players should perhaps approach this one with some degree of caution. Infestation: Origins is coming to PC via Steam and will first launch in Early Access sometime this year.

Avid Media Composer Today with AI Tools & Better Than Ever Team Collab – Videoguys

In this episode of Videoguys Live, join Gary as he delves into the realm of Avid Media Composer, unraveling the myriad licensing options available and showcasing the innovative AI tools like PhraseFind and ScriptSync. Get a firsthand look at how these tools can elevate your editing experience. Gary also sheds light on Avid Nexis and its role in enhancing team collaboration through shared storage solutions. Whether you’re a seasoned editor or a newcomer, this episode offers valuable insights to boost your editing skills and streamline teamwork.

Are you new to Avid Media Composer?
If you do not own an Avid Media Composer license or subscription, need to purchase a new license or subscription for an additional user, or have a license that has expired and cannot be renewed, you can choose from a new subscription or license today.

Videoguys note: If you have a subscription license, renewals are the same price, but there are special “REN” products that you will want to purchase to extend your current license term without activating a new license.

By purchasing the correct renewal SKU, your Avid activation ID stays the same and your renewal automatically continues from your current expiration date, even if you renew a bit early.

NEW Media Composer Subscriptions
RENEWAL Media Composer Subscriptions

Get More From Avid Media Composer Ultimate
Included in Media Composer Ultimate:

  • PhraseFind AI Option
    Find the right clips faster than ever with the help of AI
  • ScriptSync AI Option
    Save hours editing by matching your content to your script
  • Symphony Option
    Advanced color grading with precision control

Get more from your Team:

  • Avid Team Plans
    Take the complexity out of managing licenses with Avid Team Plans and manage your team from a single admin console
  • Avid NEXIS Storage Solutions for Collaborative Teams
    Shared storage solutions are perfect for teams of three or more Avid editors, starting with the Avid NEXIS | PRO

Avid Editor – Avid Continues to Emphasize Subscriptions

If you already own a Perpetual license you can renew… or switch to subscription & save…
A perpetual license means you bought and own the software; if you keep an active Support Plan, you can continue to update it as new features are released. If you have a Media Composer Perpetual License with an active support plan, you can renew it for updates or crossgrade now to a subscription license.

FAQs

– This renewal is for your Avid Perpetual license and will give you a year of Avid Standard Support, which includes all updates.

– You may purchase this renewal at any time; activate it in your Avid account and it will add a year to your current expiration.

– Please activate your redemption code within 7 days of purchase to avoid any confusion in the Avid system.

Media Composer PhraseFind AI Option

  • Work faster and easier with new features
  • Find the right clips fast
  • Experience seamless integration
  • Speed up your workflow
  • Work in different languages

Media Composer ScriptSync AI Option

  • Work faster and easier with new features
  • Save hours of time
  • Sync and edit text
  • Find the best takes fast
  • Work in different languages

Check Out “Let’s Edit With Media Composer”
Subscribe to Kevin McAuliffe’s FREE YouTube channel and look for Patreon Subscription Offers for advanced tutorials.

AVID NEXIS STORAGE
Shared media storage optimized for media workflows

  • Enable everyone across your video, audio, design, social media, and marketing teams to collaborate more effectively
  • Eliminate the time wasted searching for and copying media 
  • Capitalize on new opportunities with all your media always within easy reach for rediscovery, repurposing, and reuse

AVID NEXIS | PRO PLUS
The Ultimate Real-time Collaborative Shared Storage Solution

  • Perfect for small post-production groups, houses of worship, government, corporate, and media education environments
  • Connect your team, share media and sequences, and work together on the same projects in real time

Enterprise F Series

  • Avid NEXIS | F2—For small- to mid-size organizations, this 10/25GbE HDD engine provides up to 480 MB/s of bandwidth and 60–140 TB of storage per media pack
  • Avid NEXIS | F2X—This new expansion unit for F2 provides an additional 60, 100, or 140 TB of storage capacity, plus up to two hot spare drives; the bundle of F2 and F2X can be used to expand existing E4 engines, including mirror configurations
  • Avid NEXIS | F5—For large-scale production, this high-density 25/50GbE HDD engine offers the greatest scalability, up to 3.2 GB/s of total bandwidth, and 240 TB–1.12 PB of storage per engine 
  • Avid NEXIS | F5 NL—For nearline and archive, this ultra-high-density 10GbE HDD engine offers integrated media management, proxy archive, and 640 TB–1.28 PB of redundant storage per engine

We will help you find a local dealer!
We have a network of dealers nationwide!
Call us for a referral 800-323-2325