Protein designers join call for responsible AI development

UW Medicine researchers in protein design have joined more than 100 other senior scientists from more than 20 countries in pledging to use artificial intelligence “to the benefit of society and refrain from research that is likely to cause overall harm or enable misuse of our technologies.”


Several senior scientists are calling for ethical standards in AI-driven protein design.

The statement, “Community Values, Guiding Principles, and Commitments for the Responsible Development of AI for Protein Design,” emerged from discussions that occurred during and after a summit on AI safety convened by the Institute for Protein Design at the University of Washington School of Medicine in October 2023. 

David Baker, professor of biochemistry at UW Medicine and director of the Institute, said, “I view this as a crucial step for the scientific community. The responsible use of AI for protein design will unlock new vaccines, medicines and sustainable materials that benefit the world. As scientists, we must ensure this happens while minimizing the chance that our tools could be misused to cause harm.”

The statement’s signatories include Frances Arnold, a professor of chemical engineering, bioengineering, and biochemistry at the California Institute of Technology who won a share of the Nobel Prize in chemistry in 2018 for her use of directed evolution to engineer enzymes; Harvard geneticist George Church, a leading expert in genomics and synthetic biology; and Eric Horvitz, Microsoft’s chief scientific officer and an AI expert.

Along with Baker, UW signatories include Neil King, assistant professor of biochemistry, whose work includes the computational design of protein-based vaccines, and Gaurav Bhardwaj, assistant professor of medicinal chemistry and an expert in computational peptide design.

Protein design technology makes it possible for researchers to create new proteins that are custom-made to perform specialized tasks. The approach has already been used to create new medicines, vaccines, specialized enzymes and biomaterials. 

AI, with its ability to rapidly analyze protein structures and design new ones, has greatly accelerated the field and expanded the number of labs that can engage in protein design. But the power and widespread availability of AI technology have raised concerns that, whether through carelessness or malevolence, AI could be used to produce dangerous products, including bioweapons.

In the statement, the signatories said they believe the potential benefits of AI outweigh the risks but that a “proactive risk management approach may be required to mitigate the potential of developing AI technologies that could be misused, intentionally or otherwise, to cause harm.”

To prevent misuse, the statement lays out a set of values and principles to ensure the responsible development of AI technologies in the field of protein design. These include working with governments, the public and other stakeholders to ensure AI research benefits society; screening synthetic DNA in development to detect sequences that might pose a hazard; and conducting research in the open so AI research can be evaluated and scrutinized, limiting access only when AI systems “present identified meaningful and unresolved risks.”

By signing the statement, the signatories do not imply that their institutions have endorsed it.

Source: University of Washington