Researchers warn that AI-powered drug detection systems could be repurposed to create chemical weapons

In 2020 Collaborations Pharmaceuticals, a company that specializes in finding new drug candidates for rare and communicable diseases, received an unusual request. The private Raleigh, N.C.-based firm was asked to give a presentation at an international conference on chemical and biological weapons. The talk would address how artificial intelligence software, typically used to develop drugs for treating, say, Pitt-Hopkins syndrome or Chagas disease, might be diverted to more nefarious purposes.

In responding to the invitation, Collaborations CEO Sean Ekins began brainstorming with Fabio Urbina, a senior scientist at the company. It did not take long for them to hit on an idea: What if, instead of using animal toxicology data to avoid dangerous side effects for a drug, Collaborations put its AI-based MegaSyn software to work generating a compendium of toxic molecules that were similar to VX, a notorious nerve agent?

The team ran MegaSyn overnight, and it produced 40,000 substances, including not only VX but other known chemical weapons, as well as many completely new potentially toxic substances. All it took was a bit of programming, open-source data, a 2015 Mac computer and less than six hours of machine time. “It just felt a little surreal,” Urbina says, noting how similar the software’s output was to the company’s commercial drug-development process. “It wasn’t any different from something we had done before: use these generative models to generate promising new drugs.”

Collaborations presented the work at Spiez CONVERGENCE, a conference held in Switzerland every two years to assess new trends in biological and chemical research that could pose threats to national security. Urbina, Ekins and their colleagues even published a peer-reviewed commentary on the company’s research in the journal Nature Machine Intelligence, and they went on to give a briefing on the findings to the White House Office of Science and Technology Policy. “Our sense is that [the research] could be a useful springboard for policy development in this area,” says Filippa Lentzos, co-director of the Center for Science and Security Studies at King’s College London and a co-author of the paper.

The exercise’s eerie resemblance to the company’s day-to-day work was striking. The researchers had previously used MegaSyn to generate molecules with therapeutic potential that have the same molecular target as VX, Urbina says. These drugs, called acetylcholinesterase inhibitors, can help treat neurodegenerative conditions such as Alzheimer’s disease. For their study, the researchers simply asked the software to generate substances similar to VX without entering the exact structure of the molecule.

Many drug-discovery AIs, including MegaSyn, use artificial neural networks. “Basically, the neural network is telling us which roads to take to lead to a specific destination, which is the biological activity,” says Alex MacKerell, director of the Computer-Aided Drug Design Center at the University of Maryland School of Pharmacy, who was not involved in the research. AI systems “score” a molecule against certain criteria, such as how well it either inhibits or activates a specific protein. A higher score tells researchers that the substance might be more likely to have the desired effect.
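To make that idea concrete, here is a minimal sketch in Python of how a neural network can turn a numerical encoding of a molecule into such a score. Everything in it is hypothetical: the fingerprint length, network shape and random weights are placeholders for illustration, not MegaSyn’s actual model, which would be trained on real bioactivity data.

```python
import numpy as np

rng = np.random.default_rng(0)

FINGERPRINT_BITS = 128  # toy molecular "fingerprint": a fixed-length bit vector
HIDDEN_UNITS = 32

# Placeholder parameters; a real model would learn these from
# experimental bioactivity data rather than drawing them at random.
W1 = rng.normal(size=(FINGERPRINT_BITS, HIDDEN_UNITS))
b1 = np.zeros(HIDDEN_UNITS)
W2 = rng.normal(size=(HIDDEN_UNITS, 1))
b2 = np.zeros(1)

def score_molecule(fingerprint: np.ndarray) -> float:
    """Map a molecular fingerprint to a 0-1 score for how strongly the
    molecule is predicted to inhibit or activate the target protein."""
    hidden = np.tanh(fingerprint @ W1 + b1)  # hidden layer
    logit = float((hidden @ W2 + b2)[0])     # single output unit
    return 1.0 / (1.0 + np.exp(-logit))      # sigmoid squashes to 0-1

# Score a handful of random candidate "molecules" and rank them, best first.
candidates = rng.integers(0, 2, size=(5, FINGERPRINT_BITS)).astype(float)
for i in sorted(range(5), key=lambda i: score_molecule(candidates[i]), reverse=True):
    print(f"molecule {i}: score = {score_molecule(candidates[i]):.3f}")
```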

In the study, the company’s scoring method revealed that many of the new molecules MegaSyn generated were predicted to be more toxic than VX, a realization that made both Urbina and Ekins uncomfortable. They wondered whether they had already crossed an ethical line just by running the program and decided not to do anything further to computationally narrow down the results, much less test the substances in any way.

“I think their ethical intuition was exactly right,” says Paul Root Wolpe, a bioethicist and director of the Center for Ethics at Emory University, who was not involved in the research. Wolpe often writes and reflects on issues related to emerging technologies such as artificial intelligence. Once the authors felt they could prove this was a potential threat, he says, “their obligation was not to push it any further.”

But some experts say the research did not go far enough to answer important questions about whether using AI software to find toxins could practically lead to the development of an actual biological weapon.

“The development of actual weapons in past weapons programs has shown, time and again, that what seems possible theoretically may not be possible in practice,” comments Sonia Ben Ouagrham-Gormley, an associate professor in the Schar School of Policy and Government’s biodefense program at George Mason University, who was not involved in the research.

Despite this challenge, the ease with which an AI can rapidly generate a large quantity of potentially dangerous substances could still speed up the process of creating lethal biological weapons, says Elana Fertig, associate director of quantitative sciences at the Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins University, who was not involved in the research.

To make it harder for people to misuse these technologies, the paper’s authors propose several ways to monitor and control who can use them and how, including waiting lists that would require users to undergo a prescreening process to verify their credentials before they could access models, data or code that could be easily misused.

They also suggest introducing drug-discovery AIs to the public through an application programming interface (API), an intermediary that lets two pieces of software talk to each other. A user would have to specifically request molecule data from the API. In an e-mail to Scientific American, Ekins wrote that an API could be structured to generate only molecules that minimize potential toxicity and to “require users [apply] the tools/models in a specific way.” The number of users with API access could also be limited, and a cap could be placed on how many molecules a user could generate at once. Still, Ben Ouagrham-Gormley contends that without proof that the technology could readily foster the development of biological weapons, such regulation could be premature.
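As a rough sketch of what such an API gate might look like, the intermediary could verify credentials, cap request sizes and screen out molecules with high predicted toxicity before returning anything. Every name below (the vetted-user list, the caps, the stand-in models) is an illustrative assumption, not a detail proposed in the paper.

```python
# Hypothetical gatekeeping layer for a drug-discovery API (illustrative only).

MAX_MOLECULES_PER_REQUEST = 100
TOXICITY_THRESHOLD = 0.5  # withhold molecules predicted to score above this

# Populated by a prescreening process that verifies user credentials.
APPROVED_USERS = {"vetted-researcher-id"}

def generate_candidates(n: int) -> list[str]:
    # Stand-in for the generative model: returns placeholder identifiers.
    return [f"molecule-{i}" for i in range(n)]

def predicted_toxicity(molecule: str) -> float:
    # Stand-in for a toxicity model: a toy score in [0, 1).
    return (hash(molecule) % 100) / 100.0

def handle_generation_request(user_id: str, n_molecules: int) -> list[str]:
    """Serve generated molecules only to vetted users, cap the batch
    size, and filter out results with high predicted toxicity."""
    if user_id not in APPROVED_USERS:
        raise PermissionError("user has not passed credential prescreening")
    if n_molecules > MAX_MOLECULES_PER_REQUEST:
        raise ValueError(f"at most {MAX_MOLECULES_PER_REQUEST} molecules per request")
    return [m for m in generate_candidates(n_molecules)
            if predicted_toxicity(m) < TOXICITY_THRESHOLD]

print(handle_generation_request("vetted-researcher-id", 10))
```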

For their part, Urbina and Ekins view their work as a first step in drawing attention to the issue of this technology’s potential misuse. “We don’t want to portray these things as being bad, because they actually have a lot of value,” Ekins says. “But there is that dark side. There is that note of caution, and I think it is important to take that into account.”
