How does AI Benefit Drug Testing?
Animal testing is an integral part of today's drug and chemical-compound development and approval process, because scientists cannot accurately predict the properties of new chemicals, let alone how they interact with living cells. A new paper published in Toxicological Sciences, a research journal, shows that predicting the attributes of new compounds is possible using data from past tests and experiments. An artificially intelligent system can be trained to predict the toxicity of unknown chemicals from previous animal tests, and its results are sometimes more accurate and reliable than the actual tests.
The use of AI in drug development is not a new phenomenon. With 28 pharma companies and 93 startups spending millions to apply machine learning and AI to drug discovery, the industry is ripe for AI-based disruption. AI can guide decisions about which compounds to make and test, leading to fewer experiments and saving time and money.
People often assume AI is ill-suited to toxicity testing because the field is messy and complicated, but those are exactly the conditions where AI helps. It lets researchers work with messy, imperfect data and still quantify the accuracy of predictions; Bayesian approaches, which embrace the uncertainty in such data, work best in these situations.
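To make the Bayesian idea concrete, here is a minimal sketch (illustrative only, not the method from the paper): a chemical's probability of being toxic is updated from repeated, noisy test outcomes, so conflicting replicates widen the uncertainty instead of breaking the estimate.

```python
# Minimal Bayesian sketch (illustrative, not the paper's actual model):
# a Beta(1, 1) prior on a chemical's toxicity probability is updated
# with each binary test outcome (1 = toxic, 0 = non-toxic).

def beta_posterior(outcomes, prior_a=1.0, prior_b=1.0):
    """Return posterior (alpha, beta) after observing binary outcomes."""
    a = prior_a + sum(outcomes)                   # count of toxic results
    b = prior_b + len(outcomes) - sum(outcomes)   # count of non-toxic results
    return a, b

def posterior_mean(a, b):
    """Point estimate of the toxicity probability."""
    return a / (a + b)

# Three replicate tests disagree: two toxic, one non-toxic.
a, b = beta_posterior([1, 1, 0])
print(round(posterior_mean(a, b), 2))  # 0.6
```

The posterior keeps track of how much evidence supports each conclusion, which is exactly what makes messy, contradictory test data usable rather than fatal.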
Big data makes it possible to build a tool more predictive than animal tests such as dropping compounds into rabbits' eyes to check for irritants, or feeding them to rats to identify lethal doses. This approach was made possible by feeding the AI a vast amount of data harvested from datasets collected by the European Chemicals Agency (ECHA) under the REACH (registration, evaluation, authorization, and restriction of chemicals) law of 2007. While this data is publicly available, its format is not machine-readable. Thomas Hartung, a toxicologist at Johns Hopkins University in Baltimore, and his team reformatted it so it could be fed to machines, yielding information on 10,000 chemicals and their properties, gathered from roughly 800,000 animal tests. The system can now predict the toxicity of several thousand chemicals across nine different test types, covering everything from inhalation damage to effects on aquatic ecosystems.
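The core "read-across" idea behind such predictions can be sketched as a similarity vote: an untested chemical borrows the toxicity labels of its most structurally similar, already-tested neighbors. The fingerprints and records below are invented toy data, not the ECHA/REACH dataset, and the scoring is a simplification of the published method.

```python
# Illustrative read-across sketch (toy data, not the real REACH dataset):
# an untested chemical inherits the labels of its nearest tested neighbors.

def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two binary fingerprints (sets of feature IDs)."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def read_across(query_fp, tested, k=3):
    """Fraction of toxic chemicals among the k most similar tested ones."""
    ranked = sorted(tested, key=lambda c: tanimoto(query_fp, c["fp"]), reverse=True)
    votes = [c["toxic"] for c in ranked[:k]]
    return sum(votes) / len(votes)

# Toy fingerprints: sets of structural-feature IDs, with known test outcomes.
tested = [
    {"fp": {1, 2, 3, 4}, "toxic": 1},
    {"fp": {1, 2, 3, 5}, "toxic": 1},
    {"fp": {7, 8, 9},    "toxic": 0},
]
score = read_across({1, 2, 3, 6}, tested, k=3)
print(round(score, 2))  # 0.67 -> flagged as likely toxic
```

At REACH scale, the same vote runs over thousands of fingerprinted chemicals per test type, which is why the reformatting work by Hartung's team mattered as much as the algorithm itself.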
Reducing the animal testing involved in drug development is not only a humane cause for animal-rights advocates; it also shortens the process and cuts costs. In February 2018, the Interagency Coordinating Committee on the Validation of Alternative Methods published a roadmap for replacing animal use in toxicity testing.
Even as computer systems and AI gradually replace the majority of standard safety tests carried out on animals each year, longer-term endpoints such as carcinogenicity and effects on fertility remain to be addressed. Still, the current prospects are highly promising for animal-rights activists and consumers alike.