Artificial Intelligence may replace rats in lab tests

Researchers have shown that artificial intelligence can predict some aspects of a product's toxicity, removing the need to test it on animals. The method, however, can only replace certain specific tests.

The algorithm developed by the researchers appears to be as effective as, if not more effective than, some of the tests used to verify the hazards of a chemical compound.

These toxicity studies are needed to evaluate the risks of new products brought to market.

Even when the properties of individual atoms or molecules are known, the possible interactions at the scale of an entire organism are too complex to predict every effect a product may have.

For example, to market a pesticide, dozens of tests on large numbers of animals are needed to assess the risks to the skin, lungs, mucous membranes, mouth, and eyes, and even to determine lethal doses, before its use around humans can be approved.

Some tests have to be repeated dozens of times before a clear result on a product's risks emerges, mainly because of differences between laboratories.

In Europe, 2011 figures show that 57% of the animals used in toxicology were used for this type of test. These experiments can also cost the companies or research centers that carry them out tens of millions of dollars.

Researchers have long been trying to move from animal models to computer models, which use a molecule's structure, or compare it with already-known compounds, to deduce the risks of new products.

Until now, however, these methods have remained highly subjective and applied case by case, generally taking longer and requiring repeated confirmation by specialists.

In recent years, however, artificial intelligence has advanced by leaps and bounds in several areas. Scientists have turned to the technology, hoping to improve the accuracy of toxicity predictions without having to use animals.

In this study, the researchers used machine learning, a technique in which an algorithm's designers feed it a huge amount of data so that, after a training period, the program finds its own way of answering the question it is asked.

To obtain the necessary data, the researchers gathered information on about 10,000 chemicals drawn from 800,000 animal tests, all of it from the European Chemicals Agency.

After the training period, the researchers had an algorithm that correctly predicts, about 87% of the time, the effects of a given product on the nine most closely monitored toxicological endpoints, such as harm to the skin or the lungs.
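The general idea of predicting a new compound's toxicity from a database of known compounds can be illustrated with a toy nearest-neighbour classifier. Everything below is hypothetical, invented for illustration; it is not the study's actual model, features, or data:

```python
# Toy sketch: predict a toxicity label from chemical "fingerprints"
# (sets of structural features). Hypothetical features and labels only.

def jaccard(a, b):
    """Similarity between two feature sets: |intersection| / |union|."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def predict(query, training_data, k=3):
    """Label the query compound by majority vote of its k most
    similar training compounds (simple nearest-neighbour scheme)."""
    ranked = sorted(training_data,
                    key=lambda item: jaccard(query, item[0]),
                    reverse=True)
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical training set: (structural features, recorded outcome) pairs.
training_data = [
    ({"ring", "chlorine", "amine"},  "toxic"),
    ({"ring", "chlorine"},           "toxic"),
    ({"hydroxyl", "chain"},          "safe"),
    ({"hydroxyl", "chain", "ester"}, "safe"),
    ({"ring", "amine"},              "toxic"),
]

# A new compound is labelled like its closest known relatives.
print(predict({"ring", "chlorine", "ester"}, training_data))  # → toxic
```

A real system would use thousands of structural features and far richer statistics, but the principle is the same: compounds that resemble known toxic compounds are flagged without a new animal test.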

That figure is comparable to what animal testing itself usually achieves. So, for these bodily risks and the doses they imply, the algorithm could replace direct animal testing.
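The 87% figure is simply the share of predictions that match the recorded outcome. A sketch of that bookkeeping, with invented labels rather than the study's data:

```python
# Accuracy = fraction of predictions matching the recorded outcomes.
# Both lists are invented for illustration.
predicted = ["toxic", "safe", "toxic", "safe", "toxic", "safe", "safe", "toxic"]
observed  = ["toxic", "safe", "toxic", "toxic", "toxic", "safe", "safe", "toxic"]

matches = sum(p == o for p, o in zip(predicted, observed))
accuracy = matches / len(observed)  # 7 of 8 predictions match
print(accuracy)
```

The same calculation applied to animal tests themselves (repeating a test and checking how often results agree) is what makes the comparison between the algorithm and live testing meaningful.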

Beyond reducing the number of animals used in research, it could also allow the evaluation of products for which testing is normally more limited, such as substances too rare or too expensive to produce for use in large studies.

It will still be several years before legislators begin to accept AI results as evidence of the safety of a product.

Moreover, the algorithm can only assess relatively simple forms of toxicity; it cannot establish whether a product is carcinogenic or may cause infertility.

Shakes Gilles

Editor of The Talking Democrat. He enjoys bike riding, kayaking and playing soccer. On a slow weekend, you'll find him with a book by the lake.