Researchers study why neural networks are efficient in their predictions

Artificial intelligence, machine learning and neural networks are terms that appear ever more often in daily life. Face recognition, object detection, and the classification and segmentation of people are common tasks for machine learning algorithms, which are now in widespread use.

Underlying all of these tasks is machine learning: computers capture the essential properties, or key characteristics, of processes in which the relationships between objects are very complex. Learning proceeds from good and bad examples, with no prior knowledge of the objects involved or of the underlying laws of physics.

However, because it is a blind optimization process, machine learning works like a black box: the computer makes decisions it regards as valid, but it is not understood why one decision is taken rather than another, so the internal mechanism of the method remains unclear. As a result, machine learning predictions in critical situations are risky and by no means reliable, because the results can be deceptive.

In this study, the research group made up of Vladimir Baulin, from the URV’s Department of Chemical Engineering, Marco Werner (Leibniz Institute of Polymer Research in Dresden) and Yachong Guo (Nanjing University, China) tested the predictions of a neural network to check whether they coincide with actual results. To this end, they chose a well-defined practical example: the neural network had to design a polymer molecule that crosses a lipid membrane in as short a time as possible. The lipid membrane is a natural barrier that protects cells from damage and external components. To monitor the neural network’s predictions, the researchers developed a novel numerical method based on exhaustive enumeration, which evaluates every possible polymer composition by programming high-performance graphics cards directly for parallel calculation.
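To make the idea of exhaustive enumeration concrete, the sketch below shows how every composition of a two-monomer copolymer can be identified with an integer whose bits select the monomer type at each position; a chain of N monomers of two possible types gives 2^N candidate sequences. The chain length and the H/P monomer labels are illustrative assumptions, not values taken from the study.

    // Hedged sketch: encode each candidate copolymer composition as an integer.
    // A chain of kChainLength monomers, each of two types, gives 2^kChainLength
    // possible sequences, so sequence index i <-> the bit pattern of i.
    #include <cstdio>

    constexpr int kChainLength = 20;  // assumed chain length, for illustration only

    // Decode a sequence index into a monomer string: bit set -> 'H', bit clear -> 'P'.
    void decode(unsigned int index, char* out) {
        for (int i = 0; i < kChainLength; ++i) {
            out[i] = ((index >> i) & 1u) ? 'H' : 'P';
        }
        out[kChainLength] = '\0';
    }

    int main() {
        char buf[kChainLength + 1];
        decode(5u, buf);  // index 5 has bits 0 and 2 set -> H at positions 0 and 2
        printf("sequence 5 of %lld candidates: %s\n", 1LL << kChainLength, buf);
        return 0;
    }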

“The traditional processor of a computer can contain a maximum of 12-24 cores for calculations, but graphics cards are designed to make parallel calculations of image and video pixels, and they have thousands of calculation cores optimized for parallel computation,” explains Vladimir Baulin. This enormous computing power makes it possible to generate billions of polymer combinations in just seconds or minutes. In this way a map can be generated that contains all the possible combinations, and how the neural network chooses the correct result can then be monitored against it.
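The sketch below illustrates the kind of GPU parallelism described here, written as a CUDA kernel in which each thread scores one candidate composition independently. The scoring function is a toy placeholder, not the physical model used in the study (which evaluated translocation through a simulated lipid membrane), and the chain length is again an illustrative assumption.

    // Hedged sketch: one GPU thread per candidate copolymer sequence.
    #include <cstdio>
    #include <cuda_runtime.h>

    constexpr int kChainLength = 20;                          // assumed chain length
    constexpr long long kNumSequences = 1LL << kChainLength;  // all 2^N compositions

    // Toy score counting H/P alternations along the chain; a real evaluation
    // would estimate the membrane translocation time of the sequence.
    __device__ float toy_score(unsigned int seq) {
        int switches = 0;
        for (int i = 1; i < kChainLength; ++i) {
            switches += (((seq >> (i - 1)) & 1u) != ((seq >> i) & 1u));
        }
        return static_cast<float>(switches) / (kChainLength - 1);
    }

    // Each thread evaluates the sequence whose index equals its global thread id.
    __global__ void score_all(float* scores, long long n) {
        long long idx = blockIdx.x * (long long)blockDim.x + threadIdx.x;
        if (idx < n) {
            scores[idx] = toy_score(static_cast<unsigned int>(idx));
        }
    }

    int main() {
        float* d_scores = nullptr;
        cudaMalloc(&d_scores, kNumSequences * sizeof(float));

        const int threads = 256;
        const int blocks = static_cast<int>((kNumSequences + threads - 1) / threads);
        score_all<<<blocks, threads>>>(d_scores, kNumSequences);
        cudaDeviceSynchronize();

        // Copy back a few scores just to show the full map is available on the device.
        float h_scores[4];
        cudaMemcpy(h_scores, d_scores, sizeof(h_scores), cudaMemcpyDeviceToHost);
        printf("score of sequence 0: %f\n", h_scores[0]);

        cudaFree(d_scores);
        return 0;
    }

Because every thread works on a different index, all candidate compositions are scored independently, which is what makes it feasible to build the complete map of compositions against which the network’s choices are compared.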

“What is surprising is that such a simple, minimal neural network can find the composition of a molecule,” Baulin points out. “This is probably due to the fact that physical systems obey the laws of nature, which are intrinsically symmetrical and self-similar. This drastically reduces the number of possible parameter combinations, which are then captured by neural networks.”

Comparing the result of the neural network with the actual result therefore not only makes it possible to check the prediction, but also shows how the predictions evolve if the task is changed. And this, in turn, reveals how neural networks make decisions and how they ‘think.’

More information: Marco Werner et al., Neural network learns physical rules for copolymer translocation through amphiphilic barriers, npj Computational Materials (2020). DOI: 10.1038/s41524-020-0318-5
