CSIC researchers address the challenges of applying artificial intelligence to medicine

In 2022, the European Union approved for the first time a system that uses artificial intelligence (AI) to analyze chest X-rays without the intervention of radiologists. The tool automatically reports patients with no abnormalities and sends images flagged as questionable to specialists for review, reducing the workload of healthcare workers. AI, the branch of computing that designs algorithms to carry out tasks associated with human intelligence, has revolutionized the way we relate to technology and our environment, and its applications in health are increasingly promising, from the diagnosis of disease and the detection and identification of lesions to the development of medicines.

According to Lara Lloret and Míriam Cobo, researchers who apply AI to diagnosis through medical imaging, "despite some successful developments, there are still many problems to overcome before the implementation of machine learning and deep learning algorithms becomes a reality in everyday clinical practice." The scientists review the challenges AI must overcome to make that leap in Artificial Intelligence and Medicine (CSIC-Catarata), a new title in the 'What do we know?' collection. In it they discuss challenges related to medical data and the algorithms that use them, and raise cross-cutting issues such as security and ethics.

Sophisticated tools that require quality data

Deep learning, that is, algorithms based on multi-layered neural networks that learn from large amounts of data, has given an unprecedented boost to the fields of computer vision and natural language processing. It already has a direct impact on the biomedical field, because the analysis and interpretation of medical images is one of the basic pillars of diagnosis. Lloret and Cobo point out the paradox between this enormous potential and the limited availability of data that meets the criteria these computational tools require. "Information collected by healthcare workers is often unstructured and not oriented toward later analysis, but of purely clinical interest, so today we find a large number of small datasets from many hospital institutions that are not connected to each other and with which algorithms can barely be trained," they say. Another problem that remains to be resolved is the privacy of personal data. It is possible to work with anonymized data, but some information, such as genetic data, is essentially impossible to anonymize without completely or partially losing its usefulness. The researchers explain that techniques for guaranteeing privacy in AI algorithms are still under development.
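
To make the basic idea of protecting identifiers concrete, the sketch below (not from the book; the salt, record fields, and `pseudonymize` helper are illustrative assumptions) replaces a direct identifier with a salted hash so records can be linked across institutions without exposing names. As the authors note for genetic data, this kind of masking does not help when the data themselves are identifying.

```python
# Minimal pseudonymization sketch (illustrative; not a method from the book).
# A direct identifier is replaced by a salted hash so the same patient maps
# to the same token across hospitals, while the name is not recoverable.
import hashlib

SALT = b"per-project-secret"  # hypothetical shared secret, kept off the record

def pseudonymize(patient_id: str) -> str:
    """Deterministic pseudonym: same input -> same token."""
    return hashlib.sha256(SALT + patient_id.encode("utf-8")).hexdigest()[:16]

record = {"patient_id": "DOE, JANE 1975-03-02", "finding": "pulmonary nodule, 8 mm"}
record["patient_id"] = pseudonymize(record["patient_id"])
print(record)  # the finding is kept; the identifier is now an opaque token
```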

Algorithms do nothing more than learn from data and, if the data are biased, this will have a direct impact on the decisions they make. "There are statistical and social biases," explain the researchers from the Institute of Physics of Cantabria (IFCA). The first occurs when the distribution of the data fed into the learning system does not reflect the actual distribution of the population. For example, training an AI system to detect lung cancer on images of patients between the ages of 20 and 50 would cause the algorithm to perform poorly on patients older than that range.
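
A small synthetic experiment can make this concrete. The sketch below (an illustration of the age example above, not code from the book; the cohorts, prevalences, and marker are invented) trains a classifier on a young cohort where the disease is rare and shows how its accuracy drops sharply on an older cohort where the same disease is common.

```python
# Synthetic sketch of statistical bias (all cohorts and numbers invented).
# A classifier is trained where the disease is rare (young patients) and
# then applied where it is common (older patients).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def cohort(n, prevalence):
    """One diagnostic marker; sick patients score higher on average."""
    sick = (rng.random(n) < prevalence).astype(int)
    marker = rng.normal(loc=1.5 * sick, scale=1.0)  # same marker in both cohorts
    return marker.reshape(-1, 1), sick

# Training data drawn only from patients aged 20-50 (prevalence ~10%).
X_young, y_young = cohort(10_000, prevalence=0.10)
model = LogisticRegression().fit(X_young, y_young)

# Deployment on patients over 65, where prevalence is ~80%.
X_old, y_old = cohort(10_000, prevalence=0.80)
print("accuracy, ages 20-50:", round(model.score(X_young, y_young), 2))
print("accuracy, ages 65+ :", round(model.score(X_old, y_old), 2))  # much lower
```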

Social bias refers to the inequality that can arise when an algorithm produces poor results for certain population groups. Many of the areas where AI-based systems have shown the best results are also those where problems of discrimination against vulnerable groups have been reported. This is the case of a system trained to detect skin lesions in which most of the images used came from white patients, or of an app for predicting heart attacks trained primarily on men, when it is known that the symptoms of these conditions differ markedly between men and women.
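
The heart-attack example can be simulated in the same spirit. In the sketch below (all numbers and the symptom variable are illustrative assumptions, not data from the book), the condition is made to present through chest pain in men only; a model trained on a 90%-male cohort then shows much lower recall for women.

```python
# Synthetic sketch of social bias (all numbers and variables invented).
# The condition is simulated to present differently by sex, so a model
# trained on a mostly-male cohort misses most cases in women.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(7)

def patients(n, frac_male):
    male = (rng.random(n) < frac_male).astype(int)
    sick = (rng.random(n) < 0.3).astype(int)
    # Chest pain flags the condition in men; in this simulation women
    # present differently, so the recorded symptom carries no signal for them.
    chest_pain = rng.normal(2.0 * sick * male, 1.0)
    return chest_pain.reshape(-1, 1), sick, male

X, y, male = patients(20_000, frac_male=0.9)        # male-dominated training set
model = LogisticRegression().fit(X, y)

X_t, y_t, male_t = patients(20_000, frac_male=0.5)  # balanced population
pred = model.predict(X_t)
for flag, name in [(1, "men"), (0, "women")]:
    group = male_t == flag
    print(f"recall for {name}: {recall_score(y_t[group], pred[group]):.2f}")
```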

The algorithm's black box

For artificial intelligence to aid decision-making, it is very important that people trust the system and understand how it works. This is where the concept of interpretability comes in, which refers to the process by which a person understands the decisions made by a model or algorithm, and which contrasts with so-called black-box models.

The authors emphasize that "models with high interpretability allow us to understand how they reach certain results and are preferable to models that require more experience and specialized knowledge to be understood." Moreover, the researchers make it clear that interpretability is essential for doing science: if we use a black-box model, we miss the details of what the algorithm sees and how it reaches its decisions, and, consequently, possible scientific findings. "Understanding the learning process of AI systems in the health sector can help medical professionals to better understand how the human body works and to make more informed decisions," they say.
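
As a rough illustration of the contrast the authors draw (an assumption of this article, not an example from the book), the sketch below fits a shallow decision tree, whose entire decision logic can be printed and read, to a public breast-cancer dataset; a deep neural network trained on the same data would offer no comparable view of its reasoning.

```python
# Interpretability sketch (illustrative; not an example from the book).
# A shallow decision tree's complete decision logic can be printed and
# audited, unlike the internals of a deep black-box model.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# Every rule the model uses, in a few human-readable lines.
print(export_text(tree, feature_names=list(data.feature_names)))
```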

Safety and ethics in AI applied to medicine

Working with this technology also carries security risks. In fact, cyber attacks are well documented, such as the one suffered in 2017 by the UK's National Health Service, which forced the cancellation of hundreds of scheduled appointments and operations. Likewise, this technology has a strong ethical component, which cuts across all areas where AI is applied and which, like security, requires an adequate regulatory and legal framework, because its use directly affects our daily lives.

The authors also discuss responsible artificial intelligence, which they define as "a system that makes it possible to determine whether a decision has been made in accordance with certain norms and to know who is responsible if they have not been complied with." If an AI algorithm makes a wrong medical decision or leads to an undesired outcome, who is responsible: the creators of the system, the doctors who use it, or the patients who receive it? Cobo and Lloret note that these questions have not yet been satisfactorily resolved, but insist that answering them is essential for establishing a legal and regulatory framework for the use of artificial intelligence.

Another challenge facing AI is related to fear of the unknown. "The intelligence of AI systems is far from comparable to that of humans, but it is precisely this association, the fictitious humanization of learning algorithms, that seems to generate unease," the authors argue. "Perhaps we are overreacting to the implications of using this technology. If we look at AI systems as what they are today and what they will be for a long time to come, that is, computer programs trained on datasets to perform certain tasks, much of the mysticism is lost."

In the coming years we will begin to see more and more AI-based diagnostic systems approved for use in real clinical environments, and the first will likely be related to medical imaging. For this reason, Lloret and Cobo stress that "it is important to address the challenges mentioned above and to develop the tools and structures necessary to harness this technology in a safe and effective way".

Artificial Intelligence and Medicine is number 145 in the popular science collection 'What do we know?' (CSIC-Catarata). To request an interview with the authors or further information, contact: comunicacion@csic.es (91 568 14 77).

About the Authors

Míriam Cobo is an FPU pre-doctoral researcher. She is developing her thesis on optimizing medical imaging diagnostic systems with artificial intelligence techniques at the Institute of Physics of Cantabria. She holds a degree in Physics from the University of Cantabria and a Master's in Data Science from the University of Cantabria and the Menéndez Pelayo International University.

Lara Lloret is a tenured scientist at the Institute of Physics of Cantabria. She holds a PhD in Particle Physics from the University of Oviedo, with a thesis on the search for the Higgs boson, and currently works on deep learning oriented primarily toward diagnosis through medical imaging. She also coordinates the Master's in Data Science of the University of Cantabria and the Menéndez Pelayo International University, run in collaboration with the CSIC.

CSIC Scientific Culture
