Spotlight: An AI Method to Determine Asymptomatic COVID-19 Patients

By Ruchi Jhonsa, Ph.D.

It is the year 1918. A wave of infection is spreading across the world, and with no clue about the infectious agent or its mode of spread, control efforts worldwide are limited to non-pharmaceutical interventions such as isolation, quarantine, disinfectants, and social distancing. By the time the pandemic ends, some 500 million people have been infected and 50 million have died. Fast forward to 2020: the coronavirus pandemic is creating similar havoc today. Around 45 million people are infected, and more than a million are dead worldwide. These numbers are still far smaller than those of the 1918 pandemic, thanks in part to the efforts of modern science and technology.

 

Role of Artificial Intelligence

While pharmaceutical interventions are playing a big part in controlling the infection, technology is also doing its bit. From chronic diseases and cancer to coronavirus risk assessment, there are endless opportunities where technology can help. One technology that is gaining momentum is Artificial Intelligence (AI). AI offers advantages over traditional analytics and clinical decision-making because its algorithms, trained on large volumes of human data, can detect patterns too subtle for conventional methods.

An interesting way in which AI is helping in the current pandemic is by identifying asymptomatic coronavirus patients, who exhibit no physical symptoms of the disease. This population poses the greatest risk, as they can spread the virus without knowing it. A team of scientists from MIT has devised a way of identifying these patients using the sound of their forced cough. According to the team, people who are asymptomatic may differ from healthy individuals in the way they cough. These differences are not decipherable to the human ear, but it turns out they can be picked up by artificial intelligence.

The model can help potential COVID-19 patients, with or without symptoms, determine whether they have the infection. To increase its reach, the developers plan to incorporate the model into a user-friendly application, which, upon FDA approval, could be used on a large scale as a non-invasive prescreening tool. The test is not a confirmation of COVID-19 infection but would alert the user to a possible infection, which could then be confirmed with a formal test at a clinic. This research was published on September 29 in the IEEE Open Journal of Engineering in Medicine and Biology and was done in collaboration with Takeda Pharmaceuticals.

 

Behind the Scenes

The idea of developing a model that could detect COVID-19 infection from a cough actually grew out of an earlier study in which the group trained algorithms to accurately detect Alzheimer’s disease.

Alzheimer’s disease is a condition associated not only with memory loss but also with weakened vocal cords, owing to disease-associated neuromuscular degeneration. The model developed by the group detects Alzheimer’s by combining three different models, trained respectively to detect changes in vocal cord strength, emotional state, and lung and respiratory performance. Changes in vocal cord strength were detected by training a ResNet50 neural network on more than 1,000 hours of audiobook speech to distinguish the word “them” from similar words such as “the” and “then.”

The idea is that the sound “mmmm” can indicate whether a person’s vocal cords are strong or weak. Changes in emotional state were detected by a second neural network trained on a large dataset of actors conveying various emotions, such as neutrality, calm, happiness, and sadness; the rationale is that people with neurological decline express frustration more often than happiness or calm. Finally, changes in lung and respiratory performance were detected by a third neural network trained on audio recordings of coughs from many different people.
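The three-branch design described above can be illustrated with a rough sketch. This is not the authors' actual code: the branch functions below are hypothetical stand-ins for the trained networks (each would in practice be a deep model such as a ResNet50 over audio spectrograms), and the fusion step is shown as a simple logistic layer over the concatenated branch outputs, a common way to combine model branches.

```python
import numpy as np

def vocal_cord_model(audio: np.ndarray) -> np.ndarray:
    # Stand-in for the network trained to distinguish "them"/"the"/"then"
    return audio[:8]

def sentiment_model(audio: np.ndarray) -> np.ndarray:
    # Stand-in for the network trained on actors' emotional speech
    return audio[8:16]

def lung_model(audio: np.ndarray) -> np.ndarray:
    # Stand-in for the network trained on cough recordings
    return audio[16:24]

def combined_score(audio: np.ndarray, weights: np.ndarray, bias: float) -> float:
    # Concatenate the three biomarker embeddings and apply a logistic
    # classifier on top, producing a single probability-like score.
    features = np.concatenate([vocal_cord_model(audio),
                               sentiment_model(audio),
                               lung_model(audio)])
    return float(1.0 / (1.0 + np.exp(-(features @ weights + bias))))

rng = np.random.default_rng(0)
audio = rng.standard_normal(24)       # placeholder for extracted audio features
score = combined_score(audio, rng.standard_normal(24), 0.0)
print(0.0 <= score <= 1.0)            # True: sigmoid output is always in (0, 1)
```

The key design choice this sketch captures is that each biomarker is learned by a separate specialist model, and only their outputs are fused, so each branch can be trained on its own dataset.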

To this framework, the team fed audio recordings of various participants, including those with Alzheimer’s, and found that the model outperformed existing models at identifying Alzheimer’s patients. This also validated the biomarkers on which the platform was built, showing them to be effective markers of Alzheimer’s.

 

Birth of a New Idea

The researchers were working on this project when the pandemic struck. While for many it was bad news, the group of Brian Subirana, corresponding author of the publication, saw an opportunity to expand the scope of their newly developed AI framework. “The sounds of talking and coughing are both influenced by the vocal cords and surrounding organs. This means that when you talk, part of your talking is like coughing and vice versa. It also means that things we easily derive from fluent speech, AI can pick up simply from coughs, including things like the person’s gender, mother tongue, or even emotional state. There’s, in fact, sentiment embedded in how you cough,” Subirana says. “So we thought, why don’t we try these Alzheimer’s biomarkers [to see if they’re relevant] for Covid.”

The team initiated the project by collecting cough recordings from many people, including those affected by COVID-19. To make sample collection easy, the group set up a website where participants were asked to fill in details about their symptoms, state whether they had COVID-19, and attach an audio clip of their forced cough recorded through a cellphone or other web-enabled device.

To date, the researchers have collected more than 200,000 forced cough recordings, which, according to the corresponding author, makes it the largest research cough dataset. Around 2,500 recordings came from patients with confirmed COVID-19 infection, some symptomatic and some asymptomatic.

The team then combined the 2,500 recordings from COVID-19 patients with 2,500 recordings picked at random from the full set of 200,000. Of these 5,000 recordings, 4,000 were used to train the AI and the remaining 1,000 to test it. Surprisingly, the team discovered that the AI framework originally meant for detecting signs of Alzheimer’s could also differentiate the cough recordings of COVID-19 patients from those of non-COVID-19 patients. It picked up patterns across four biomarkers: vocal cord strength, sentiment, lung and respiratory performance, and muscular degradation. The model correctly identified 98.5 percent of coughs from people with confirmed COVID-19 and accurately detected all asymptomatic patients.
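The dataset construction and split described above can be sketched in a few lines. This is a simplified, hypothetical illustration, not the authors' code: the recordings are represented here as placeholder tuples, and the control pool is drawn as a plain random sample of the full dataset.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Placeholder recordings: 2,500 confirmed COVID-19 coughs, plus the
# full pool of 200,000 collected recordings to sample controls from.
covid = [("covid", i) for i in range(2500)]
all_recordings = [("sample", i) for i in range(200_000)]
controls = random.sample(all_recordings, 2500)

# Pool the 5,000 recordings, shuffle, and split 4,000 / 1,000
# for training and testing, as described in the article.
pool = covid + controls
random.shuffle(pool)
train, test = pool[:4000], pool[4000:]

print(len(train), len(test))  # 4000 1000
```

Holding out a fifth of the pooled data for testing is what lets the reported 98.5 percent detection figure be measured on recordings the model never saw during training.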

 

Potential of the AI Model 

Screening asymptomatic COVID-19 patients is of the utmost importance for controlling the pandemic, and models like this could be employed to identify them before they infect others. The team is working with a company to integrate the model into an app that anyone can use from the comfort of their home. Meanwhile, they are also making the model more robust by training it on data collected from hospitals around the world. As they note in their paper, “Pandemics could be a thing of the past if pre-screening tools are always on in the background and constantly improved.”


References
  1. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9208795
  2. https://news.mit.edu/2020/covid-19-cough-cellphone-detection-1029

©www.geneonline.news. All rights reserved. Contact: service@geneonlineasia.com