
[Article series] The experts behind Luxembourg's COVID-19 fight

Published on Friday 19 June 2020

Vladimir Despotović, Postdoctoral Researcher at the Department of Computer Science at the University of Luxembourg, is co-principal investigator of the COVID-19 research project: “Covid-19 Detection by Cough and Voice Analysis (CDCVA)”.

In this project funded by the Luxembourg National Research Fund (FNR), Dr. Vladimir Despotović will combine audio technologies with artificial intelligence to analyse the voice, breathing and coughs of healthy and unhealthy participants, in order to identify respiratory symptoms related to COVID-19.

1) Could you tell us more about your background and expertise?

I joined the University of Luxembourg in September 2019 as a Postdoctoral Researcher at the Department of Computer Science, Knowledge Discovery and Mining (MINE) research group. Previously I held academic positions at the University of Belgrade, Serbia, and Paderborn University, Germany.

My research lies at the intersection of machine learning and speech/audio signal processing, with a special focus on eHealth applications. I am working on the development of voice-based medical assistive technologies for people with speech and motor disabilities due to neurological disorders, such as multiple sclerosis, cerebral palsy, stroke or Parkinson’s disease. Recognising and understanding severely disordered speech that deviates from standard pronunciation poses many challenges, and requires learning speech representations in an unsupervised way, without a transcription of the spoken data or a pronunciation lexicon. The applications are broad, ranging from voice-controlled home automation that helps patients perform daily activities, to entertainment applications such as voice-driven computer games.

Currently, I act as a Management Committee member for Luxembourg for the COST project Multi3Generation: Multi-task, Multilingual, Multi-modal Language Generation, which deals with the development of language generation models used in human-computer interaction, based on multi-modal processing and reasoning (e.g. using textual, auditory and visual inputs, or sensory data streams).

2) How is your expertise relevant in the current COVID context?

COVID-19 is a respiratory condition, affecting breathing and voice, and causing, among other symptoms, dry cough, sore throat, an excessively breathy voice and characteristic breathing patterns. These are all conditions that can make patients’ voices distinctive, creating recognisable voice signatures.

By applying state-of-the-art techniques to analyse the voice beyond what the human ear can hear, our project aims to discover the vocal signatures that recur in the voices and coughs of COVID-19 patients, and to provide additional low-cost, easy-to-use tools that do not require an in-person visit to a hospital or laboratory, avoiding potential exposure to SARS-CoV-2 for both the individuals being tested and the medical staff.
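The article does not specify which acoustic features the project uses, but the idea of extracting a "vocal signature" from a recording can be illustrated with a minimal sketch. The function below, a hypothetical simplification, frames a mono waveform, takes the magnitude spectrum of each frame, and summarises it as spectral centroid and bandwidth statistics; a real system would use far richer features (e.g. mel-frequency cepstral coefficients) as input to a machine learning classifier:

```python
import numpy as np

def spectral_signature(signal, sr, frame_len=1024, hop=512):
    """Toy 'vocal signature': mean/std of spectral centroid and
    bandwidth over short overlapping frames of a mono waveform."""
    window = np.hanning(frame_len)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    centroids, bandwidths = [], []
    for i in range(n_frames):
        frame = signal[i * hop: i * hop + frame_len] * window
        mag = np.abs(np.fft.rfft(frame))
        if mag.sum() == 0:          # skip silent frames
            continue
        # Centroid: magnitude-weighted mean frequency of the frame
        centroid = (freqs * mag).sum() / mag.sum()
        # Bandwidth: magnitude-weighted spread around the centroid
        bandwidth = np.sqrt(((freqs - centroid) ** 2 * mag).sum() / mag.sum())
        centroids.append(centroid)
        bandwidths.append(bandwidth)
    return np.array([np.mean(centroids), np.std(centroids),
                     np.mean(bandwidths), np.std(bandwidths)])

# Example: a pure 440 Hz tone yields a centroid close to 440 Hz
sr = 16000
t = np.arange(sr) / sr
features = spectral_signature(np.sin(2 * np.pi * 440 * t), sr)
```

Feature vectors like this, computed per recording, are what a classifier would compare between healthy and COVID-19-positive speakers.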

While the proposed solution is not intended to replace standard medical tests, it can monitor people on a very large scale in a short period of time. It might be used as a recommender system to select the candidates who will have priority for standard laboratory testing when the number of medical tests is limited and they need to be used sparingly.

3) What is your specific role in ongoing COVID projects?

I am a Co-Principal Investigator of the project Covid-19 Detection by Cough and Voice Analysis (CDCVA). The project is realised in cooperation with the Luxembourg Institute of Science and Technology (LIST), and with the support of the Luxembourg Institute of Health (LIH).

The objective of the project is two-fold: (i) we will launch a web-based platform for the collection of voice and cough recordings of COVID-19 patients and healthy individuals; and (ii) we will develop AI-driven voice technologies to identify respiratory symptoms related to COVID-19.

The results will be cross-validated with the voice data collected within the Predi-COVID study, co-funded by the FNR and the André Losch Foundation, which, within one of its research axes, aims to identify vocal biomarkers of frequently observed symptoms in people with COVID-19 that could be used for the remote monitoring of patients at home.

4) Could you tell us more about your collaborators?

We are a team of speech, computer vision and data scientists and engineers who will work on profiling patients from their voices, a research field that intersects Signal Processing with Artificial Intelligence and Machine Learning. 

Dr Muhannad Ismael, the Principal Investigator, works as a Research Associate at LIST. He has a background in computer vision, mixed reality and machine learning. Dr Roderick McCall is a Lead Researcher at LIST, Chair of the working group on social and ethical issues in entertainment computing at the International Federation for Information Processing (IFIP), and PI and/or coordinator of several EU (H2020), FNR (Luxembourg) and BMBF (Germany) funded projects. He has a background in augmented and mixed realities, human-computer interaction, serious games and behaviour change technologies. Maël Cornil is a Research Engineer at LIST, with a background in mixed reality and data mining. The project is supported by Dr Guy Fagherazzi, Research Leader of the "Digital Epidemiology Hub" at the Department of Population Health, LIH. He has strong expertise in the analysis of large population-based studies making use of digital technologies and Artificial Intelligence methods.