Classification of multimodal lung disease data based on late fusion of modalities


With the development of imaging technology, high-quality chest X-rays have become available for diagnosing lung diseases with the help of radiologists. However, the diagnostic process is time-consuming and depends on the availability of specialists at a medical institution. At the same time, patient information may include not only chest X-rays of varying quality, but also the results of medical tests, doctors' notes and prescriptions, information about medication intake, and more. In this study, we propose a model for classifying pulmonary diseases based on multimodal data comprising patients' clinical test results and radiographic images. When preparing the data, we used various methods of generating artificial samples for both images and tabular laboratory data. We propose a method for establishing correspondence between modalities for the generated samples. The proposed multimodal model has a late-fusion architecture. We conducted experiments on datasets with one modality and with two modalities. Our model achieved an accuracy 5.5% higher than single-modality models (91.3% vs. 86.11% on a dataset of 1,156 patients).
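The late-fusion scheme mentioned above can be sketched in a minimal, dependency-free form: each modality-specific classifier (one for the X-ray image, one for the tabular clinical data) produces class probabilities, and the decisions are combined only at the final stage. The function name, the averaging rule, and the 50/50 weighting below are illustrative assumptions, not the authors' exact method:

```python
def late_fusion(probs_image, probs_tabular, w_image=0.5):
    """Fuse per-class probabilities from two modality-specific
    classifiers by weighted averaging (one common late-fusion variant).

    probs_image / probs_tabular: lists of class probabilities from
    the image branch and the tabular branch, same length and order.
    """
    fused = [w_image * p_i + (1.0 - w_image) * p_t
             for p_i, p_t in zip(probs_image, probs_tabular)]
    predicted_class = max(range(len(fused)), key=fused.__getitem__)
    return predicted_class, fused

# Toy example with three hypothetical disease classes (made-up numbers):
p_img = [0.70, 0.20, 0.10]   # image-branch output
p_tab = [0.60, 0.30, 0.10]   # tabular-branch output
label, fused = late_fusion(p_img, p_tab)
# label == 0: class 0 has the largest fused probability (~0.65)
```

Because fusion happens after each branch has produced its own prediction, the branches can be trained and augmented independently, which matches the per-modality sample-generation step described in the abstract.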


Keywords: multimodal data, lung diseases, deep learning, late fusion

Short address: https://sciup.org/147243210

IDR: 147243210   |   DOI: 10.14529/cmse240105

Scientific article