MSRNet: Multiclass Skin Lesion Recognition Using Additional Residual Block Based Fine-Tuned Deep Models Information Fusion and Best Feature Selection
Bibi, Sobia; Khan, Muhammad Attique; Shah, Jamal Hussain; Damaševičius, Robertas; Alasiry, Areej; Marzougui, Mehrez; Alhaisoni, Majed; Masood, Anum
Peer reviewed, Journal article
Published version
Permanent link
https://hdl.handle.net/11250/3122872
Publication date
2023
Abstract
Cancer is one of the leading causes of illness and chronic disease worldwide. Skin cancer, particularly melanoma, is becoming a severe health problem due to its rising prevalence, and the considerable death rate linked with melanoma makes early detection essential for prompt and successful treatment. Lesion detection and classification are challenging because of artifacts such as hair and noise, irregular lesion shape and color, irrelevant features, and varied textures. In this work, we propose a deep-learning architecture for multiclass skin cancer classification and melanoma detection. The proposed architecture consists of four core steps: image preprocessing, feature extraction and fusion, feature selection, and classification. A novel contrast enhancement technique is proposed based on the image luminance information. After that, two pre-trained deep models, DarkNet-53 and DenseNet-201, are modified by adding a residual block at the end and trained through transfer learning; during the learning process, a genetic algorithm is applied to select the hyperparameters. The resulting features are fused using a two-step approach named serial-harmonic mean. This step increases classification accuracy, but some irrelevant information is also retained. Therefore, an algorithm called Marine Predators Algorithm (MPA) controlled Rényi entropy is developed to select the best features, which are finally classified using machine learning classifiers. Two datasets, ISIC2018 and ISIC2019, were selected for the experimental process, on which maximum accuracies of 85.4% and 98.80%, respectively, were obtained. To demonstrate the effectiveness of the proposed method, a detailed comparison is conducted with several recent techniques, showing that the proposed framework outperforms them.
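
A minimal, illustrative sketch (not the authors' code) of two steps named in the abstract: serial feature fusion followed by a harmonic-mean weighting, and a Rényi-entropy score that a selector such as MPA could use to rank features. The abstract does not define the exact formulations, so every function, parameter, and dimension below is an assumption.

    # Illustrative sketch only; all weighting and selection choices are assumed.
    import numpy as np

    def serial_harmonic_fusion(f1: np.ndarray, f2: np.ndarray) -> np.ndarray:
        """Concatenate two deep feature vectors (serial fusion), then rescale each
        half by the harmonic mean of the two vectors' mean magnitudes (assumed scheme)."""
        m1 = np.abs(f1).mean() + 1e-12
        m2 = np.abs(f2).mean() + 1e-12
        h = 2.0 * m1 * m2 / (m1 + m2)  # harmonic mean of the two mean magnitudes
        return np.concatenate([f1 * (h / m1), f2 * (h / m2)])

    def renyi_entropy(feature_column: np.ndarray, alpha: float = 2.0, bins: int = 32) -> float:
        """Rényi entropy of one feature across samples, estimated from a histogram."""
        counts, _ = np.histogram(feature_column, bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]
        return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

    # Toy usage: fuse two hypothetical deep feature vectors per sample, then rank the
    # fused features of a small batch by Rényi entropy (one possible selection criterion).
    rng = np.random.default_rng(0)
    batch = np.stack([
        serial_harmonic_fusion(rng.normal(size=1024), rng.normal(size=1920))  # assumed feature sizes
        for _ in range(64)
    ])
    scores = np.array([renyi_entropy(batch[:, j]) for j in range(batch.shape[1])])
    top_idx = np.argsort(scores)[::-1][:512]  # keep the 512 highest-scoring features (assumed budget)
    selected = batch[:, top_idx]
    print(selected.shape)  # (64, 512)

In the paper, the ranking would be driven by the MPA search rather than a fixed top-k cut; the entropy score here only illustrates the criterion the selector would optimize.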