Classification of skin diseases has become vital in modern healthcare diagnosis systems, since humans are affected by many types of skin disorders. Several transfer learning models have been applied to identify and classify skin disorders. Among them, the Segmentation and Classification Network (SegClassNet) uses dilated convolutions and dropout layers in an encoder-decoder network to segment skin lesion images, while a ResNet18-based Deep Convolutional Neural Network (DCNN) classifies the skin disease images. However, this DCNN relies on classical loss functions, which restrain the network from learning discriminative features of skin lesions. Hence, this article proposes a novel model called Fine-tuned SegClassNet (F-SegClassNet), obtained by adjusting the ResNet18 layers with a combined triplet and group loss. First, a modified SegNet segments the augmented skin lesion images, with a dropout layer used to avoid overfitting. Then, the ResNet18-based DCNN learns an embedding of the segmented images in Euclidean space, and distances between corresponding segmented images are computed in that space so that the combined triplet and group loss function can learn discriminative features of the skin disease images. The segmented input images are then classified using these distances. Finally, the experimental results demonstrate that the proposed model attains an average accuracy of 93.37% on the HAM dataset, outperforming the other existing models.
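The abstract's combined loss can be illustrated with a minimal sketch. The triplet term below follows the standard margin formulation over Euclidean distances in the embedding space; since the abstract does not give the exact form of the group loss, it is approximated here by a center-loss-style compactness term (mean squared distance of each embedding to its class centroid), and the weight `alpha` is a hypothetical parameter, not taken from the paper.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Standard margin-based triplet loss on squared Euclidean distances:
    # pull the anchor toward the positive and push it away from the negative.
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return max(d_ap - d_an + margin, 0.0)

def group_loss(embeddings, labels):
    # Compactness term used here as a stand-in for the paper's group loss:
    # mean squared distance of each embedding to its class centroid.
    labels = np.asarray(labels)
    total = 0.0
    for c in np.unique(labels):
        members = embeddings[labels == c]
        centroid = members.mean(axis=0)
        total += np.sum((members - centroid) ** 2)
    return total / len(embeddings)

def combined_loss(embeddings, labels, triplets, alpha=0.5, margin=0.2):
    # Weighted sum of the two terms; alpha is a hypothetical trade-off weight.
    t = sum(triplet_loss(embeddings[a], embeddings[p], embeddings[n], margin)
            for a, p, n in triplets)
    return t / max(len(triplets), 1) + alpha * group_loss(embeddings, labels)

# Toy usage: four 2-D embeddings, two per class, and one (anchor, pos, neg) triplet.
emb = np.array([[0.0, 0.0], [0.0, 0.0], [2.0, 2.0], [2.0, 2.0]])
loss = combined_loss(emb, labels=[0, 0, 1, 1], triplets=[(0, 1, 2)])
```

In training, the embeddings would come from the ResNet18 head rather than being fixed arrays, and both terms would be minimized jointly by backpropagation.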
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.