Enhanced Brain Tumour Classification in 3D MRI Images Using Vision Transformers


Prathipati. Silpa Chaitanya
Susanta Kumar Satpathy

Abstract

Accurate classification of brain tumours is essential for optimizing treatment strategies and improving patient outcomes. Traditional approaches that analyse 3D MRI images with convolutional neural networks (CNNs) often fail to capture the full picture, lacking global context or demanding substantial computational resources. In this study, we propose a novel technique that uses Vision Transformers (ViTs) to classify brain tumours into multiple classes from 3D MRI data. Our method divides each MRI volume into small patches that are embedded and processed by a transformer model. The attention mechanisms of ViTs are used to capture both local and global information, helping to overcome CNN limitations in handling complex tumour manifestations and large volumes of data. We apply positional encodings to preserve spatial structure and stacked transformer encoder layers for richer feature extraction. Classifying from the [CLS] token representation yields improved accuracy and robustness in tumour typing. We present a ViT-based model that outperforms conventional CNN methods and other state-of-the-art approaches, enabling more efficient and reliable automated brain tumour detection. This research illustrates how vision transformers can advance medical imaging, offering an alternative to conventional deep learning techniques for classifying brain tumours.
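The pipeline the abstract describes (splitting the 3D volume into patches, adding positional encodings, passing the sequence through stacked transformer encoder layers, and classifying from the [CLS] token) can be sketched as below. This is a minimal illustrative implementation in PyTorch, not the authors' exact architecture; the class name, embedding dimension, patch size, and number of tumour classes are all assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class Simple3DViT(nn.Module):
    """Hypothetical minimal 3D Vision Transformer sketch (not the paper's model)."""
    def __init__(self, volume=32, patch=8, dim=64, depth=2, heads=4, classes=4):
        super().__init__()
        n_patches = (volume // patch) ** 3
        # Patch embedding: carve the volume into non-overlapping cubes and
        # project each cube to a dim-sized vector with a strided 3D convolution.
        self.to_patches = nn.Conv3d(1, dim, kernel_size=patch, stride=patch)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))              # learnable [CLS] token
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))  # positional encoding
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, classes)                          # classification head

    def forward(self, x):                                  # x: (B, 1, D, H, W)
        p = self.to_patches(x).flatten(2).transpose(1, 2)  # (B, N, dim) patch sequence
        cls = self.cls.expand(x.size(0), -1, -1)
        z = torch.cat([cls, p], dim=1) + self.pos          # prepend [CLS], add positions
        z = self.encoder(z)                                # stacked encoder layers
        return self.head(z[:, 0])                          # classify from [CLS] output

model = Simple3DViT()
logits = model(torch.randn(2, 1, 32, 32, 32))  # two example 32x32x32 MRI volumes
```

In practice, patch size and depth would be tuned to the MRI resolution and dataset size; the [CLS] token's final representation is the only input to the classification head, as described in the abstract.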
