LHU-Net: a Lean Hybrid U-Net for Cost-efficient, High-performance Volumetric Segmentation
Yousef Sadegheih and 4 other authors
Abstract: The rise of Transformer architectures has advanced medical image segmentation, leading to hybrid models that combine Convolutional Neural Networks (CNNs) and Transformers. However, these models often suffer from excessive complexity and fail to effectively integrate spatial and channel features, both of which are crucial for precise segmentation. To address this, we propose LHU-Net, a Lean Hybrid U-Net for volumetric medical image segmentation. LHU-Net prioritizes spatial feature extraction before refining channel features, optimizing both efficiency and accuracy. Evaluated on four benchmark datasets (Synapse, Left Atrial, BraTS-Decathlon, and Lung-Decathlon), LHU-Net consistently outperforms existing models across diverse modalities (CT/MRI) and output configurations. It achieves state-of-the-art Dice scores while using four times fewer parameters and 20% fewer FLOPs than competing models, without the need for pre-training, additional data, or model ensembles. With an average of 11 million parameters, LHU-Net sets a new benchmark for computational efficiency and segmentation accuracy. Our implementation is available on GitHub: this https URL
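The abstract's core design principle is ordering: refine spatial features first, then channel features. A minimal NumPy sketch of that ordering on a 3D volume is below; the function names and the mean-plus-sigmoid gates are illustrative assumptions, not LHU-Net's actual learned attention modules.

```python
import numpy as np

def spatial_attention(x):
    # x: (C, D, H, W) volume; gate each voxel by its mean activation
    # across channels (sigmoid squashes the gate into (0, 1))
    m = x.mean(axis=0, keepdims=True)          # (1, D, H, W)
    w = 1.0 / (1.0 + np.exp(-m))
    return x * w

def channel_attention(x):
    # squeeze spatial dims to one descriptor per channel, then gate
    # each channel map by its sigmoid-activated descriptor
    d = x.mean(axis=(1, 2, 3))                 # (C,)
    g = 1.0 / (1.0 + np.exp(-d))
    return x * g[:, None, None, None]

def lean_block(x):
    # spatial refinement first, channel refinement second -- the
    # ordering the abstract describes; LHU-Net's real blocks combine
    # convolutions with learned attention instead of these fixed gates
    return channel_attention(spatial_attention(x))

x = np.random.rand(8, 4, 4, 4)                 # toy (C, D, H, W) volume
y = lean_block(x)
print(y.shape)                                 # (8, 4, 4, 4)
```

Both gates preserve the tensor shape, so the block can slot into a U-Net stage without changing the surrounding skip connections.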
Submission history
From: Afshin Bozorgpour
[v1] Sun, 7 Apr 2024 22:58:18 UTC (16,231 KB)
[v2] Wed, 11 Sep 2024 14:35:58 UTC (16,681 KB)
[v3] Wed, 16 Jul 2025 12:28:25 UTC (2,530 KB)