EdgeSAM: Transforming Machine Learning for High-Speed, Efficient Image Segmentation on Edge Devices

TL;DR:

  • EdgeSAM, an optimized variant of the Segment Anything Model (SAM), targets high-speed, efficient image segmentation on edge devices.
  • Developed by researchers at S-Lab, Nanyang Technological University, and the Shanghai Artificial Intelligence Laboratory.
  • It achieves real-time, interactive segmentation while maintaining accuracy on smartphones and other edge devices.
  • Utilizes prompt-aware knowledge distillation and a CNN-based backbone for superior performance.
  • Runs roughly 40 times faster than the original SAM and about 14 times faster than MobileSAM on edge devices.
  • Represents a significant leap in machine learning for image segmentation on resource-constrained edge devices.

Main AI News:

In the ever-evolving landscape of artificial intelligence, the Segment Anything Model (SAM) has been a game-changer, proficiently segmenting images for object detection and recognition across diverse computer vision tasks. However, powerful as it is, SAM has struggled to run efficiently on resource-constrained edge devices. The collaborative efforts of researchers from S-Lab, Nanyang Technological University, and the Shanghai Artificial Intelligence Laboratory have yielded a groundbreaking solution in the form of EdgeSAM.

EdgeSAM, a meticulously optimized variant of SAM, sets out to conquer the challenges posed by edge devices, such as smartphones, by ensuring real-time, interactive segmentation without compromising accuracy. This achievement stems from its adept utilization of knowledge distillation techniques and a tailored convolutional neural network (CNN) backbone that harmonizes perfectly with on-device AI accelerators.
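
To make the backbone distillation concrete, here is a minimal PyTorch sketch of encoder-level knowledge distillation: a frozen teacher encoder's features supervise a small CNN student through a pixel-wise alignment loss. The teacher stand-in, module names, and shapes are illustrative assumptions for this article, not EdgeSAM's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentCNN(nn.Module):
    """Toy CNN encoder standing in for an efficient, accelerator-friendly backbone."""
    def __init__(self, out_channels: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(128, out_channels, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def distill_step(teacher, student, proj, images, optimizer):
    """One distillation step: match student features to the frozen teacher's."""
    with torch.no_grad():
        t_feat = teacher(images)                 # teacher feature map (B, C, H, W)
    s_feat = proj(student(images))               # 1x1 conv aligns channel count
    s_feat = F.interpolate(s_feat, size=t_feat.shape[-2:],
                           mode="bilinear", align_corners=False)
    loss = F.mse_loss(s_feat, t_feat)            # pixel-wise feature alignment
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    teacher = nn.Conv2d(3, 256, kernel_size=8, stride=8)  # stand-in for a frozen ViT encoder
    student, proj = StudentCNN(), nn.Conv2d(256, 256, kernel_size=1)
    opt = torch.optim.AdamW(list(student.parameters()) + list(proj.parameters()), lr=1e-4)
    imgs = torch.randn(2, 3, 256, 256)
    print(distill_step(teacher, student, proj, imgs, opt))
```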

The heart of EdgeSAM’s success lies in its prompt-aware knowledge distillation approach, which trains the student to reproduce SAM’s output masks for given prompts. In addition, EdgeSAM introduces custom prompts designed specifically for the mask decoder. This innovation, coupled with a purely CNN-based backbone, propels EdgeSAM ahead of MobileSAM, delivering a remarkable boost in processing speed for real-time edge deployment.
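
The mask-alignment objective can be sketched as a loss between the mask logits the student and teacher produce for the same prompt. The dice-plus-BCE combination below is a common segmentation choice assumed here for illustration; the exact loss used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def mask_distill_loss(student_logits, teacher_logits, eps: float = 1e-6):
    """Align student mask predictions with the teacher's for a shared prompt.

    Both tensors have shape (B, 1, H, W) and hold raw mask logits.
    """
    target = (teacher_logits > 0).float()        # teacher's hard mask as the target
    bce = F.binary_cross_entropy_with_logits(student_logits, target)
    probs = torch.sigmoid(student_logits)
    inter = (probs * target).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - (2 * inter + eps) / (union + eps)
    return bce + dice.mean()
```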

Notably, EdgeSAM doesn’t stop at speed improvements; it runs efficiently without sacrificing performance. By distilling the original ViT-based SAM image encoder into a CNN-based architecture tailored for edge devices, EdgeSAM fully harnesses SAM’s knowledge. The training also distills the prompt encoder and mask decoder, keeping box and point prompts in the loop. To address dataset granularity bias, a lightweight module has been added, ensuring a holistic approach to optimization.
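
One way to read "prompts in the loop" is that, between decoding rounds, a corrective point prompt is sampled from the region where student and teacher masks disagree, so later rounds concentrate on the student's failure modes. The sampling rule below is an assumption for illustration, not EdgeSAM's published procedure.

```python
import torch

def sample_corrective_point(student_logits, teacher_logits):
    """Pick one (y, x) per image from the student/teacher disagreement region.

    Both inputs have shape (B, 1, H, W); the return value is a (B, 2) tensor.
    """
    s_mask = student_logits > 0
    t_mask = teacher_logits > 0
    disagree = (s_mask ^ t_mask).squeeze(1)      # (B, H, W) boolean disagreement map
    points = []
    for b in range(disagree.shape[0]):
        idx = disagree[b].nonzero(as_tuple=False)  # coordinates of disagreeing pixels
        if idx.numel() == 0:                       # perfect agreement: use the center
            h, w = disagree.shape[1:]
            points.append(torch.tensor([h // 2, w // 2]))
        else:                                      # otherwise sample one at random
            points.append(idx[torch.randint(len(idx), (1,))].squeeze(0))
    return torch.stack(points)
```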

The evaluation of EdgeSAM includes ablation studies of prompt-in-the-loop knowledge distillation and of the impact of a lightweight Region Proposal Network with granularity priors. The results are striking.
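
As a rough illustration of how a lightweight Region Proposal Network could interact with prompting, the hypothetical helper below picks the highest-scoring candidate box that contains a clicked point and reuses it as a box prompt. The function name and selection rule are assumptions; the paper's actual module may work differently.

```python
import torch

def box_prompt_from_point(point, boxes, scores):
    """point: (2,) as (x, y); boxes: (N, 4) as (x1, y1, x2, y2); scores: (N,)."""
    x, y = point
    contains = (boxes[:, 0] <= x) & (x <= boxes[:, 2]) & \
               (boxes[:, 1] <= y) & (y <= boxes[:, 3])
    if not contains.any():
        return None                              # fall back to the bare point prompt
    masked = scores.masked_fill(~contains, float("-inf"))
    return boxes[masked.argmax()]                # best-scoring box covering the point
```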

Deployed on edge devices, EdgeSAM runs roughly 40 times faster than the original SAM and about 14 times faster than MobileSAM, and its accuracy holds up across diverse prompt combinations and datasets. These speedups are confirmed on both an NVIDIA 2080 Ti and an iPhone 14, establishing EdgeSAM as a strong fit for real-world, real-time applications.

Conclusion:

EdgeSAM represents a transformative leap in machine learning for image segmentation on edge devices. Its innovative approach to knowledge distillation, lightweight architecture, and unmatched speed make it the go-to choice for realizing real-time interactive segmentation on resource-constrained edge devices, setting new standards for performance and efficiency in the field of computer vision.

Source