Systematic Evaluation of Large Vision-Language Models for Surgical Artificial Intelligence

Stanford University

We evaluate 11 state-of-the-art VLMs across 17 visual understanding tasks in surgical AI, spanning 13 datasets

Abstract

Large Vision-Language Models offer a new paradigm for AI-driven image understanding, enabling models to perform tasks without task-specific training. This flexibility holds particular promise across medicine, where expert-annotated data is scarce. Yet, VLMs' practical utility in intervention-focused domains—especially surgery, where decision-making is subjective and clinical scenarios are variable—remains uncertain. Here, we present a comprehensive analysis of 11 state-of-the-art VLMs across 17 key visual understanding tasks in surgical AI—from anatomy recognition to skill assessment—using 13 datasets spanning laparoscopic, robotic, and open procedures. In our experiments, VLMs demonstrate promising generalizability, at times outperforming supervised models when deployed outside their training setting. In-context learning, in which examples are provided at test time, boosted performance up to three-fold, suggesting adaptability as a key strength. Still, tasks requiring spatial or temporal reasoning remained difficult. Beyond surgery, our findings offer insights into VLMs' potential for tackling complex and dynamic scenarios in clinical and broader real-world applications.
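To illustrate what in-context learning looks like in practice, below is a minimal sketch of few-shot prompting of a chat-style VLM on a surgical frame-classification task. It assumes an OpenAI-compatible API; the model name, file paths, and phase labels are placeholders and do not reflect the paper's exact evaluation protocol.

    # Illustrative sketch: few-shot (in-context) prompting of a chat-style VLM.
    # Model name, file paths, and labels are hypothetical placeholders.
    import base64
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def image_block(path: str) -> dict:
        """Encode a local image as a base64 data URL for the chat API."""
        with open(path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        return {"type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}

    # A few labeled example frames supplied at test time (the in-context
    # examples), followed by the query frame the model should classify.
    messages = [
        {"role": "system",
         "content": "You identify the surgical phase shown in a laparoscopic frame."},
        {"role": "user", "content": [
            {"type": "text", "text": "Example frame. Phase: dissection."},
            image_block("examples/dissection.jpg"),   # hypothetical path
            {"type": "text", "text": "Example frame. Phase: clipping."},
            image_block("examples/clipping.jpg"),     # hypothetical path
            {"type": "text", "text": "Now classify this frame. Answer with the phase only."},
            image_block("query/frame_000123.jpg"),    # hypothetical path
        ]},
    ]

    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(response.choices[0].message.content)

The same zero-shot prompt can be turned into a few-shot one simply by prepending labeled example frames, which is the kind of test-time adaptation the abstract refers to.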

BibTeX

@article{rau2025systematic,
  author  = {Rau, Anita and Endo, Mark and Aklilu, Josiah and Heo, Jaewoo and Saab, Khaled and Paderno, Alberto and Jopling, Jeffrey and Holsinger, F Christopher and Yeung-Levy, Serena},
  title   = {Systematic Evaluation of Large Vision-Language Models for Surgical Artificial Intelligence},
  journal = {arXiv preprint arXiv:2504.02799},
  year    = {2025},
}