Description

This project aims to develop a deep learning-based system for analyzing surgical videos using multimodal large language models (LLMs). The scope includes detecting surgical phases, recognizing instruments, identifying anomalies, and generating real-time or post-operative summaries. Expected outcomes include improved surgical workflow analysis, automated documentation, and enhanced training for medical professionals.

The project will explore state-of-the-art video LLM architectures and develop new models tailored to surgical video understanding, using software packages such as PyTorch, TensorFlow, OpenCV, and Hugging Face’s Transformers. The research aim is to improve the interpretability and efficiency of surgical video analysis, leveraging multimodal learning to combine visual and textual understanding.
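
As a rough illustration of how these tools compose, the sketch below samples frames from a video with OpenCV and captions them with an off-the-shelf vision-language model through Hugging Face’s Transformers pipeline. It is a minimal sketch only: the checkpoint (Salesforce/blip-image-captioning-base), the video file name, and the sampling interval are placeholders standing in for the surgical video LLM the project would actually develop.

# Minimal sketch: sample frames from a surgical video with OpenCV and caption
# them with a generic image-captioning model via Hugging Face Transformers.
# The checkpoint and video path are illustrative placeholders, not the
# project's planned model.
import cv2
from PIL import Image
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def caption_video(video_path, every_n_frames=60):
    """Yield (frame_index, caption) for frames sampled at a fixed interval."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n_frames == 0:
            # OpenCV decodes frames as BGR; convert to RGB for the model.
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            result = captioner(Image.fromarray(rgb))
            yield idx, result[0]["generated_text"]
        idx += 1
    cap.release()

if __name__ == "__main__":
    for frame_idx, caption in caption_video("example_surgery.mp4"):
        print(frame_idx, caption)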

We need high-performance computing (HPC) clusters, large-scale storage, and GPU accelerators, which will be leveraged to train and fine-tune the models efficiently.

Researcher(s)
Institution
UCSC
Status
Received
Preferred Semester
Spring Semester