Quantitative Evaluation of Multimodal LLMs in Pediatric Radiology Report Generation
Zhiguang Ding · Xu Cao
Abstract
Pediatric radiology presents unique challenges due to the distinct physiological and anatomical characteristics of children, setting it apart from general adult-focused radiology. While recent applications of Multimodal Large Language Models (MLLMs)—such as GPT-4o and LLaVA-Med—have shown promise in radiology report generation, they predominantly rely on adult datasets with limited pediatric coverage. In this study, we address this gap by evaluating MLLMs on pediatric chest X-ray pneumonia cases, demonstrating the critical need for dedicated pediatric training data to ensure robust, age-specific performance in MLLM-driven radiology applications.