Notices and Announcements
Academic Lecture: "AG-AI: Anatomy-guided AI for body-wide medical image analysis"
Posted: 2019-10-31

Time: 9:30–11:00, Sunday, November 10, 2019

Venue: Room 401, Information Building

Invited Speaker: Prof. Jayaram K. Udupa

Speaker Biography:

Jayaram Udupa received a bachelor's degree in Electronics and Communication Engineering from Mysore University, India, in 1972, and a Ph.D. in Computer Science from the Indian Institute of Science, Bangalore, in 1976, earning a best thesis award. From the beginning of his career, his research focus has been on developing theory, algorithms, and large software systems for image processing, 3D visualization, and image analysis, and on applying these in numerous medical areas toward quantitative radiology. He has made seminal contributions to these fields continuously for 43 years and has contributed significantly to the initiation and success of several conferences, such as the SPIE Medical Imaging Symposium and MICCAI, since the early days of medical image processing. He has published 204 journal papers, 240 full conference papers, 2 books, and 26 book chapters; secured 7 patents; given 290 invited lectures worldwide; and trained approximately 80 Ph.D. students and post-doctoral fellows. He is a Life Fellow of the IEEE and a Fellow of the American Institute for Medical and Biological Engineering (AIMBE), and serves as Professor of Radiological Sciences and Chief of the Medical Image Processing Section, Department of Radiology, University of Pennsylvania, Philadelphia.

Abstract:

To make body-wide Quantitative Radiology (QR) a reality in clinical practice and thereby improve clinical care, computerized body-wide automatic anatomy recognition (AAR) in medical images becomes essential. With the goal of building a general system that is not tied to any specific organ, body region, image modality, or application, we will describe an anatomy-guided AI (AG-AI) methodology, developed over the past 12 years, for localizing and delineating all major objects in different body regions. The fundamental premise of the system is that rich prior anatomic knowledge can be exploited to selectively train deep neural networks on small subregions of the image rather than over its entire domain, which can significantly improve the sensitivity, specificity, and accuracy of object recognition (localization) and delineation. The methodology embodies the following key ideas:

(a) Exploiting the large collection of existing patient images.
(b) Formulating a precise anatomic definition of each body region and all its major objects, and delineating them following these definitions to create a comprehensive library of images and objects.
(c) Building hierarchical fuzzy anatomy models of object assemblies body-wide by fully exploiting detailed knowledge of the form, size, and positional relationships of objects.
(d) Recognizing (locating) objects in given images by employing the hierarchical anatomy models.
(e) Refining object localization with deep neural networks (DNNs) that are trained on AAR recognition results.
(f) Delineating objects following refined recognition with deep neural networks that are trained on refined recognition results.
(g) Evaluating recognition/delineation accuracy as a function of image/object quality.
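The core idea in steps (d)–(f) can be illustrated with a toy sketch: a prior anatomy model first predicts where an object should be, and delineation then runs only inside that subregion instead of over the whole image. All function names, the thresholding "delineator", and the toy data below are illustrative stand-ins, not the actual AG-AI implementation (which uses hierarchical fuzzy models and trained networks).

```python
# Hypothetical sketch of anatomy-guided delineation. The prior model and the
# "delineator" are placeholders for the real fuzzy-model and DNN components.

def model_recognize(image, prior_center, prior_size):
    """Stand-in for hierarchical-model recognition: return a bounding box
    (top, left, height, width) around the expected object location."""
    h, w = len(image), len(image[0])
    cy, cx = prior_center
    half = prior_size // 2
    top, left = max(0, cy - half), max(0, cx - half)
    return (top, left, min(prior_size, h - top), min(prior_size, w - left))

def delineate_in_box(image, box, threshold):
    """Stand-in for DNN delineation: simple thresholding restricted to the
    recognized subregion, leaving the rest of the image untouched."""
    top, left, bh, bw = box
    mask = [[0] * len(image[0]) for _ in image]
    for r in range(top, top + bh):
        for c in range(left, left + bw):
            if image[r][c] >= threshold:
                mask[r][c] = 1
    return mask

# Toy 6x6 "image": a bright object near the center and a bright distractor
# in a corner. Anatomy guidance keeps the distractor out of the result.
img = [[9 if (r, c) == (0, 5) else
        (8 if 2 <= r <= 3 and 2 <= c <= 3 else 1)
        for c in range(6)] for r in range(6)]

box = model_recognize(img, prior_center=(2, 2), prior_size=3)
seg = delineate_in_box(img, box, threshold=5)  # distractor at (0, 5) excluded
```

Restricting the delineator to the model-predicted box is what the abstract means by training and applying networks "on small subregions of the image rather than over its entire domain": the network sees a narrower, better-posed problem, which is also why its training-data appetite shrinks.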

The AG-AI system has been tested on four body regions across ten large ongoing medical applications in cancer, surgery planning, sleep medicine, and other areas, via CT, MRI, and PET/CT imaging modalities. The system performs significantly better than both the previous purely anatomy-model-based system and a blind DNN approach applied to the whole image without anatomy guidance. Compared to the latter, the data appetite of DNN approaches is also eased considerably, since natural-intelligence-directed prior knowledge sharpens the formulation and narrows the domain of the problem.