RADIATION-INDUCED ACOUSTIC SIGNAL DENOISING USING A SUPERVISED DEEP LEARNING FRAMEWORK FOR IMAGING AND THERAPY MONITORING

Author(s): Zhuoran Jiang, Siqi Wang, Yifei Xu, Leshan Sun, Gilberto Gonzalez, Yong Chen, Q. Jackie Wu, Liangzhong Xiang, Lei Ren

ABSTRACT

Radiation-induced acoustic (RA) imaging is a promising technique for visualizing radiation energy deposition in tissues, enabling new imaging modalities and real-time therapy monitoring. However, it requires averaging hundreds or even thousands of repeated measurements to achieve a satisfactory signal-to-noise ratio (SNR). This repetition increases the ionizing radiation dose and degrades the temporal resolution of RA imaging, limiting its clinical utility. In this study, we developed a general deep inception convolutional neural network (GDI-CNN) that denoises RA signals so that the number of averages can be substantially reduced. The multi-dilation convolutions in the network encode and decode signal features with varying temporal characteristics, making the network generalizable to signals from different radiation sources. The proposed method was evaluated qualitatively and quantitatively using experimental X-ray-induced acoustic, protoacoustic, and electroacoustic data. Results demonstrated the effectiveness and generalizability of GDI-CNN: for all the evaluated RA modalities, GDI-CNN achieved SNRs comparable to those of the fully averaged signals while using fewer than 2% of the averages, significantly reducing imaging dose and improving temporal resolution. The proposed deep learning framework is a general method for denoising few-frame-averaged acoustic signals, which significantly improves the clinical utility of RA imaging for low-dose imaging and real-time therapy monitoring.
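To illustrate the kind of architecture the abstract describes, the following is a minimal PyTorch sketch of a multi-dilation (inception-style) 1D convolutional denoiser for few-frame-averaged signals. The class names, layer counts, channel widths, and dilation rates are illustrative assumptions made for this sketch, not the published GDI-CNN implementation; in a supervised setup, few-average signals would be paired with their fully averaged counterparts as training targets (e.g., with an MSE loss).

# Minimal sketch of a multi-dilation (inception-style) 1D convolutional
# denoiser for few-frame-averaged RA signals. Layer counts, channel widths,
# and class names are illustrative assumptions, not the published GDI-CNN.
import torch
import torch.nn as nn


class MultiDilationBlock(nn.Module):
    """Parallel 1D convolutions with different dilation rates, concatenated
    so the block captures signal features at several temporal scales."""

    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        branch_ch = out_ch // len(dilations)
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding = dilation keeps the output length equal to the input
                nn.Conv1d(in_ch, branch_ch, kernel_size=3,
                          padding=d, dilation=d),
                nn.BatchNorm1d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate branch outputs along the channel dimension.
        return torch.cat([b(x) for b in self.branches], dim=1)


class Denoiser1D(nn.Module):
    """Stack of multi-dilation blocks mapping a noisy few-average signal
    to a denoised estimate of the fully averaged signal."""

    def __init__(self, channels: int = 32, depth: int = 3):
        super().__init__()
        layers = [MultiDilationBlock(1, channels)]
        layers += [MultiDilationBlock(channels, channels) for _ in range(depth - 1)]
        layers += [nn.Conv1d(channels, 1, kernel_size=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


if __name__ == "__main__":
    model = Denoiser1D()
    noisy = torch.randn(4, 1, 1024)   # batch of noisy 1-D RA signals
    denoised = model(noisy)           # same shape as the input
    print(denoised.shape)             # torch.Size([4, 1, 1024])

Using several dilation rates in parallel gives each block a range of receptive fields, which is one way to handle signals whose temporal characteristics differ across radiation sources, as the abstract notes for X-ray-induced acoustic, protoacoustic, and electroacoustic data.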
