Self-knowledge distillation (SKD) is a recent and promising machine learning approach in which a shallow student network is trained to distill its own knowledge. By contrast, in traditional knowledge distillation a student model distills knowledge from a large teacher network, which incurs substantial computational cost and storage requirements. Consequently, SKD is a useful approach for modelling medical imaging problems with scarce data. We propose an original SKD framework to predict where a sonographer should look next, using a multi-modal ultrasound and gaze dataset. We design a novel Wide Feature Distillation module, which is applied to intermediate feature maps in the form of transformations. The module performs more refined feature map filtering, which is important when predicting gaze for fetal anatomy that varies in size. Our architecture design includes a ReSL loss that enables the student network to learn useful information whilst discarding the rest. The proposed network is validated on a large multi-modal ultrasound dataset acquired during routine first-trimester fetal ultrasound scanning. Experimental results show that the novel SKD approach outperforms alternative state-of-the-art architectures on all saliency metrics.
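To illustrate the core idea of self-knowledge distillation described above, here is a minimal NumPy sketch of one common SKD pattern: an intermediate (shallow) feature map is passed through a transformation and trained to match the network's own deeper features, so the student supervises itself without an external teacher. This is an illustrative toy, not the authors' architecture; the weights `W1`, `W2` and the transformation `T` are hypothetical stand-ins, and the real system uses convolutional feature maps and the proposed Wide Feature Distillation module and ReSL loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "student" network: two linear stages standing in for a shallow
# block and a deeper block of the same network (all names hypothetical).
W1 = rng.normal(size=(16, 32))  # shallow block weights
W2 = rng.normal(size=(32, 32))  # deep block weights
T = rng.normal(size=(32, 32))   # transformation applied to shallow features

def forward(x):
    """Return the shallow and deep feature maps of the toy student."""
    f_shallow = np.tanh(x @ W1)       # intermediate feature map
    f_deep = np.tanh(f_shallow @ W2)  # deeper feature map
    return f_shallow, f_deep

def self_distillation_loss(f_shallow, f_deep):
    """MSE between transformed shallow features and the network's own
    deeper features -- the self-distillation signal (deep features are
    treated as a fixed target here)."""
    aligned = f_shallow @ T
    return np.mean((aligned - f_deep) ** 2)

x = rng.normal(size=(4, 16))  # batch of 4 toy inputs
f_s, f_d = forward(x)
loss = self_distillation_loss(f_s, f_d)
print(f"self-distillation loss: {loss:.4f}")
```

In practice this auxiliary loss is added to the main task loss (here, gaze-saliency prediction), so gradients from the deeper layers refine the shallow layers of the same network.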

Original publication

Type: Conference paper
Publication Date:
Volume: 13565 LNCS
Pages: 117-127