Cardiotocography (CTG) is a widely used technique to monitor fetal heart rate (FHR) during labour and assess the health of the baby. However, visual interpretation of CTG signals is subjective and prone to error. Automated methods that mimic clinical guidelines have been developed, but they have failed to improve the detection of abnormal traces. This study aims to classify CTGs with and without severe compromise at birth, using the first 20 min of FHR recordings from routinely collected CTGs of 51,449 births at term. Three 1D-CNN- and LSTM-based architectures are compared. We also transform the FHR signal into 2D images using time-frequency representations (spectrogram and scalogram analysis), and the resulting images are analysed with a 2D-CNN. In the proposed multi-modal architecture, the 2D-CNN and the 1D-CNN-LSTM are connected in parallel. The models are evaluated in terms of partial area under the curve (PAUC) over a 0-10% false-positive rate, and sensitivity at 95% specificity. The 1D-CNN-LSTM parallel architecture outperformed the other models, achieving a PAUC of 0.20 and a sensitivity of 20% at 95% specificity. Our future work will focus on improving the classification performance by employing a larger dataset, analysing longer FHR traces, and incorporating clinical risk factors.
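For readers unfamiliar with the reported metrics, the sketch below illustrates how a partial AUC restricted to a 0-10% false-positive rate and a sensitivity at 95% specificity could be computed from ROC data with scikit-learn and NumPy. This is not the authors' code: the inputs `y_true` and `y_score`, and the normalisation of the partial AUC by the width of the FPR range, are assumptions for illustration only.

```python
# Minimal sketch (not the authors' pipeline) of the two reported metrics,
# assuming binary labels y_true and predicted probabilities y_score.
import numpy as np
from sklearn.metrics import roc_curve, auc


def pauc_and_sens_at_spec(y_true, y_score, max_fpr=0.10, spec=0.95):
    """Partial AUC over FPR in [0, max_fpr] and sensitivity at a fixed
    specificity. Normalising the partial AUC by max_fpr is an assumption;
    the paper may define PAUC with a different scaling."""
    fpr, tpr, _ = roc_curve(y_true, y_score)

    # Interpolate the ROC curve at max_fpr, keep the segment below it,
    # and integrate with the trapezoidal rule.
    tpr_at_max = np.interp(max_fpr, fpr, tpr)
    mask = fpr <= max_fpr
    fpr_seg = np.concatenate([fpr[mask], [max_fpr]])
    tpr_seg = np.concatenate([tpr[mask], [tpr_at_max]])
    pauc = auc(fpr_seg, tpr_seg) / max_fpr  # scale to [0, 1]

    # Sensitivity (TPR) at the operating point where FPR = 1 - specificity.
    sens = np.interp(1.0 - spec, fpr, tpr)
    return pauc, sens
```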

Original publication

DOI: 10.3390/bioengineering10060730
Type: Journal article
Journal: Bioengineering (Basel)
Publication Date: 19/06/2023
Volume: 10
Keywords: CNN, CTG, FHR, LSTM, deep learning