Blood Pressure Estimation Using PPG Signals
Blood pressure (BP) has long been recognized as a major risk factor for cardiovascular disease, and measuring it is one of the most useful tools for the early diagnosis, prevention, and treatment of cardiovascular diseases. BP measurement currently relies primarily on cuff-based techniques, which are inconvenient and uncomfortable for users. Although some current prototype cuffless BP measurement techniques achieve acceptable overall accuracy, they require both an electrocardiogram (ECG) and a photoplethysmograph (PPG), making them unsuitable for true wearable applications. Developing a sufficiently accurate cuffless BP estimation algorithm that relies on a single PPG signal would thus be clinically and practically beneficial.
In this article, we implement a deep learning approach to this problem. The method uses a photoplethysmograph (PPG) sensor, which measures blood flow through the skin by detecting changes in light absorption. The method is non-invasive and estimates blood pressure from a single-channel PPG signal.
Dataset:
The dataset “BloodPressureDataset” on Kaggle contains data collected from patients with hypertension. The dataset includes the following features:
- Patient ID: a unique identifier for each patient
- Gender: the gender of the patient (male or female)
- Age: the age of the patient
- SBP: the top number in a blood pressure reading, indicating the pressure in the arteries when the heart beats
- DBP: the bottom number in a blood pressure reading, indicating the pressure in the arteries when the heart is at rest
- Pulse rate: the number of times the heart beats per minute
- PPG signal: a single-channel photoplethysmograph signal collected from the patient's finger
Data preparation:
We will use three of these features (SBP, DBP, and the PPG signal) in our experiment. The dataset contains 500 patient records sampled at 125 Hz, each carrying the information above. The PPG signal has to be split into fixed-length episodes; we choose 8-second episodes, which at 125 Hz gives 1000 (8 × 125) data points per episode.
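Below is a minimal sketch of how such episodes could be extracted, assuming each record's PPG waveform is available as a one-dimensional NumPy array; the function name and the synthetic recording are placeholders, not part of the actual dataset-loading code.

```python
import numpy as np

FS = 125                        # sampling frequency in Hz
EPISODE_SEC = 8                 # episode length in seconds
EPISODE_LEN = FS * EPISODE_SEC  # 1000 samples per episode

def segment_ppg(ppg, episode_len=EPISODE_LEN):
    """Split a 1-D PPG recording into non-overlapping fixed-length episodes."""
    n_episodes = len(ppg) // episode_len
    return ppg[:n_episodes * episode_len].reshape(n_episodes, episode_len)

# Example: a synthetic 60-second recording yields 7 complete 8-second episodes.
dummy_ppg = np.random.randn(60 * FS)
episodes = segment_ppg(dummy_ppg)
print(episodes.shape)  # (7, 1000)
```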
Methodology:
The proposed algorithm estimates arterial blood pressure (ABP) waveforms from photoplethysmograph (PPG) signals and is composed of several stages:
- PPG segment extraction: We extract PPG segments 8 seconds long from the original dataset.
- Preprocessing: The extracted PPG segments undergo minimal preprocessing, such as filtering, to reduce the irregularities and noise present in the signal (a filtering sketch follows this list).
- Approximation Network: The filtered signal is then passed through an Approximation Network, a one-dimensional U-Net, which approximates the ABP waveform from the input PPG signal.
- Refinement Network: The preliminary rough estimate of ABP obtained from the Approximation Network is further refined through a Refinement Network.
- ABP computation: Finally, the values of systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP) are computed in a simple manner from the estimated ABP waveform (sketched after the pipeline overview below).
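As a concrete illustration of the preprocessing step, here is a minimal sketch of a band-pass filter one could apply to each episode; the SciPy filter type and the 0.5–8 Hz cutoffs are assumptions for illustration, not settings taken from the original pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 125  # sampling frequency in Hz

def bandpass_filter(ppg, low=0.5, high=8.0, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter to suppress baseline wander
    and high-frequency noise in a PPG episode."""
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, ppg)

filtered_episode = bandpass_filter(np.random.randn(1000))  # placeholder episode
```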
Overall, as depicted in the figure, the pipeline starts with PPG signals, segments them, preprocesses them, uses the Approximation and Refinement Networks to estimate the ABP waveform, and finally computes the values of SBP, DBP, and MAP.
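The final step can be made concrete with a short sketch: SBP read off as the waveform maximum, DBP as the minimum, and MAP as the mean pressure over the episode. This is one common convention and an assumption on my part; the reference implementation may compute these values per cardiac cycle instead.

```python
import numpy as np

def abp_to_bp_values(abp_episode):
    """Derive SBP, DBP, and MAP (mmHg) from an estimated ABP episode:
    SBP as the maximum, DBP as the minimum, and MAP as the mean pressure."""
    sbp = float(np.max(abp_episode))
    dbp = float(np.min(abp_episode))
    map_ = float(np.mean(abp_episode))
    return sbp, dbp, map_

# Example with a synthetic waveform oscillating between roughly 80 and 120 mmHg.
t = np.linspace(0, 8, 1000)
abp = 100 + 20 * np.sin(2 * np.pi * 1.2 * t)
print(abp_to_bp_values(abp))  # approximately (120.0, 80.0, 100.0)
```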
Model Selection
After the preprocessing steps, the Approximation Network estimates the arterial blood pressure (ABP) waveform from the preprocessed photoplethysmograph (PPG) signal. The network is a one-dimensional, deeply supervised U-Net, a type of CNN typically used for image segmentation tasks. We have adapted the U-Net architecture to reconstruct one-dimensional signals, which is essentially a regression task, so the final convolutional layer uses a linear activation function to produce a regression output. Deep supervision reduces overall error by guiding the learning process of the hidden layers: auxiliary outputs are attached to intermediate decoder levels and trained against the same target.
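A minimal Keras sketch of such a deeply supervised one-dimensional U-Net is shown below; the depth, filter counts, and kernel sizes are illustrative placeholders and not the exact configuration of the Approximation Network.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 1-D convolutions with ReLU, as in a standard U-Net stage.
    x = layers.Conv1D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv1D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_approximation_net(length=1000):
    inp = layers.Input(shape=(length, 1))

    # Encoder: convolutions followed by downsampling.
    e1 = conv_block(inp, 32)
    p1 = layers.MaxPooling1D(2)(e1)
    e2 = conv_block(p1, 64)
    p2 = layers.MaxPooling1D(2)(e2)

    # Bottleneck.
    b = conv_block(p2, 128)

    # Decoder: upsampling with skip connections from the encoder.
    u2 = layers.UpSampling1D(2)(b)
    d2 = conv_block(layers.concatenate([u2, e2]), 64)
    u1 = layers.UpSampling1D(2)(d2)
    d1 = conv_block(layers.concatenate([u1, e1]), 32)

    # Deep supervision: an auxiliary linear regression head on an intermediate
    # decoder level (upsampled back to full length), alongside the main output.
    aux = layers.Conv1D(1, 1, activation="linear")(layers.UpSampling1D(2)(d2))
    out = layers.Conv1D(1, 1, activation="linear")(d1)
    return Model(inp, [out, aux], name="approximation_net")

approx_net = build_approximation_net()
```

During training, both heads regress the same ABP target, so the auxiliary loss is what supplies the deep supervision signal to the intermediate layers.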
The output of the Approximation Network sometimes deviates from the target, so an additional network, the Refinement Network, is used to refine the output of the Approximation Network. The Refinement Network is a one-dimensional MultiResUNet, an improved variant of the U-Net model. MultiResUNet introduces Multi-Residual (MultiRes) blocks, which perform a compact form of multiresolution analysis, and Residual (Res) paths, which impose additional convolutional operations along the shortcut connections to reduce the disparity between the feature maps of corresponding encoder and decoder levels.
Similar to the Approximation Network, the Refinement Network comprises one-dimensional versions of convolution, pooling, and upsampling operations, and the final convolutional layer uses linear activation. However, this model is not deeply supervised.
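To make these building blocks concrete, here is a minimal one-dimensional sketch of a MultiRes block and a Res path, following the general idea of the MultiResUNet paper; the filter counts and the way the multiresolution outputs are combined are simplifying assumptions rather than the exact original formulation.

```python
from tensorflow.keras import layers

def multires_block_1d(x, filters):
    """1-D MultiRes block: three chained 3-wide convolutions whose outputs are
    concatenated (emulating 3/5/7 receptive fields), plus a 1x1 residual path."""
    shortcut = layers.Conv1D(filters * 3, 1, padding="same")(x)
    c1 = layers.Conv1D(filters, 3, padding="same", activation="relu")(x)
    c2 = layers.Conv1D(filters, 3, padding="same", activation="relu")(c1)
    c3 = layers.Conv1D(filters, 3, padding="same", activation="relu")(c2)
    out = layers.add([layers.concatenate([c1, c2, c3]), shortcut])
    return layers.Activation("relu")(out)

def res_path_1d(x, filters, length):
    """Res path: extra convolutions with residual connections along the skip
    connection, narrowing the semantic gap between encoder and decoder features."""
    for _ in range(length):
        shortcut = layers.Conv1D(filters, 1, padding="same")(x)
        x = layers.Conv1D(filters, 3, padding="same", activation="relu")(x)
        x = layers.add([x, shortcut])
    return x
```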
Training Methodology
In this study, we used a specific training methodology for the Approximation and Refinement Networks. The loss function for the Approximation Network is Mean Absolute Error (MAE), and for the Refinement Network it is Mean Squared Error (MSE). To minimize these losses, the Adam optimizer is used; it adaptively computes individual learning rates for the parameters based on estimates of the first and second moments of the gradients. We used the default parameters of the Adam optimizer as given in the original paper. The models were trained for up to 100 epochs with a patience of 20, meaning that training stops early if the model's performance does not improve for 20 epochs.
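A minimal sketch of how this setup could be wired in Keras is shown below, reusing the `build_approximation_net` sketch from earlier; the dummy arrays stand in for real PPG/ABP episodes and the batch size is an assumed value.

```python
import numpy as np
from tensorflow.keras.callbacks import EarlyStopping

# Dummy arrays standing in for the real PPG episodes (x) and ABP targets (y).
x_train = np.random.randn(64, 1000, 1)
y_train = np.random.randn(64, 1000, 1)
x_val, y_val = np.random.randn(16, 1000, 1), np.random.randn(16, 1000, 1)

# Adam with its default parameters; MAE for the Approximation Network
# (the Refinement Network would be compiled the same way with MSE).
approx_net = build_approximation_net()
approx_net.compile(optimizer="adam", loss="mean_absolute_error")

# Stop if the validation loss fails to improve for 20 epochs, keeping the best weights.
early_stop = EarlyStopping(monitor="val_loss", patience=20, restore_best_weights=True)

approx_net.fit(x_train, [y_train, y_train],            # same ABP target for both heads
               validation_data=(x_val, [y_val, y_val]),
               epochs=100, batch_size=32,
               callbacks=[early_stop])
```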
We used K-fold cross-validation, a technique that evaluates model performance by dividing the data into k folds, where k-1 folds are used for training and the remaining fold for validation; this process is repeated k times, with each fold serving as the validation set once. In this study, we used 10-fold cross-validation, so 10 models are developed, one per data split. The best-performing model is selected and then evaluated against the independent test set.
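A minimal sketch of this cross-validation loop, again reusing the earlier model-building sketch, with placeholder arrays standing in for the real episodes:

```python
import numpy as np
from sklearn.model_selection import KFold
from tensorflow.keras.callbacks import EarlyStopping

x = np.random.randn(200, 1000, 1)   # placeholder PPG episodes
y = np.random.randn(200, 1000, 1)   # placeholder ABP targets

kf = KFold(n_splits=10, shuffle=True, random_state=42)
fold_scores = []
for train_idx, val_idx in kf.split(x):
    model = build_approximation_net()            # fresh model for every fold
    model.compile(optimizer="adam", loss="mean_absolute_error")
    history = model.fit(
        x[train_idx], [y[train_idx], y[train_idx]],
        validation_data=(x[val_idx], [y[val_idx], y[val_idx]]),
        epochs=100, batch_size=32, verbose=0,
        callbacks=[EarlyStopping(monitor="val_loss", patience=20,
                                 restore_best_weights=True)],
    )
    fold_scores.append(min(history.history["val_loss"]))

best_fold = int(np.argmin(fold_scores))   # the model from this fold is kept and
print(best_fold, fold_scores[best_fold])  # later evaluated on the independent test set
```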
Results:
An example is provided in the figure below, which shows that the output of the Approximation Network is not as accurate as the output of the Refinement Network. The mean absolute error of the blood pressure waveform reconstruction is 4.604 ± 5.043 mmHg over the entire test set. Additionally, the mean absolute errors of diastolic blood pressure, mean arterial pressure, and systolic blood pressure prediction are 3.449 ± 6.147 mmHg, 2.310 ± 4.437 mmHg, and 5.727 ± 9.162 mmHg, respectively.
Evaluation:
We present results for predicting diastolic blood pressure (DBP), mean arterial pressure (MAP), and systolic blood pressure (SBP) from photoplethysmograph (PPG) signals. Bland-Altman plots show that, for 90% of the samples, the prediction error for DBP, MAP, and SBP lies within [-11.825, 15.063], [-9.095, 10.357], and [-22.5308, 19.366] mmHg, respectively.
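For reference, here is a minimal sketch of how such a Bland-Altman plot could be produced; the 5th/95th percentile limits correspond to the 90% error interval quoted above, and the synthetic SBP arrays are placeholders for the real predictions.

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman_plot(pred, truth, title):
    """Scatter the prediction error against the mean of prediction and truth,
    with the empirical 5th/95th percentile limits (the 90% error interval)."""
    diff = pred - truth
    avg = (pred + truth) / 2
    lo, hi = np.percentile(diff, [5, 95])
    plt.scatter(avg, diff, s=8, alpha=0.5)
    plt.axhline(diff.mean(), color="k")
    plt.axhline(lo, color="r", linestyle="--")
    plt.axhline(hi, color="r", linestyle="--")
    plt.xlabel("Mean of prediction and ground truth (mmHg)")
    plt.ylabel("Prediction error (mmHg)")
    plt.title(title)
    plt.show()

# Placeholder SBP predictions and references, just to exercise the plot.
sbp_true = np.random.normal(120, 15, 500)
sbp_pred = sbp_true + np.random.normal(0, 5, 500)
bland_altman_plot(sbp_pred, sbp_true, "Bland-Altman: SBP")
```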
The regression plots for DBP, MAP, and SBP predictions show a clear correlation between the predictions and the ground truth. The Pearson Correlation Coefficient (PCC) values for DBP, MAP, and SBP are 0.78, 0.88, and 0.87, respectively, indicating a good positive correlation.
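Computing the PCC itself is a one-liner with SciPy; the arrays below are placeholders for the real predictions and ground truth.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder arrays standing in for predicted and reference DBP values.
dbp_true = np.random.normal(80, 10, 500)
dbp_pred = dbp_true + np.random.normal(0, 6, 500)

pcc, p_value = pearsonr(dbp_pred, dbp_true)
print(f"DBP PCC: {pcc:.2f} (p = {p_value:.1e})")
```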
If you enjoyed reading this, please like this article. If you want to discuss how to achieve these results, connect with me on LinkedIn: https://www.linkedin.com/in/lucky-verma/