![](https://crypto4nerd.com/wp-content/uploads/2023/08/1EuKzEBrTFZZaTo4YRbagmw-1024x544.png)
There are three main takeaways from the paper.
Aim: Develop a robust, data-driven technique for automatically removing motion artifacts (MA) from the electrodermal activity (EDA) signal. EDA, also known as galvanic skin response, is generated by the activation of sweat glands in the body in reaction to emotional activity.
Method: The paper proposes a deep convolutional autoencoder (DCAE) approach, trained on four different datasets, with both Gaussian white noise (GWN) and realistic MA data recorded in the lab used to corrupt the signals.
Results: The DCAE model outperformed three previous methods (the wavelet method, low-pass filtering, and exponential smoothing), showing significantly higher signal-to-noise-power-ratio improvement (SNRimp) and lower mean squared error (MSE) on MA-corrupted data. The reconstructed EDA signals had a mean correlation of 0.78 with the reference clean data from the motionless hand, compared with only 0.68 for the raw MA-corrupted data, indicating greater effectiveness at removing MAs than the other methods.
Technical terms like electrodermal activity, motion artifacts, and autoencoders may seem confusing at first, so let's begin by understanding each term.
Electrodermal Activity (EDA)
Electrodermal activity (EDA) is a measure of neurally mediated effects on sweat gland permeability, observed as changes in the resistance of the skin to a small electrical current, or as differences in the electrical potential between different parts of the skin.
Motion Artifacts
Electrodermal activity is recorded through sensors, but what happens if the subject moves while the sensors are recording?
The sensors cannot capture clean data, right?
The resulting distortions, which degrade the quality of the original signal, are called motion artifacts.
The paper introduces deep convolutional autoencoders for removing the unwanted signal components, known as motion artifacts, that arise while measuring electrodermal activity. The model was trained and validated on four different datasets, with Gaussian white noise and realistic motion artifacts used to corrupt the signals for the denoising autoencoder.
Chen et al. proposed a stationary wavelet transform-based algorithm for motion artifact removal, which relies on the assumption that artifacts are sparse.
Shukla et al. modified the method by using a Laplace distribution to model the wavelet coefficients, but the assumption of limited artifacts remains.
A recent study used a one-class support vector machine and k-nearest neighbors for automatic motion artifact detection, with linear interpolation to fill in the removed data.
Exponential smoothing and low-pass filtering were also attempted for MA removal, but they failed to handle high-intensity artifacts and distorted the data indiscriminately.
The study proposes a deep convolutional autoencoder (DCAE) for robust motion artifact removal in EDA signals.
The paper uses autoencoders, so what exactly is an autoencoder?
An autoencoder is an unsupervised artificial neural network used for dimensionality reduction and data reconstruction. It comprises an encoder that maps input data to a lower-dimensional representation and a decoder that reconstructs the original data from the encoded representation. Autoencoders are commonly used for feature extraction, denoising, and anomaly detection tasks in various fields, including image processing, signal analysis, and natural language processing.
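The encoder-decoder idea above can be sketched in a few lines of PyTorch. This is a minimal illustrative autoencoder (the layer sizes here are arbitrary choices, not from the paper): it compresses a 64-sample window into an 8-dimensional code and reconstructs it.

```python
import torch
import torch.nn as nn

# Minimal autoencoder sketch: the encoder maps a 64-sample input to an
# 8-dimensional latent code; the decoder reconstructs the input from it.
class TinyAutoencoder(nn.Module):
    def __init__(self, in_dim=64, code_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                     nn.Linear(32, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 32), nn.ReLU(),
                                     nn.Linear(32, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
x = torch.randn(4, 64)   # a batch of 4 signal windows
recon = model(x)
print(recon.shape)       # same shape as the input: torch.Size([4, 64])
```

Training such a model to reproduce its input forces the latent code to capture the most important structure of the data, which is what makes autoencoders useful for denoising.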
Among the different types of autoencoders (see https://www.baeldung.com/cs/autoencoders-explained), the paper uses just two:
Denoising Autoencoder
Convolutional Autoencoder
A denoising AE comprises an encoder and a decoder. The encoder maps noisy input data to an efficient coding, while the decoder reconstructs an output close to the original data. The proposed DCAE (deep convolutional autoencoder) model's encoder consists of five convolutional blocks, each halving the input dimension. No pooling layers were used in the convolutional blocks, to avoid losing information about signal dynamics; instead, stride-based convolution was employed for dimension reduction. The decoder is symmetric to the encoder and uses transposed convolution to upsample the encoded vector. Skip connections between the encoder and decoder blocks transfer signal details and aid in recovering the clean signal.
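The architecture just described can be sketched as follows. Note that the channel counts and kernel sizes below are illustrative guesses, not the paper's exact values; the sketch only demonstrates the three stated design choices: strided 1-D convolutions for downsampling, a symmetric transposed-convolution decoder, and skip connections.

```python
import torch
import torch.nn as nn

# DCAE-style sketch: five strided conv blocks halve the length at each
# step; the symmetric decoder upsamples with transposed convs; skip
# connections add encoder features back at the matching resolution.
class DCAESketch(nn.Module):
    def __init__(self, channels=(1, 8, 16, 32, 64, 128)):
        super().__init__()
        self.enc = nn.ModuleList(
            nn.Sequential(nn.Conv1d(channels[i], channels[i + 1],
                                    kernel_size=4, stride=2, padding=1),
                          nn.ReLU())
            for i in range(5))
        self.dec = nn.ModuleList(
            nn.Sequential(nn.ConvTranspose1d(channels[i + 1], channels[i],
                                             kernel_size=4, stride=2,
                                             padding=1),
                          nn.ReLU() if i > 0 else nn.Identity())
            for i in reversed(range(5)))

    def forward(self, x):
        skips = []
        for block in self.enc:
            skips.append(x)          # remember features before downsampling
            x = block(x)
        for block, skip in zip(self.dec, reversed(skips)):
            x = block(x) + skip      # skip connection restores signal detail
        return x

x = torch.randn(2, 1, 128)           # (batch, channels, signal length)
out = DCAESketch()(x)
print(out.shape)                     # torch.Size([2, 1, 128])
```

Because each block halves (or doubles) the length, the input length must be divisible by 2^5 = 32 for the shapes to line up.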
Mean squared error (MSE) between uncorrupted and reconstructed signals, along with L1 regularization, was used as the loss criterion for the DCAE network. The purpose of incorporating L1 regularization was to prevent over-smoothing due to the MSE loss function. The loss function includes both the MSE loss and L1 regularization term, calculated for each training example in the dataset.
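A sketch of that loss criterion is below. The paper does not specify the regularization strength, so `lambda_l1` is a hypothetical value, and applying the L1 penalty to the network weights is one common convention; the paper's exact formulation may differ.

```python
import torch
import torch.nn as nn

# Loss sketch: MSE between the clean and reconstructed signals, plus an
# L1 penalty (here on the model weights) to counteract over-smoothing.
# lambda_l1 is an illustrative value, not the paper's.
def dcae_loss(recon, clean, model, lambda_l1=1e-5):
    mse = torch.mean((recon - clean) ** 2)
    l1 = sum(p.abs().sum() for p in model.parameters())
    return mse + lambda_l1 * l1

# Tiny usage example with a stand-in model.
model = nn.Linear(8, 8)
clean = torch.randn(3, 8)
loss = dcae_loss(model(clean), clean, model)
print(loss.item() > 0)   # the loss is a positive scalar
```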
Four different datasets were used, described in the table below.
The model was trained and tested on each dataset individually, using both Gaussian white noise and realistic motion artifacts.
The graphs show the validation results using the two types of noise, comparing DCAE against the three other methods. Signal-to-noise-ratio improvement and the correlation between the reconstructed and reference signals were used as performance metrics.
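For concreteness, here is how those two metrics are commonly computed for a denoising task, given a clean reference signal, its corrupted version, and the model's reconstruction (a generic sketch, not the paper's code):

```python
import numpy as np

# SNR of a signal relative to a clean reference, in dB.
def snr_db(clean, signal):
    noise = signal - clean
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

# SNR improvement: how much the reconstruction raises the SNR
# compared with the corrupted input.
def snr_improvement(clean, corrupted, reconstructed):
    return snr_db(clean, reconstructed) - snr_db(clean, corrupted)

# Pearson correlation between two 1-D signals.
def correlation(a, b):
    return np.corrcoef(a, b)[0, 1]

# Toy example: a sine wave as the clean signal, heavy noise as the
# corruption, light residual noise as the "reconstruction".
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 2 * t)
corrupted = clean + 0.5 * np.random.randn(500)
reconstructed = clean + 0.05 * np.random.randn(500)
imp = snr_improvement(clean, corrupted, reconstructed)
corr = correlation(clean, reconstructed)
print(imp, corr)   # a large positive dB improvement, correlation near 1
```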
The following bar diagram compares the SNRimp and MSE of the different methods when using GWN.
The next bar diagram compares the SNRimp and MSE of the different methods when using realistic motion artifacts.
A comparison of the correlation coefficients among the different methods, computed on the CMAD II dataset, is given below.
On testing with the CNS-OT dataset that contained frequent motion artifacts, the following waveforms were obtained.
Overall, DCAE performed better than the other models in every aspect of motion artifact removal.
The study has various advantages and limitations.
Advantages
- DCAE can recognize and remove almost any type of motion artifact.
- Uses 1D convolutions and has a short training time (approximately 5 min for 3,300 samples), making it computationally efficient.
- Compares the different models, providing intuition about how each of them performs.
- It has been cited by many papers applying DCAE and other machine learning methods in the field of EDA.
Limitations
The model might not work well if MA events persist consistently for a long period.
According to the paper Correlation Analysis of Different Measurement Places of Galvanic Skin Response in Test Groups Facing Pleasant and Unpleasant Stimuli, the two forearms might not be an ideal place to record reference EDA signals, as they may not yield the required values; that paper reports that the left finger and right foot show the maximum correlation between the generated signals.
Moreover, the paper Automatic Artifact Recognition and Correction for Electrodermal Activity Based on LSTM-CNN Models highlights further limitations:
- The original paper did not provide a continuous clean signal, which is needed to compute the phasic component and assess arousal.
- The paper did not compare its results with signals manually cleaned by experts, the most common method for removing artifacts.
- The performances of the different methods are not comparable because there is no public data benchmark.
- Only five subjects and 39 segments in the work include a clean EDA signal for evaluation, and the validation focused on the signal-to-noise ratio.
To address these issues and achieve better artifact removal on the given datasets, I propose studying attention-based deep convolutional autoencoders.
So what is an attention-based autoencoder?
Attention Based Autoencoders
An attention-based autoencoder is an extension of the standard autoencoder architecture that incorporates attention mechanisms. The traditional autoencoder compresses the input data into a fixed-length latent representation, often leading to information loss, especially when dealing with long sequences or complex patterns.
In contrast, attention mechanisms dynamically weigh different parts of the input data based on their importance or relevance. During the encoding phase, the attention-based autoencoder focuses on specific regions of the input data, allowing it to capture important features effectively. Similarly, during decoding, attention is used to highlight relevant parts of the latent representation to reconstruct the output.
This attention mechanism enables the model to handle long sequences more efficiently, as it can selectively attend to relevant information, avoiding unnecessary computations on less significant parts. As a result, attention-based autoencoders have shown superior performance in tasks involving natural language processing, speech recognition, machine translation, and time-series data analysis.
By incorporating attention, the model can better capture dependencies, context, and intricate patterns in the data, making it a powerful tool for tasks that require understanding and processing sequential information.
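One simple way to add attention to a convolutional autoencoder is shown below. This is an illustrative design of my own, not an implementation from any cited paper: the encoder output is treated as a sequence of feature vectors, and multi-head self-attention lets each time step weigh all others before decoding.

```python
import torch
import torch.nn as nn

# Sketch of an attention-augmented 1-D convolutional autoencoder:
# conv encoder -> self-attention over the encoded sequence -> decoder.
class AttentionAESketch(nn.Module):
    def __init__(self, ch=16, heads=4):
        super().__init__()
        self.encoder = nn.Conv1d(1, ch, kernel_size=4, stride=2, padding=1)
        self.attn = nn.MultiheadAttention(embed_dim=ch, num_heads=heads,
                                          batch_first=True)
        self.decoder = nn.ConvTranspose1d(ch, 1, kernel_size=4,
                                          stride=2, padding=1)

    def forward(self, x):
        z = torch.relu(self.encoder(x))         # (B, ch, L/2)
        seq = z.transpose(1, 2)                 # (B, L/2, ch) for attention
        attended, _ = self.attn(seq, seq, seq)  # each step attends to all
        return self.decoder(attended.transpose(1, 2))

x = torch.randn(2, 1, 64)   # (batch, channels, signal length)
out = AttentionAESketch()(x)
print(out.shape)            # torch.Size([2, 1, 64])
```

The attention layer is what lets the model relate distant parts of the signal, which a purely convolutional encoder can only do through stacked receptive fields.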
Various papers show that attention-based autoencoders perform better than traditional methods and multilayer autoencoders.
AGD-Autoencoder: Attention-Gated Deep Convolutional Autoencoder for Brain Tumor Segmentation
- Uses attention-based autoencoders to achieve better segmentation results.
Low-loss data compression using deep learning framework with attention-based autoencoder
- 25% reduced reconstruction loss than multi-layer autoencoder.
DeepComp: A Hybrid Framework for Data Compression Using Attention Coupled Autoencoder
- It outperforms multilayer autoencoders by 48%; DeepComp also surpasses the reconstruction performance of the multilayer autoencoder.
Fault-Attention Generative Probabilistic Adversarial Autoencoder for Machine Anomaly Detection
- Reduces loss in feature extraction
- Outperforms other methods.
That concludes the review of the paper. If you have comments, or want to learn more about the paper, you can find the original paper linked here. I am always open to suggestions, and for any queries you can contact me by email.