Healthcare IoT-Based Affective State Mining Using a Deep Convolutional Neural Network
ABSTRACT Human affects are complex phenomena, which are studied for pervasive healthcare and well-being. The legacy pen-and-paper-based affective state determination methods are limited in their scientific explanation of causes and effects. Therefore, owing to advances in intelligent technology, researchers are trying to apply advanced artificial intelligence (AI) methods to recognize individuals' affective states. To recognize, understand, and predict a human's affective state, domain experts have studied facial expressions, speech, social posts, neuroimages, and physiological signals. With the advancement of the Internet of Medical Things (IoMT) and wearable computing technology, on-body non-invasive medical sensor observations have become an effective source for studying users' affects or emotions. Therefore, this paper proposes an IoMT-based emotion recognition system for affective state mining. Human psychophysiological observations are collected through electromyography (EMG), electro-dermal activity (EDA), and electrocardiogram (ECG) medical sensors and analyzed through a deep convolutional neural network (CNN) to determine the covert affective state. Following Russell's circumplex model of affect, the five basic emotional states, i.e., happy, relaxed, disgusted, sad, and neutral, are considered for affective state mining. An experimental study is performed, and a benchmark dataset is used to analyze the performance of the proposed method. The high classification accuracy on the primary affective states justifies the performance of the proposed method.
INDEX TERMS Affective computing, convolutional neural network, emotion recognition, healthcare IoT,
Internet of Medical Things.
Therefore, affective states are inherently thought of as non-scientific, and affective disorders are assessed through psychological questionnaires, e.g., the Korean Mood Disorder Questionnaire (K-MDQ) [4]. According to neuroimaging research, the human brain consists of approximately 100 billion neurons and glial cells, forming a well-structured communication network among themselves [5]. As emotion is characterized by functional mental activity, determining the true emotional state is very challenging. However, with advances in technology, scientific methodologies and tools are successfully being applied in emotion recognition [6], [7], activity recognition and monitoring [8], [9], stress measurement [10], depression assessment [11], [12], and mental healthcare [13]–[15]. This research applies signal processing [5], big-data management [16], Internet of Medical Things (IoMT) [17], and advanced machine learning [18] methodologies for mining the affective states of individuals.

FIGURE 1. The five basic affective states in Russell's emotion circumplex.

The Geneva Emotion Wheel [19] represents 40 emotions, where valence and arousal levels are considered as the axes of a 2D representation. However, it is difficult to classify each of these 40 emotions from the natural science, medical science, and engineering perspectives, and therefore only four to six basic emotions are studied in the facial expression recognition and image processing domains. Similarly, following Russell's circumplex of emotion [20], this paper studies five basic emotions, namely Happy, Sad, Relaxed, Disgusted, and Neutral, as shown in Fig. 1.
Most of the existing studies detect affective states by employing facial expression recognition [21]–[23], mood recognition by cognitive analysis [24], speech pattern mining-based emotion recognition [25], sentiment analysis of social network posts [26], and psychophysiological observation-based affective mood classification [27]. However, facial expressions cannot fully express internal affective states, speech analysis is not feasible for continuous emotion analysis, individuals are not posting on social networks at every moment, and people may hide information in questionnaire-based affective state assessment. In contrast, an internal affective state can be mined through medical sensor-based physiological observations. Moreover, psychophysiological observation-based affective mood classification methods are feasible for continuous and real-time monitoring of affective states. Therefore, this research focuses on the analysis of arousal and valence [28] levels in physiological signals to mine hidden affective states or emotions. Here, IoMT-based [29] physiological sensors, i.e., electro-dermal activity (EDA), electromyography (EMG), and electrocardiogram (ECG) sensors, are applied to extract the valence and arousal levels of a user. However, the key challenge is mapping such medical sensor observations to user emotions, i.e., determining the biosensor signal patterns for each of the affective states. Therefore, a deep convolutional neural network (CNN) model is used for mining the affective states from large volumes of medical sensor observations.
The major contributions of this research are 1) an IoMT-based Affective State Mining (IASM) framework, 2) a testbed for real-time emotion recognition, and 3) a performance study on the DEAP dataset:
1) Affective state mining framework: A convolutional neural network (CNN)-based affective state mining framework is presented for real-time emotion recognition. The IoMT-based framework is composed of a Hadoop Distributed File System (HDFS)-based storage module, a biosignal processing and feature extraction module, and a distributed CNN-based affective state mining module. Unlike legacy CNNs, in the proposed novel CNN architecture, the extracted features of the signal processing unit and the high-level features from the last pooling layer are fused and fed into the fully connected layer to extract global discriminative features. Therefore, this affective state mining framework enables a platform for distributed processing and mining of the massive data streams generated by the on-body medical sensors of a user.
2) Performance study on the DEAP dataset: The proposed CNN-based affective mood mining approach is applied to the benchmark DEAP (Database for Emotion Analysis using Physiological Signals) dataset to study its performance. The higher accuracy in the classification of valence and arousal justifies the performance gain of the proposed affective state mining approach.
3) Real-time emotion recognition testbed: A testbed for real-time emotion recognition is developed via LuaJIT programming and the Torch scientific computing library. BioSignalPlux sensors and their API are used for collecting medical-grade sensor data from users. The trained model of the proposed CNN is applied for online emotion classification.
The remaining sections of this paper are organized as follows: Section II reviews the related works. Section III discusses the details of the IoMT-based affective state mining framework. A comprehensive discussion of the proposed emotion classification methodology and a performance study using the DEAP dataset are presented in Section IV. The prototype testbed implementation of real-time emotion recognition and its performance evaluation are presented in Section V. Section VI concludes the paper along with future directions.
II. RELATED WORKS
According to the survey presented in [30], human affects have been studied since the ancient Greek philosophers. In this ancient theory of affects, emotions are described as the causes of a change of mind. Beyond this, the theory of human emotions has been studied extensively in social psychology. Psychologists introduced self-assessment-based emotion measurement scales and defined different emotions as affective states. The Self-Assessment Manikin (SAM) [31] is one of the pioneering picture-oriented affective state measurement scales for measuring the arousal, pleasure, and dominance induced by certain stimuli. However, there are some impediments to self-report and questionnaire-based affective state measurement models: they are off-line and abstract methods, in which subjects can hide necessary information, provide misleading information, or fail to remember information. In a self-report, subjects may feel cautious about disclosing personal traits, or the questions may be misapprehended or misinterpreted. Moreover, it is also difficult to judge the significance of a solicited qualitative answer to a question in self-report-based methods of affective state determination. On the other hand, biomedical sensor-based emotion recognition can overcome the limitations mentioned above.

In addition to psychological emotion assessment scales, several scientific studies have been carried out to detect human emotions automatically. Facial expression-based emotion recognition is studied in [22], where the authors used a maximum entropy Markov model (MEMM) [15] to recognize basic emotions. A decision tree-based typical facial expression recognition (FER) system is proposed in [32]. Support vector machines (SVMs) were used in [33] for facial expression-based emotion recognition. A hidden Markov model (HMM)-based FER is studied in [34], where the authors extract emotions from time-sequential facial expression images of a video. However, such facial image analysis-based emotion recognition systems analyze facial expressions [35], not emotions. In other words, FER can measure the valence level but not the arousal level of human emotions.

To overcome the limitations of traditional self-assessment-based psychological emotion recognition and facial image analysis-based emotional expression recognition, body sensor network [36], [37] or physiological observation-based affective state recognition systems have been studied. AlZoubi et al. [38] used brain electroencephalogram (EEG) signals to determine the affective state. An SVM classifier is applied to discriminate ten emotion classes with a maximum of 55% accuracy. In [39], the higher-order crossings of EEG signals are analyzed for emotion recognition. Quadratic discriminant analysis (QDA) and an SVM classifier are used to discriminate six basic emotion classes. The single- and multi-channel EEG signal models achieve 62.3% and 83.33% classification accuracy, respectively. However, the EEG signal has a lower spatial resolution [40] and demands sophisticated methods to analyze the signal observations, which requires a laboratory or clinical environment for data collection and experimental study. Therefore, EEG signal-based affective state mining approaches are generally not feasible for an ambient assisted-living environment. Furthermore, unlike a fully functional 16- or 32-electrode EEG sensor, the EDA, ECG, and EMG sensors are wearable as a wrist or chest band, which makes them suitable for ubiquitous monitoring.

Liu et al. [41] proposed a physiology-based affect recognition system for patients with autism spectrum disorder (ASD). They used skin conductance (SC), ECG, EMG, and skin temperature (ST) sensors to analyze three different affective states. An SVM classifier is used for the successful classification of 82.9% of the affective states. However, the study was conducted solely on children with ASD and defined affective states such as anxiety, engagement, and liking. In a recent study [42], radio frequencies are used for human heartbeat recognition, and the measured heartbeat signals can be used for emotion recognition. That study considered four basic emotion classes and applied a 1-norm support vector machine (l1-SVM) classifier to achieve 72% classification accuracy in a person-independent experimental setup. However, heart-rate variability determination through radio frequency is a passive measurement approach, which is unable to recognize individual emotions in a crowd, e.g., in a hospital, apartment, or outdoor environment.

Koelstra et al. [43] proposed physiological signal-based emotion analysis, where the authors used 40 channels of biomedical and peripheral sensors, including EEG, ECG, and EMG signals [43]. That work is a pioneer in physiological signal-based emotion analysis and utilized a naive Bayes classifier to classify the valence and arousal levels. They obtained 61.8% and 65.1% accuracy in valence and arousal level classification, respectively. The authors used an exceedingly large number of sensors, including the complex 32-channel EEG signals, which requires an extensive environmental setup and is thus still not feasible for real-life usage. Therefore, this paper proposes a convolutional neural network (CNN) [44]-based emotion recognition method, in which wearable and lightweight electromyography (EMG), electro-dermal activity (EDA), and electrocardiogram (ECG) sensors are used to observe the physiological changes in different affective states. This research determines the valence and arousal [45] levels from physiological observations induced while watching video stimuli of different affective states. The affective valence is described through a scale from unpleasantness to pleasantness, whereas the affective arousal is defined through a range from calming to exciting [45].
FIGURE 3. Bio-signal processing and feature extraction: (a) ECG peak detection for HRV determination and QRS complex identification; (b) raw ECG signal.
FIGURE 6. The measured skin conductance levels (SCLs) of the electro-dermal activity (EDA) sensor.
FIGURE 7. The EDA signal feature determined through differentiating the measured skin conductance levels (SCLs).
Moreover, $\eta^O_n$ is the gradient of the error with respect to the input $O$ and can be determined through (10). In backward propagation, $\eta^O_n$ is determined in the convolution layer.

$$\eta^O_n = \frac{\partial E}{\partial O_n} = \frac{\partial E}{\partial P}\,\frac{\partial P}{\partial O_n} = \sum_{i=1}^{|W|} \eta^P_{n-i+1}\, W_i \tag{10}$$

The error signal is passed along the backward direction through the pooling layer as well and is determined by (11), where $g'$ is the derivative of the aggregate function $g$ of (7).

$$\eta^O_{(n-1)m+1:nm} = \eta^P_n\, g'_n \tag{11}$$
FIGURE 9. The mean integral of the EMG signal to estimate muscular efficiency.
In backward propagation, the determined error signal is passed through the activation layer (ReLU), followed by the convolution layer and the input layer. Therefore, the input layer receives $\eta^O_{(n-1)m+1:nm}$ as an input through the following equation (12), where $\bullet$ denotes the Hadamard product.

$$\eta^O = \left(\eta^P g' \bullet \mathrm{ReLU}'(P)\right) * W \tag{12}$$

Using (9), the gradient of the error $\frac{\partial E}{\partial W} = \nabla_W E$ of the full network is determined, and the network parameters are updated through (13), where $\zeta$ is the learning rate.

$$W = W - \zeta\, \nabla_W E \tag{13}$$
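For concreteness, the following NumPy sketch mirrors the backward signals of (10), (11), and (13) for a one-dimensional convolution layer followed by a pooling layer. It is a minimal illustration, assuming an average-pooling aggregate $g$ (so $g' = 1/m$); the names and shapes are ours, not the authors' implementation.

```python
import numpy as np

def conv_backward_input(eta_P, W):
    """Eq. (10): error w.r.t. the convolution input O,
    eta_O[n] = sum_i eta_P[n-i+1] * W[i], i.e., a 'full' convolution."""
    return np.convolve(eta_P, W, mode="full")  # length |P| + |W| - 1 = |O|

def pool_backward(eta_P, m):
    """Eq. (11): each pooled error eta_P[n] is spread over its m inputs,
    scaled by g' = 1/m for an average aggregate g."""
    return np.repeat(eta_P / m, m)

def sgd_update(W, grad_W, zeta=0.01):
    """Eq. (13): W = W - zeta * grad_W(E), with learning rate zeta."""
    return W - zeta * grad_W
```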
In the proposed IoMT-based affective state mining framework, the biosignal sample data are partitioned into training and testing sets. The described convolutional neural network (CNN) is used to train the proposed IASM model. To extract the high-level features, different kernels are used in the convolution layer to produce various feature maps. Additionally, to adopt non-linear properties into the decision function, the rectified linear unit (ReLU) activation is applied to the output of the convolution layer, and then the feature maps are down-sampled in the pooling layer. To achieve the convergence of the deep CNN model, several blocks of convolution, ReLU, and pooling operations are executed iteratively. The fully connected layer is embedded at the end of the CNN model. In the legacy CNN architecture, the outputs of the last pooling layer are fed into the fully connected layer. However, in the proposed CNN architecture, the extracted features of the signal processing unit and the high-level features from the last pooling layer are fused and fed into the fully connected layer to extract the global discriminative features. This feature fusion adds novelty to the proposed CNN-based IASM model. The output of the fully connected layer produces the predicted class labels of emotions. In the training phase, the parameters of the proposed CNN are updated through back-propagation and the stochastic gradient method. The trained CNN model is stored and applied for real-time emotion recognition. To analyze real-time emotions, the collected biosensor observations are fed into the trained CNN, which predicts the real-time arousal and valence levels of the induced affective state of the user.
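The feature-fusion step described above can be sketched in PyTorch as follows. This is only an analogue of the architecture (the testbed itself is written in Lua Torch), and the channel counts, kernel sizes, and the hand-crafted feature dimension n_handcrafted are illustrative assumptions:

```python
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    """Sketch of the proposed fusion idea: high-level features from the last
    pooling layer are concatenated with hand-crafted signal-processing
    features before the fully connected layers."""

    def __init__(self, n_channels=3, n_handcrafted=16, n_classes=5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(8),  # fixed-length high-level feature map
        )
        self.fc = nn.Sequential(
            nn.Linear(64 * 8 + n_handcrafted, 128), nn.ReLU(),
            nn.Linear(128, n_classes),  # five affective states
        )

    def forward(self, x, handcrafted):
        h = self.conv(x).flatten(1)                 # high-level CNN features
        fused = torch.cat([h, handcrafted], dim=1)  # feature fusion
        return self.fc(fused)
```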
IV. PERFORMANCE EVALUATION
The performance of the proposed emotion recognition approach, i.e., biomedical sensor observation-based affective state mining using a convolutional neural network, is studied using the benchmark DEAP dataset.

A. DESCRIPTION OF DATASET
The standard DEAP dataset [43] is used to evaluate the performance of the proposed IASM framework for affective state mining. DEAP is a limited-access dataset developed for physiological signal-based human emotion recognition. Data were collected from 32 subjects. In total, 40 one-minute video clips are used as stimuli.
FIGURE 10. The convolution and pooling operations of the CNN model for IoMT-based affective state mining.
The physiological observations of 32 EEG channels, 12 peripheral sensor channels, and one status channel are recorded for affective state recognition. However, this research used only three peripheral sensors, i.e., the electro-dermal activity (EDA), electromyography (EMG), and photoplethysmogram (PPG) observations, for evaluating the performance of the proposed affective state mining model. A 9-point Likert scale is used to collect the ratings of the arousal, valence, liking, and dominance levels of the video stimuli. Therefore, the dimension of the used dataset is 32 × 40 × 3 × 8064, where 32 is the number of subjects, 40 is the number of video stimuli, 3 is the number of sensor channels, and 8064 is the number of sample data points.
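For illustration, a per-subject slice of this 32 × 40 × 3 × 8064 layout could be prepared as below; the file name, array keys, and channel indices are placeholders rather than the official DEAP field names:

```python
import numpy as np

def load_subject(path, channel_idx=(0, 1, 2)):
    """Load one subject's trials and keep only the three peripheral
    channels (EDA, EMG, PPG) used in this study."""
    arrays = np.load(path, allow_pickle=True)  # hypothetical per-subject file
    data = arrays["data"]      # shape: (40 trials, n_channels, 8064 samples)
    labels = arrays["labels"]  # shape: (40 trials, 4 ratings)
    return data[:, channel_idx, :], labels

# Example: x, y = load_subject("s01.npz")  # x.shape == (40, 3, 8064)
```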
The performance of the proposed IoMT-based affective state mining approach (IASM) is shown in Fig. 11, which presents the receiver operating characteristic (ROC) curves of the five affective states in the DEAP dataset. Five-fold cross-validation is applied in this study. The area under the ROC curve (AUC) in the overall ROC plot is 0.9474, 0.9433, 0.9389, 0.9155, and 0.9112 for the Sad, Happy, Disgusted, Neutral, and Relaxed affective states, respectively.
FIGURE 11. The receiver operating characteristic (ROC) curves of five affective states.
FIGURE 12. The PPG signal pattern of the five basic affective states.
B. TESTBED IMPLEMENTATION
A testbed is implemented to evaluate the performance of the proposed IASM system via a convolutional neural network (CNN). The Torch 7.0 deep learning framework is used for the implementation, and the Lua programming language is used for the development of the CNN model. After installing Torch, LuaJIT is installed, which is a just-in-time compiler and interpreter for the Lua language. Moreover, for processing vast amounts of data, GPU programming is designed using the CUDA and cuDNN packages on top of the Torch interface. For the big-data loading interface, the PyTorch API is used, which enables a Python 3.0 interface for data loading and storing.
FIGURE 13. The EDA signal pattern of the five basic affective states.
FIGURE 14. The EMG signal pattern of the five basic affective states.
To expose the emotions of an individual, the video stimuli of the DEAP dataset are used. The EDA, ECG, PPG, and zEMG IoMT sensors are placed on the human body to collect biosignal observations while the subject is watching the video stimuli. As the electrocardiogram (ECG) and photoplethysmogram (PPG) sensors are complementary, the ECG sensor observations are used instead of the PPG observations used in the DEAP dataset. The significance of using the ECG signal over PPG is that the ECG facilitates the extraction of potential discriminative features, e.g., pNN50 and the Poincaré SD (see Section III(B)), beyond the normal heart rate variability. To prepare the training and testing dataset, the subjects are requested to rate their arousal and valence levels using a 9-point Likert scale while watching the video stimuli.
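The two ECG-derived features named above can be computed from a series of RR intervals; the sketch below uses the standard HRV definitions (the paper's exact formulas in Section III(B) are not reproduced here):

```python
import numpy as np

def pnn50(rr_ms):
    """Percentage of successive RR-interval differences exceeding 50 ms."""
    diffs = np.abs(np.diff(rr_ms))
    return 100.0 * np.mean(diffs > 50.0)

def poincare_sd(rr_ms):
    """Poincare-plot descriptors: SD1 (short-term) and SD2 (long-term)."""
    d = np.diff(rr_ms)
    sd1_sq = np.var(d, ddof=1) / 2.0
    sd2 = np.sqrt(2.0 * np.var(rr_ms, ddof=1) - sd1_sq)
    return np.sqrt(sd1_sq), sd2
```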
The collected PPG, EDA, and EMG signal patterns of a subject are shown in Fig. 12, Fig. 13, and Fig. 14, respectively. Referring to those figures, no individual biosensor observation shows a pattern that can distinguish the Happy, Relaxed, Disgusted, Sad, and Neutral moods. The figures demonstrate the difficulty of finding psychophysiological markers for affective state mining. Therefore, we need to combine the sensor observations to uncover patterns for emotion classification. Hence, the proposed non-linear deep convolutional neural network model of the IASM framework is trained and applied for identifying the real-time affective states of users.
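A minimal sketch of this online classification step, assuming the trained fusion model from the earlier sketch and an illustrative window of biosensor samples, could look as follows:

```python
import torch

STATES = ["Happy", "Relaxed", "Disgusted", "Sad", "Neutral"]

@torch.no_grad()
def classify_window(model, window, handcrafted):
    """Classify one biosensor window: `window` is a (channels, samples)
    tensor and `handcrafted` the matching feature vector."""
    model.eval()
    logits = model(window.unsqueeze(0), handcrafted.unsqueeze(0))
    return STATES[logits.argmax(dim=1).item()]
```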
TABLE 1. Confusion matrix of the proposed affective state mining approach (PAC) using the DEAP dataset [43] (unit: %).
The performance of the developed IASM prototype is shown in Table 1. The EDA observations induced very clear discriminative factors in the case of negative arousal, which is the key to sad emotions. Therefore, the proposed IASM framework shows its highest classification accuracy, 90%, in the case of mining sad emotions. The standard emotion circumplex of Russell does not define the neutral emotion through the arousal and valence axes. For simplicity, valence and arousal levels between 4 and 6 on the 9-point Likert scale are considered as the neutral emotion in the proposed IASM model. However, because of such a linear definition, the CNN classifier struggles to discriminate the neutral emotion from the other affective states and produces a classification accuracy of only 84%. In summary, the proposed IASM model achieves 87.5% classification accuracy overall, compared to the 72% of the radio frequency-based emotion analyzer [42], the 82.9% of the SVM-based emotion classifier [41], and the 61.8% valence and 65.1% arousal level classification accuracy of DEAP [43]. The CNN-based depth features of the proposed IASM play key roles in achieving the overall classification accuracy of 87.5%.
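The neutral band described above, together with the circumplex quadrants of Fig. 1, implies a simple labeling rule for the self-reported ratings; the following sketch is one plausible reading of that rule, not the authors' exact labeling code:

```python
def affective_state(valence, arousal):
    """Map 9-point valence/arousal ratings to the five basic states:
    a 4-6 band on both axes is Neutral (as in the text); the remaining
    quadrants follow Russell's circumplex (Fig. 1)."""
    if 4 <= valence <= 6 and 4 <= arousal <= 6:
        return "Neutral"
    if valence > 6:
        return "Happy" if arousal > 6 else "Relaxed"
    return "Disgusted" if arousal > 6 else "Sad"
```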
V. CONCLUSION
The wearable Internet-of-Medical-Things market is rapidly growing, especially for health status monitoring and athletic training. Affective state recognition from wearable biosensors can complement context-aware recommendation, mood stabilization, and stress and depression management, especially for mental well-being. This research used the observations of biomedical sensors, which are lightweight and readily available within consumer products, to infer human emotions. The proposed affective state mining method ensures a higher accuracy of 87.5% compared to the state-of-the-art physiology-based emotion recognition methods. The depth-level features, which are generated through a convolutional neural network, play key roles in discriminating the affective states. The prototype implementation justifies the suitability of using the IASM framework for real-time emotion recognition. Even though the accuracy of the proposed deep CNN-based affective state mining method is relatively improved, different approaches can still be taken to enhance the performance gain further. The biomedical sensor data are continuous and sequential; therefore, the use of a time-sequential deep learning architecture, e.g., a deep recurrent neural network, may enhance the classification accuracy.

REFERENCES
[1] D. Hill, Emotionomics: Leveraging Emotions for Business Success. London, U.K.: Kogan Page, 2010.
[2] M. Chen, P. Zhou, and G. Fortino, ‘‘Emotion communication system,’’ IEEE Access, vol. 5, pp. 326–337, 2017.
[3] R. W. Picard, ‘‘Affective computing,’’ M.I.T. Media Laboratory Perceptual Computing Section, Massachusetts Inst. Technol., Cambridge, MA, USA, Tech. Rep. 321, 1995. [Online]. Available: https://affect.media.mit.edu/pdfs/95.picard.pdf
[4] S. O. Bae, M. D. Kim, J. G. Lee, J.-S. Seo, S.-H. Won, Y. S. Woo, J.-H. Seok, W. Kim, S. J. Kim, K. J. Min, D.-I. Jon, Y. C. Shin, W. M. Bahk, and B.-H. Yoon, ‘‘Is it useful to use the Korean version of the mood disorder questionnaire for assessing bipolar spectrum disorder among Korean college students?’’ Asia–Pacific Psychiatry, vol. 6, no. 2, pp. 170–178, Jun. 2014.
[5] M. G. R. Alam, E. J. Cho, E.-N. Huh, and C. S. Hong, ‘‘Cloud based mental state monitoring system for suicide risk reconnaissance using wearable bio-sensors,’’ in Proc. 8th Int. Conf. Ubiquitous Inf. Manage. Commun., Jan. 2014, Art. no. 56.
[6] M. M. Hassan, M. G. R. Alam, M. Z. Uddin, S. Huda, A. Almogren, and G. Fortino, ‘‘Human emotion recognition using deep belief network architecture,’’ Inf. Fusion, vol. 51, pp. 10–18, Nov. 2019.
[7] M. Z. Uddin, M. M. Hassan, A. Almogren, M. Zuair, G. Fortino, and J. Torresen, ‘‘A facial expression recognition system using robust face features from depth videos and deep learning,’’ Comput. Electr. Eng., vol. 63, pp. 114–125, Oct. 2017.
[8] H. Ghasemzadeh, P. Panuccio, S. Trovato, G. Fortino, and R. Jafari, ‘‘Power-aware activity monitoring using distributed wearable sensors,’’ IEEE Trans. Human-Mach. Syst., vol. 44, no. 4, pp. 537–544, Aug. 2014.
[9] Z. Wang, D. Wi, R. Gravina, G. Fortino, Y. Jiang, and K. Tang, ‘‘Kernel fusion based extreme learning machine for cross-location activity recognition,’’ Inf. Fusion, vol. 37, pp. 1–9, Sep. 2017.
[10] C. Kappeler-Setz, F. Gravenhorst, J. Schumm, B. Arnrich, and G. Tröster, ‘‘Towards long term monitoring of electrodermal activity in daily life,’’ Pers. Ubiquitous Comput., vol. 17, no. 2, pp. 261–271, Feb. 2013.
[11] O. Mayora, ‘‘The MONARCA project for bipolar disorder treatment,’’ J. Cyber Therapy Rehabil., vol. 1, no. 4, pp. 14–15, 2011.
[12] H. Imaoka, K. Inoue, Y. Inoue, H. Hazama, T. Tanaka, and N. Yamane, ‘‘R-R intervals of ECG in depression,’’ Psychiatry Clin. Neurosci., vol. 39, no. 4, pp. 485–487, Dec. 1985.
[13] R. Gravina and G. Fortino, ‘‘Automatic methods for the detection of accelerative cardiac defense response,’’ IEEE Trans. Affect. Comput., vol. 7, no. 3, pp. 286–298, Jul./Sep. 2016.
[14] M. G. R. Alam, S. F. Abedin, M. A. Ameen, and C. S. Hong, ‘‘Web of objects based ambient assisted living framework for emergency psychiatric state prediction,’’ Sensors, vol. 16, no. 9, p. 1431, 2016.
[15] M. G. R. Alam, R. Haw, S. S. Kim, M. A. K. Azad, S. F. Abedin, and C. S. Hong, ‘‘EM-Psychiatry: An ambient intelligent system for psychiatric emergency,’’ IEEE Trans. Ind. Informat., vol. 12, no. 6, pp. 2321–2330, Dec. 2016.
[16] G. Harerimana, B. Jang, J. W. Kim, and H. K. Park, ‘‘Health big data analytics: A technology survey,’’ IEEE Access, vol. 6, pp. 65661–65678, 2018.
[17] C. Xie, P. Yang, and Y. Yang, ‘‘Open knowledge accessing method in IoT-based hospital information system for medical record enrichment,’’ IEEE Access, vol. 6, pp. 15202–15211, 2018.
[18] M. Z. Uddin and M. M. Hassan, ‘‘Activity recognition for cognitive assistance using body sensors data and deep convolutional neural network,’’ IEEE Sensors J., to be published.
[19] V. Sacharin, K. Schlegel, and K. R. Scherer, ‘‘Geneva Emotion Wheel rating study,’’ Swiss Center Affect. Sci., Univ. Geneva, Geneva, Switzerland, Tech. Rep., 2012. [Online]. Available: https://archive-ouverte.unige.ch/unige:97849
[20] J. A. Russell, ‘‘A circumplex model of affect,’’ J. Personality Social Psychol., vol. 39, no. 6, pp. 1161–1178, 1980.
[21] M. Z. Uddin, M. M. Hassan, A. Almogren, A. Alamri, M. Alrubaian, and G. Fortino, ‘‘Facial expression recognition utilizing local direction-based robust features and deep belief network,’’ IEEE Access, vol. 5, pp. 4525–4536, 2017.
[22] M. H. Siddiqi, M. G. R. Alam, C. S. Hong, A. M. Khan, and H. Choo, ‘‘A novel maximum entropy Markov model for human facial expression recognition,’’ PLoS ONE, vol. 11, no. 9, Sep. 2016, Art. no. e0162702.
[23] M. Z. Uddin, W. Khaksar, and J. Torresen, ‘‘Facial expression recognition using salient features and convolutional neural network,’’ IEEE Access, vol. 5, pp. 26146–26161, 2017.
[24] D. C. Ong, J. Zaki, and N. D. Goodman, ‘‘Affective cognition: Exploring lay theories of emotion,’’ Cognition, vol. 143, pp. 141–162, Oct. 2015.
[25] P. Gangamohan, S. R. Kadiri, and B. Yegnanarayana, ‘‘Analysis of emotional speech—A review,’’ in Toward Robotic Socially Believable Behaving Systems, vol. 1. Berlin, Germany: Springer, 2016, pp. 205–238.
[26] A. A. A. Esmin, R. L. De Oliveira, and S. Matwin, ‘‘Hierarchical classification approach to emotion recognition in Twitter,’’ in Proc. 11th Int. Conf. Mach. Learn. Appl., vol. 2, Dec. 2012, pp. 381–385.
[27] J.-J. Cabibihan and S. S. Chauhan, ‘‘Physiological responses to affective tele-touch during induced emotional stimuli,’’ IEEE Trans. Affective Comput., vol. 8, no. 1, pp. 108–118, Jan./Mar. 2015.
[28] S. P. Robbins, T. Judge, and T. T. Campbell, Organizational Behaviour. Upper Saddle River, NJ, USA: Prentice-Hall, 2010.
[29] S. F. Abedin, M. G. R. Alam, R. Haw, and C. S. Hong, ‘‘A system model for energy efficient green-IoT network,’’ in Proc. Int. Conf. Inf. Netw. (ICOIN), Jan. 2015, pp. 177–182.
[30] D. Konstan, The Emotions of the Ancient Greeks: Studies in Aristotle and Classical Literature, vol. 5. Toronto, ON, Canada: Toronto Univ. Press, 2006.
[31] M. M. Bradley and P. J. Lang, ‘‘Measuring emotion: The self-assessment manikin and the semantic differential,’’ J. Behav. Therapy Experim. Psychiatry, vol. 25, no. 1, pp. 49–59, Mar. 1994.
[32] S. Mohseni, H. M. Kordy, and R. Ahmadi, ‘‘Facial expression recognition using DCT features and neural network based decision tree,’’ in Proc. ELMAR, Sep. 2013, pp. 361–364.
[33] T. Ahsan, T. Jabid, and U.-P. Chong, ‘‘Facial expression recognition using local transitional pattern on Gabor filtered facial images,’’ IETE Tech. Rev., vol. 30, no. 1, pp. 47–52, Sep. 2013.
[34] M. Z. Uddin, J. J. Lee, and T.-S. Kim, ‘‘An enhanced independent component-based human facial expression recognition from video,’’ IEEE Trans. Consum. Electron., vol. 55, no. 4, pp. 2216–2224, Nov. 2009.
[35] M. Shoyaib, M. Abdullah-Al-Wadud, S. M. Z. Ishraque, and O. Chae, ‘‘Facial expression classification based on Dempster–Shafer theory of evidence,’’ in Proc. Belief Functions: Theory Appl. Berlin, Germany: Springer, 2012, pp. 213–220.
[36] G. Fortino, R. Giannantonio, R. Gravina, P. Kuryloski, and R. Jafari, ‘‘Enabling effective programming and flexible management of efficient body sensor network applications,’’ IEEE Trans. Human-Mach. Syst., vol. 43, no. 1, pp. 115–133, Jan. 2013.
[37] R. Gravina, P. Alinia, H. Ghasemzadeh, and G. Fortino, ‘‘Multi-sensor fusion in body sensor networks: State-of-the-art and research challenges,’’ Inf. Fusion, vol. 35, pp. 68–80, May 2017.
[38] O. AlZoubi, R. A. Calvo, and R. H. Stevens, ‘‘Classification of EEG for affect recognition: An adaptive approach,’’ in Proc. Australas. Conf. Artif. Intell., vol. 5866. Berlin, Germany: Springer, 2009, pp. 52–61.
[39] P. C. Petrantonakis and L. J. Hadjileontiadis, ‘‘Emotion recognition from EEG using higher order crossings,’’ IEEE Trans. Inf. Technol. Biomed., vol. 14, no. 2, pp. 186–197, Mar. 2010.
[40] E. D. Kondylis, T. A. Wozny, W. J. Lipski, A. Popescu, V. J. DeStefino, B. Esmaeili, V. K. Raghu, A. Bagic, and R. M. Richardson, ‘‘Detection of high-frequency oscillations by hybrid depth electrodes in standard clinical intracranial EEG recordings,’’ Front. Neurol., vol. 5, p. 149, Aug. 2014.
[41] C. Liu, K. Conn, N. Sarkar, and W. Stone, ‘‘Physiology-based affect recognition for computer-assisted intervention of children with autism spectrum disorder,’’ Int. J. Human-Comput. Stud., vol. 66, no. 9, pp. 662–677, Sep. 2008.
[42] M. Zhao, F. Adib, and D. Katabi, ‘‘Emotion recognition using wireless signals,’’ in Proc. 22nd Annu. Int. Conf. Mobile Comput. Netw., Oct. 2016, pp. 95–108.
[43] S. Koelstra, C. Muhl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, and I. Patras, ‘‘DEAP: A database for emotion analysis using physiological signals,’’ IEEE Trans. Affective Comput., vol. 3, no. 1, pp. 18–31, Jan./Mar. 2012.
[44] S. Kiranyaz, T. Ince, and M. Gabbouj, ‘‘Real-time patient-specific ECG classification by 1-D convolutional neural networks,’’ IEEE Trans. Biomed. Eng., vol. 63, no. 3, pp. 664–675, Mar. 2016.
[45] M. M. Bradley and P. J. Lang, ‘‘Affective reactions to acoustic stimuli,’’ Psychophysiology, vol. 37, no. 2, pp. 204–215, Mar. 2000.
[46] P. Pace, G. Aloi, R. Gravina, G. Caliciuri, G. Fortino, and A. Liotta, ‘‘An edge-based architecture to support efficient applications for healthcare industry 4.0,’’ IEEE Trans. Ind. Informat., vol. 15, no. 1, pp. 481–489, Jan. 2019.
[47] K. K. Wong, Methods in Research and Development of Biomedical Devices. Singapore: World Scientific, 2013.
[48] Galvanic Skin Response: The Complete Pocket Guide, iMotions, 2017. [Online]. Available: https://imotions.com/blog/galvanic-skin-response/
[49] F. Agrafioti, D. Hatzinakos, and A. K. Anderson, ‘‘ECG pattern analysis for emotion detection,’’ IEEE Trans. Affective Comput., vol. 3, no. 1, pp. 102–115, Jan./Mar. 2012.
[50] C. Vera-Munoz, L. Pastor-Sanz, G. Fico, M. T. Arredondo, F. Benuzzi, and A. Blanco, ‘‘A wearable EMG monitoring system for emotions assessment,’’ in Proc. Probing Exper., 2008, pp. 139–148.
[51] G. Fortino, S. Galzarano, R. Gravina, and W. Li, ‘‘A framework for collaborative computing and multi-sensor data fusion in body sensor networks,’’ Inf. Fusion, vol. 22, pp. 50–70, Mar. 2015.
[52] A. Andreoli, R. Gravina, R. Giannantonio, P. Pierleoni, and G. Fortino, ‘‘SPINE-HRV: A BSN-based toolkit for heart rate variability analysis in the time-domain,’’ in Proc. Wearable Auton. Biomed. Devices Syst. Smart Environ. Berlin, Germany: Springer, 2010, pp. 369–389.

MD. GOLAM RABIUL ALAM (S'15–M'17) received the B.S. and M.S. degrees in computer science and engineering and information technology, respectively, and the Ph.D. degree in computer engineering from Kyung Hee University, South Korea, in 2017. He served as a Postdoctoral Researcher with the Computer Science and Engineering Department, Kyung Hee University, from 2017 to 2018. He is currently an Associate Professor with the Computer Science and Engineering Department, BRAC University, Bangladesh. His research interests include healthcare informatics, mobile cloud and edge computing, ambient intelligence, and persuasive technology. He is a member of the IEEE IES, CES, CS, SPS, CIS, and ComSoc. He is also a member of the Korean Institute of Information Scientists and Engineers (KIISE) and received several best paper awards from prestigious conferences.

SARDER FAKHRUL ABEDIN (S'18) received the B.S. degree in computer science from Kristianstad University, Kristianstad, Sweden, in 2013. He is currently pursuing the Ph.D. degree in computer science and engineering with Kyung Hee University, South Korea. His research interests include Internet of Things network management, cloud computing, fog computing, and wireless sensor networks. He is a member of the Korean Institute of Information Scientists and Engineers.

SEUNG IL MOON received the B.S., M.S., and Ph.D. degrees from the Department of Computer Engineering, Kyung Hee University, South Korea, in 2011, 2013, and 2018, respectively, where he is currently serving as a Postdoctoral Researcher with the Computer Science and Engineering Department. He has authored several national and international journal and conference papers. His research interests include ambient intelligent living, advanced wireless network protocols, and SDN networks. He is a member of KIISE.

ASHIS TALUKDER (S'17–M'18) received the B.S. and M.S. degrees in computer science and engineering from the University of Dhaka, Bangladesh. He is currently pursuing the Ph.D. degree with the Department of Computer Science and Engineering, Kyung Hee University, South Korea. He has been an Assistant Professor with the Department of Management Information Systems (MIS), University of Dhaka, since 2009. His research interests include social networks, influence maximization, network optimization, and data mining. He is a member of the IEEE Communication Society (IEEE ComSoc), SIGAPP, the Association for Information Systems (AIS), Bangladesh Chapter, and the Internet Society (ISOC), Bangladesh Chapter. He is also a member of the Korean Institute of Information Scientists and Engineers (KIISE).

CHOONG SEON HONG (S'95–M'97–SM'11) received the B.S. and M.S. degrees in electronic engineering from Kyung Hee University, Seoul, South Korea, in 1983 and 1985, respectively, and the Ph.D. degree from Keio University, Minato, Japan, in 1997. In 1988, he joined KT, where he was involved in broadband networks as a Technical Staff Member. In 1993, he joined Keio University. He was with the Telecommunications Network Laboratory, KT, as a Senior Member of Technical Staff and the Director of the Networking Research Team, until 1999. Since 1999, he has been a Professor with the Department of Computer Engineering, Kyung Hee University. His research interests include the future Internet, ad hoc networks, network management, and network security. He is a member of ACM, IEICE, IPSJ, KIISE, KICS, KIPS, and OSIA. He has served as the General Chair, the TPC Chair/Member, or an Organizing Committee Member of international conferences, such as NOMS, IM, APNOMS, E2EMON, CCNC, ADSN, ICPP, DIM, WISA, BcN, TINA, SAINT, and ICOIN. He is currently an Associate Editor of the IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, the International Journal of Network Management, and the Journal of Communications and Networks, and an Associate Technical Editor of the IEEE Communications Magazine.