
[ ARTICLE ]
The Journal of Korea Robotics Society - Vol. 18, No. 2, pp. 203-215
Abbreviation: J. Korea Robot. Soc.
ISSN: 1975-6291 (Print) 2287-3961 (Online)
Print publication date 31 May 2023
Received 27 Dec 2022; Revised 06 Feb 2023; Accepted 07 Feb 2023
DOI: https://doi.org/10.7746/jkros.2023.18.2.203

The Implementation and Analysis of Facial Expression Customization for a Social Robot
Jiyeon Lee1, * ; Haeun Park2, * ; Temirlan Dzhoroev1 ; Byounghern Kim3 ; Hui Sung Lee
1M.S. Student, Design Department, UNIST, Ulsan, Korea (delay0320@unist.ac.kr, dzhoroev@unist.ac.kr)
2Ph.D Student, Creative Design Engineering, UNIST, Ulsan, Korea (haeunpark@unist.ac.kr)
3M.S., Creative Design Engineering, UNIST, Ulsan, Korea (byounghernkim@unist.ac.kr)

소셜 로봇의 표정 커스터마이징 구현 및 분석
이지연1, * ; 박하은2, * ; Temirlan Dzhoroev1 ; 김병헌3 ; 이희승
Correspondence to: Hui Sung Lee, Associate Professor, Design Department, UNIST, Ulsan, Korea (huisung.lee@unist.ac.kr)
* Jiyeon Lee and Haeun Park contributed equally to this work.


Copyright ⓒ KROS

Abstract

Because social robots are mainly used by individuals, human-robot relationships (HRR) are more important for them than for other types of robots. Emotional expression in robots is one of the key factors that imbue HRR with value, and emotions are mainly expressed through the face. However, because of cultural and preference differences, the desired robot facial expressions differ subtly from user to user. We expected that a robot facial expression customization tool could mitigate these differences and consequently improve HRR. To test this, we created a robot facial expression customization tool and a prototype robot, and we implemented an emotion engine suitable for generating robot facial expressions in a dynamic human-robot interaction setting. In our experiments, users agreed that the availability of a customized version of the robot has a more positive effect on HRR than a predefined version. Moreover, we offer recommendations for future improvements to the customization process for robot facial expressions.


Keywords: Human-Robot Interaction, Human-Robot Relationship, Robot Facial Expression, Customization

1. Introduction

The scope of application of social robots is gradually expanding into areas such as education[1,2], caring professions[3,4], companion services[5], and pets[6,7]. In these roles, robots interact with individuals or small groups of people. The process of forming human-robot relationships (HRR) is an important factor to take into account for robots that have a small group of users[8]. For HRR to be formed seamlessly, the emotions expressed by the robot are important[9,10]. Moreover, many studies have highlighted the importance of emotional intelligence, including emotional thinking and empathy[11-13]. Robot facial expressions are the dominant means by which robots can express their emotions[14]. However, individual users may respond very differently to the same robot facial expression. Therefore, a function that customizes robot facial expressions based on the individual preferences of a particular user would facilitate the establishment of strongly connected HRR. A particular advantage of such a customization function is that the user experience can be improved as the user customizes the robot with preferred facial expressions[15]. However, users may not clearly know which robot facial expressions they want, and they need to spend sufficient time customizing them. Existing studies on customizing robot facial expressions[16,17] have not addressed dynamic changes in robot facial expressions. Furthermore, evaluations of the process of customizing robot facial expressions and of interactions with the customized versions of robots are sparse in the literature.

To address these shortcomings, we implemented a customization tool that allowed a user to customize the robot facial expressions and created a prototype robot that could interact with its user. Unlike robot face customization, robot facial expression customization does not change the basic shape or the number of elements that make up the face; the basic face remains the same. Instead, users can customize the robot facial expression, which changes the size, position, or shape of each element according to the robot's emotions. By applying a user's customized robot facial expressions to the prototype robot, the user was enabled to interact with the customized version instead of the predefined version of the robot. When the robot's facial expression responded to user-robot interaction, it was changed dynamically by an emotion engine. In addition, we conducted experiments to analyze our proposed customization process and explored the results achieved by users' application of our process by comparing the predefined and customized versions of the prototype robot. Ekman's six basic emotions were used in the experiment[18]. As part of our analysis of the customization process, the difficulty, elapsed time, and satisfaction associated with each emotion were measured, as well as the overall interest and value that users expressed. To compare the two versions of the robot, we measured the degrees of intimacy, empathy, and enjoyment that users experienced when interacting with each version. This study proposes a tool for the customization of robot facial expressions. We discuss the key factors included in the tool and offer insights that may be referenced when designing further developments of this tool in the future.


2. Related work
2.1 Generation of robot facial expressions

Social robots use various means such as facial expressions[19-22], sounds[23-25], colors[23,24,26], and motions[23,24,26-28] to express emotions. Facial expressions convey a considerable proportion of the emotional information[14,29]. There are two ways for a robot to interact through robot facial expressions: it can either imitate the facial expressions of its user or generate its own emotion through a decision-making process. For social interaction, which is the main function of social robots, it does not suffice to imitate human facial expressions alone. Therefore, studies that explore how robots can express their own specific emotions, distinct from the user's emotion, may play an important role in unlocking a richer user experience of social robots[10]. A number of studies have considered the concept of an emotion space that allows a robot to generate its own emotions[30,31]. A robot using an emotion space requires a process that can generate robot facial expressions corresponding to the robot's intended emotional expression[32]. In addition, many attempts have been made to obtain a more natural mode of expression on a robot's face, including studies that implement emotional expression naturally by applying dynamical processes to the transitions of robot emotion[33,34]. Park et al. added physiological movement to create more natural and diverse robot facial expressions[35,36]. Prajapati et al. implemented a process that allowed a user's facial expression to naturally affect the robot's responsive facial expression through kinetic emotions that use emotional information extracted from the user's face[37].

2.2 Preference of robot facial expression

In general, a particular robot product has a pre-set range of facial expressions, limiting the robot’s ability to satisfy the preferences of various users and to form deeply-connected HRR. The factors that influence the preferences of any particular user are as follows:

  • The preferred robot facial expressions vary depending on whether the user's personality is introverted or extroverted[38-40].
  • The degree of vitality and abundance of the preferred facial expressions depends on the user's nationality[41].
  • Certain people prefer robots with funny expressions[42].
  • The preference for an asymmetric facial expression depends on the user's dominant hand (left or right)[43]. Asymmetric robot facial expressions can enhance human-robot interaction, for example by encouraging communication in interactions between robots and teenagers or by inducing favorable responses among consumers[17,44].
  • Cultural interpretations also influence users' recognition of robot facial expressions. Barrett et al. argued that people actively construct classifications of facial movements using culturally learned emotional concepts[45,46].

Preferred robot facial expressions depend on factors such as personality, nationality, and culture. It is not straightforward to establish robot facial expressions that will satisfy all users. This problem may be mitigated by an appropriate customization tool that can flexibly change a robot facial expression according to the user’s desire.

2.3 Customization of robot expression

Certain studies have allowed users to change a robot's expression flexibly through customization. For example, teenagers have been involved in robot customization[1]. Furthermore, FLEXI, a robot with customizable body shapes and facial expressions, has been designed to meet the needs of various users[16]. Another example demonstrated that a customization process increased intimacy with robots and assisted in forming social relationships[47]. However, most studies have customized only static facial expressions and have not implemented dynamic changes in a robot's facial expression. In addition, the customization process and the user experience of the generated robot expressions have not been analyzed in depth. This study addresses both of these gaps.


3. Implementation
3.1 Prototype robot

The prototype robot shown in [Fig. 1] was used in this study. The robot enabled users to interact with a social robot that expressed its emotions through its face. The prototype was designed to be simple in order to minimize the influence of specific appearance features on the results. The robot had a built-in 4-inch LCD screen and displayed the robot facial expression via an HDMI connection to a PC. To blend in with the screen, the overall shape of the prototype was designed using the metaphor of a helmet. The robot facial expression shown in [Fig. 2] is a neutral facial expression with all control points at zero. The robot facial expression included ten control points, each designed to control a specific element of emotional expression. The elements that constitute the robot's face are its two eyes and mouth, which convey the most emotional information. The ten control points were denoted CPi (i = 1, ⋯, 10), and each control point had a normalized value between -1 and 1. The robot's eyes included pupils and eyelids. Both pupils always pointed in the same direction. The eyelid structure was rendered in a dark color and was designed to serve as an eyebrow as well. These control points allowed the robot to express various facial expressions with a simple underlying structure. The shapes of the eyes were determined by seven control points (CP1 ~ CP7):


[Fig. 1] 
The prototype robot with a 4-inch display (robot size: 21 cm × 18 cm × 12 cm)


[Fig. 2] 
Prototype face control point (CP), 1: eye size, 2: pupil size, 3: angle of eyelid, 4: height of left eyelid, 5: height of right eyelid, 6: vertical location of pupil, 7: horizontal location of pupil, 8: mouth width, 9: mouth height, 10: mouth corner height. All CPs are set to zero in this example

  • CP1 (eye size): This could be adjusted from 80 ~ 120% of the default eye size.
  • CP2 (pupil size): This could be adjusted from 75 ~ 125% of the default pupil size.
  • CP3 (angle of eyelid): Relative to the horizontal axis of the screen, eyelids could rotate from -30 ~ +30 degrees. The angles of the left and right eyelids were reversed to make the angles of the two eyelids symmetrical.
  • CP4, CP5 (height of left and right eyelids): By adjusting the distance between the eyelid and the center of the eye, the degree of eyelid closure could be controlled (-1: fully closed, 1: fully opened).
  • CP6, CP7 (vertical and horizontal positions of pupils): CP6 determined the y-position and CP7 determined the x-position. When CP6 and CP7 were both equal to 1, the pupil met the edge of the eye in the northeast direction of the eye.

The shape of the mouth was determined by three control points (CP8 ~ CP10); a short sketch of how the normalized control point values map to the ranges above is given after this list:

  • CP8 (mouth width): This could be adjusted from 30 ~ 170% of the default mouth width.
  • CP9 (mouth height): This could be adjusted from 0 ~ 200% of the default mouth height.
  • CP10 (mouth corner height): When the value was 0, the corner of the mouth was located in the center of the mouth height. The corners of the mouth moved upward when this value was positive and downward when the value was negative.
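
To make this parameterization concrete, the following minimal Python sketch shows how the normalized control point values could be clamped and mapped onto the percentage and degree ranges listed above. It is our illustration, not the authors' front-end code (which is written in JavaScript React); the class, function names, and the example value CP10 = 0.8 are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

def clamp(x: float, lo: float = -1.0, hi: float = 1.0) -> float:
    """Keep a normalized control point value inside [-1, 1]."""
    return max(lo, min(hi, x))

def lerp(lo: float, hi: float, t: float) -> float:
    """Map t in [-1, 1] linearly onto [lo, hi] (t = 0 gives the midpoint)."""
    return lo + (hi - lo) * (clamp(t) + 1.0) / 2.0

@dataclass
class FaceControlPoints:
    """CP1..CP10 as a list of normalized values; all zeros is the neutral face."""
    cp: List[float] = field(default_factory=lambda: [0.0] * 10)

    def eye_size_pct(self) -> float:       # CP1: 80-120 % of the default eye size
        return lerp(80.0, 120.0, self.cp[0])

    def pupil_size_pct(self) -> float:     # CP2: 75-125 % of the default pupil size
        return lerp(75.0, 125.0, self.cp[1])

    def eyelid_angle_deg(self) -> float:   # CP3: -30 to +30 degrees (mirrored on the right eye)
        return lerp(-30.0, 30.0, self.cp[2])

    def mouth_width_pct(self) -> float:    # CP8: 30-170 % of the default mouth width
        return lerp(30.0, 170.0, self.cp[7])

    def mouth_height_pct(self) -> float:   # CP9: 0-200 % of the default mouth height
        return lerp(0.0, 200.0, self.cp[8])

# Example: a neutral face with slightly raised mouth corners (CP10 = 0.8)
face = FaceControlPoints()
face.cp[9] = 0.8
print(face.eye_size_pct(), face.eyelid_angle_deg(), face.mouth_width_pct())
```
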
3.2 Robot face simulator

As shown in [Fig. 3], the robot face simulator consisted of a front-end and a back-end. The front-end was implemented with JavaScript React and rendered the robot facial expression in the browser environment. The back-end contained an emotion engine written in Python. In order to express emotion dynamically, the emotion engine received a stimulus signal as an input and calculated the value of each control point in response. Stimulus signals were transmitted from the front-end to the back-end, and control point values were returned from the back-end to the front-end. The robot face simulator operated at a sampling frequency of 100 Hz.


[Fig. 3] 
Structure of robot face simulator

The operation of the robot face simulator is represented by the following set of equations and their clarifying descriptions. First, we present the stimulus signal vector:

$\mathbf{r} = [\gamma_1\ \gamma_2\ \cdots\ \gamma_i\ \cdots\ \gamma_6]^T, \quad 0 \le \gamma_i \le 1, \quad 0 \le \lVert\mathbf{r}\rVert \le 1$  (1)

In the front-end, when a user pressed a button corresponding to a certain emotion, a stimulus corresponding to that emotion was generated and transmitted to the back-end. The stimulus signal vector (r) in Eq. (1) is a 6-dimensional vector with components γ1 to γ6. The index i in γi represents the emotion number, with the order proceeding through sadness, happiness, anger, disgust, fear, and surprise. For example, when the 'happiness' button was pressed by a user, the value of γ2 was maintained at 1 for about 2 seconds.
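
The following minimal Python sketch illustrates this pulse behavior. It is our illustration rather than the authors' back-end code; the function names, the use of NumPy, and the timing mechanism are assumptions.

```python
import time
import numpy as np

EMOTIONS = ["sadness", "happiness", "anger", "disgust", "fear", "surprise"]
PULSE_DURATION_S = 2.0      # the paper states the component is held at 1 for about 2 s

_active_until = {}          # emotion index -> time until which gamma_i stays at 1

def press_button(emotion: str) -> None:
    """Register a button press for one of the six basic emotions."""
    _active_until[EMOTIONS.index(emotion)] = time.monotonic() + PULSE_DURATION_S

def stimulus_vector() -> np.ndarray:
    """Return r = [gamma_1 ... gamma_6]^T at the current instant."""
    now = time.monotonic()
    r = np.zeros(6)
    for i, until in _active_until.items():
        if now < until:
            r[i] = 1.0
    return r

press_button("happiness")
print(stimulus_vector())    # gamma_2 = 1 while the 2-second pulse is active
```
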

The stimulus signal vector (r) received by the back-end was used as an input value for the emotion engine. The emotion engine used in the back-end referenced a mental model[33] and a linear affect space model[48]. Second, we present the force vector:

$\mathbf{s} = K\mathbf{r} = [s_1\ s_2\ \cdots\ s_i\ \cdots\ s_6]^T$  (2)

The force vector (s) is calculated by multiplying the stimulus signal vector (r) by K, as expressed in Eq. (2)[48]. This vector acts as a force applied to an emotion in the emotion dynamics. Next, we present the equation of emotion:

$M\ddot{\mathbf{e}} + C\dot{\mathbf{e}} + K\mathbf{e} = \mathbf{s}$  (3)

Eq. (3) expresses the second-order equation of emotion used in Miwa’s mental model[33]. The force vector (s) drives the emotion vector (e) in the emotion dynamics. The emotion vector (e) represents the coordinate position of the emotion in the emotion space. The values of the parameters in Eq. (3) are as follows:

$M = 0.5, \quad C = 10, \quad K = 72$  (4)

The values of M (inertia), C (viscosity), and K (elasticity) given in Eq. (4) determine the degree to which the robot facial expression changes naturally. These values can be adjusted, but in this experiment we fixed them at heuristically determined values to focus on the customization of facial expressions. Finally, we present the control point vector:

$\mathbf{p}_{10\times1} = T_{10\times6}\,\mathbf{e}_{6\times1} = [CP_1\ CP_2\ \cdots\ CP_i\ \cdots\ CP_{10}]^T$  (5)

In Eq. (5), T is a transformation matrix that converts the 6-dimensional emotion vector (e) into the 10-dimensional control point vector (p). The T matrix is a collection of control point values for each of the six emotions obtained when a user completes the customization process. Therefore, if the T matrix changes when the robot expresses the same emotion, the control point vector (p) also changes. Finally, the resulting control point vector (p) is transmitted to the front-end for creating the robot facial expression.

This model renders facial expressions more naturally and allows the changes between successive emotional expressions to be expressed dynamically. In addition, once a user has a customized T matrix, dynamic robot facial expressions can be generated immediately. These functions allow users to customize their own robots without any additional work from the developer.
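
To make the pipeline from Eqs. (2)-(5) concrete, the following minimal Python sketch integrates the second-order emotion dynamics at the simulator's 100 Hz rate and maps the emotion vector to control points through a customized T matrix. It is our reconstruction from the equations, not the authors' implementation: the semi-implicit Euler integration scheme, the class and variable names, and the reuse of the scalar K from Eq. (4) as the gain in Eq. (2) (which makes the emotion vector converge toward the stimulus vector) are all assumptions.

```python
import numpy as np

M, C, K = 0.5, 10.0, 72.0        # inertia, viscosity, elasticity from Eq. (4)
DT = 0.01                        # 100 Hz sampling period of the simulator

class EmotionEngine:
    def __init__(self, T: np.ndarray):
        assert T.shape == (10, 6), "T maps the 6-D emotion vector to the 10 control points"
        self.T = T                    # customized transformation matrix from Eq. (5)
        self.e = np.zeros(6)          # emotion vector (position in emotion space)
        self.e_dot = np.zeros(6)      # velocity of the emotion vector

    def step(self, r: np.ndarray) -> np.ndarray:
        """Advance the dynamics by one 10 ms step and return the control point vector p."""
        s = K * r                                        # Eq. (2): force vector
        e_ddot = (s - C * self.e_dot - K * self.e) / M   # Eq. (3) solved for the acceleration
        self.e_dot += e_ddot * DT                        # semi-implicit Euler integration
        self.e += self.e_dot * DT
        return self.T @ self.e                           # Eq. (5): control points CP1..CP10

# Usage: a hypothetical T matrix in which 'happiness' only raises the mouth corners (CP10),
# driven by a 2-second happiness pulse (gamma_2 = 1) as described for Eq. (1).
T = np.zeros((10, 6))
T[9, 1] = 1.0
engine = EmotionEngine(T)
for step in range(300):              # simulate 3 seconds at 100 Hz
    r = np.zeros(6)
    if step < 200:                   # stimulus held for the first 2 seconds
        r[1] = 1.0
    p = engine.step(r)
    if step in (99, 299):            # report the face at 1 s and 3 s
        print(f"t = {(step + 1) * DT:.1f} s, CP10 = {p[9]:.3f}")
```
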

3.3 Customization tool

In order to customize the robot facial expressions, the participants adjusted the robot's control points for each emotion. As shown in [Fig. 4], the customization tool was implemented in the front-end to enable experimentation in the browser environment. Each control point could be adjusted using a slide bar, with values varying from -1 to 1. The emotion buttons were located in the lower left corner of the monitor. When an emotion button was pressed, 'selected' was displayed and the button was highlighted in green.


[Fig. 4] 
The customization tool for customizing the robot’s facial expression


4. Experiment
4.1 Hypotheses

As discussed in Section 2, one of the aims of the study was to explore how customized robot facial expressions would affect the HRR. An additional aim was to establish how the customization experience was evaluated by the users.

Consequently, our hypotheses were set up as follows:

  • H1 A more positive effect on HRR is achieved when using a customized version rather than a predefined version of robot facial expressions.
  • H2 Users experience different workload and satisfaction levels while they are customizing each emotional expression of the robot.
  • H3 The customization process itself is interesting and valuable to users.
4.2 Participants

A total of forty-four participants (twenty-seven males and seventeen females) were recruited. Participants provided age and gender information and consented to participate in the experiment in advance. Participants' ages ranged from 19 to 40 years, with a mean of 23.7 years (SD = 3.33). Data were collected from all participants without exception.

4.3 Procedure

As shown in [Fig. 5], the experimental environment included a prototype robot, a monitor, and a PC running the customization tool. A mouse and keyboard were provided to the participants so that they could use the customization tool displayed on the monitor. The experimental sequence was designed under the assumption that it would be a user's first use of a customizable social robot. The overall experimental sequence proceeded as follows: First, each participant interacted with the robot displaying the predefined version of the robot facial expressions. Second, the robot facial expressions were customized through the PC. Finally, each participant interacted with their respective customized version.


[Fig. 5] 
Overview of the experimental environment

4.3.1 Interaction with predefined version

Scenarios for the six basic Ekman emotions were first shown to participants before customization. These scenarios referenced the common emotional triggers provided by the Paul Ekman Group[49]. For example, a scenario of complimenting the robot was used for happiness, and a scenario of bothering the robot was used for anger. After the participants confirmed that they fully understood these scenarios, they interacted with the predefined version of the prototype robot. Based on each scenario, the participants created sample sentences to define each emotion that the robot experienced. They were asked to press the emotion button while speaking or thinking the sample sentences so that they could sense an interaction with the robot. The participants were allotted adequate time to interact with the robot.

4.3.2 Customizing robot facial expressions

Before customizing each emotion, participants practiced using the mouse to adjust the slide bar for each control point. The participants could perform the customization while referring to both the face on the monitor and the face of the robot. After sufficient practice, the robot facial expression was customized for each emotion, with the emotions presented in random order. Once a participant had customized all six emotions, that participant's customized control point data were stored in the T matrix. Finally, the participant was asked to complete a simple questionnaire.

4.3.3 Interaction with customized version

The participants interacted with their customized versions of the robot in the same way as in their initial interactions with the predefined version. The participants were provided with the opportunity to perform additional interactions with the predefined version. Finally, we asked the participants to complete a brief questionnaire about their customization experiences.

4.4 Measures

Measures obtained from users included their responses to the questionnaire and the time data measured during customization. For each index, participants were given a statement and asked to rate their degree of agreement with it on a 7-point Likert scale, where 1 meant "strongly disagree" and 7 meant "strongly agree". For example, for the topic of difficulty, the participants were given the statement "Customizing this emotion was difficult."

After a user customized the robot facial expression, the difficulty and satisfaction of each emotion were rated. Concurrently, the elapsed time used by the participant to customize the facial expression corresponding to each emotion was measured. Difficulty and elapsed time were used as factors to investigate the workload.

After all the experiments had been completed, the predefined version and the customized versions were evaluated for intimacy, empathy, and enjoyment, respectively. Finally, the participants evaluated the customization tool in terms of interest and value indices. Participants’ opinions were acquired through an interview, which included the following questions:

  • Q1 “If you are actually using this social robot, which robot do you want to use, predefined or customized? What is the reason?”
  • Q2 “What did you like and dislike when customizing the robot facial expressions for each emotion?”

Through the answers to the above questions, detailed opinions that were not covered in the evaluation indices were acquired.


5. Results

As a result of the experiment, a total of 264 robot facial expressions (44 participants × 6 emotions) were obtained. [Fig. 6(b)] illustrates, for each of the six emotions, an example of the customized facial expression with the largest deviation from the average customized facial expression across all participants.


[Fig. 6] 
Set of robot facial expressions corresponding to sadness, happiness, anger, disgust, fear, and surprise. (a) is a set of robot facial expressions of predefined version created by the author’s intuition. (b) is an example of a set of robot facial expressions with the largest difference from the average robot facial expression among the customized robot facial expressions of all participants, for each of six emotions

5.1 Comparison of predefined and customized version of robot facial expressions

A paired t-test (student’s t-test) was conducted to compare the predefined version and the customized version in terms of intimacy, empathy, and enjoyment. As shown in [Table 1], the results for the predefined version and the customized version displayed significant differences. The customized version was rated more highly than the predefined version in terms of intimacy (MD = -0.66). The predefined version and customized version exhibited an average difference of MD = -0.75 in the empathy index. In term of enjoyment, the two versions also exhibited a significant difference (MD = -0.55). In summary, the customized version of robot facial expression was rated more highly in terms of intimacy, empathy, and enjoyment than the predefined version.

[Table 1] 
Comparison between predefined version and customized version with paired t-test (Student's t-test), P: Predefined version, C: Customized version
Comparison                      statistic   p         MD      SD
P-Intimacy vs. C-Intimacy       -2.93       0.005**   -0.66   0.225
P-Empathy vs. C-Empathy         -2.90       0.006**   -0.75   0.285
P-Enjoyment vs. C-Enjoyment     -2.86       0.007**   -0.55   0.191
Note ** p<.01
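
For reference, the paired comparison reported in [Table 1] could be run with a few lines of SciPy, as in the minimal sketch below; it is our illustration, not the authors' analysis script, and the rating arrays are hypothetical placeholders rather than the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 7-point intimacy ratings for the 44 participants (placeholders, not the study data)
predefined_intimacy = rng.integers(3, 7, size=44).astype(float)
customized_intimacy = np.clip(predefined_intimacy + rng.integers(0, 2, size=44), 1, 7)

t_stat, p_value = stats.ttest_rel(predefined_intimacy, customized_intimacy)
mean_difference = np.mean(predefined_intimacy - customized_intimacy)   # MD as reported in Table 1
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, MD = {mean_difference:.2f}")
```
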

5.2 Analysis of workload (difficulty, elapsed time) and satisfaction

A repeated-measures ANOVA (RM-ANOVA) was conducted to ascertain the differences in workload and satisfaction across the respective emotions. As the sample size was 44 (n > 30), normality was assumed by the central limit theorem. Mauchly's test showed that the assumption of sphericity was violated for all indices; therefore, the Greenhouse-Geisser estimate was used to correct the degrees of freedom of the F-distribution.
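
A minimal sketch of this analysis pipeline is shown below, using the pingouin package (our choice of tool; the paper does not name its statistics software). The long-format data frame, its column names, and the placeholder ratings are assumptions.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Build a hypothetical long-format data set: 44 participants x 6 emotions,
# with placeholder 7-point difficulty ratings (not the study data).
rng = np.random.default_rng(1)
emotions = ["sadness", "happiness", "anger", "disgust", "fear", "surprise"]
df = pd.DataFrame({
    "participant": np.repeat(np.arange(44), len(emotions)),
    "emotion": emotions * 44,
    "difficulty": rng.integers(1, 8, size=44 * len(emotions)),
})

# Mauchly's test of sphericity
spher = pg.sphericity(df, dv="difficulty", subject="participant", within="emotion")
print("sphericity satisfied:", spher.spher, "p =", round(spher.pval, 3))

# Repeated-measures ANOVA; correction=True adds the Greenhouse-Geisser-corrected p-value
aov = pg.rm_anova(data=df, dv="difficulty", within="emotion",
                  subject="participant", correction=True, detailed=True)
print(aov)
```
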

5.2.1 Difficulty

As shown in [Table 2], sadness, happiness, anger, and surprise displayed significant differences with respect to disgust and fear. Further, no significant differences among sadness, happiness, anger, and surprise were noted. Similarly, no significant difference was noted between disgust and fear. As displayed in [Fig. 7(a)], the set of emotions {sadness, happiness, anger, surprise} was associated with significantly lower levels of difficulty than {disgust, fear}.

[Table 2] 
Difficulty analysis according to emotions using RM-ANOVA
Comparison   MD   SE   t   p (Tukey)
sadness - happiness 0.07 0.214 0.32 1.000
sadness - anger 0.16 0.256 0.62 0.989
sadness - disgust -3.02 0.284 -10.63 <.001***
sadness - fear -3.18 0.276 -11.51 <.001***
sadness - surprise -0.59 0.269 -2.20 0.259
happiness - anger 0.09 0.227 0.40 0.999
happiness - disgust -3.09 0.301 -10.26 <.001***
happiness - fear -3.25 0.307 -10.59 <.001***
happiness - surprise -0.66 0.287 -2.30 0.218
anger - disgust -3.18 0.246 -12.93 <.001***
anger - fear -3.34 0.307 -10.90 <.001***
anger - surprise -0.75 0.256 -2.93 0.057
disgust - fear -0.16 0.254 -0.63 0.988
disgust - surprise 2.43 0.300 8.09 <.001***
fear - surprise 2.59 0.257 10.10 <.001***
Note * p<.05, ** p<.01, *** p<.001


[Fig. 7] 
Box plot: (a) Difficulty, (b) Satisfaction, and (c) Elapsed time associated with respective emotion

5.2.2 Satisfaction

[Table 3] clearly reveals the relationships between happiness, anger, disgust, fear, and surprise. There were no significant differences among happiness, anger, and surprise. Similarly, there was no significant difference between disgust and fear. Notably, every comparison between these two groups of emotions indicated a significant difference (p < 0.05). The set of emotions {happiness, anger, surprise} was associated with significantly higher levels of satisfaction than {disgust, fear}. Sadness could not be clearly ordered; it displayed a significant difference only with respect to anger and fear (psad-anger = 0.013, psad-fear = 0.011). As shown in [Fig. 7(b)], sadness is located between the two groups {happiness, anger, surprise} and {disgust, fear} in terms of satisfaction.

[Table 3] 
Satisfaction analysis according to emotions using RM-ANOVA
Comparison   MD   SE   t   p (Tukey)
sadness - happiness -0.32 0.220 -1.45 0.699
sadness - anger -0.61 0.176 -3.49 0.013*
sadness - disgust 0.95 0.336 2.84 0.070
sadness - fear 1.16 0.327 3.55 0.011*
sadness - surprise -0.02 0.263 -0.09 1.000
happiness - anger -0.30 0.154 -1.91 0.409
happiness - disgust 1.27 0.334 3.81 0.005**
happiness - fear 1.48 0.334 4.42 <.001***
happiness - surprise 0.30 0.275 1.07 0.889
anger - disgust 1.57 0.331 4.74 <.001***
anger - fear 1.77 0.322 5.51 <.001***
anger - surprise 0.59 0.263 2.25 0.237
disgust - fear 0.20 0.288 0.71 0.980
disgust - surprise -0.98 0.311 -3.14 0.034*
fear - surprise -1.18 0.259 -4.57 <.001***
Note * p<.05, ** p<.01, *** p<.001

5.2.3 Elapsed time

[Table 4] clearly reveals the relationships between sadness, anger, disgust, fear, and surprise. There were no significant differences among sadness, anger, and surprise. Similarly, there was no significant difference between disgust and fear. The comparisons between these two groups of emotions consistently showed significant differences (p < 0.001). Therefore, the order of emotions is {sadness, anger, surprise} < {disgust, fear} with respect to elapsed time. Happiness could not be clearly ordered. It did not display a significant difference with respect to disgust and surprise (p > 0.05), whereas it did display a significant difference with respect to sadness, anger, and fear (p < 0.05). In [Fig. 7(c)], happiness is located between the two groups {sadness, anger, surprise} and {disgust, fear} in terms of elapsed time.

[Table 4] 
Elapsed time analysis according to emotions using RM-ANOVA
Comparison   MD   SE   t   p (Tukey)
sadness - happiness -14.0 4.21 -3.35 0.020*
sadness - anger -0.5 3.81 -0.14 1.000
sadness - disgust -30.7 4.56 -6.73 <.001***
sadness - fear -35.3 5.01 -7.05 <.001***
sadness - surprise -1.8 3.96 -0.44 0.998
happiness - anger 13.6 4.34 3.13 0.035*
happiness - disgust -16.6 5.93 -2.80 0.078
happiness - fear -21.2 6.11 -3.47 0.014*
happiness - surprise 12.3 4.78 2.58 0.123
anger - disgust -30.2 5.08 -5.94 <.001***
anger - fear -34.8 5.79 -6.00 <.001***
anger - surprise -1.2 3.95 -0.31 1.000
disgust - fear -4.6 6.49 -0.71 0.980
disgust - surprise 28.9 5.75 5.03 <.001***
fear - surprise 33.5 5.33 6.29 <.001***
Note * p<.05, ** p<.01, *** p<.001

5.2.4 Correlation

A correlation analysis was conducted to determine the relationships between the three indices (difficulty, elapsed time, and satisfaction). The Pearson correlation coefficient (r) was used to represent the magnitude of the linear correlation. Difficulty and satisfaction displayed a strong, significant negative correlation (r = -0.515, p < 0.001). A significant positive correlation was observed between difficulty and elapsed time (r = 0.376, p < 0.001), and satisfaction and elapsed time displayed a significant negative correlation (r = -0.239, p < 0.001).
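
These correlations could be computed as in the following minimal SciPy sketch; the three arrays are hypothetical placeholders standing in for the 264 per-emotion measurements collected in the study, not the actual data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Placeholder arrays standing in for the 264 per-emotion measurements
difficulty = rng.integers(1, 8, size=264).astype(float)
elapsed_time = 20.0 + 10.0 * difficulty + rng.normal(0, 15, size=264)
satisfaction = 8.0 - difficulty + rng.normal(0, 1, size=264)

for name, x, y in [("difficulty vs. satisfaction", difficulty, satisfaction),
                   ("difficulty vs. elapsed time", difficulty, elapsed_time),
                   ("satisfaction vs. elapsed time", satisfaction, elapsed_time)]:
    r, p = stats.pearsonr(x, y)
    print(f"{name}: r = {r:.3f}, p = {p:.3g}")
```
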

5.3 Analysis of customization process

We obtained data assessing the interest and value of the customization process. Interest had a mean of 6.52 points and a standard deviation of 0.698. Value had a mean of 6.41 points and a standard deviation of 0.972. Both means lay between 6 and 7 on the 7-point Likert scale.


6. Discussion
6.1 The impact of a customizable robot

The results in [Table 1] indicate that the customized version was rated higher than the predefined version on all indices, and these differences were statistically significant. This result implies that users felt intimate with the customized version, empathized to a greater degree with its facial expressions, and enjoyed interacting with it. The same conclusions could be drawn from the interviews. In response to question Q1, 30 out of 44 participants chose the customized version. Participants explained their preferences for the customized version using words including 'intimacy' (P24), 'empathy' (P17, P22, P27, P35), and 'enjoyment' (P18). Eight participants answered that it was 'suitable for me'. Four participants emphasized the need for a customized version, noting that each person's emotions and facial expressions are subtly different. Participant P24 expressed that the process of creating the robot facial expression imbued the robot with character and personality. Participant P36 emphasized that the customized version felt like his own personal robot and expressed a feeling of possessiveness towards it.

These results confirm hypothesis H1. Users generated customized versions of the robot as a partner with a preferred facial expression, character, and personality through the customization process. Through this process, users experienced feelings of intimacy, empathy, and enjoyment towards the customized version. These are attributes that improve HRR. We may conclude that the availability of a customized version of robot facial expression has a more positive effect on HRR than a predefined version.

6.2 Design guideline for customizing robot facial expressions

As revealed by the results, the more difficult it was to customize a robot facial expression, the longer it took and the less satisfying it was. Furthermore, the experienced workload and satisfaction associated with customizing the robot facial expression varied across the respective emotions. For the most part, participants experienced greater difficulty and longer elapsed times when customizing the facial expressions of disgust and fear than for the other emotions. By analyzing the interviews, we identified the root causes of these results, which fall into three categories. First, the desired facial expression element (control point) was sometimes not available, and this affected the emotions to different degrees. Fifteen participants wanted additional control points to create their desired expression, mainly in the case of disgust and fear, although several control points were also mentioned for sadness and happiness. Examples of requested control points include 'round eyelids', 'grimacing expression', and 'distance between pupils'. Second, different robot facial expressions could be chosen for the same emotion depending on the situation. Seven participants commented on this and felt that it occurred in the case of fear. One participant stated: "I couldn't think of a representative facial expression because the facial expression was different depending on the situation in which the fear was expressed." (P24). Finally, some users were not familiar with making a robot facial expression and did not know how to express it. Eight participants noted that they were not familiar with creating expressions of disgust. Nine participants mentioned that it was not easy to imagine how to express the robot's emotions of disgust and fear.

We conclude that hypothesis H2 was confirmed and that there were particular difficulties associated with expressing disgust and fear. These two emotions therefore require special attention in the design of robot facial expressions. The following comments summarize the insights gained from this study:

  • An optimized robot facial expression element needs to be added to adequately express disgust and fear.
  • Situations should be segmented according to the particular emotion.
  • An improved customization process is needed, such as providing sample robot facial expressions to help users identify robot facial expressions of disgust and fear.

6.3 Implications of customization process

The participants expressed very strongly that they were interested in the customization process and that they found it valuable. A comparative analysis was not conducted, but the participants’ interviews indicated which aspects had a positive impact on the customization process. Examples of comments from the interviews follow. First, with respect to interest: “It was exciting to be able to see the robot facial expression change while customizing the robot.” (P18). “It was fun to create a robot expression the way I wanted while using the customization tool.” (P33). Second, with respect to value: “I think I feel affectionate and familiar with the robot because I customized the robot facial expression.” (P24). “I feel more friendly through the process of creating my expression on the robot.” (P40). “Because I performed the customization process, I felt more empathetic to the robot.” (P41).

We conclude that the participants are interested in changing and generating robot facial expressions. Therefore, the customization tool can be utilized for performing additional interaction with the robot to increase user interest. In addition, the process of creating a specialized robot facial expression for individuals is regarded as valuable. Therefore, the customization tool can not only provide the user with an opportunity to create a customized expression but also play a role in amplifying the effect of the customized expressions. Participants noted that the experience of creating their own facial expressions on the robot made them feel more affectionate, friendly and sympathetic towards the robot. These results are consistent with hypothesis H3.


7. Conclusion

The customized version was experienced as an individual user’s own personal robot, and the users sensed intimacy with the robot. Because the robot facial expression was created according to a user’s thoughts, the user deeply sympathized with the robot facial expression. A customized version of robot facial expression enhances the user’s enjoyment, which is one of the key factors for successful interaction with a robot.

In this study, participants found it difficult and less satisfying to customize disgust and fear. We identify three reasons for this and suggest design guidelines for improvement: First, it is necessary to optimize the available facial expression elements. A lack of facial expression elements makes it difficult to express the desired emotion, and conversely, too many facial expression elements require more effort from the user; Second, the scenario should be presented in a clearer and more detailed way. Different robot facial expressions can be created for the same emotion depending on the scenario. Finally, the step of providing robot facial expression samples in the customization process should be offered to the user as an additional resource. This is necessary for users who may have difficulty coming up with the appropriate robot facial expression for a specific emotion.

Participants showed interest in using the customization tool to change the facial expressions of the robot and create the desired ones on it. In addition, the above-mentioned process helps the user experience affection, friendliness, and sympathy for the robot. Therefore, although the main function of the customization tool is to only provide the user with the opportunity to customize the robot facial expressions, the process creates interest in the user; this, in turn, enhances the effect of the customized robot. The process of customizing facial expressions can also create a positive experience for the user, so developing an interface that takes UI/UX into account can amplify this effect. For example, an interface that can be directly adjusted in size or position using the robot’s touch screen or an interactive interface can be implemented.

Our contribution involves improving HRR through customized social robots. This is a function that should be considered as the market for social robots expands. Notably, the effectiveness of this function is verified in this study through experiments with participants. We expect that our findings will guide the development of custom functions in robots in the future.

A limitation of our study is that the users interacted with the robots only for a short period, and our evaluation results are based on these short interactions. If users interact with the robots over a long period, it may be possible to explore new effects obtained through functions such as repeatedly modifying the robot facial expressions. Additionally, we created a single robot face and conducted the experiment with it; however, the effect and scope of customization may differ depending on the robot face, so further research on this will be necessary. The emotion engine used in the study is limited in representing patterns of emotions or diverse expressions of the same emotion, because we implemented a linear transition from neutral to each emotion based on the customized facial expressions. Future research will focus on an improved emotion engine that addresses these limitations.


Acknowledgments

This paper was partly supported by a National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) (NRF-2020R1F1A1066397) and by the Technology Innovation Program (20015056, Commercialization design and development of Intelligent Product-Service System for personalized full silver life cycle care) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea).


References
1. T. Kanda, T. Hirano, D. Eaton, and H. Ishiguro, “Interactive Robots as Social Partners and Peer Tutors for Children: A Field Trial,” Hum Comput Interact, vol. 19, pp. 61-84, 2004, [Online], https://www.tandfonline.com/doi/abs/10.1080/07370024.2004.9667340.
2. F. Tanaka, A. Cicourel, and J. R. Movellan, “Socialization between toddlers and robots at an early childhood education center,” PNAS, vol. 104, no. 46, pp. 17954-17958, Nov., 2007.
3. S. M. S. Khaksar, R. Khosla, M. T. Chu, and F. S. Shahmehr, “Service Innovation Using Social Robot to Reduce Social Vulnerability among Older People in Residential Care Facilities,” Technol Forecast Soc Change, vol. 113, pp. 438-453, Dec., 2016.
4. H. Kozima, C. Nakagawa, and Y. Yasuda, “Interactive robots for communication-care: a case-study in autism therapy,” IEEE International Workshop on Robot and Human Interactive Communication (ROMAN), Nashville, USA, 2005.
5. G. Hoffman, O. Zuckerman, G. Hirschberger, M. Luria, and T. Shani Sherman, “Design and Evaluation of a Peripheral Robotic Conversation Companion,” The Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland Oregon, USA, pp. 3-10, 2015.
6. J. Hudson, R. Ungar, L. Albright, R. Tkatch, J. Schaeffer, and E. R. Wicker, “Robotic Pet Use Among Community-Dwelling Older Adults,” The Journals of Gerontology: Series B, vol. 75, no. 9, pp. 2018-2028, Aug., 2020.
7. G. F. Melson, P. H. Kahn, Jr., A. Beck, and B. Friedman, “Robotic Pets in Human Lives: Implications for the Human-Animal Bond and for Human Relationships with Personified Technologies,” Journal of Social Issues, vol. 65, no. 3, pp. 545-567, Sept., 2009.
8. T. Fong, I. Nourbakhsh, and K. Dautenhahn, “A survey of socially interactive robots,” Robotics and Autonomous Systems, vol. 42, no. 3-4, pp. 143-166, Mar., 2003.
9. I. Leite, A. Pereira, S. Mascarenhas, C. Martinho, R. Prada, and A. Paiva, “The influence of empathy in human-robot relations,” International Journal of Human-Computer Studies, vol. 71, no. 3, pp. 250-260, Mar., 2013.
10. R. Stock and M. A. Nguyen, “Robotic Psychology. What Do We Know about Human-Robot Interaction and What Do We Still Need to Learn?,” the 52nd Hawaii International Conference on System Sciences, Grand Wailea Hawaii, USA, 2019.
11. A.-H. Chiang, S. Trimi, and Y.-J. Lo, “Emotion and service quality of anthropomorphic robots,” Technological Forecasting and Social Change, vol. 177, Apr., 2022.
12. M.-H. Huang and R. T. Rust, “Engaged to a Robot? The Role of AI in Service,” Journal of Service Research, vol. 24, no. 1, pp. 30-41, Feb., 2021.
13. P. Salovey and J. D. Mayer, “Emotional Intelligence,” Imagination Cognition and Personality, vol. 9, no. 3, Nov., 2016.
14. A. Mehrabian, “Communication Without Words,” Communication Theory, 2nd ed. Routledge, 2017.
15. A. Schade, “Customization vs. Personalization in the User Experience,” Nielsen Norman Group, 2016, [Online], https://www.nngroup.com/articles/customization-personalization/, Accessed: Sept. 07, 2022.
16. P. Alves-Oliveira, M. Bavier, S. Malandkar, R. Eldridge, J. Sayigh, E. A. Björling, and M. Cakmak, “FLEXI: A Robust and Flexible Social Robot Embodiment Kit,” Designing Interactive Systems Conference, Jun., 2022, pp. 1177-1191.
17. A. S. Kim, E. A. Björling, S. Bhatia, and D. Li, “Designing a Collaborative Virtual Reality Game for Teen-Robot Interactions,” The 18th ACM International Conference on Interaction Design and Children, Jun., 2019, pp. 470-475.
18. P. Ekman and W. V. Friesen, “Constants across cultures in the face and emotion,” Journal of Personality Social Psychology, vol. 17, no. 2, pp. 124-129, 1971.
19. C. Breazeal, Designing Sociable Robots, The MIT Press, 2004.
20. L. Canamero and J. Fredslund, “I show you how I like you - can you read it in my face? [robotics],” IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 31, no. 5, pp. 454-459, Sept., 2001.
21. F. Hara and H. Kobayashi, “Real-time facial interaction between human and 3D face robot agent,” IEEE International Workshop on Robot and Human Communication (ROMAN), Tsukuba, Japan, 1996.
22. K. Kühnlenz, S. Sosnowski, and M. Buss, “Impact of Animal-Like Features on Emotion Expression of Robot Head EDDIE,” Advanced Robotics, vol. 24, no. 8-9, pp. 1239-1255, Apr., 2012.
23. M. Haring, N. Bee, and E. Andre, “Creation and Evaluation of emotion expression with body movement, sound and eye color for humanoid robots,” IEEE International Workshop on Robot and Human Communication (ROMAN), Atlanta, USA, 2011.
24. D. Löffler, N. Schmidt, and R. Tscharn, “Multimodal Expression of Artificial Emotion in Social Robots Using Color, Motion and Sound,” The 2018 ACM/IEEE International Conference on Human-Robot Interaction, New York, USA, pp. 334-343, 2018.
25. S. Yilmazyildiz, R. Read, T. Belpeame, and W. Verhelst, “Review of Semantic-Free Utterances in Social Human-Robot Interaction,” International Journal of Human-Computer Interaction, vol. 32, no. 1, pp. 63-85, Jan., 2016.
26. J. Monceaux, J. Becker, C. Boudier, and A. Mazel, “Demonstration: first steps in emotional expression of the humanoid robot Nao,” The 2009 international conference on Multimodal interfaces, New York, USA, pp. 235-236, 2009.
27. A. Beck, L. Canamero, and K. A. Bard, “Towards an Affect Space for robots to display emotional body language,” IEEE International Workshop on Robot and Human Communication (ROMAN), Viareggio, Italy, 2010.
28. M. Zecca, Y. Mizoguchi, K. Endo, F. Iida, Y. Kawabata, N. Endo, K. Itoh, and A. Takanishit, “Whole body emotion expressions for KOBIAN humanoid robot — preliminary experiments with different Emotional patterns —,” IEEE International Workshop on Robot and Human Communication (ROMAN), Toyama, Japan, 2009.
29. R. L. Birdwhistell, Introduction to Kinesics: An Annotation System for Analysis of Body Motion and Gesture, University of Louisville, 1952, [Online], https://books.google.co.kr/books/about/Introduction_to_Kinesics.html?id=eKKOvwEACAAJ&redir_esc=y.
30. F. Yan, A. M. Iliyasu, and K. Hirota, “Emotion space modelling for social robots,” Engineering Applications of Artificial Intelligence, vol. 100, Apr., 2021.
31. H.-S. Ahn and J.-Y. Choi, “Multi-Dimensional Complex Emotional Model for Various Complex Emotional Expression using Human Friendly Robot System,” The Journal of Korea Robotics Society, vol. 4, no. 3, pp. 210-217, 2009, [Online], https://koreascience.kr/article/JAKO200917161877648.page.
32. B. C. Stahl, N. McBride, K. Wakunuma, and C. Flick, “The empathic care robot: A prototype of responsible research and innovation,” Technological Forecasting and Social Change, vol. 84, pp. 74-85, May, 2014.
33. H. Miwa, T. Okuchi, K. Itoh, H. Takanobu, and A. Takanishi, “A new mental model for humanoid robots for human friendly communication introduction of learning system, mood vector and second order equations of emotion,” IEEE International Conference on Robotics and Automation (ICRA), Taipei, Taiwan, 2003.
34. K.-G. Oh, M.-S. Jang, and S.-J. Kim, “Automatic emotional expression of a face robot by using a reactive behavior decision model,” Journal of Mechanical Science and Technology, vol. 24, no. 3, Mar., 2010.
35. J. W. Park, H. S. Lee, and M. J. Chung, “Generation of Realistic Robot Facial Expressions for Human Robot Interaction,” Journal of Intelligent & Robotic Systems, vol. 78, Jun., 2014.
36. J.-W. Park, H.-S. Lee, S.-H. Jo, and M.-J. Chung, “Dynamic Emotion Model in 3D Affect Space for a Mascot-Type Facial Robot,” The Journal of Korea Robotics Society, vol. 2, no. 3, pp. 282-288, Sept., 2007, [Online], https://koreascience.kr/article/JAKO200723736028090.page.
37. S. Prajapati, C. L. S. Naika, S. S. Jha, and S. B. Nair, “On Rendering Emotions on a Robotic Face,” Conference on Advances In Robotics, New York, USA, pp. 1-7, 2013.
38. S. Andrist, B. Mutlu, and A. Tapus, “Look Like Me: Matching Robot Personality via Gaze to Increase Motivation,” The 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 3603-3612, 2015.
39. S. Jung, H.-Lim, S. Kwak, and F. Biocca, “Personality and facial expressions in human-robot interaction,” The seventh annual ACM/IEEE international conference on Human-Robot Interaction, pp. 161-162, 2012.
40. E. Park, D. Jin, and A. P. del Pobil, “The Law of Attraction in Human-Robot Interaction,” International Journal of Advanced Robotic Systems, Jan., 2012.
41. G. Trovato, T. Kishi, N. Endo, M. Zecca, K. Hashimoto, and A. Takanishi, “Cross-Cultural Perspectives on Emotion Expressive Humanoid Robotic Head: Recognition of Facial Expressions and Symbols,” International Journal of Social Robotics, vol. 5, pp. 515-527, Sept., 2013.
42. P. Sripian, M. N. A. M. Anuardi, J. Yu, and M. Sugaya, “The Implementation and Evaluation of Individual Preference in Robot Facial Expression Based on Emotion Estimation Using Biological Signals,” Sensors, vol. 21, no. 18, Sept., 2021.
43. R. Campbell, “Asymmetries in Interpreting and Expressing a Posed Facial Expression,” Cortex, vol. 14, no. 3, pp. 327-342, Sept., 1978.
44. G. S. Urumutta Hewage, Y. Liu, Z. Wang, and H. Mao, “Consumer responses toward symmetric versus asymmetric facial expression emojis,” Marketing Letters, vol. 32, Nov., 2020.
45. L. F. Barrett, K. A. Lindquist, and M. Gendron, “Language as context for the perception of emotion,” Trends in Cognitive Sciences, vol. 11, no. 8, pp. 327-332, Aug., 2007.
46. L. F. Barrett, How emotions are made: The secret life of the brain, HarperCollins, 2017, [Online], https://books.google.co.kr/books/about/How_Emotions_Are_Made.html?id=hN8MBgAAQBAJ&redir_esc=y.
47. P. H. Kahn, J. H. Ruckert, T. Kanda, H. Ishiguro, A. Reichert, H. Gary, and S. Shent, “Psychological intimacy with robots? Using interaction patterns to uncover depth of relation,” ACM/IEEE International Conference on Human-Robot Interaction (HRI), Osaka, Japan, 2010.
48. H. S. Lee, J. W. Park, and M. J. Chung, “A Linear Affect-Expression Space Model and Control Points for Mascot-Type Facial Robots,” IEEE Transactions on Robotics, vol. 23, no. 5, pp. 863-873, Oct., 2007.
49. Paul Ekman Group, What are Emotions?, [Online], https://www.paulekman.com/universal-emotions/, Accessed: Jun. 02, 2023.

Jiyeon Lee

2021 Department of Mechanical and Aerospace Engineering, UNIST, Korea (Bachelor)

2021~Present Department of Design, UNIST, Korea (Master)

Interest: Human-Robot Interaction, Social Robot Design, Smart Mobility

Haeun Park

2018 Department of Electrical Engineering, Handong Global University, Korea (Bachelor)

2019 Department of Information and Communication Engineering, Handong Global University, Korea (Master)

2020~Present Graduate School of Creative Design Engineering, UNIST, Korea (Ph.D.)

Interest: Human-Robot Interaction, Social Robot Design

Temirlan Dzhoroev

2021 Department of Design and Human Engineering, UNIST, Korea (Bachelor)

2021~Present Department of Design, UNIST, Korea (Master)

Interest: Human-Robot Interaction, Human-Computer Interaction

Byounghern Kim

2020 Department of Electrical and Computer Engineering, UNIST, Korea (Bachelor)

2022 Graduate School of Creative Design Engineering, UNIST, Korea (Master)

2022~Present Front-end Engineer, SI Analytics, Korea

Interest: Human-Robot Interaction, Human-Computer Interaction

Hui Sung Lee

2000 Department of Electrical Engineering, KAIST, Korea (Bachelor)

2002 Department of Electrical Engineering, KAIST, Korea (Master)

2008 Department of Electrical Engineering, KAIST, Korea (Ph.D.)

2008~2010 Senior research engineer, Samsung Electronics

2010~2017 Senior research engineer, Hyundai Motor Co.

2017~Present Associate professor, Design Department, UNIST, Korea

Interest: Human-Robot Interaction, Social Robot Design, Smart Mobility, Embedded system