


Ongoing!
Modeling Emotional Cues in Robot–Patient Conversations
Conversational robots hold growing potential for supporting the emotional well-being of older adults with dementia, but doing so requires more than delivering prewritten responses. As patients shift between comfort, confusion, engagement, and frustration, the robot must perceive these changes and adjust its behavior in a sensitive and timely way. This project aims to build that foundation: enabling the robot to evolve from a simple conversational companion into a dialogue partner that responds to subtle emotional changes.
Learning Emotional Cues from Conversations
The work began with collecting conversation data between the robot and dementia patients. Emotional cues were derived primarily from nonverbal signals in video: facial expression dynamics, micro-movements, and gaze patterns. To extract these cues, a deep learning–based emotion recognition model was applied that interprets these behavioral signals jointly. The observation that older adults often show subtle expressions and irregular motion patterns informed how the model was designed, encouraging an approach that remains stable under exactly those conditions.
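To illustrate the joint-signal idea, the sketch below fuses per-frame facial-expression, micro-movement, and gaze features with a small recurrent classifier. This is not the project's actual architecture: the feature dimensions, encoder sizes, and emotion label set are all hypothetical assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical label set; the project's actual categories are not specified here.
EMOTIONS = ["comfort", "confusion", "engagement", "frustration"]

class MultiCueEmotionModel(nn.Module):
    """Fuses facial-expression, micro-movement, and gaze features per frame,
    then models their temporal dynamics with a GRU before classifying."""

    def __init__(self, face_dim=136, motion_dim=32, gaze_dim=4,
                 hidden_dim=128, num_classes=len(EMOTIONS)):
        super().__init__()
        # One small encoder per cue, so each signal is normalized separately
        # and a degraded cue (e.g., occluded face) degrades gracefully.
        self.face_enc = nn.Sequential(nn.Linear(face_dim, 64), nn.ReLU())
        self.motion_enc = nn.Sequential(nn.Linear(motion_dim, 32), nn.ReLU())
        self.gaze_enc = nn.Sequential(nn.Linear(gaze_dim, 16), nn.ReLU())
        # Temporal model over the concatenated per-frame embeddings.
        self.gru = nn.GRU(64 + 32 + 16, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, face, motion, gaze):
        # face: (batch, frames, face_dim); motion and gaze are analogous.
        fused = torch.cat(
            [self.face_enc(face), self.motion_enc(motion), self.gaze_enc(gaze)],
            dim=-1,
        )
        _, last_hidden = self.gru(fused)          # (1, batch, hidden_dim)
        return self.classifier(last_hidden[-1])   # per-clip emotion logits


if __name__ == "__main__":
    model = MultiCueEmotionModel()
    # Dummy batch: 8 clips of 30 frames each, with precomputed features.
    face = torch.randn(8, 30, 136)     # e.g., 68 facial landmarks (x, y)
    motion = torch.randn(8, 30, 32)    # e.g., per-region motion summaries
    gaze = torch.randn(8, 30, 4)       # e.g., gaze direction + blink signals
    print(model(face, motion, gaze).shape)   # torch.Size([8, 4])
```

Keeping one encoder per cue is one way to pursue the stability mentioned above: when a single signal becomes noisy or subdued, the remaining cues still carry usable information into the temporal model.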
Comparing Contexts, Cultures, and Interaction Conditions
Beyond classifying emotions, the project examined how emotional expression shifts across different conversational contexts. We compared interactions with a robot versus a human partner, explored cultural differences in expression across participant groups, and analyzed how emotional responses changed with variations in dialogue content and conversational flow. These comparisons show how strongly emotional expression depends on the interaction setting, partner, and context, reinforcing the need for dialogue systems that adjust their behavior dynamically rather than relying on uniform, fixed rules. A minimal sketch of one such comparison follows below.
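As a concrete (and purely illustrative) version of the partner comparison, the sketch below contrasts the distribution of emotion labels between robot-partner and human-partner segments with a chi-square test. The data frame, column names, and labels are assumptions for demonstration, not project data.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical per-segment annotations; values and columns are illustrative only.
df = pd.DataFrame({
    "partner": ["robot", "robot", "human", "human", "robot", "human"],
    "emotion": ["engagement", "confusion", "engagement",
                "comfort", "frustration", "engagement"],
})

# Cross-tabulate emotion labels by interaction partner ...
table = pd.crosstab(df["partner"], df["emotion"])

# ... and test whether the label distribution depends on the partner.
chi2, p_value, dof, _ = chi2_contingency(table)
print(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.3f}")
```

The same cross-tabulation pattern extends to the other conditions mentioned above (participant group, dialogue content, conversational flow) by swapping the grouping column.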