Taro Asada, Yuiko Yano, Yasunari Yoshitomi, Masayoshi Tabuse
Journal of Robotics, Networking and Artificial Life, 2019
We have developed a real-time system that expresses emotion as a portrait selected according to the user's facial expression while writing a message. The portrait is composed of a hairstyle, a facial outline, and a cartoon facial expression. The system analyzes the image signal using image-processing software (OpenCV) and selects a portrait expressing one of three facial expressions, neutral, subtly smiling, or smiling, by applying two thresholds to the facial expression intensity. We applied the system to posting a message, together with a portrait expressing the writer's facial expression, on a Social Networking Service (SNS).
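The two-threshold selection described above can be sketched as follows; the threshold values, score range, and function name are assumptions for illustration, not taken from the paper:

```python
def select_portrait(intensity, t_low=0.3, t_high=0.7):
    """Map a facial-expression intensity score to one of three
    portrait categories using two thresholds.

    intensity: assumed normalized to [0, 1]; the threshold
    values t_low and t_high are hypothetical placeholders.
    """
    if intensity < t_low:
        return "neutral"
    elif intensity < t_high:
        return "subtly smiling"
    else:
        return "smiling"


# Example: intensities below, between, and above the thresholds
for score in (0.1, 0.5, 0.9):
    print(score, "->", select_portrait(score))
```

In a complete system, the intensity score would come from frame-by-frame facial-expression analysis (e.g., with OpenCV), and the returned category would index into the set of prepared portrait images.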