Publications
Books
石黒浩, "アバターと共生する未来社会", 集英社, June, 2023.
Abstract: Using avatars (alter egos), an era is arriving in which we can experience many different lives beyond appearance and age, not only in the metaverse but also in real society: working in remote places as different characters, traveling with hobby companions without leaving home, collaborating with AI, and more. It is the dawn of a new future! [Table of Contents] Chapter 1: What is an avatar? A remotely operable alter ego that also works in the real world / Chapter 2: What the avatar-symbiotic society aims for / Chapter 3: Avatar research under the Moonshot program / Chapter 4: Social implementation of the technology: AVITA's initiatives / Chapter 5: The virtualized real world and the ethical issues of avatars / Chapter 6: Further into the future: the Osaka-Kansai Expo and avatars
BibTeX:
@Book{石黒浩2023d,
  author     = {石黒浩},
  publisher  = {集英社},
  title      = {アバターと共生する未来社会},
  year       = {2023},
  abstract   = {アバター(分身)を使って、メタバースの世界だけでなく、実社会でも、別のキャラクターとして遠隔地で仕事をしたり、家にいながらにして趣味の仲間と旅行をしたり、AIと協業したり…姿や年齢を超えた多彩な人生を体験できる時代がやって来る。新しい未来の幕開けだ!

【目次】
第一章 アバターとは何か──実世界でも稼働する遠隔操作が可能な分身
第二章 アバター共生社会が目指すもの
第三章 ムーンショットが進めるアバター研究
第四章 技術の社会実装──AVITAの取り組み
第五章 仮想化実世界とアバターの倫理問題
第六章 さらなる未来──大阪· 関西万博とアバター},
  day        = {26},
  etitle     = {Future Society in Harmony with Avatars},
  isbn       = {978-4-08-786136-5},
  month      = jun,
  price      = {¥2,090},
  totalpages = {296},
  url        = {https://www.shueisha.co.jp/books/items/contents.html?isbn=978-4-08-786136-5},
}
石黒浩, "ロボットと人間 人とは何か", 岩波新書, no. 新赤版 1901, November, 2021.
Abstract: To study robots is also to understand humans more deeply. Through many years of research, the author, a world-leading authority on robotics, keeps asking what autonomy, mind, existence, dialogue, body, evolution, and life mean for human beings. The book also addresses the future relationship between robots and humans. Essential reading now that robots that interact with people are becoming ever more familiar.
BibTeX:
@Book{石黒浩2021q,
  author     = {石黒浩},
  publisher  = {岩波新書},
  title      = {ロボットと人間 人とは何か},
  year       = {2021},
  abstract   = {ロボットを研究することは、人間を深く知ることでもある。ロボット学の世界的第一人者である著者は、長年の研究を通じて、人間にとって自律、心、存在、対話、体、進化、生命などは何かを問い続ける。ロボットと人間の未来に向けての関係性にも言及。人と関わるロボットがますます身近になる今こそ、必読の書。},
  day        = {19},
  isbn       = {9784004319016},
  month      = nov,
  number     = {新赤版 1901},
  price      = {¥1,034},
  totalpages = {286},
  url        = {https://www.iwanami.co.jp/book/b593235.html},
}
Shuichi Nishio, Hideyuki Nakanishi, Tsutomu Fujinami, "Investigating Human Nature and Communication through Robots", Frontiers Media, January, 2017.
Abstract: The development of information technology has enabled us to exchange ever more information no matter how far apart we are, and it has changed the way we communicate. The various types of robots recently marketed to the general public suggest that robots may influence our daily life even further, since they physically interact with us and handle objects in the environment. We may even perceive a presence similar to that of a human being when we talk to a robot or when a robot takes part in our conversation. The impact will be strong enough to make us reconsider the meaning of communication. This e-book consists of studies that examine how our communication is influenced by robots. Topics include our attitudes toward robot behaviors, designing robots that communicate better with people, and how people can be affected by communicating through robots.
BibTeX:
@Book{Nishio2017,
  title     = {Investigating Human Nature and Communication through Robots},
  publisher = {Frontiers Media},
  year      = {2017},
  editor    = {Shuichi Nishio and Hideyuki Nakanishi and Tsutomu Fujinami},
  month     = Jan,
  isbn      = {9782889450862},
  abstract  = {The development of information technology enabled us to exchange more items of information among us no matter how far we are apart from each other. It also changed our way of communication. Various types of robots recently promoted to be sold to general public hint that these robots may further influence our daily life as they physically interact with us and handle objects in environment. We may even recognize a feel of presence similar to that of human beings when we talk to a robot or when a robot takes part in our conversation. The impact will be strong enough for us to think about the meaning of communication. This e-book consists of various studies that examine our communication influenced by robots. Topics include our attitudes toward robot behaviors, designing robots for better communicating with people, and how people can be affected by communicating through robots.},
  file      = {Nishio2017.pdf:pdf/Nishio2017.pdf:PDF},
  url       = {http://www.frontiersin.org/books/Investigating_Human_Nature_and_Communication_through_Robots/1098},
}
Book Chapters
李歆玥, 石井カルロス寿憲, 傅昌鋥, 林良子, "中国語を母語とする日本語学習者と母語話者を対象とする非流暢性発話フィラーの音声分析 (Acoustic Analysis of Disfluent Fillers by Native Chinese-Speaking Learners of Japanese and Native Japanese Speakers)", ひつじ書房 (Hituzi Syobo), pp. 417-428, February, 2024.
Abstract: This study examined the acoustic features of filler vowels observed in natural Japanese conversation by native Chinese-speaking learners of Japanese and compared them with filler vowels produced by native Japanese speakers. We then examined how filler vowels differ from vowels in ordinary lexical items in natural conversation. For duration, mean F0, intensity, spectral-tilt-related features, and jitter and shimmer, clear differences between filler vowels and lexical vowels were observed for both the Chinese learners of Japanese and the native Japanese speakers. A classification analysis using random forests further showed that duration and intensity contributed most to classifying a vowel as filler or lexical, followed by voice-quality features.
BibTeX:
@InBook{李歆玥2024,
  author    = {李歆玥 and 石井カルロス寿憲 and 傅昌鋥 and 林良子},
  booktitle = {流暢性と非流暢性},
  chapter   = {第6部 言語障害からみた(非)流暢性 第2章},
  pages     = {417-428},
  publisher = {ひつじ書房},
  title     = {中国語を母語とする日本語学習者と母語話者を対象とする非流暢性発話フィラーの音声分析},
  year      = {2024},
  abstract  = {本研究では、中国語を母語とする日本語学習者による日本語自然会話に見られるフィラーの母音を対象とした音響的特徴を検討し、日本語母語話者によるフィラーの母音との比較検証を行なった。次に、自然会話におけるフィラーの母音と通常語彙項目の母音の相違について検討した。その結果、duration、F0mean、intensity、スペクトル傾斜関連特徴、jitter and shimmerに関して、中国人日本語学習者と日本語母語話者ともに、フィラーの母音と通常語彙項目の母音の間に顕著な差が観察された。さらに、random forestを用いた分類分析を行なったところ、フィラーの母音か通常語彙項目の母音かという分類には、duration と intensityは最も貢献しており、声質的特徴はその次に貢献していることが示された。},
  date      = {2024-02-22},
  isbn      = {978-4-8234-1208-0},
  month     = feb,
  url       = {https://www.hituzi.co.jp/hituzibooks/ISBN978-4-8234-1208-0.htm},
  comment   = {y},
}
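The classification analysis this abstract describes (a random forest separating filler vowels from lexical vowels, with feature importances over duration, mean F0, intensity, spectral tilt, jitter, and shimmer) can be sketched in a few lines of Python. This is a minimal illustration on synthetic data under toy assumptions, not the authors' code; every identifier and distribution below is hypothetical.

# Sketch of a filler-vs-lexical vowel classification with feature
# importances, loosely following the analysis described in the abstract.
# Synthetic data; all names and distributions are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
FEATURES = ["duration", "f0_mean", "intensity", "spectral_tilt", "jitter", "shimmer"]

n = 400  # vowels per class
# Toy assumption: filler vowels tend to be longer and softer than lexical vowels.
fillers = rng.normal([0.25, 120, 60, -12, 1.0, 3.0], [0.08, 20, 5, 3, 0.4, 1.0], (n, 6))
lexical = rng.normal([0.10, 130, 66, -9, 0.8, 2.5], [0.04, 25, 5, 3, 0.4, 1.0], (n, 6))
X = np.vstack([fillers, lexical])
y = np.array([1] * n + [0] * n)  # 1 = filler vowel, 0 = lexical vowel

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))

clf.fit(X, y)
for name, imp in sorted(zip(FEATURES, clf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:14s} importance = {imp:.3f}")

On data shaped like this, duration separates the classes most strongly and tops the importance ranking, echoing the contribution ordering reported in the abstract.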
Hidenobu Sumioka, Junya Nakanishi, Masahiro Shiomi, Hiroshi Ishiguro, "Abbracci virtuali per l’educazione: studio pilota sul co-sleeping con un huggable communication medium e considerazioni di progettazione per applicazioni educative (Virtual Hugs for Education: A Pilot Study on Co-Sleeping with a Huggable Communication Medium and Design Considerations for Educational Applications)", pp. 169-190, July, 2023.
Abstract: In this chapter, we report two experiments that propose the application of virtual hugs in educational contexts. In the first experiment, we introduced huggable communication media into daytime sleep in a co-sleeping situation. In the second, we investigated how the gender perceived from Hugvie affects the user's touch perception.
BibTeX:
@InBook{Sumioka2020b,
  author    = {Hidenobu Sumioka and Junya Nakanishi and Masahiro Shiomi and Hiroshi Ishiguro},
  booktitle = {Robot sociali e educazione. Interazioni, applicazioni e nuove frontiere},
  chapter   = {11},
  pages     = {169-190},
  title     = {Abbracci virtuali per l’educazione: studio pilota sul co-sleeping con un huggable communication medium e considerazioni di progettazione per applicazioni educative},
  year      = {2023},
  abstract  = {In this chapter, we report two experiments to propose the application of virtual hug for educational contexts. In the first experiment, we report an experiment where we introduced huggable communication media into daytime sleep in a co-sleeping situation. In the second experiment, we investigated the effect of the gender perception from Hugvie on user’s touch perception.},
  date      = {2023-07-14},
  isbn      = {978-88-3285-557-9},
  month     = jul,
}
Carlos T. Ishi, "Motion generation during vocalized emotional expressions and evaluation in android robots", IntechOpen, pp. 1-20, August, 2019.
Abstract: Vocalized emotional expressions such as laughter and surprise often occur in natural dialogue interactions and are important factors to be considered in order to achieve smooth robot-mediated communication. Miscommunication may be caused if there is a mismatch between audio and visual modalities, especially in android robots, which have a highly humanlike appearance. In this chapter, motion generation methods are introduced for laughter and vocalized surprise events, based on analysis results of human behaviors during dialogue interactions. The effectiveness of controlling different modalities of the face, head, and upper body (eyebrow raising, eyelid widening/narrowing, lip corner/cheek raising, eye blinking, head motion, and torso motion control) and different motion control levels are evaluated using an android robot. Subjective experiments indicate the importance of each modality in the perception of motion naturalness (humanlikeness) and the degree of emotional expression.
BibTeX:
@Inbook{Ishi2019d,
  chapter   = {1},
  pages     = {1-20},
  title     = {Motion generation during vocalized emotional expressions and evaluation in android robots},
  publisher = {IntechOpen},
  year      = {2019},
  author    = {Carlos T. Ishi},
  booktitle = {Future of Robotics - Becoming Human with Humanoid or Emotional Intelligence},
  month     = aug,
  isbn      = {978-1-78985-484-8},
  abstract  = {Vocalized emotional expressions such as laughter and surprise often occur in natural dialogue interactions and are important factors to be considered in order to achieve smooth robot-mediated communication. Miscommunication may be caused if there is a mismatch between audio and visual modalities, especially in android robots, which have a highly humanlike appearance. In this chapter, motion generation methods are introduced for laughter and vocalized surprise events, based on analysis results of human behaviors during dialogue interactions. The effectiveness of controlling different modalities of the face, head, and upper body (eyebrow raising, eyelid widening/narrowing, lip corner/cheek raising, eye blinking, head motion, and torso motion control) and different motion control levels are evaluated using an android robot. Subjective experiments indicate the importance of each modality in the perception of motion naturalness (humanlikeness) and the degree of emotional expression.},
  url       = {https://www.intechopen.com/books/becoming-human-with-humanoid-from-physical-interaction-to-social-intelligence/motion-generation-during-vocalized-emotional-expressions-and-evaluation-in-android-robots},
  comment   = {y},
  doi       = {10.5772/intechopen.88457},
  keywords  = {emotion expression; laughter; surprise; motion generation; human-robot interaction; nonverbal information},
}
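The chapter above evaluates controlling individual modalities (eyebrow raising, eyelid widening/narrowing, lip corner/cheek raising, eye blinking, head motion, and torso motion) during laughter and surprise events. As a rough sketch of the underlying idea, turning a detected laughter event into synchronized per-modality commands, consider the following; the event structure, modality names, and gain values are assumptions for illustration, not the chapter's actual controller.

# Sketch: turn a vocalized laughter event into per-modality motion commands.
# Illustrative only; modality names and gains are hypothetical.
from dataclasses import dataclass

@dataclass
class LaughterEvent:
    start: float      # seconds
    duration: float   # seconds
    intensity: float  # 0..1, e.g. from the acoustic power of the laugh

def laughter_motion_commands(ev: LaughterEvent, frame_rate: float = 30.0):
    """Yield (time, modality, value) commands synchronized with the laugh."""
    n_frames = int(ev.duration * frame_rate)
    for i in range(n_frames):
        t = ev.start + i / frame_rate
        phase = i / max(n_frames - 1, 1)           # 0 at onset, 1 at offset
        envelope = ev.intensity * (1.0 - phase)    # decay toward the offset
        yield (t, "lip_corner_raise", 0.8 * envelope)
        yield (t, "eyelid_narrowing", 0.6 * envelope)
        yield (t, "head_pitch_up",    0.3 * envelope)
        yield (t, "torso_backward",   0.2 * envelope)

for cmd in list(laughter_motion_commands(LaughterEvent(1.0, 0.9, 0.7)))[:4]:
    print(cmd)

An actual android controller would map such commands onto actuator targets with appropriate smoothing and joint limits.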
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "Brain-computer interface and motor imagery training: The role of visual feedback and embodiment", Chapter in Evolving BCI Therapy - Engaging Brain State Dynamics, pp. 73-88, October, 2018.
Abstract: We review the impact of humanlike visual feedback on the modulation of brain activity by BCI users.
BibTeX:
@Incollection{Alimardani2018,
  author    = {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Brain-computer interface and motor imagery training: The role of visual feedback and embodiment},
  booktitle = {Evolving BCI Therapy - Engaging Brain State Dynamics},
  year      = {2018},
  chapter   = {5},
  pages     = {73-88},
  month     = oct,
  isbn      = {978-1-78984-070-4},
  abstract  = {We review the impact of humanlike visual feedback in optimized modulation of brain activity by the BCI users.},
}
Panikos Heracleous, Denis Beautemps, Hiroshi Ishiguro, Norihiro Hagita, "Towards Augmentative Speech Communication", Chapter in Speech and Language Technologies, InTech, Vukovar, Croatia, pp. 303-318, June, 2011.
Abstract: Speech is the most natural form of communication for human beings and is often described as a uni-modal communication channel. However, it is well known that speech is multi-modal in nature and includes the auditive, visual, and tactile modalities (as in Tadoma communication). Other, less natural modalities, such as the electromyographic signal, invisible articulator display, or brain electrical or electromagnetic activity, can also be considered. Therefore, in situations where audio speech is not available or is corrupted because of disability or adverse environmental conditions, people may resort to alternative methods such as augmented speech.
BibTeX:
@Incollection{Heracleous2011,
  author    = {Panikos Heracleous and Denis Beautemps and Hiroshi Ishiguro and Norihiro Hagita},
  title     = {Towards Augmentative Speech Communication},
  booktitle = {Speech and Language Technologies},
  publisher = {{InT}ech},
  year      = {2011},
  editor    = {Ivo Ipsic},
  pages     = {303--318},
  address   = {Vukovar, Croatia},
  month     = Jun,
  abstract  = {Speech is the most natural form of communication for human beings and is often described as a uni-modal communication channel. However, it is well known that speech is multi-modal in nature and includes the auditive, visual, and tactile modalities (i.e., as in Tadoma communication \cite{TADOMA}). Other less natural modalities such as electromyographic signal, invisible articulator display, or brain electrical activity or electromagnetic activity can also be considered. Therefore, in situations where audio speech is not available or is corrupted because of disability or adverse environmental condition, people may resort to alternative methods such as augmented speech.},
  file      = {Heracleous2011.pdf:Heracleous2011.pdf:PDF;InTech-Towards_augmentative_speech_communication.pdf:http\://www.intechopen.com/source/pdfs/15082/InTech-Towards_augmentative_speech_communication.pdf:PDF},
  url       = {http://www.intechopen.com/articles/show/title/towards-augmentative-speech-communication},
}
Shuichi Nishio, Hiroshi Ishiguro, Norihiro Hagita, "Geminoid: Teleoperated Android of an Existing Person", Chapter in Humanoid Robots: New Developments, I-Tech Education and Publishing, Vienna, Austria, pp. 343-352, June, 2007.
BibTeX:
@Incollection{Nishio2007a,
  author          = {Shuichi Nishio and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {Geminoid: Teleoperated Android of an Existing Person},
  booktitle       = {Humanoid Robots: New Developments},
  publisher       = {I-Tech Education and Publishing},
  year            = {2007},
  editor          = {Armando Carlos de Pina Filho},
  pages           = {343--352},
  address         = {Vienna, Austria},
  month           = Jun,
  file            = {Nishio2007a.pdf:Nishio2007a.pdf:PDF;InTech-Geminoid_teleoperated_android_of_an_existing_person.pdf:http\://www.intechopen.com/source/pdfs/240/InTech-Geminoid_teleoperated_android_of_an_existing_person.pdf:PDF},
  url             = {http://www.intechopen.com/articles/show/title/geminoid__teleoperated_android_of_an_existing_person},
}
Overviews
Shuichi Nishio, Takashi Minato, Hiroshi Ishiguro, "Using Androids to Provide Communication Support for the Elderly", New Breeze, vol. 27, no. 4, pp. 14-17, October, 2015.
BibTeX:
@Article{Nishio2015c,
  author   = {Shuichi Nishio and Takashi Minato and Hiroshi Ishiguro},
  title    = {Using Androids to Provide Communication Support for the Elderly},
  journal  = {New Breeze},
  year     = {2015},
  volume   = {27},
  number   = {4},
  pages    = {14-17},
  month    = Oct,
  day      = {9},
  url      = {https://www.ituaj.jp/wp-content/uploads/2015/10/nb27-4_web_05_ROBOTS_usingandroids.pdf},
  file     = {Nishio2015c.pdf:pdf/Nishio2015c.pdf:PDF},
}
Kohei Ogawa, Shuichi Nishio, Takashi Minato, Hiroshi Ishiguro, "Android Robots as Tele-presence Media", Biomedical Engineering and Cognitive Neuroscience for Healthcare: Interdisciplinary Applications, Medical Information Science Reference, Pennsylvania, USA, pp. 54-63, September, 2012.
Abstract: In this chapter, the authors describe two human-like android robots they have developed, known as Geminoid and Telenoid. Geminoid was developed for two reasons: (1) to explore how humans react or respond to the android during face-to-face communication and (2) to investigate the advantages of the android as a communication medium compared with traditional media, such as the telephone or the television conference system. The authors conducted two experiments: the first targeted an interlocutor of Geminoid, and the second targeted its operator. The results showed that Geminoid could emulate a human's presence in a natural-conversation situation and could be as persuasive to the interlocutor as a human. The operators of Geminoid were also influenced by the android: during operation, they felt as if their bodies were one and the same with Geminoid's body. The latest challenge has been to develop Telenoid, an android with a more abstract appearance than Geminoid, which looks and behaves as a minimalistic human. At first glance, Telenoid resembles a human; however, its appearance can be interpreted as being of any sex or age. Two field experiments were conducted with Telenoid. The results showed that Telenoid could be an acceptable communication medium for both young and elderly people. In particular, physical interaction, such as a hug, positively affected the experience of communicating with Telenoid.
BibTeX:
@Article{Ogawa2012b,
  author    = {Kohei Ogawa and Shuichi Nishio and Takashi Minato and Hiroshi Ishiguro},
  title     = {Android Robots as Tele-presence Media},
  journal   = {Biomedical Engineering and Cognitive Neuroscience for Healthcare: Interdisciplinary Applications},
  year      = {2012},
  pages     = {54-63},
  month     = Sep,
  abstract  = {In this chapter, the authors describe two human-like android robots, known as Geminoid and Telenoid, which they have developed. Geminoid was developed for two reasons: (1) to explore how humans react or respond the android during face-to-face communication and (2) to investigate the advantages of the android as a communication medium compared to traditional communication media, such as the telephone or the television conference system. The authors conducted two experiments: the first was targeted to an interlocutor of Geminoid, and the second was targeted to an operator of it. The results of these experiments showed that Geminoid could emulate a human's presence in a natural-conversation situation. Additionally, Geminoid could be as persuasive to the interlocutor as a human. The operators of Geminoid were also influenced by the android: during operation, they felt as if their bodies were one and the same with the Geminoid body. The latest challenge has been to develop Telenoid, an android with a more abstract appearance than Geminoid, which looks and behaves as a minimalistic human. At first glance, Telenoid resembles a human; however, its appearance can be interpreted as any sex or any age. Two field experiments were conducted with Telenoid. The results of these experiments showed that Telenoid could be an acceptable communication medium for both young and elderly people. In particular, physical interaction, such as a hug, positively affected the experience of communicating with Telenoid.},
  url       = {http://www.igi-global.com/chapter/android-robots-telepresence-media/69905},
  doi       = {10.4018/978-1-4666-2113-8.ch006},
  address   = {Pennsylvania, USA},
  chapter   = {6},
  editor    = {Jinglong Wu},
  file      = {Ogawa2012b.pdf:Ogawa2012b.pdf:PDF},
  isbn      = {9781466621138},
  publisher = {Medical Information Science Reference},
}
Daisuke Sakamoto, Hiroshi Ishiguro, "Geminoid: Remote-Controlled Android System for Studying Human Presence", Kansei Engineering International, vol. 8, no. 1, pp. 3-9, 2009.
BibTeX:
@Article{Sakamoto2009,
  author   = {Daisuke Sakamoto and Hiroshi Ishiguro},
  title    = {Geminoid: Remote-Controlled Android System for Studying Human Presence},
  journal  = {Kansei Engineering International},
  year     = {2009},
  volume   = {8},
  number   = {1},
  pages    = {3--9},
  url      = {http://mol.medicalonline.jp/archive/search?jo=dp7keint&ye=2009&vo=8&issue=1},
  file     = {Sakamoto2009.pdf:Sakamoto2009.pdf:PDF},
}
Invited Talks
Hiroshi Ishiguro, "Avatar and the future society", In The 7th Iberian Robotic Conference (ROBOT 2024), Madrid, Spain, November, 2024.
Abstract: The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid, an android modeled on oneself, is not only scientific research into the human feeling of presence, but also practical research that allows one to move one's existence to a remote place and work there. The speaker will introduce a series of research and development efforts on tele-operated robots such as Geminoid, and discuss in what kind of society humans and these tele-operated robots, called avatars, will coexist in the future.
BibTeX:
@InProceedings{Ishiguro2024c,
  author    = {Hiroshi Ishiguro},
  booktitle = {The 7th Iberian Robotic Conference (ROBOT 2024)},
  title     = {Avatar and the future society},
  year      = {2024},
  address   = {Madrid, Spain},
  day       = {6},
  month     = nov,
  url       = {https://eventos.upm.es/109808/detail/robot-2024-.html},
  abstract  = {The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one's existence to a remote place and work in a remote place. The speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and the tele-operated robots, called avatars, will coexist in the future.},
}
Hiroshi Ishiguro, "Avatar and the future society", In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), IROS2024 Special Forum: Human-Avatars Symbiosis, Abu Dhabi, UAE, October, 2024.
Abstract: The special forum - Can you imagine a future society where you can remotely control multiple avatars? - will focus on the grand challenges and cutting-edge research results of the JST Moonshot Goal 1 program in Japan, which is driving the challenging research and development of multiple avatars, called Cybernetic Avatars (CAs). It aims to realize a future society in which human beings can be free from the limitations of body, brain, space, and time by 2050. The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid, an android modeled on oneself, is not only scientific research into the human feeling of presence, but also practical research that allows one to move one's existence to a remote place and work there. The speaker will introduce a series of research and development efforts on tele-operated robots such as Geminoid, and discuss in what kind of society humans and these tele-operated robots, called avatars, will coexist in the future.
BibTeX:
@InProceedings{Ishiguro2024a,
  author    = {Hiroshi Ishiguro},
  booktitle = {2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), IROS2024 Special Forum: Human-Avatars Symbiosis},
  title     = {Avatar and the future society},
  year      = {2024},
  address   = {Abu Dhabi, UAE},
  day       = {17},
  month     = oct,
  url       = {https://iros2024-abudhabi.org/forums},
  abstract  = {The special forum - Can you imagine a future society where you can remotely control multiple avatars? -, will focus on these grand challenges and cutting-edge research results of the JST Moonshot Goal 1 program in Japan, which is driving the above-mentioned challenging research and development of multiple avatars, called Cybernetic Avatars(CAs). It aims to realize a future society in which human beings can be free from limitations of body, brain, space and time by 2050. The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one's existence to a remote place and work in a remote place. The speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and the tele-operated robots, called avatars, will coexist in the future.},
}
Hiroshi Ishiguro, "Advances in Humanoid Research and our Future Life", In 2024 International Robot Business Conference (ROBOTWORLD 2024), KINTEX Exhibition Center, Korea, October, 2024.
Abstract: The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid, an android modeled on oneself, is not only scientific research into the human feeling of presence, but also practical research that allows one to move one's existence to a remote place and work there. The speaker will introduce a series of research and development efforts on tele-operated robots such as Geminoid, and discuss in what kind of society humans and these tele-operated robots, called avatars, will coexist in the future.
BibTeX:
@InProceedings{Ishiguro2024b,
  author    = {Hiroshi Ishiguro},
  booktitle = {2024 International Robot Business Conference (ROBOTWORLD 2024)},
  title     = {Advances in Humanoid Research and our Future Life},
  year      = {2024},
  address   = {KINTEX Exhibition Center, Korea},
  day       = {24},
  month     = oct,
  url       = {https://eng.robotworld.or.kr/conference/conference.php},
  abstract  = {The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one's existence to a remote place and work in a remote place. The speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and the tele-operated robots, called avatars, will coexist in the future.},
}
Hidenobu Sumioka, Shintaro Okazaki, "Therapeutic social robots in elderly care: reflections from Japan", In King's Festival for Artificial Intelligence, London, UK (online), May, 2024.
Abstract: In this session, we introduce a faceless huggable robot, 'Hiro-chan', which is used with dementia patients to alleviate their cognitive deterioration. ATR has been trying to equip Hiro-chan with ChatGPT so that it can express emotions through a sensor embedded in its body. We show a video demonstration of Hiro-chan as well as other therapeutic social robots for patients with mental illness. We call for interfaculty collaboration to plan a future grant application. Scholars from diverse disciplines are encouraged to attend.
BibTeX:
@InProceedings{Sumioka2024a,
  author    = {Hidenobu Sumioka and Shintaro Okazaki},
  booktitle = {King's Festival for Artificial Intelligence},
  title     = {Therapeutic social robots in elderly care: reflections from Japan},
  year      = {2024},
  address   = {London, UK (online)},
  day       = {22},
  month     = may,
  url       = {https://www.kcl.ac.uk/events/therapeutic-social-robots-in-elderly-care-reflections-from-japan},
  abstract  = {In this session, we introduce a faceless huggable robot, 'Hiro-chan', which is used for dementia patients to improve their cognitive deterioration. ATR has been trying to equip Hiro-chan with ChatGPT to express emotions through a sensor embedded in its body. In this session, we show a video demonstration of Hiro-chan as well as other therapeutic social robots for patients with mental illness. We call for interfaculty collaboration to plan a future grant application. Scholars from diverse disciplines are encouraged to attend.},
}
Hiroshi Ishiguro, "Avatar and the future society", In 2024 IEEE International Conference on Robotics and Automation (ICRA2024) Workshop:Society of Avatar-Symbiosis through Social Field Experiments, パシフィコ横浜, 神奈川, May, 2024.
Abstract: The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid, an android modeled on oneself, is not only scientific research into the human feeling of presence, but also practical research that allows one to move one's existence to a remote place and work there. The speaker will introduce a series of research and development efforts on tele-operated robots such as Geminoid, and discuss in what kind of society humans and these tele-operated robots, called avatars, will coexist in the future. The discussion at the workshop will help us adapt and adjust to a future ‘Cybernetic Avatar Life.’
BibTeX:
@InProceedings{Ishiguro2024,
  author    = {Hiroshi Ishiguro},
  booktitle = {2024 IEEE International Conference on Robotics and Automation (ICRA2024) Workshop: Society of Avatar-Symbiosis through Social Field Experiments},
  title     = {Avatar and the future society},
  year      = {2024},
  address   = {パシフィコ横浜, 神奈川},
  day       = {13},
  month     = may,
  url       = {https://2024.ieee-icra.org/},
  abstract  = {The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one's existence to a remote place and work in a remote place. The speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and the tele-operated robots, called avatars, will coexist in the future. The discussion of the workshop will lead us to adapt and adjust to a future ‘Cybernetic Avatar Life.’},
}
David Achanccaray, "Brain-Machine Interfaces: From Typical Paradigms to VR/Robot-based Social Applications", In Semana Internacional PUCP(International Week PUCP), Lima, Peru (online), March, 2024.
Abstract: BMI is a technology that provides an alternative way of communication and can augment human abilities. It can assist people in performing daily tasks, which is especially beneficial for people with disabilities. This technology requires knowledge of several fields of engineering and neuroscience, which will be given in lectures and hands-on sessions. The knowledge needed to develop a BMI application will be covered during this course.
BibTeX:
@InProceedings{Achanccaray2024,
  author    = {David Achanccaray},
  booktitle = {Semana Internacional PUCP (International Week PUCP)},
  title     = {Brain-Machine Interfaces: From Typical Paradigms to VR/Robot-based Social Applications},
  year      = {2024},
  address   = {Lima, Peru (online)},
  day       = {11-16},
  month     = mar,
  url       = {https://facultad-derecho.pucp.edu.pe/wp-content/uploads/2024/02/semana-internacional-2024-1.pdf},
  abstract  = {BMI is a technology that provides an alternative way of communication and can augment human abilities. It can assist people to perform daily tasks, which is more beneficial for people with disabilities. This technology requires knowledge of several fields of engineering and neuroscience, which will be given in lectures and hands-on sessions. The knowledge for the development of a BMI application will be approached during this course.},
}
Hidenobu Sumioka, "Social robots for older people with dementia and care staff toward all-stakeholder-centered care.", In The History & Future of Care Robots, Claremont, USA, March, 2024.
Abstract: This symposium brings together scholars and students across diverse disciplines such as history, anthropology, engineering, technology, information sciences, and Japan studies, along with experts in the care industry, to share their research findings and experiences related to the integration of assistive technologies in elderly and disability care in Japan, Denmark, and the US. We will also discuss strategies for enhancing the practicality and accessibility of care robots and other technological devices.
BibTeX:
@InProceedings{Sumioka2024,
  author    = {Hidenobu Sumioka},
  booktitle = {The History & Future of Care Robots},
  title     = {Social robots for older people with dementia and care staff toward all-stakeholder-centered care.},
  year      = {2024},
  address   = {Claremont, USA},
  day       = {30},
  month     = mar,
  abstract  = {This symposium brings together scholars and students across diverse disciplines such as history, anthropology, engineering, technology, information sciences, and Japan studies, along with experts in the care industry, to share their research findings and experiences related to the integration of assistive technologies in elderly and disability care in Japan, Denmark, and the US. We will also discuss strategies for enhancing the practicality and accessibility of care robots and other technological devices.},
}
Hiroshi Ishiguro, "AVATAR AND THE FUTURE SOCIETY", In Italian Tech Week, Torino, Italy, September, 2023.
Abstract: The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid, an android modeled on oneself, is not only scientific research into the human feeling of presence, but also practical research that allows one to move one's existence to a remote place and work there. The speaker will introduce a series of research and development efforts on tele-operated robots such as Geminoid, and discuss in what kind of society humans and these tele-operated robots, called avatars, will coexist in the future.
BibTeX:
@InProceedings{Ishiguro2023e,
  author    = {Hiroshi Ishiguro},
  booktitle = {Italian Tech Week},
  title     = {AVATAR AND THE FUTURE SOCIETY},
  year      = {2023},
  address   = {Torino, Italy},
  day       = {29},
  month     = sep,
  url       = {https://italiantechweek.com/en},
  abstract  = {The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one's existence to a remote place and work in a remote place. The speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and the tele-operated robots, called avatars, will coexist in the future.},
}
Hiroshi Ishiguro, "GEMINOID, Avatar and the future society", In AI for Good, Geneva, Switzerland, July, 2023.
Abstract: The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid, an android modeled on oneself, is not only scientific research into the human feeling of presence, but also practical research that allows one to move one's existence to a remote place and work there. In this lecture, the speaker will introduce a series of research and development efforts on tele-operated robots such as Geminoid, and discuss in what kind of society humans and robots will coexist in the future. A robot was on stage in Geneva, controlled remotely by Prof. Ishiguro.
BibTeX:
@InProceedings{Ishiguro2023d,
  author    = {Hiroshi Ishiguro},
  booktitle = {AI for Good},
  title     = {GEMINOID, Avatar and the future society},
  year      = {2023},
  address   = {Geneva, Switzerland},
  day       = {7},
  month     = jul,
  url       = {https://aiforgood.itu.int/},
  abstract  = {The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one’s existence to a remote place and work in a remote place. In this lecture, the speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and robots will coexist in the future. Robot on stage in Geneva, controlled by Prof Ishiguro remotely.},
}
Hiroshi Ishiguro, "Avatar and the future society", In 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, May, 2023.
Abstract: The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid, an android modeled on oneself, is not only scientific research into the human feeling of presence, but also practical research that allows one to move one's existence to a remote place and work there. The speaker will introduce a series of research and development efforts on tele-operated robots such as Geminoid, and discuss in what kind of society humans and these tele-operated robots, called avatars, will coexist in the future.
BibTeX:
@InProceedings{Ishiguro2023c,
  author    = {Hiroshi Ishiguro},
  booktitle = {2023 IEEE International Conference on Robotics and Automation (ICRA)},
  title     = {Avatar and the future society},
  year      = {2023},
  address   = {London, UK},
  day       = {31},
  month     = may,
  url       = {https://www.icra2023.org/},
  abstract  = {The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one's existence to a remote place and work in a remote place. The speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and the tele-operated robots, called avatars, will coexist in the future.},
}
Hiroshi Ishiguro, "Avatars and our future society", In HR Festival Europe, Zurich, Switzerland, March, 2023.
Abstract: The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid, an android modeled on oneself, is not only scientific research into the human feeling of presence, but also practical research that allows one to move one's existence to a remote place and work there. In this lecture, the speaker will introduce a series of research and development efforts on tele-operated robots such as Geminoid, and discuss in what kind of society humans and robots will coexist in the future.
BibTeX:
@InProceedings{Ishiguro2023b,
  author    = {Hiroshi Ishiguro},
  booktitle = {HR Festival Europe},
  title     = {Avatars and our future society},
  year      = {2023},
  address   = {Zurich, Switzerland},
  day       = {28-29},
  month     = mar,
  url       = {https://www.hrfestival.ch/en/},
  abstract  = {The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one’s existence to a remote place and work in a remote place. In this lecture, the speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and robots will coexist in the future.},
}
Hiroshi Ishiguro, "10 Ways Robotics Can Transform Our Future", In World Government Summit 2023, Madinat Jumeirah, Dubai, United Arab Emirates, February, 2023.
Abstract: In this talk, Professor Hiroshi Ishiguro of Osaka University provides insight into the very real threats posed by developments in robotics, avatar creation, and artificial intelligence, and their effects on our collective future.
BibTeX:
@InProceedings{Ishiguro2023a,
  author    = {Hiroshi Ishiguro},
  booktitle = {World Government Summit 2023},
  title     = {10 Ways Robotics Can Transform Our Future},
  year      = {2023},
  address   = {Madinat Jumeirah, Dubai, United Arab Emirates},
  day       = {13},
  month     = feb,
  url       = {https://www.worldgovernmentsummit.org/home},
  abstract  = {In this talk, Professor Hiroshi Ishiguro of Osaka University provides insight into the very real threats posed by developments in robotics, avatar creation, and artificial intelligence and its effects on our collective future.},
}
Hiroshi Ishiguro, "Me, Myself and AI: AI Avatar world", In DeepFest 2023, Riyadh Front Exhibition&Conference Centre, Saudi Arabia, February, 2023.
Abstract: DeepFest 2023 will be co-located with the LEAP Tech Conference in Saudi Arabia. In this interactive big talk, the speaker will present the basic ideas behind interactive robots and avatars. An android copy of himself will also be on stage to discuss our future life.
BibTeX:
@InProceedings{Ishiguro2023,
  author    = {Hiroshi Ishiguro},
  booktitle = {DeepFest 2023},
  title     = {Me, Myself and AI: AI Avatar world},
  year      = {2023},
  address   = {Riyadh Front Exhibition & Conference Centre, Saudi Arabia},
  day       = {7},
  month     = feb,
  url       = {https://deepfest.com},
  abstract  = {DeepFest 2023 will be co-located with LEAP Tech Conference in Saudi Arabia 2023. In this interactive big talk, the speaker will talk about the basic ideas on interactive robots and avatars. An android copy of himself will also be on the stage and discuss about our future life.},
}
Hiroshi Ishiguro, "Avatar and the future society", In The 14th International Conference on Social Robotics (ICSR2022), Florence, Italy (hybrid), December, 2022.
Abstract: Part of the half-day workshop "Realization of Avatar-Symbiotic Society". In this talk, the speaker will present the basic ideas behind interactive robots and avatars, and discuss our future life.
BibTeX:
@InProceedings{Ishiguro2022e,
  author    = {Hiroshi Ishiguro},
  booktitle = {The 14th International Conference on Social Robotics (ICSR2022)},
  title     = {Avatar and the future society},
  year      = {2022},
  address   = {Florence, Italy (hybrid)},
  day       = {13},
  month     = dec,
  url       = {https://www.icsr2022.it/workshop-program-13th-december/},
  abstract  = {Part of Half Day Workshop "Realization of Avatar-Symbiotic Society". In this talk, the speaker will talk about the basic ideas on interactive robots and avatars, and discuss about our future life.},
}
Carlos Toshinori Ishi, "Analysis and generation of speech-related motions, and evaluation in humanoid robots", In The GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Workshop 2022, Bangalore, India (online), November, 2022.
Abstract: The generation of motions coordinated with speech utterances is important for dialogue robots and avatars, in both autonomous and tele-operated systems, to express humanlikeness and tele-presence. For that purpose, we have been studying the relationships between speech and motion, and methods to generate motion from speech: for example, lip motion from formants, head motion from dialogue functions, facial and upper-body motions coordinated with vocalized emotional expressions (such as laughter and surprise), hand gestures from linguistic and prosodic information, and gaze behaviors from dialogue states. In this talk, I will give an overview of our research activities on motion analysis and generation, and on the evaluation of speech-driven motions generated in several humanoid robots (such as the android ERICA and the desktop robot CommU).
BibTeX:
@InProceedings{Ishi2022,
  author    = {Carlos Toshinori Ishi},
  booktitle = {The GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Workshop 2022},
  title     = {Analysis and generation of speech-related motions, and evaluation in humanoid robots},
  year      = {2022},
  address   = {Bangalore, India (online)},
  day       = {7},
  month     = nov,
  url       = {https://genea-workshop.github.io/2022/workshop/#workshop-programme},
  abstract  = {The generation of motions coordinated with speech utterances is important for dialogue robots or avatars, in both autonomous and tele-operated systems, to express humanlikeness and tele-presence. For that purpose, we have been studying on the relationships between speech and motion, and methods to generate motions from speech, for example, lip motion from formants, head motion from dialogue functions, facial and upper body motions coordinated with vocalized emotional expressions (such as laughter and surprise), hand gestures from linguistic and prosodic information, and gaze behaviors from dialogue states. In this talk, I will give an overview of our research activities on motion analysis and generation, and evaluation of speech-driven motions generated in several humanoid robots (such as the android ERICA, and a desktop robot CommU).},
}
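As one concrete example of the speech-to-motion mappings listed in this abstract, lip opening can be driven by formants estimated from the audio. The sketch below illustrates only that general idea, not the lab's actual pipeline: it estimates formants frame by frame with LPC (librosa's lpc and load are real APIs) and maps the first formant to a lip-opening value in [0, 1]; the frame sizes and mapping constants are assumptions.

# Sketch: drive a lip-opening parameter from formants estimated by LPC.
# Illustrative only; the mapping constants are hypothetical.
import numpy as np
import librosa

def frame_formants(frame: np.ndarray, sr: int, order: int = 12) -> list[float]:
    """Estimate formant frequencies (Hz) of one frame from LPC root angles."""
    a = librosa.lpc(frame * np.hamming(len(frame)), order=order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
    return [f for f in freqs if f > 90.0]  # drop near-DC roots

def lip_opening_track(wav_path: str, frame_len: float = 0.025, hop: float = 0.010):
    y, sr = librosa.load(wav_path, sr=16000)
    n, h = int(frame_len * sr), int(hop * sr)
    track = []
    for start in range(0, len(y) - n, h):
        formants = frame_formants(y[start:start + n], sr)
        f1 = formants[0] if formants else 300.0
        # Map F1 (roughly 250-900 Hz for vowels) to lip opening in [0, 1].
        track.append(float(np.clip((f1 - 250.0) / 650.0, 0.0, 1.0)))
    return track  # one lip-opening command per 10 ms frame

# Example: opening = lip_opening_track("utterance.wav")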
石黒浩, "テクノロジーと社会―未来をどうつくる", In 朝日地球会議2022, ハイブリット開催, October, 2022.
Abstract: Recent AI and robot technologies hold diverse possibilities: they can make society more convenient and prosperous, for instance by substituting for functions lost to disability or illness, while they can also be applied to weapons that kill and injure people. There are even signs that humans may one day be freed from bodily constraints such as aging and become something entirely different from what we are now. How far should we allow the technology to advance, and will everyone be able to enjoy its benefits? Together with the historian and philosopher Yuval Noah Harari, the session asks what a human being is and considers how each of us should engage with the future.
BibTeX:
@InProceedings{石黒浩2022h,
  author    = {石黒浩},
  booktitle = {朝日地球会議2022},
  title     = {テクノロジーと社会―未来をどうつくる},
  year      = {2022},
  address   = {ハイブリット開催},
  day       = {18},
  etitle    = {Technology and Society: How Do We Create the Future?},
  month     = oct,
  url       = {https://www.asahi.com/eco/awf/program/?cid=prtimes&program=20},
  abstract  = {近年の人工知能(AI)やロボットの技術は、障害や病気で失われた機能に置き換わるなど、社会をより便利で豊かなものにする一方で、人を殺傷する兵器にも応用されるなど、多様な可能性をはらんでいる。いつか人が老いなどの身体的な制約から解かれ、今と全く違う存在になる兆しすら見えてきた。どこまでの技術の進展を許容すべきか。また、すべての人がその恩恵を享受できるのだろうか。「人とは何か」を、歴史学者・哲学者であるユヴァル・ノア・ハラリ氏とともに語り、一人ひとりがどう未来に携わっていくか考える。},
}
Hidenobu Sumioka, "Humanlike Robots that connect people in Elderly Nursing Home", In 精準智慧照護 國際技術交流論壇, 新竹, 台湾(オンライン), October, 2022.
Abstract: BPSD (Behavioral and Psychological Symptoms of Dementia), often exhibited by older people with dementia, is not only a burden on caregivers but also a major social issue that increases the economic burden on society. Robot therapy is a promising approach to reducing BPSD. In this talk, I will present our study with a humanlike robot for older people with dementia.
BibTeX:
@InProceedings{Sumioka2022b,
  author    = {Hidenobu Sumioka},
  booktitle = {精準智慧照護 國際技術交流論壇},
  title     = {Humanlike Robots that connect people in Elderly Nursing Home},
  year      = {2022},
  address   = {新竹, 台湾(オンライン)},
  day       = {24},
  month     = oct,
  url       = {https://aicspht.org.tw/news/精準健康與智慧照護-國際技術交流論壇/},
  abstract  = {BPSD (Behavioral and Psychological Symptoms of Dementia), often exhibited by older people with dementia, is not only a burden on caregivers but also a major social issue that increases the economic burden on society. Robot therapy is a promising approach to reducing BPSD. In this talk, I will present our study with humanlike robot for older people with dementia.},
}
石黒浩, "人間ロボット共生社会の未来", In 北陸技術交流テクノフェア, 福井生活学習館, 福井, October, 2022.
Abstract: How should regional small and medium-sized enterprises face the coming robot society? In this lecture, the speaker introduces his research results to date and discusses the future society in which humans coexist with robots and avatars.
BibTeX:
@InProceedings{石黒浩2022i,
  author    = {石黒浩},
  booktitle = {北陸技術交流テクノフェア},
  title     = {人間ロボット共生社会の未来},
  year      = {2022},
  address   = {福井生活学習館, 福井},
  day       = {20},
  month     = oct,
  url       = {https://www.technofair.jp/seminar/},
  abstract  = {地方の中小企業はこれからのロボット社会とどう向き合うべきなのか─ 本講演では、これまでの研究成果を紹介すると共に、人間とロボット・アバターが共生するこれからの社会の姿について語る。},
}
Hiroshi Ishiguro, "Robotics and Health: Avatar technology for supporting our future society", In The 29th Scientific Meeting of the International Society of Hypertension (ISH2022), Kyoto International Conference Center, 京都, October, 2022.
Abstract: The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. Research on Geminoid, an android modeled on oneself, is not only scientific research into the human feeling of presence, but also practical research that allows one to move one's existence to a remote place and work there. In this lecture, the speaker will introduce a series of research and development efforts on tele-operated robots, or avatars, such as Geminoid, and discuss in what kind of society humans and robots will coexist in the future. By using avatars, anyone, including the elderly and people with disabilities, will be able to participate freely in various activities, extending their physical, cognitive, and perceptual abilities beyond the ordinary through a large number of avatars. In the future society, anyone will be able to work and study anytime, anywhere, minimize commuting, and have plenty of free time.
BibTeX:
@InProceedings{Ishiguro2022d,
  author    = {Hiroshi Ishiguro},
  booktitle = {The 29th Scientific Meeting of the International Society of Hypertension (ISH2022)},
  title     = {Robotics and Health: Avatar technology for supporting our future society},
  year      = {2022},
  address   = {Kyoto International Conference Center, 京都},
  day       = {13},
  month     = oct,
  url       = {https://www.ish2022.org/scientific-information/scientific-program/},
  abstract  = {The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. Research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one's existence to a remote place and work in a remote place. In this lecture, the speaker will introduce a series of research and development of tele-operated robots, avatars, such as Geminoid and discuss in what kind of society humans and robots will coexist in the future. By using avatars, anyone, including the elderly and people with disabilities, will be able to freely participate in various activities with abilities beyond ordinary people while expanding their physical, cognitive, and perceptual abilities using a large number of avatars. Anyone will be able to work and study anytime, anywhere, minimize commuting to work, and have plenty of free time in the future society.},
}
Hidenobu Sumioka, "Ethical consideration of companion robots for people with dementia", In 3rd joint ERCIM-JST Workshop 2022, Rocquencourt, France, October, 2022.
Abstract: BPSD (Behavioral and Psychological Symptoms of Dementia), often exhibited by older people with dementia, is not only a burden on caregivers but also a major social issue that increases the economic burden on society. Robot therapy is a promising approach to reducing BPSD. However, it also raises ethical and legal issues. In this talk, I will discuss some of these issues, presenting short- and long-term experiments we have conducted with our baby-like interactive robot. I point out that there are no guidelines on robot therapy for people with dementia and suggest that the efforts made in doll therapy may be helpful. In addition, I will argue that the caregiver's perspective must also be considered in developing a robot for elderly people with dementia.
BibTeX:
@InProceedings{Sumioka2022a,
  author    = {Hidenobu Sumioka},
  booktitle = {3rd joint ERCIM-JST Workshop 2022},
  title     = {Ethical consideration of companion robots for people with dementia},
  year      = {2022},
  address   = {Rocquencourt, France},
  day       = {20-21},
  month     = oct,
  url       = {https://www.ercim.eu/events/3rd-joint-ercim-jst-workshop},
  abstract  = {BPSD (Behavioral and Psychological Symptoms of Dementia), often exhibited by older people with dementia, is not only a burden on caregivers but also a major social issue that increases the economic burden on society. Robot therapy is a promising approach to reducing BPSD. However, it also offers us ethical and legal issues. In this talk, I will discuss some issues, presenting short- and long-term experiments we have conducted with our baby-like interactive robot. I point out that there are no guidelines on robot therapy for people with dementia and indicate that the efforts made in doll therapy may be helpful. In addition, I will discuss that the caregiver's perspective must also be considered in developing a robot for the elderly with dementia.},
}
Hiroshi Ishiguro, "The Future of Robotics and Humanoids", In Global AI Summit, Riyadh, Saudi Arabia, September, 2022.
Abstract: In this talk, the speaker will present the basic ideas behind interactive robots and discuss our future life.
BibTeX:
@InProceedings{Ishiguro2022c,
  author    = {Hiroshi Ishiguro},
  booktitle = {Global AI Summit},
  title     = {The Future of Robotics and Humanoids},
  year      = {2022},
  address   = {Riyadh, Saudi Arabia},
  day       = {14},
  month     = sep,
  url       = {https://globalaisummit.org/en/default.aspx},
  abstract  = {In this talk, the speaker will talk about the basic ideas on interactive robots and discuss about our future life.},
}
Hiroshi Ishiguro, "Avatar and the future society", In The 65th IEEE International Midwest Symposium on Circuits and Systems (MWSCAS 2022), online, August, 2022.
Abstract: Prof. Hiroshi Ishiguro has been doing research on teleoperated robots for more than two decades. In his research, he developed a series of avatars, called Geminoids, which resemble himself. The study not only helps to understand humans and apply methods from engineering, cognitive science, and neuroscience to various research topics, but also practically allows a person to be physically present and work in different places without travelling. The talk will introduce research and development of teleoperated androids, such as Geminoids, and discuss how humans and robots can coexist in future society.
BibTeX:
@InProceedings{Ishiguro2022,
  author    = {Hiroshi Ishiguro},
  booktitle = {The 65th IEEE International Midwest Symposium on Circuits and Systems (MWSCAS 2022)},
  title     = {Avatar and the future society},
  year      = {2022},
  address   = {online},
  day       = {8},
  month     = aug,
  url       = {https://mwscas2022.org/keynotespeakers.php#speaker7},
  abstract  = {Prof. Hiroshi Ishiguro has been doing research on teleoperated robots, for more than two decades. In his research, he developed a series of avatars, called Geminoids, which resemble himself. The study not only helps to understand humans and apply methods from engineering, cognitive science and neuroscience to various research topics, but also practically allows a person to be physically present and work in different places without travelling. The talk will introduce research and development of teleoperated androids, such as Geminoids, and discuss how humans and robots can coexist in future society.},
}
Hiroshi Ishiguro, "Interactive Intelligent Robots and Our Future", In The 9th RSI International Conference on Robotics and Mechatronics (ICRoM 2021), online, November, 2021.
Abstract: In this talk, the speaker will present the basic ideas behind interactive robots and discuss our future life.
BibTeX:
@InProceedings{Ishiguro2021b,
  author    = {Hiroshi Ishiguro},
  booktitle = {The 9th RSI International Conference on Robotics and Mechatronics (ICRoM 2021)},
  title     = {Interactive Intelligent Robots and Our Future},
  year      = {2021},
  address   = {online},
  day       = {18},
  month     = nov,
  url       = {https://icrom.ir/},
  abstract  = {In this talk, the speaker will talk about the basic ideas on interactive robots and discuss about our future life.},
}
Hidenobu Sumioka, "Human-Robot Deep interaction", In CiNET Friday Lunch Seminar, online, October, 2021.
Abstract: Communication robots are expected to provide a variety of support services through interaction with people. They have been reported to be especially effective for the elderly and patients with mental illness. In the past, research on human-robot interaction has examined the effects of actual interaction with robots using psychological scales and motor information such as gaze and movement. However, in recent years, researchers have started to focus on brain activity during interaction to investigate the effects of actual interaction on the brain and control robot behavior based on brain activity to facilitate smooth interaction with humans. In this presentation, I will introduce our ongoing research to realize human-robot interaction using brain activity during the interaction. First, we will report the effect of the robot’s appearance on brain activity. Next, we will present a method for detecting subjective difficulty based on the cognitive load during a working memory task. Finally, we will introduce our ongoing efforts to investigate how humans are affected by robot interaction from multi-layer information among human behavior, brain activity, and metabolites.
BibTeX:
@Inproceedings{Sumioka2021b,
  author    = {Hidenobu Sumioka},
  title     = {Human-Robot Deep interaction},
  booktitle = {CiNET Friday Lunch Seminar},
  year      = {2021},
  address   = {online},
  month     = oct,
  day       = {1},
  url       = {https://cinet.jp/japanese/event/20211001_4027/},
  abstract  = {Communication robots are expected to provide a variety of support services through interaction with people. They have been reported to be especially effective for the elderly and patients with mental illness.
In the past, research on human-robot interaction has examined the effects of actual interaction with robots using psychological scales and motor information such as gaze and movement. However, in recent years, researchers have started to focus on brain activity during interaction to investigate the effects of actual interaction on the brain and control robot behavior based on brain activity to facilitate smooth interaction with humans.
In this presentation, I will introduce our ongoing research to realize human-robot interaction using brain activity during the interaction.
First, we will report the effect of the robot’s appearance on brain activity. Next, we will present a method for detecting subjective difficulty based on the cognitive load during a working memory task.
Finally, we will introduce our ongoing efforts to investigate how humans are affected by robot interaction from multi-layer information among human behavior, brain activity, and metabolites.},
}
Hiroshi Ishiguro, "Constructive Approach for Interactive Robots and the Fundamental Issues", In ACM/IEEE International Conference on Human-Robot Interaction (HRI2021), virtual, March, 2021.
Abstract: In this talk, the speaker will present the basic ideas behind interactive robots and discuss our future life.
BibTeX:
@InProceedings{Ishiguro2021a,
  author    = {Hiroshi Ishiguro},
  booktitle = {ACM/IEEE International Conference on Human-Robot Interaction (HRI2021)},
  title     = {Constructive Approach for Interactive Robots and the Fundamental Issues},
  year      = {2021},
  address   = {virtual},
  day       = {9},
  month     = mar,
  url       = {https://humanrobotinteraction.org/2021/},
  abstract  = {In this talk, the speaker will present the basic ideas behind interactive robots and discuss our future life.},
}
Hiroshi Ishiguro, "Studies on avatars and our future society", In the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI 2020), Yokohama, Japan (virtual), January, 2021.
Abstract: The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoids, androids modeled on their operators, is not only scientific research into the human feeling of presence, but also practical research that allows one to project one's existence to a remote place and work there. In this lecture, the speaker will introduce a series of research and development efforts on tele-operated robots such as Geminoids, and discuss in what kind of society humans and robots will coexist in the future.
BibTeX:
@InProceedings{Ishiguro2021,
  author    = {Hiroshi Ishiguro},
  booktitle = {the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI 2020)},
  title     = {Studies on avatars and our future society},
  year      = {2021},
  address   = {Yokohama, Japan (virtual)},
  url       = {https://ijcai20.org/},
  month     = jan,
  abstract  = {The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoids, androids modeled on their operators, is not only scientific research into the human feeling of presence, but also practical research that allows one to project one's existence to a remote place and work there. In this lecture, the speaker will introduce a series of research and development efforts on tele-operated robots such as Geminoids, and discuss in what kind of society humans and robots will coexist in the future.},
}
Hiroshi Ishiguro, "Studies on interactive robots", In IEEE TALE2020, virtual, December, 2020.
Abstract: The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoids, androids modeled on their operators, is not only scientific research into the human feeling of presence, but also practical research that allows one to project one's existence to a remote place and work there. In this lecture, the speaker will introduce a series of research and development efforts on tele-operated robots such as Geminoids, and discuss in what kind of society humans and robots will coexist in the future.
BibTeX:
@Inproceedings{Ishiguro2020,
  author    = {Hiroshi Ishiguro},
  title     = {Studies on interactive robots},
  booktitle = {IEEE TALE2020},
  year      = {2020},
  address   = {virtual},
  month     = dec,
  day       = {8-11},
  url       = {http://tale2020.org/},
  abstract  = {The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoids, androids modeled on their operators, is not only scientific research into the human feeling of presence, but also practical research that allows one to project one's existence to a remote place and work there. In this lecture, the speaker will introduce a series of research and development efforts on tele-operated robots such as Geminoids, and discuss in what kind of society humans and robots will coexist in the future.},
}
Hidenobu Sumioka, "Social Robots for Touch interaction and Education", In 2019 International Conference on Advances in STEM Education (ASTEM 2019), The Education University of Hong Kong (EdUHK), Hong Kong, December, 2019.
Abstract: In this talk, I will present the potential applications of social robots in education, introducing three aspects. First, social robots can easily change their relationship with us by playing different roles. They can become our teacher, our student, our care-receiver, or our peer, depending on the social context. Second, referring to our field experiment with a teleoperated android, I will show that they can facilitate human-human communication and can also provide opportunities to improve the relationship between elderly people and care staff. Finally, I will show that the physical embodiment of the robot enables us to overcome our limitations in building social bonds with people and provides a new way of forming close human relationships.
BibTeX:
@InProceedings{Sumioka2019g,
  author    = {Hidenobu Sumioka},
  booktitle = {2019 International Conference on Advances in STEM Education (ASTEM 2019)},
  title     = {Social Robots for Touch interaction and Education},
  year      = {2019},
  address   = {The Education University of Hong Kong (EdUHK), Hong Kong},
  day       = {18-20},
  month     = dec,
  url       = {https://www.eduhk.hk/astem/},
  abstract  = {In this talk, I will present the potential applications of social robots in education, introducing three aspects. First, social robots can easily change their relationship with us by playing different roles. They can become our teacher, our student, our care-receiver, or our peer, depending on the social context. Second, referring to our field experiment with a teleoperated android, I will show that they can facilitate human-human communication and can also provide opportunities to improve the relationship between elderly people and care staff. Finally, I will show that the physical embodiment of the robot enables us to overcome our limitations in building social bonds with people and provides a new way of forming close human relationships.},
}
Hidenobu Sumioka, "Emerging Education with Social Robots", In The 11th Asian Conference on Education (ACE2019), Toshi Center Hotel, Tokyo, November, 2019.
Abstract: Recent advances in robotic technologies enable robots to support us in daily activities such as social interaction. Such robots, called social robots, often make us interact in more intuitive and casual ways than with a real human because of their lack of nonverbal cues and demographic messages. Thanks to this characteristic, they are just beginning to be applied to various fields of social interaction such as education. In this talk, I will present the potential applications of social robots in education, introducing three aspects. First, social robots can easily change their relationship with us by playing different roles. They can become our teachers, our students, and our peers, depending on the social context. Second, referring to our field experiment with a teleoperated android, I will show that they can facilitate human-human communication and can also provide opportunities for us to improve our communication skills. Finally, I will show that the physical embodiment of the robot enables us to overcome our limitations in building social bonds with people and provides a new way of forming close human relationships.
BibTeX:
@InProceedings{Sumioka2019f,
  author    = {Hidenobu Sumioka},
  booktitle = {The 11th Asian Conference on Education (ACE2019)},
  title     = {Emerging Education with Social Robots},
  year      = {2019},
  address   = {Toshi Center Hotel, Tokyo},
  day       = {1-3},
  month     = nov,
  url       = {https://ace.iafor.org/},
  abstract  = {Recent advances in robotic technologies enable robots to support us in daily activities such as social interaction. Such robots, called social robots, often make us interact in more intuitive and casual ways than with a real human because of their lack of nonverbal cues and demographic messages. Thanks to this characteristic, they are just beginning to be applied to various fields of social interaction such as education. In this talk, I will present the potential applications of social robots in education, introducing three aspects. First, social robots can easily change their relationship with us by playing different roles. They can become our teachers, our students, and our peers, depending on the social context. Second, referring to our field experiment with a teleoperated android, I will show that they can facilitate human-human communication and can also provide opportunities for us to improve our communication skills. Finally, I will show that the physical embodiment of the robot enables us to overcome our limitations in building social bonds with people and provides a new way of forming close human relationships.},
}
Soheil Keshmiri, "Human-Robot Physical Interaction: The recent Findings and their Utilities for preventing age-related cognitive decline, improving the quality of child care, and advancing quality of mental disorder services", In Big Data and AI Congress 5th Edition 2019, Barcelona, Spain, pp. 1-33, October, 2019.
BibTeX:
@Inproceedings{Keshmiri2019j,
  author    = {Soheil Keshmiri},
  title     = {Human-Robot Physical Interaction: The recent Findings and their Utilities for preventing age-related cognitive decline, improving the quality of child care, and advancing quality of mental disorder services},
  booktitle = {Big Data and AI Congress 5th Edition 2019},
  year      = {2019},
  pages     = {1-33},
  address   = {Barcelona, Spain},
  month     = oct,
  day       = {17},
  url       = {https://bigdatacongress.barcelona/en/},
}
Hiorshi Ishiguro, "Human Robots and Smart Textiles", In Comfort and Smart Textile International Symposium 2019, Kasugano International Forum, Nara, September, 2019.
Abstract: In this talk, the speaker will talk about the basic ideas on interactive robots and discuss about our future life.
BibTeX:
@InProceedings{Ishiguro2019c,
  author    = {Hiroshi Ishiguro},
  booktitle = {Comfort and Smart Textile International Symposium 2019},
  title     = {Human Robots and Smart Textiles},
  year      = {2019},
  address   = {Kasugano International Forum, Nara},
  day       = {6-7},
  month     = sep,
  url       = {https://cscenter.co.jp/issttcc2019/},
  abstract  = {In this talk, the speaker will present the basic ideas behind interactive robots and discuss our future life.},
}
Hiroshi Ishiguro, "Studies on Interactive Robots", In Living Machines 2019, Kasugano International Forum, Nara, July, 2019.
Abstract: In this talk, he will introduce various interactive personal robots and androids and explain how to study the related technologies and scientific issues by using them. In particular, he will focus on the embodiment, emotion and intention/desire of the robots and androids. Further, he will discuss our future society, in which we have symbiotic relationships with them.
BibTeX:
@InProceedings{Ishiguro2019b,
  author    = {Hiroshi Ishiguro},
  booktitle = {Living Machines 2019},
  title     = {Studies on Interactive Robots},
  year      = {2019},
  address   = {Kasugano International Forum, Nara},
  day       = {9-12},
  month     = jul,
  url       = {http://livingmachinesconference.eu/2019/plenarytalks/},
  abstract  = {In this talk, he will introduce various interactive personal robots and androids and explain how to study the related technologies and scientific issues by using them. In particular, he will focus on the embodiment, emotion and intention/desire of the robots and androids. Further, he will discuss our future society, in which we have symbiotic relationships with them.},
}
Hidenobu Sumioka, "Robotics For Elderly Society", In Long term care system & scientific technology in Japan aging society, 大阪大学, 大阪, July, 2019.
Abstract: In this talk, I present current elderly care with communication robots in Japan
BibTeX:
@InProceedings{Sumioka2019b,
  author    = {Hidenobu Sumioka},
  booktitle = {Long term care system \& scientific technology in Japan aging society},
  title     = {Robotics For Elderly Society},
  year      = {2019},
  address   = {Osaka University, Osaka},
  day       = {22},
  month     = jul,
  abstract  = {In this talk, I present current elderly care with communication robots in Japan.},
}
Hiroshi Ishiguro, "Studies on Interactive Robots", In PerCom2019, Kyoto International Conference Center, Kyoto, March, 2019.
Abstract: We humans have an innate brain function to recognize humans. Therefore, very humanlike robots, androids, can be ideal information media for human-robot/computer interaction. The speaker, Ishiguro, has developed various types of interactive robots and androids so far. These robots can be used for studying the technologies and for understanding human nature, and he has contributed to establishing the research area of Human-Robot Interaction with them. Geminoid, a teleoperated android of an existing person, can transmit the presence of the operator to a distant place. The operator recognizes the android body as his/her own body after talking with someone through the geminoid, and has a virtual feeling of being touched when someone touches the geminoid. However, the geminoid is not the ideal medium for everybody. For example, elderly people often hesitate to talk with adult humans and adult androids. The question is what the ideal medium for everybody is. To investigate this, the speaker proposes a minimum design of interactive humanoids, called Telenoid. The geminoid is the perfect copy of an existing person and the maximum design of interactive humanoids; the minimum design, on the other hand, looks like a human, but we cannot judge its age and gender. Elderly people like to talk with the Telenoid very much. In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans. Further, Ishiguro has recently been developing and studying autonomous conversational robots and androids, focusing especially on the embodiment, emotion and intention/desire of the robots and androids. In addition to these robotics studies, he will discuss our future society, in which we have symbiotic relationships with them.
BibTeX:
@Inproceedings{Ishiguro2019a,
  author    = {Hiroshi Ishiguro},
  title     = {Studies on Interactive Robots},
  booktitle = {PerCom2019},
  year      = {2019},
  address   = {Kyoto International Conference Center, Kyoto},
  month     = mar,
  day       = {13},
  url       = {http://www.percom.org/},
  abstract  = {We humans have an innate brain function to recognize humans. Therefore, very humanlike robots, androids, can be ideal information media for human-robot/computer interaction. The speaker, Ishiguro, has developed various types of interactive robots and androids so far. These robots can be used for studying the technologies and for understanding human nature, and he has contributed to establishing the research area of Human-Robot Interaction with them.

Geminoid, a teleoperated android of an existing person, can transmit the presence of the operator to a distant place. The operator recognizes the android body as his/her own body after talking with someone through the geminoid, and has a virtual feeling of being touched when someone touches the geminoid.

However, the geminoid is not the ideal medium for everybody. For example, elderly people often hesitate to talk with adult humans and adult androids. The question is what the ideal medium for everybody is. To investigate this, the speaker proposes a minimum design of interactive humanoids, called Telenoid. The geminoid is the perfect copy of an existing person and the maximum design of interactive humanoids; the minimum design, on the other hand, looks like a human, but we cannot judge its age and gender. Elderly people like to talk with the Telenoid very much. In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.

Further, Ishiguro has recently been developing and studying autonomous conversational robots and androids, focusing especially on the embodiment, emotion and intention/desire of the robots and androids.

In addition to these robotics studies, he will discuss our future society, in which we have symbiotic relationships with them.},
}
Hidenobu Sumioka, "Robotics for Elderly and Stressful Society", In The Kansai Resilience Forum 2019, The Hyogo Prefectural Museum of Art, 兵庫, February, 2019.
Abstract: The Kansai Resilience Forum 2019 is an event organised by The Government of Japan in collaboration with The International Academic Forum (IAFOR), which re-examines resilience from interdisciplinary perspectives and paradigms, from the abstract concept to the concrete, with contributions from thought leaders in academia, business and government.
BibTeX:
@InProceedings{Sumioka2019,
  author    = {Hidenobu Sumioka},
  booktitle = {The Kansai Resilience Forum 2019},
  title     = {Robotics for Elderly and Stressful Society},
  year      = {2019},
  address   = {The Hyogo Prefectural Museum of Art, Hyogo},
  day       = {22},
  month     = feb,
  url       = {https://kansai-resilience-forum.jp/},
  abstract  = {The Kansai Resilience Forum 2019, organised by the Government of Japan in collaboration with The International Academic Forum (IAFOR), re-examines resilience from interdisciplinary perspectives and paradigms, from the abstract concept to the concrete, with contributions from thought leaders in academia, business and government.},
}
Hiroshi Ishiguro, "State-of-the-art and different approaches to robotics research and development", In Roboethics: Humans, Machines and Health, New Synod Hall, Vatican, February, 2019.
Abstract: In this talk, he will introduce interactive and communicative personal robots and androids and discuss the technologies and scientific issues. In particular, he will discuss the intention/desire, experiences, emotion and consciousness of the robots and androids.
BibTeX:
@Inproceedings{Ishiguro2019,
  author    = {Hiroshi Ishiguro},
  title     = {State-of-the-art and different approaches to robotics research and development},
  booktitle = {Roboethics: Humans, Machines and Health},
  year      = {2019},
  address   = {New Synod Hall, Vatican},
  month     = Feb,
  day       = {25},
  url       = {http://www.academyforlife.va/content/pav/en/news/2018/humans--machines-and-health--workshop-2019.html},
  abstract  = {In this talk, he will introduce interactive and communicative personal robots and androids and discuss the technologies and scientific issues. In particular, he will discuss the intention/desire, experiences, emotion and consciousness of the robots and androids.},
}
Hiroshi Ishiguro, "Humanoid Robots and Our Future Society", In 18th ACM International Conference on Intelligent Virtual Agents, Sydney, Australia, November, 2018.
Abstract: In this talk, he will introduce interactive and communicative personal robots and androids and discuss the technologies and scientific issues. In particular, he will discuss the intention/desire, experiences, emotion and consciousness of the robots and androids.
BibTeX:
@Inproceedings{Ishiguro2018f,
  author    = {Hiroshi Ishiguro},
  title     = {Humanoid Robots and Our Future Society},
  booktitle = {18th ACM International Conference on Intelligent Virtual Agents},
  year      = {2018},
  address   = {Sydney, Australia},
  month     = Nov,
  day       = {7},
  url       = {https://iva2018.westernsydney.edu.au/},
  abstract  = {In this talk, he will introduce interactive and communicative personal robots and androids and discuss the technologies and scientific issues. In particular, he will discuss the intention/desire, experiences, emotion and consciousness of the robots and androids.},
}
Hiorshi Ishiguro, "I robot faranno parte della nostra società?", In Anteprima del Forum di Cernobbio, Villa d'Este Via Regina, Italy, September, 2018.
Abstract: In this talk, the speaker will talk about the basic ideas on interactive robots and discuss about our future life.
BibTeX:
@Inproceedings{Ishiguro2018e,
  author    = {Hiroshi Ishiguro},
  title     = {I robot faranno parte della nostra società?},
  booktitle = {Anteprima del Forum di Cernobbio},
  year      = {2018},
  address   = {Villa d'Este Via Regina, Italy},
  month     = Sep,
  day       = {6},
  url       = {https://www.aggiornamentopermanente.it/it/incontri/view/7583},
  abstract  = {In this talk, the speaker will present the basic ideas behind interactive robots and discuss our future life.},
}
Hiroshi Ishiguro, "Androids, AI and the Future of Human Creativity", In ALIFE 2018, Miraikan, Tokyo, July, 2018.
Abstract: In this talk, the speaker will present the basic ideas behind interactive robots and discuss our future life.
BibTeX:
@Inproceedings{Ishiguro2018c,
  author    = {Hiroshi Ishiguro},
  title     = {Androids, AI and the Future of Human Creativity},
  booktitle = {ALIFE 2018},
  year      = {2018},
  address   = {Miraikan, Tokyo},
  month     = Jul,
  day       = {26},
  url       = {http://2018.alife.org/},
  abstract  = {In this talk, the speaker will present the basic ideas behind interactive robots and discuss our future life.},
}
Hiroshi Ishiguro, "Androids, AI and the Future of Human Creativity", In Cannes Lions 2018, Palais des Festivals, Cannes, June, 2018.
Abstract: In this talk, the speaker will present the basic ideas behind interactive robots and discuss our future life.
BibTeX:
@Inproceedings{Ishiguro2018b,
  author    = {Hiroshi Ishiguro},
  title     = {Androids, AI and the Future of Human Creativity},
  booktitle = {Cannes Lions 2018},
  year      = {2018},
  address   = {Palais des Festivals, Cannes},
  month     = Jun,
  day       = {18},
  url       = {https://www.canneslions.com},
  abstract  = {In this talk, the speaker will present the basic ideas behind interactive robots and discuss our future life.},
}
Hiroshi Ishiguro, "Fundamental Issues in Symbiotic Human-Robot Interaction", In Robotics: Science and Systems 2018, Carnegie Music Hall, USA, June, 2018.
Abstract: The focus of robotics research is shifting from industrial robots to robots working in daily situations, and one of the most important issues is to develop autonomous social robots capable of interacting and living together with humans, i.e., robots symbiotic with humans. The aim of this workshop is to introduce research activities in "Symbiotic Human-Robot Interaction" and discuss the future challenges in this research area. One goal of this research is providing communication support for people, such as communication care-support robots for elderly people, which is as important as physical support in elderly care. Another aim is to offer a framework for understanding what a human is by using robots as communication stimulus devices in actual situations. In this workshop, we will introduce research activities using communication robots, along with a demonstration of an android, one of the most advanced communication robots. We will discuss the future of everyday robots, the key technologies required to make them true companions living together with us, and the ethical and social issues related to this topic.
BibTeX:
@Inproceedings{Ishiguro2018d,
  author    = {Hiroshi Ishiguro},
  title     = {Fundamental Issues in Symbiotic Human-Robot Interaction},
  booktitle = {Robotics: Science and Systems 2018},
  year      = {2018},
  address   = {Carnegie Music Hall, USA},
  month     = Jun,
  day       = {30},
  url       = {http://www.roboticsconference.org/},
  abstract  = {The focus of robotics research is shifting from industrial robots to robots working in daily situations, and one of the most important issues is to develop autonomous social robots capable of interacting and living together with humans, i.e., robots symbiotic with humans. The aim of this workshop is to introduce research activities in "Symbiotic Human-Robot Interaction" and discuss the future challenges in this research area. One goal of this research is providing communication support for people, such as communication care-support robots for elderly people, which is as important as physical support in elderly care. Another aim is to offer a framework for understanding what a human is by using robots as communication stimulus devices in actual situations. In this workshop, we will introduce research activities using communication robots, along with a demonstration of an android, one of the most advanced communication robots. We will discuss the future of everyday robots, the key technologies required to make them true companions living together with us, and the ethical and social issues related to this topic.},
}
Hiroshi Ishiguro, "Connecting with robots", In and& festival, Leuven, Belgium, May, 2018.
Abstract: Hiroshi believes that since we are hardwired to interact with and place our faith in humans, the more humanlike we can make a robot appear, the more open we'll be to sharing our lives with it. Toward this end, his teams are pioneering a young field of research called human-robot interaction, a hybrid discipline that combines engineering, AI, social psychology and cognitive science. Would you trust robots to play a significant role in our future cities? Analyzing and cultivating our evolving relationship with robots, Hiroshi seeks to understand why and when we're willing to interact with, and maybe even feel affection for, a machine. And with each android he produces, Ishiguro believes he is moving closer to building that trust.
BibTeX:
@Inproceedings{Ishiguro2018a,
  author    = {Hiroshi Ishiguro},
  title     = {Connecting with robots},
  booktitle = {and\& festival},
  year      = {2018},
  address   = {Leuven, Belgium},
  month     = May,
  day       = {3},
  url       = {https://www.andleuven.com/en/program/summit/prof-hiroshi-ishiguro},
  abstract  = {Hiroshi believes that since we are hardwired to interact with and place our faith in humans, the more humanlike we can make a robot appear, the more open we'll be to sharing our lives with it. Toward this end, his teams are pioneering a young field of research called human-robot interaction, a hybrid discipline that combines engineering, AI, social psychology and cognitive science. 
Would you trust robots to play a significant role in our future cities? Analyzing and cultivating our evolving relationship with robots, Hiroshi seeks to understand why and when we're willing to interact with, and maybe even feel affection for, a machine. And with each android he produces, Ishiguro believes he is moving closer to building that trust.},
}
Hidenobu Sumioka, "Social touch in human-human telecommunication mediated by a robot", In IoT Enabling Sensing/Network/AI and Photonics Conference 2018 (IoT-SNAP2018), Pacifico Yokohama, Kanagawa, April, 2018.
Abstract: We present how virtual physical contact mediated by an artificial entity affects our quality of life through human-human telecommunication, focusing on elderly care and education.
BibTeX:
@Inproceedings{Sumioka2018,
  author    = {Hidenobu Sumioka},
  title     = {Social touch in human-human telecommunication mediated by a robot},
  booktitle = {IoT Enabling Sensing/Network/AI and Photonics Conference 2018 (IoT-SNAP2018)},
  year      = {2018},
  address   = {Pacifico Yokohama, Kanagawa},
  month     = Apr,
  day       = {24-27},
  url       = {http://iot-snap.opicon.jp/},
  abstract  = {We present how virtual physical contact mediated by an artificial entity affects our quality of life through human-human telecommunication, focusing on elderly care and education.},
}
Hiorshi Ishiguro, "Studies on Interactive Robots", In International Research Conference Robophilosophy 2018, Vienna, Austria, February, 2018.
Abstract: In this talk, he will introduce interactive and communicative personal robots and androids and discuss the technologies and scientific issues. Especially, he will discuss on intention/desire, experiences, emotion and consciousness of the robots and androids.
BibTeX:
@Inproceedings{Ishiguro2018,
  author    = {Hiroshi Ishiguro},
  title     = {Studies on Interactive Robots},
  booktitle = {International Research Conference Robophilosophy 2018},
  year      = {2018},
  address   = {Vienna, Austria},
  month     = Feb,
  day       = {15},
  url       = {http://conferences.au.dk/robo-philosophy-2018-at-the-university-of-vienna/},
  abstract  = {In this talk, he will introduce interactive and communicative personal robots and androids and discuss the technologies and scientific issues. In particular, he will discuss the intention/desire, experiences, emotion and consciousness of the robots and androids.},
}
Hiroshi Ishiguro, "Conversational Robots and the Fundamental Issues", In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2017), Okinawa, Japan, December, 2017.
Abstract: This talk introduces the robots and discusses fundamental issues, focusing in particular on the feeling of presence, so-called "sonzaikan" in Japanese, and on dialogue.
BibTeX:
@Inproceedings{Ishiguro2017k,
  author    = {Hiroshi Ishiguro},
  title     = {Conversational Robots and the Fundamental Issues},
  booktitle = {2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2017)},
  year      = {2017},
  address   = {Okinawa, Japan},
  month     = Dec,
  day       = {20},
  url       = {https://asru2017.org/default.asp},
  abstract  = {This talk introduces the robots and discusses fundamental issues, focusing in particular on the feeling of presence, so-called "sonzaikan" in Japanese, and on dialogue.},
}
Hiroshi Ishiguro, "Humanoid Robots and Our Future Society", In INCmty, Monterrey N.L., Mexico, November, 2017.
Abstract: Hiroshi Ishiguro is an innovator like no other in the world of robotics, redefining standards of quality and creativity in the field. His passion for and dedication to the subject have led him to create robots called androids that resemble humans both physically and mentally, giving them a sense of realism like never before. The Intelligent Robotics Laboratory of the School of Engineering Sciences of Osaka University is the place where Ishiguro's ideas are born, developed and turned into reality.
BibTeX:
@Inproceedings{Ishiguro2017j,
  author    = {Hiroshi Ishiguro},
  title     = {Humanoid Robots and Our Future Society},
  booktitle = {INCmty},
  year      = {2017},
  address   = {Monterrey N.L., Mexico},
  month     = Nov,
  day       = {16},
  url       = {http://incmty.com/},
  abstract  = {Hiroshi Ishiguro is an innovator like no other in the world of robotics, redefining standards of quality and creativity in the field. His passion for and dedication to the subject have led him to create robots called androids that resemble humans both physically and mentally, giving them a sense of realism like never before. The Intelligent Robotics Laboratory of the School of Engineering Sciences of Osaka University is the place where Ishiguro's ideas are born, developed and turned into reality.},
}
Hiroshi Ishiguro, "Studies on Interactive Robots - Principles of conversation", In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver Convention Centre, Canada, September, 2017.
Abstract: This talk introduces the robots and androids and discusses our future society supported by them. In addition, this talk discusses the fundamentals of human-robot interaction and conversation, focusing on the feeling of presence given by robots and androids and on conversations with two robots and touch panels.
BibTeX:
@Inproceedings{Ishiguro2017h,
  author    = {Hiroshi Ishiguro},
  title     = {Studies on Interactive Robots - Principles of conversation},
  booktitle = {2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017)},
  year      = {2017},
  address   = {Vancouver Convention Centre, Canada},
  month     = Sep,
  day       = {26},
  url       = {http://www.iros2017.org/},
  abstract  = {This talk introduces the robots and androids and discusses our future society supported by them. In addition, this talk discusses the fundamentals of human-robot interaction and conversation, focusing on the feeling of presence given by robots and androids and on conversations with two robots and touch panels.},
}
Hiroshi Ishiguro, "Robotics for understanding humans", In 第114回医学物理学会学術大会, 8th Japan-Korea Joint Meeting on Medical Physics, 大阪大学コンベンションセンター, 大阪, September, 2017.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。
BibTeX:
@Inproceedings{Ishiguro2017i,
  author    = {Hiroshi Ishiguro},
  title     = {Robotics for understanding humans},
  booktitle = {第114回医学物理学会学術大会, 8th Japan-Korea Joint Meeting on Medical Physics},
  year      = {2017},
  address   = {Osaka University Convention Center, Osaka},
  month     = Sep,
  day       = {16},
  url       = {http://www.jsmp.org/conf/114/index.html},
  abstract  = {This lecture discusses robots and the future society, together with an introduction to the results of robotics research.},
}
Hiroshi Ishiguro, "Androids, Robots, and Our Future Life", In 2970°The Boiling Point, The Arts Centre Gold Coast, Australia, September, 2017.
Abstract: In this talk, the speaker will present the basic ideas behind interactive robots and give a demonstration with a robot.
BibTeX:
@Inproceedings{Ishiguro2017g,
  author    = {Hiroshi Ishiguro},
  title     = {Androids, Robots, and Our Future Life},
  booktitle = {2970°The Boiling Point},
  year      = {2017},
  address   = {The Arts Centre Gold Coast, Australia},
  month     = Sep,
  day       = {9},
  url       = {http://www.2970degrees.com.au/},
  abstract  = {In this talk, the speaker will present the basic ideas behind interactive robots and give a demonstration with a robot.},
}
Hiroshi Ishiguro, "Studies on humanlike robots", In Computer Graphics International 2017 (CGI2017), Keio University Hiyoshi Campus, Yokohama, June, 2017.
Abstract: In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.
BibTeX:
@Inproceedings{Ishiguro2017e,
  author    = {Hiroshi Ishiguro},
  title     = {Studies on humanlike robots},
  booktitle = {Computer Graphics International 2017 (CGI2017)},
  year      = {2017},
  address   = {Keio University Hiyoshi Campus, Yokohama},
  month     = Jun,
  url       = {http://fj.ics.keio.ac.jp/cgi17/},
  abstract  = {In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.},
}
Hiroshi Ishiguro, "Studies on Humanlike Robots", In Academia Film Olomouc (AFO52), Olomouc, Czech, April, 2017.
Abstract: In this talk, the speaker discusses the design principles for the robots and their effects to conversations with humans.
BibTeX:
@Inproceedings{Ishiguro2017f,
  author    = {Hiroshi Ishiguro},
  title     = {Studies on Humanlike Robots},
  booktitle = {Academia Film Olomouc (AFO52)},
  year      = {2017},
  address   = {Olomouc, Czech Republic},
  month     = Apr,
  day       = {28},
  url       = {http://www.afo.cz/programme/3703/},
  abstract  = {In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.},
}
Hiroshi Ishiguro, "Humans and Robots in a Free-for-All Discussion", In The South by Southwest (SXSW) Conference & Festivals 2017, Austin Convention Center, USA, March, 2017.
Abstract: Robots now equal, if not surpass, humans in many skill sets - games, driving, and musical performance. Now they are able to maintain logical conversations rather than merely responding to simple questions. Famed roboticist Dr. Ishiguro, who created an android that is the spitting image of himself, Japanese communication giant NTT's Dr. Higashinaka, who spearheads the development of the latest spoken dialogue technology, and two robots will engage in lively banter. Are robots now our conversational companions?
BibTeX:
@Inproceedings{Ishiguro2017c,
  author    = {Hiroshi Ishiguro},
  title     = {Humans and Robots in a Free-for-All Discussion},
  booktitle = {The South by Southwest (SXSW) Conference \& Festivals 2017},
  year      = {2017},
  address   = {Austin Convention Center, USA},
  month     = Mar,
  day       = {12},
  url       = {http://schedule.sxsw.com/2017/events/PP95381},
  abstract  = {Robots now equal, if not surpass, humans in many skill sets - games, driving, and musical performance. Now they are able to maintain logical conversations rather than merely responding to simple questions. Famed roboticist Dr. Ishiguro, who created an android that is the spitting image of himself, Japanese communication giant NTT's Dr. Higashinaka, who spearheads the development of the latest spoken dialogue technology, and two robots will engage in lively banter. Are robots now our conversational companions?},
}
Hiroshi Ishiguro, "AI, Labour, Creativity and Authorship", In AI in Asia: AI for Social Good, Waseda University, Tokyo, March, 2017.
Abstract: In this talk, the speaker discusses AI (artificial intelligence) and humanoid robots and how they will affect society.
BibTeX:
@Inproceedings{Ishiguro2017a,
  author    = {Hiroshi Ishiguro},
  title     = {AI, Labour, Creativity and Authorship},
  booktitle = {AI in Asia: AI for Social Good},
  year      = {2017},
  address   = {Waseda University, Tokyo},
  month     = Mar,
  day       = {6},
  url       = {https://www.digitalasiahub.org/2017/02/27/ai-in-asia-ai-for-social-good/},
  abstract  = {In this talk, the speaker discusses AI (artificial intelligence) and humanoid robots and how they will affect society.},
}
Hiroshi Ishiguro, "Androids, Robots, and Our Future Life", In CeBIT 2017, Hannover, Germany, March, 2017.
Abstract: We humans have an innate brain function to recognize humans. Therefore, humanlike robots, androids, can be ideal information media for human-robot/computer interaction. In this talk, the speaker introduces the robots developed in his laboratories and their practical applications, and discusses how robots will change our lives in the future.
BibTeX:
@Inproceedings{Ishiguro2017b,
  author    = {Hiroshi Ishiguro},
  title     = {Androids, Robots, and Our Future Life},
  booktitle = {CeBIT 2017},
  year      = {2017},
  address   = {Hannover, Germany},
  month     = Mar,
  day       = {21},
  url       = {http://www.cebit.de/en/},
  abstract  = {We humans have an innate brain function to recognize humans. Therefore, humanlike robots, androids, can be ideal information media for human-robot/computer interaction. In this talk, the speaker introduces the robots developed in his laboratories and their practical applications, and discusses how robots will change our lives in the future.},
}
Hiroshi Ishiguro, "Uncanny Valleys: Thinking and Feeling in the Age of Synthetic Humans", In USC Visions and Voices, Doheny Memorial Library, USA, March, 2017.
Abstract: A discussion with leading robotics experts, including Hiroshi Ishiguro, Yoshio Matsumoto, Travis Deyle, and Jonathan Gratch of the USC Institute for Creative Technologies, and science historian Jessica Riskin (The Restless Clock) about the future of artificial life and new pathways for human-machine interactions. You'll also have a chance to explore an interactive showcase that reveals how roboticists are replicating human locomotion, facial expressions, and intelligence as they assemble walking, talking, thinking, and feeling machines.
BibTeX:
@Inproceedings{Ishiguro2017d,
  author    = {Hiroshi Ishiguro},
  title     = {Uncanny Valleys: Thinking and Feeling in the Age of Synthetic Humans},
  booktitle = {USC Visions and Voices},
  year      = {2017},
  address   = {Doheny Memorial Library, USA},
  month     = Mar,
  day       = {23},
  url       = {https://calendar.usc.edu/event/uncanny_valleys_thinking_and_feeling_in_the_age_of_synthetic_humans#.WNDWQz96pGZ},
  abstract  = {A discussion with leading robotics experts, including Hiroshi Ishiguro, Yoshio Matsumoto, Travis Deyle, and Jonathan Gratch of the USC Institute for Creative Technologies, and science historian Jessica Riskin (The Restless Clock) about the future of artificial life and new pathways for human-machine interactions. You'll also have a chance to explore an interactive showcase that reveals how roboticists are replicating human locomotion, facial expressions, and intelligence as they assemble walking, talking, thinking, and feeling machines.},
}
Hiroshi Ishiguro, "Studies on humanlike robots", In IVA seminar, IVA Konferenscenter, Sweden, January, 2017.
Abstract: Most of us are used to seeing robots portrayed in movies, as either good or bad characters, with humanlike abilities: they can hold a dialogue, interact with the environment, and collaborate with humans and with each other. How far are we from having such advanced systems among us, helping us with daily activities in our homes and at our jobs?
BibTeX:
@Inproceedings{Ishiguro2017,
  author    = {Hiroshi Ishiguro},
  title     = {Studies on humanlike robots},
  booktitle = {IVA seminar},
  year      = {2017},
  address   = {IVA Konferenscenter, Sweden},
  month     = Jan,
  day       = {24},
  url       = {http://www.iva.se/en/tidigare-event/social-and-humanlike-robots/},
  abstract  = {Most of us are used to seeing robots portrayed in movies, as either good or bad characters, with humanlike abilities: they can hold a dialogue, interact with the environment, and collaborate with humans and with each other. How far are we from having such advanced systems among us, helping us with daily activities in our homes and at our jobs?},
}
Hiroshi Ishiguro, "Robotics", In Microsoft Research Asia Faculty Summit 2016, Yonsei University, Korea, November, 2016.
Abstract: This session examines the future direction of robotics research. As a background movement, AI is sparking great interest and exploration. In order to realize AI in human society, it is necessary to embody it, namely to give it a physical form. Against this background, this session explores and clarifies the current direction of basic robotics research. It will highlight which research components are missing and how developing these capabilities will affect the direction of research.
BibTeX:
@Inproceedings{Ishiguro2016k,
  author    = {Hiroshi Ishiguro},
  title     = {Robotics},
  booktitle = {Microsoft Research Asia Faculty Summit 2016},
  year      = {2016},
  address   = {Yonsei University, Korea},
  month     = Nov,
  day       = {5},
  url       = {https://www.microsoft.com/en-us/research/event/asia-faculty-summit-2016/},
  abstract  = {This session examines the future direction of robotics research. As a background movement, AI is sparking great interest and exploration. In order to realize AI in human society, it is necessary to embody it, namely to give it a physical form. Against this background, this session explores and clarifies the current direction of basic robotics research. It will highlight which research components are missing and how developing these capabilities will affect the direction of research.},
}
Hiroshi Ishiguro, "Humanlike robots and our future society", In ROMAEUROPA FESTIVAL 2016, Auditorium MACRO, Italy, November, 2016.
Abstract: In this talk, the speaker discusses AI (artificial intelligence) and humanoid robots and how they will affect society in the near future.
BibTeX:
@Inproceedings{Ishiguro2016i,
  author    = {Hiroshi Ishiguro},
  title     = {Humanlike robots and our future society},
  booktitle = {ROMAEUROPA FESTIVAL 2016},
  year      = {2016},
  address   = {Auditorium MACRO, Italy},
  month     = Nov,
  day       = {24},
  url       = {http://romaeuropa.net/festival-2016/ishiguro/},
  abstract  = {In this talk, the speaker discusses AI (artificial intelligence) and humanoid robots and how they will affect society in the near future.},
}
Hiroshi Ishiguro, "What can we learn from very human-like robots & androids?", In Creative Innovation Asia Pacific 2016, Sofitel Melbourne on Collins, Australia, November, 2016.
Abstract: Interactive robots and their role as social partners for humans. Ishiguro will talk on the principles of conversation. Does a robot need voice-recognition functions for verbal conversation? He will propose two approaches to realizing human-robot conversation without voice recognition.
BibTeX:
@Inproceedings{Ishiguro2016e,
  author    = {Hiroshi Ishiguro},
  title     = {What can we learn from very human-like robots \& androids?},
  booktitle = {Creative Innovation Asia Pacific 2016},
  year      = {2016},
  address   = {Sofitel Melbourne on Collins, Australia},
  month     = Nov,
  day       = {9},
  url       = {http://www.creativeinnovationglobal.com.au/Ci2016/},
  abstract  = {Interactive robots and their role as social partners for humans. Ishiguro will talk on the principles of conversation. Does a robot need voice-recognition functions for verbal conversation? He will propose two approaches to realizing human-robot conversation without voice recognition.},
}
Hiroshi Ishiguro, "Interactive robots and our future life", In MarkeThing, Alten Teppichfabrik Berlin, Germany, September, 2016.
Abstract: In this talk, the speaker discusses AI (artificial intelligence) and humanoid robots and how they will affect society in the near future.
BibTeX:
@Inproceedings{Ishiguro2016g,
  author    = {Hiroshi Ishiguro},
  title     = {Interactive robots and our future life},
  booktitle = {MarkeThing},
  year      = {2016},
  address   = {Alten Teppichfabrik Berlin, Germany},
  month     = Sep,
  day       = {28},
  url       = {http://www.markething.de/},
  abstract  = {In this talk, the speaker discusses AI (artificial intelligence) and humanoid robots and how they will affect society in the near future.},
}
Hiroshi Ishiguro, "Studies on Humanoids and Androids", In CEDI 2016, University of Salamanca, Spain, September, 2016.
Abstract: Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. The operator recognizes the android body as his/her own body after talking with someone through the geminoid, and has a virtual feeling of being touched when someone touches the geminoid. However, the geminoid is not the ideal medium for everybody. For example, elderly people hesitate to talk with adult humans and adult androids. The question is what the ideal medium for everybody is. To investigate this, we are proposing a minimum design of interactive humanoids, called Telenoid. The geminoid is the perfect copy of an existing person and the maximum design of interactive humanoids; the minimum design, on the other hand, looks like a human, but we cannot judge its age and gender. Elderly people like to talk with the Telenoid. In this talk, we discuss the design principles and their effect on conversation.
BibTeX:
@Inproceedings{Ishiguro2016h,
  author    = {Hiroshi Ishiguro},
  title     = {Studies on Humanoids and Androids},
  booktitle = {CEDI 2016},
  year      = {2016},
  address   = {University of Salamanca, Spain},
  month     = Sep,
  day       = {13},
  url       = {http://www.congresocedi.es/en/ponentes-invitados},
  abstract  = {Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. The operator recognizes the android body as his/her own body after talking with someone through the geminoid, and has a virtual feeling of being touched when someone touches the geminoid. However, the geminoid is not the ideal medium for everybody. For example, elderly people hesitate to talk with adult humans and adult androids. The question is what the ideal medium for everybody is. To investigate this, we are proposing a minimum design of interactive humanoids, called Telenoid. The geminoid is the perfect copy of an existing person and the maximum design of interactive humanoids; the minimum design, on the other hand, looks like a human, but we cannot judge its age and gender. Elderly people like to talk with the Telenoid. In this talk, we discuss the design principles and their effect on conversation.},
}
Hiroshi Ishiguro, "Adaptation to Teleoperate Robots", In The 31st International Congress of Psychology, PACIFICO Yokohama, Yokohama, July, 2016.
Abstract: We humans have an innate brain function to recognize humans. Therefore, very humanlike robots, androids, can be ideal information media for human-robot/computer interactions. In the near future, the use of humanlike robots will increase. To realize a robot society, the speaker has developed various types of interactive robots and androids. Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. However, the geminoid is not the ideal medium for everybody. People enjoy talking to Telenoids. In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.
BibTeX:
@Inproceedings{Ishiguro2016d,
  author    = {Hiroshi Ishiguro},
  title     = {Adaptation to Teleoperate Robots},
  booktitle = {The 31st International Congress of Psychology},
  year      = {2016},
  address   = {PACIFICO Yokohama, Yokohama},
  month     = Jul,
  day       = {24},
  url       = {http://www.icp2016.jp/index.html},
  abstract  = {We humans have an innate brain function to recognize humans. Therefore, very humanlike robots, androids, can be ideal information media for human-robot/computer interactions. In the near future, the use of humanlike robots will increase. To realize a robot society, the speaker has developed various types of interactive robots and androids. Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. However, the geminoid is not the ideal medium for everybody. People enjoy talking to Telenoids. In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.},
}
Hiroshi Ishiguro, "Communication Robots", In International Symposium of "Empathetic systems", "ICP2016" and "JNS2016/Elsevier". Brain and Social Mind: The Origin of Empathy and Morality, PACIFICO Yokohama, Yokohama, July, 2016.
Abstract: Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. The operator recognizes the android body as his/her own body after talking with someone through the geminoid, and has a virtual feeling of being touched when someone touches the geminoid. However, the geminoid is not the ideal medium for everybody. For example, elderly people hesitate to talk with adult humans and adult androids. The question is what the ideal medium for everybody is. To investigate this, we are proposing a minimum design of interactive humanoids, called Telenoid. The geminoid is the perfect copy of an existing person and the maximum design of interactive humanoids; the minimum design, on the other hand, looks like a human, but we cannot judge its age and gender. Elderly people like to talk with the Telenoid. In this talk, we discuss the design principles and their effect on conversation.
BibTeX:
@Inproceedings{Ishiguro2016f,
  author    = {Hiroshi Ishiguro},
  title     = {Communication Robots},
  booktitle = {International Symposium of "Empathetic systems", "ICP2016" and "JNS2016/Elsevier". Brain and Social Mind: The Origin of Empathy and Morality},
  year      = {2016},
  address   = {PACIFICO Yokohama, Yokohama},
  month     = Jul,
  day       = {23},
  url       = {http://darwin.c.u-tokyo.ac.jp/empathysymposium2016/ja/},
  abstract  = {Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. The operator recognizes the android body as his/her own body after talking with someone through the geminoid, and has a virtual feeling of being touched when someone touches the geminoid. However, the geminoid is not the ideal medium for everybody. For example, elderly people hesitate to talk with adult humans and adult androids. The question is what the ideal medium for everybody is. To investigate this, we are proposing a minimum design of interactive humanoids, called Telenoid. The geminoid is the perfect copy of an existing person and the maximum design of interactive humanoids; the minimum design, on the other hand, looks like a human, but we cannot judge its age and gender. Elderly people like to talk with the Telenoid. In this talk, we discuss the design principles and their effect on conversation.},
}
Hiroshi Ishiguro, "Humanoids: Future Robots for Service", In RoboBusiness Europe 2016, Odense Congress Center, Denmark, June, 2016.
Abstract: Interactive robots and their role as social partners for humans. Ishiguro will talk on the principles of conversation. Does a robot need voice-recognition functions for verbal conversation? He will propose two approaches to realizing human-robot conversation without voice recognition.
BibTeX:
@Inproceedings{Ishiguro2016,
  author    = {Hiroshi Ishiguro},
  title     = {Humanoids: Future Robots for Service},
  booktitle = {RoboBusiness Europe 2016},
  year      = {2016},
  address   = {Odense Congress Center, Denmark},
  month     = Jun,
  day       = {2},
  url       = {http://www.robobusiness.eu/rb/},
  abstract  = {Interactive robots and their role as social partners for humans. Ishiguro will talk on the principles of conversation. Does a robot need voice-recognition functions for verbal conversation? He will propose two approaches to realizing human-robot conversation without voice recognition.},
}
Hiroshi Ishiguro, "The Power of Presence", In The Power of Presence:Preconference of International Communication Association 2016 in Japan, Kyoto Research Park, Kyoto, June, 2016.
Abstract: a keynote address from renowned Professor Hiroshi Ishiguro of Osaka University, creator of amazing humanoid robots and co-author of “Human-Robot Interaction in Social Robotics" (2012, CRC Press)
BibTeX:
@Inproceedings{Ishiguro2016c,
  author    = {Hiroshi Ishiguro},
  title     = {The Power of Presence},
  booktitle = {The Power of Presence: Preconference of International Communication Association 2016 in Japan},
  year      = {2016},
  address   = {Kyoto Research Park, Kyoto},
  month     = Jun,
  day       = {8},
  url       = {https://ispr.info/presence-conferences/the-power-of-presence-preconference-of-international-communication-association-2016-in-japan/},
  abstract  = {A keynote address from renowned Professor Hiroshi Ishiguro of Osaka University, creator of amazing humanoid robots and co-author of "Human-Robot Interaction in Social Robotics" (2012, CRC Press).},
}
Hiroshi Ishiguro, "AI(Artificial Intelligence) & Humanoid robot", In Soeul Forum 2016, Seoul Shilla Hotel, Korea, May, 2016.
Abstract: In this talk, the speaker discusses AI (artificial intelligence) and humanoid robots and how they will affect society in the near future.
BibTeX:
@Inproceedings{Ishiguro2016b,
  author    = {Hiroshi Ishiguro},
  title     = {AI (Artificial Intelligence) \& Humanoid robot},
  booktitle = {Seoul Forum 2016},
  year      = {2016},
  address   = {Seoul Shilla Hotel, Korea},
  month     = May,
  day       = {12},
  url       = {http://www.seoulforum.kr/eng/},
  abstract  = {In this talk, the speaker discusses AI (artificial intelligence) and humanoid robots and how they will affect society in the near future.},
}
Shuichi Nishio, "Portable android robot "Telenoid" for aged citizens: overview and results in Japan and Denmark", In 2016 MOST&JST Workshop on ICT for Accessibility and Support of Older People, Tainan, Taiwan, April, 2016.
BibTeX:
@Inproceedings{Nishio2016,
  author    = {Shuichi Nishio},
  title     = {Portable android robot "Telenoid" for aged citizens: overview and results in Japan and Denmark},
  booktitle = {2016 MOST\&JST Workshop on ICT for Accessibility and Support of Older People},
  year      = {2016},
  address   = {Tainan, Taiwan},
  month     = Apr,
  day       = {11},
}
Hiroshi Ishiguro, "Androids and Future Life", In South by Southwest 2016 Music, Film and Interactive Festivals(SXSW), Austin Convention Center, USA, March, 2016.
Abstract: We, humans, have an innate brain function to recognize humans. Therefore, very humanlike robots, androids, can be ideal information media for human-robot/computer interactions. In the near future, the use of humanlike robots will increase. To realize a robot society, the speaker has developed various types of interactive robots and androids. Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. However, the geminoid is not the ideal medium for everybody. People enjoy talking to Telenoids. In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.
BibTeX:
@Inproceedings{Ishiguro2016a,
  author    = {Hiroshi Ishiguro},
  title     = {Androids and Future Life},
  booktitle = {South by Southwest 2016 Music, Film and Interactive Festivals (SXSW)},
  year      = {2016},
  address   = {Austin Convention Center, USA},
  month     = Mar,
  day       = {13},
  url       = {http://schedule.sxsw.com/2016/events/event_PP50105},
  abstract  = {We, humans, have an innate brain function to recognize humans. Therefore, very humanlike robots, androids, can be ideal information media for human-robot/computer interactions. In the near future, the use of humanlike robots will increase. To realize a robot society, the speaker has developed various types of interactive robots and androids. Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. However, the geminoid is not the ideal medium for everybody. People enjoy talking to Telenoids. In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.},
}
Dylan F. Glas, "ERICA: The ERATO Intelligent Conversational Android", In Symposium on Human-Robot Interaction, Stanford University, USA, November, 2015.
Abstract: The ERATO Ishiguro Symbiotic Human-Robot Interaction project is developing new android technologies with the eventual goal of passing the Total Turing Test. To pursue the goals of this project, we have developed a new android, Erica. I will introduce Erica's capabilities and design philosophy, and I will present some of the key objectives that we will address in the ERATO project.
BibTeX:
@Inproceedings{Glas2015,
  author    = {Dylan F. Glas},
  title     = {ERICA: The ERATO Intelligent Conversational Android},
  booktitle = {Symposium on Human-Robot Interaction},
  year      = {2015},
  address   = {Stanford University, USA},
  month     = Nov,
  abstract  = {The ERATO Ishiguro Symbiotic Human-Robot Interaction project is developing new android technologies with the eventual goal of passing the Total Turing Test. To pursue the goals of this project, we have developed a new android, Erica. I will introduce Erica's capabilities and design philosophy, and I will present some of the key objectives that we will address in the ERATO project.},
  file      = {Glas2015.pdf:pdf/Glas2015.pdf:PDF},
}
山崎竜二, "「テレノイド」ロボット:その特異な存在", In ケアとソリューション 大阪フォーラム ケアとテクノロジー, 大阪, October, 2015.
BibTeX:
@Inproceedings{山崎竜二2015,
  author    = {山崎竜二},
  title     = {「テレノイド」ロボット:その特異な存在},
  booktitle = {ケアとソリューション 大阪フォーラム ケアとテクノロジー},
  year      = {2015},
  address   = {大阪},
  month     = Oct,
  file      = {山崎竜二2015.pdf:pdf/山崎竜二2015.pdf:PDF},
}
Hiroshi Ishiguro, "Minimum design of interactive robots", In International Symposium on Pedagogical Machines CREST 国際シンポジウム-「ペダゴジカル・マシンの探求」, 東京, March, 2015.
BibTeX:
@Inproceedings{Ishiguro2015,
  author    = {Hiroshi Ishiguro},
  title     = {Minimum design of interactive robots},
  booktitle = {International Symposium on Pedagogical Machines CREST 国際シンポジウム-「ペダゴジカル・マシンの探求」},
  year      = {2015},
  address   = {Tokyo},
  month     = Mar,
  file      = {Ishiguro2015a.pdf:pdf/Ishiguro2015a.pdf:PDF},
}
Shuichi Nishio, "Teleoperated android robots - Fundamentals, applications and future", In China International Advanced Manufacturing Conference 2014, Mianyang, China, October, 2014.
Abstract: I will introduce our various experiences with teleoperated android robots: how they are manufactured, scientific findings, applications to real-world issues, and how they will be used in our society in the future.
BibTeX:
@Inproceedings{Nishio2014a,
  author    = {Shuichi Nishio},
  title     = {Teleoperated android robots - Fundamentals, applications and future},
  booktitle = {China International Advanced Manufacturing Conference 2014},
  year      = {2014},
  address   = {Mianyang, China},
  month     = Oct,
  abstract  = {I will introduce our various experiences with teleoperated android robots: how they are manufactured, scientific findings, applications to real-world issues, and how they will be used in our society in the future.},
}
Hiroshi Ishiguro, "Android Philosophy", In Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014, IOS Press, vol. 273, Aarhus, Denmark, pp. 3, August, 2014.
BibTeX:
@Inproceedings{Ishiguro2014b,
  author    = {Hiroshi Ishiguro},
  title     = {Android Philosophy},
  booktitle = {Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014},
  year      = {2014},
  editor    = {Johanna Seibt and Raul Hakli and Marco N{\o}rskov},
  volume    = {273},
  pages     = {3},
  address   = {Aarhus, Denmark},
  month     = Aug,
  publisher = {IOS Press},
  doi       = {10.3233/978-1-61499-480-0-3},
  url       = {http://ebooks.iospress.nl/volumearticle/38527},
}
Hiroshi Ishiguro, "The Future Life Supported by Robotic Avatars", In The Global Mobile Internet Conference Beijing, Beijing, China, May, 2014.
BibTeX:
@Inproceedings{Ishiguro2014,
  author    = {Hiroshi Ishiguro},
  title     = {The Future Life Supported by Robotic Avatars},
  booktitle = {The Global Mobile Internet Conference Beijing},
  year      = {2014},
  address   = {Beijing, China},
  month     = May,
  day       = {5-6},
  file      = {ishiguro2014a.pdf:pdf/ishiguro2014a.pdf:PDF},
}
Hiroshi Ishiguro, "Telenoid : A Teleoperated Android with a Minimalistic Human Design", In Robo Business Europe, Billund, Denmark, May, 2014.
BibTeX:
@Inproceedings{Ishiguro2014a,
  author    = {Hiroshi Ishiguro},
  title     = {Telenoid: A Teleoperated Android with a Minimalistic Human Design},
  booktitle = {Robo Business Europe},
  year      = {2014},
  address   = {Billund, Denmark},
  month     = May,
  day       = {26-28},
}
Shuichi Nishio, "The Impact of the Care‐Robot ‘Telenoid' on Elderly Persons in Japan", In International Conference : Going Beyond the Laboratory - Ethical and Societal Challenges for Robotics, Delmenhorst, Germany, February, 2014.
BibTeX:
@Inproceedings{Nishio2014,
  author    = {Shuichi Nishio},
  title     = {The Impact of the Care-Robot 'Telenoid' on Elderly Persons in Japan},
  booktitle = {International Conference: Going Beyond the Laboratory - Ethical and Societal Challenges for Robotics},
  year      = {2014},
  address   = {Delmenhorst, Germany},
  month     = Feb,
  day       = {13-15},
}
Ryuji Yamazaki, "Teleoperated Android in Elderly Care", In Patient@home seminar, Denmark, February, 2014.
Abstract: We explore the potential of teleoperated androids, which are embodied telecommunication media with humanlike appearances. Through pilot studies in Japan and Denmark, we investigate how Telenoid, a teleoperated android designed as a minimalistic human, affects people in the real world. As populations age, the isolation of senior citizens is one of the leading issues in healthcare promotion. In order to address this isolation, which results in geriatric syndromes, and to improve seniors' well-being by enhancing social connectedness, we propose employing Telenoid to facilitate their communication with others. By introducing Telenoid into care facilities and seniors' homes, we found various influences on the elderly with or without dementia. Most senior participants had positive impressions of Telenoid from the very beginning even though, ironically, their caretakers had negative ones. The elderly with dementia in particular showed a strong attachment to Telenoid and created its identity imaginatively and interactively. In a long-term study, we also found that elderly people with dementia increasingly showed prosocial behaviors toward Telenoid, which encouraged them to be more communicative and open. With a focus on elderly care, this presentation introduces our field trials and discusses the potential of interactions between the android robot and human users for further research.
BibTeX:
@Inproceedings{Yamazaki2014b,
  author    = {Ryuji Yamazaki},
  title     = {Teleoperated Android in Elderly Care},
  booktitle = {Patient@home seminar},
  year      = {2014},
  address   = {Denmark},
  month     = Feb,
  day       = {5},
  abstract  = {We explore the potential of teleoperated androids, which are embodied telecommunication media with humanlike appearances. Through pilot studies in Japan and Denmark, we investigate how Telenoid, a teleoperated android designed as a minimalistic human, affects people in the real world. As populations age, the isolation of senior citizens is one of the leading issues in healthcare promotion. In order to address this isolation, which results in geriatric syndromes, and to improve seniors' well-being by enhancing social connectedness, we propose employing Telenoid to facilitate their communication with others. By introducing Telenoid into care facilities and seniors' homes, we found various influences on the elderly with or without dementia. Most senior participants had positive impressions of Telenoid from the very beginning even though, ironically, their caretakers had negative ones. The elderly with dementia in particular showed a strong attachment to Telenoid and created its identity imaginatively and interactively. In a long-term study, we also found that elderly people with dementia increasingly showed prosocial behaviors toward Telenoid, which encouraged them to be more communicative and open. With a focus on elderly care, this presentation introduces our field trials and discusses the potential of interactions between the android robot and human users for further research.},
}
Hiroshi Ishiguro, "Studies on very humanlike robots", In International Conference on Instrumentation, Control, Information Technology and System Integration, Aichi, September, 2013.
Abstract: Studies on interactive robots and androids are not confined to robotics; they are also closely coupled with cognitive science and neuroscience. This is a research area for investigating fundamental issues of interface and media technology. This talk introduces the series of androids developed at both Osaka University and ATR and proposes a new information medium based on these studies.
BibTeX:
@Inproceedings{Ishiguro2013a,
  author    = {Hiroshi Ishiguro},
  title     = {Studies on very humanlike robots},
  booktitle = {International Conference on Instrumentation, Control, Information Technology and System Integration},
  year      = {2013},
  address   = {Aichi},
  month     = Sep,
  day       = {14},
  abstract  = {Studies on interactive robots and androids are not confined to robotics; they are also closely coupled with cognitive science and neuroscience. This is a research area for investigating fundamental issues of interface and media technology. This talk introduces the series of androids developed at both Osaka University and ATR and proposes a new information medium based on these studies.},
}
Hiroshi Ishiguro, "The Future Life Supported by Robotic Avatars", In Global Future 2045 International Congress, NY, USA, June, 2013.
Abstract: Robotic avatars, or tele-operated robots, are already available and working in practical situations, especially in the USA. The robot society has started. In our future life we are going to use various tele-operated and autonomous robots. The speaker has taken the leadership in developing tele-operated robots and androids. The tele-operated android copy of himself is well known around the world. By means of robots and androids, he has studied the cognitive and social aspects of human-robot interaction and has thus contributed to establishing this research area. In this talk, he will introduce the series of robots and androids developed at the Intelligent Robot Laboratory of the Department of Systems Innovation of Osaka University and at the Hiroshi Ishiguro Laboratory of the Advanced Telecommunications Research Institute International (ATR).
BibTeX:
@Inproceedings{Ishiguro2013,
  author    = {Hiroshi Ishiguro},
  title     = {The Future Life Supported by Robotic Avatars},
  booktitle = {Global Future 2045 International Congress},
  year      = {2013},
  address   = {NY, USA},
  month     = Jun,
  abstract  = {Robotic avatars, or tele-operated robots, are already available and working in practical situations, especially in the USA. The robot society has started. In our future life we are going to use various tele-operated and autonomous robots. The speaker has taken the leadership in developing tele-operated robots and androids. The tele-operated android copy of himself is well known around the world. By means of robots and androids, he has studied the cognitive and social aspects of human-robot interaction and has thus contributed to establishing this research area. In this talk, he will introduce the series of robots and androids developed at the Intelligent Robot Laboratory of the Department of Systems Innovation of Osaka University and at the Hiroshi Ishiguro Laboratory of the Advanced Telecommunications Research Institute International (ATR).},
}
Mari Velonaki, David C. Rye, Steve Scheding, Karl F. MacDorman, Stephen J. Cowley, Hiroshi Ishiguro, Shuichi Nishio, "Panel Discussion: Engagement, Trust and Intimacy: Are these the Essential Elements for a Successful Interaction between a Human and a Robot?", In AAAI Spring Symposium on Emotion, Personality, and Social Behavior, California, USA, pp. 141-147, March, 2008. (2008.3.26)
BibTeX:
@Inproceedings{Nishio2008b,
  author    = {Mari Velonaki and David C. Rye and Steve Scheding and Karl F. MacDorman and Stephen J. Cowley and Hiroshi Ishiguro and Shuichi Nishio},
  title     = {Panel Discussion: Engagement, Trust and Intimacy: Are these the Essential Elements for a Successful Interaction between a Human and a Robot?},
  booktitle = {{AAAI} Spring Symposium on Emotion, Personality, and Social Behavior},
  year      = {2008},
  pages     = {141-147},
  address   = {California, USA},
  month     = Mar,
  url       = {http://www.aaai.org/Library/Symposia/Spring/2008/ss08-04-022.php},
  file      = {Rye_Panel.pdf:http\://psychometrixassociates.com/Rye_Panel.pdf:PDF},
  note      = {2008.3.26},
}
Journal Papers
Junya Nakanishi, Jun Baba, Wei-Chuan Chang, Aya Nakae, Hidenobu Sumioka, Hiroshi Ishiguro, "Robot-Mediated Intergenerational Childcare: Experimental Study Based on Health-Screening Task in Nursery School", International Journal of Social Robotics, pp. 1-15, June, 2024.
Abstract: Intergenerational interactions between children and older adults are gaining broader recognition because of their mutual benefits. However, such interactions face practical limitations owing to potential disease transmission and the poor health of older adults for face-to-face interactions. This study explores robot-mediated interactions as a potential solution to address these issues. In this study, older adults remotely controlled a social robot to perform a health-screening task for nursery school children, thereby overcoming the problems associated with face-to-face interactions while engaging in physical interactions. The results of this study suggested that the children responded favorably to the robot, and the rate of positive response increased over time. Older adults also found the task generally manageable and experienced a significant positive shift in their attitude toward children. These findings suggest that robot-mediated interactions can effectively facilitate intergenerational engagement and provide psychosocial benefits to both the parties to the engagement. This study provides valuable insights into the potential of robot-mediated interactions in childcare and other similar settings.
BibTeX:
@Article{Nakanishi2024,
  author   = {Junya Nakanishi and Jun Baba and Wei-Chuan Chang and Aya Nakae and Hidenobu Sumioka and Hiroshi Ishiguro},
  journal  = {International Journal of Social Robotics},
  title    = {Robot-Mediated Intergenerational Childcare: Experimental Study Based on Health-Screening Task in Nursery School},
  year     = {2024},
  abstract = {Intergenerational interactions between children and older adults are gaining broader recognition because of their mutual benefits. However, such interactions face practical limitations owing to potential disease transmission and the poor health of older adults for face-to-face interactions. This study explores robot-mediated interactions as a potential solution to address these issues. In this study, older adults remotely controlled a social robot to perform a health-screening task for nursery school children, thereby overcoming the problems associated with face-to-face interactions while engaging in physical interactions. The results of this study suggested that the children responded favorably to the robot, and the rate of positive response increased over time. Older adults also found the task generally manageable and experienced a significant positive shift in their attitude toward children. These findings suggest that robot-mediated interactions can effectively facilitate intergenerational engagement and provide psychosocial benefits to both the parties to the engagement. This study provides valuable insights into the potential of robot-mediated interactions in childcare and other similar settings.},
  day      = {21},
  doi      = {10.1007/s12369-024-01149-7},
  month    = jun,
  pages    = {1-15},
  url      = {https://link.springer.com/article/10.1007/s12369-024-01149-7},
  keywords = {Intergenerational interaction, Teleoperated social robot, Childcare, Nursery school},
}
Aya Nakae, Wei-Chuan Chang, Chie Kishimoto, Hani M. Bu-Omer, Hidenobu Sumioka, "Towards Objectively Assessing the Psychological Effects of Operating a Telenoid: Minimum Humanoid Design for Communication", Frontiers in Robotics and AI, 2024.
Abstract: Background: As the Internet of Things (IoT) advances, it opens up broader opportunities for communication, extending beyond face-to-face interactions to include digital intermediaries such as avatars. Research on the effect of such communication devices on their operators is limited, and an objective evaluation is desired. This study was conducted to objectively assess the effects of communication devices on operator health. Methods: Twelve participants (two women and 10 men, aged 18–23 years) were recruited from Osaka University. Blood samples were collected before and after a conversation with a first-time partner, both face-to-face and via a robot called Telenoid. Telenoid is a robot with a minimal human design, and it was operated by a participant in this study. Changes in hormones and oxidative/antioxidative markers were compared. Results: We found a significant decrease in cortisol levels in the Telenoid-mediated conversations that was not observed in face-to-face communication. Diacron reactive oxygen metabolites (dROMs), a biomarker of oxidative stress, increased significantly in face-to-face communication but not in Telenoid-mediated communication. Conclusions: Our results suggest that cortisol and dROMs may serve as objective indicators of the psychophysical status of a robot operator. Further studies are, however, required for a comprehensive investigation.
BibTeX:
@Article{Nakae2024_7,
  author   = {Aya Nakae and Wei-Chuan Chang and Chie Kishimoto and Hani M. Bu-Omer and Hidenobu Sumioka},
  journal  = {Frontiers in Robotics and AI},
  title    = {Towards Objectively Assessing the Psychological Effects of Operating a Telenoid: Minimum Humanoid Design for Communication},
  year     = {2024},
  abstract = {Background: As the Internet of Things (IoT) advances, it opens up broader opportunities for communication, extending beyond face-to-face interactions to include digital intermediaries such as avatars. Research on the effect of such communication devices on their operators is limited, and an objective evaluation is desired. This study was conducted to objectively assess the effects of communication devices on operator health. Methods: Twelve participants (two women and 10 men, aged 18–23 years) were recruited from Osaka University. Blood samples were collected before and after a conversation with a first-time partner, both face-to-face and via a robot called Telenoid. Telenoid is a robot with a minimal human design, and it was operated by a participant in this study. Changes in hormones and oxidative/antioxidative markers were compared. Results: We found a significant decrease in cortisol levels in the Telenoid-mediated conversations that was not observed in face-to-face communication. Diacron reactive oxygen metabolites (dROMs), a biomarker of oxidative stress, increased significantly in face-to-face communication but not in Telenoid-mediated communication. Conclusions: Our results suggest that cortisol and dROMs may serve as objective indicators of the psychophysical status of a robot operator. Further studies are, however, required for a comprehensive investigation.},
  url      = {https://www.frontiersin.org/journals/robotics-and-ai},
  keywords = {Avatar, Telenoid, stress, conversation, new acquaintance, cortisol, SRS-18, d-ROMs},
}
Nobuo Yamato, Hidenobu Sumioka, Hiroshi Ishiguro, Masahiro Shiomi, Youji Kohda, "Technology Acceptance Models from Different Viewpoints of Caregiver, Receiver, and Care Facility Administrator: Lessons from Long-Term Implementation Using Baby-Like Interactive Robot for Nursing Home Residents with Dementia", Journal of Technology in Human Services, vol. 41, pp. 296-321, December, 2023.
Abstract: The introduction of companion robots into nursing homes has positive effects on older people with dementia (PwD), but it increases the physical and psychological burden on the nursing staff, who must learn how to use the robots and may fear breakdowns or worry about hygiene, and it raises concerns for the nursing home administrator, such as increased turnover and reduced quality of care. To solve this problem, it is necessary to investigate the acceptability of robots from the viewpoints of all stakeholders: PwD as receivers, nursing staff as caregivers, and the nursing home administrator as the care facility administrator. However, a hypothesis about how their acceptance is structured and how the stakeholders' views relate to each other is still missing. This study proposes three technology acceptance models (TAMs) from the perspectives of PwD, nursing staff, and the nursing home administrator. The models are conceptualized based on qualitative and quantitative analysis of the results of our two experiments involving a baby-like interactive robot to stimulate PwD in the same nursing home (one with low acceptance by all stakeholders and the other with high acceptance), in addition to a comparison with other companion robots. Based on the proposed models, we discuss an integrated TAM for the acceptance of companion robots in long-term care facilities. We also discuss the possibility of applying our approach, which examines the perspectives of various stakeholders on technology acceptance, to other areas such as health care and education, followed by ethical considerations of introducing a baby-like robot and some limitations.
BibTeX:
@Article{Yamato2023,
  author   = {Nobuo Yamato and Hidenobu Sumioka and Hiroshi Ishiguro and Masahiro Shiomi and Youji Kohda},
  journal  = {Journal of Technology in Human Services},
  title    = {Technology Acceptance Models from Different Viewpoints of Caregiver, Receiver, and Care Facility Administrator: Lessons from Long-Term Implementation Using Baby-Like Interactive Robot for Nursing Home Residents with Dementia},
  year     = {2023},
  abstract = {The introduction of companion robots into nursing homes has positive effects on older people with dementia (PwD), but it increases the physical and psychological burden on the nursing staff, who must learn how to use the robots and may fear breakdowns or worry about hygiene, and it raises concerns for the nursing home administrator, such as increased turnover and reduced quality of care. To solve this problem, it is necessary to investigate the acceptability of robots from the viewpoints of all stakeholders: PwD as receivers, nursing staff as caregivers, and the nursing home administrator as the care facility administrator. However, a hypothesis about how their acceptance is structured and how the stakeholders' views relate to each other is still missing. This study proposes three technology acceptance models (TAMs) from the perspectives of PwD, nursing staff, and the nursing home administrator. The models are conceptualized based on qualitative and quantitative analysis of the results of our two experiments involving a baby-like interactive robot to stimulate PwD in the same nursing home (one with low acceptance by all stakeholders and the other with high acceptance), in addition to a comparison with other companion robots. Based on the proposed models, we discuss an integrated TAM for the acceptance of companion robots in long-term care facilities. We also discuss the possibility of applying our approach, which examines the perspectives of various stakeholders on technology acceptance, to other areas such as health care and education, followed by ethical considerations of introducing a baby-like robot and some limitations.},
  day      = {24},
  doi      = {10.1080/15228835.2023.2292058},
  month    = dec,
  pages    = {296-321},
  url      = {https://www.tandfonline.com/doi/full/10.1080/15228835.2023.2292058},
  volume   = {41},
  issue    = {4},
  keywords = {TAM, BPSD, robot therapy, interactive doll therapy, dementia},
}
Satomi Doi, Aya Isumi, Yui Yamaoka, Shiori Noguchi, Juri Yamazaki, Kanako Ito, Masahiro Shiomi, Hidenobu Sumioka, Takeo Fujiwara, "The effect of breathing relaxation using a huggable human-shaped device on sleep quality among people with sleep problems: A randomized controlled trial", Sleep and Breathing, pp. 1-11, July, 2023.
Abstract: Sixty-seven outpatients who participated in the study (Hugvie intervention group: 29; control group: 38) were included in the analysis. Sleep problems were assessed with the Pittsburgh Sleep Quality Index (PSQI), a tool for evaluating the severity of sleep disturbance, before the intervention and two and four weeks after it began. Statistical analysis showed that the total PSQI score decreased in the intervention group compared with the control group. Among the several PSQI subscales, the score for subjective sleep quality in particular decreased. In other words, breathing relaxation using Hugvie markedly improved sleep quality. The improvement in sleep was already apparent two weeks after the intervention began.
BibTeX:
@Article{Doi2023,
  author   = {Satomi Doi and Aya Isumi and Yui Yamaoka and Shiori Noguchi and Juri Yamazaki and Kanako Ito and Masahiro Shiomi and Hidenobu Sumioka and Takeo Fujiwara},
  journal  = {Sleep and Breathing},
  title    = {The effect of breathing relaxation using a huggable human-shaped device on sleep quality among people with sleep problems: A randomized controlled trial},
  year     = {2023},
  abstract = {Sixty-seven outpatients who participated in the study (Hugvie intervention group: 29; control group: 38) were included in the analysis. Sleep problems were assessed with the Pittsburgh Sleep Quality Index (PSQI), a tool for evaluating the severity of sleep disturbance, before the intervention and two and four weeks after it began. Statistical analysis showed that the total PSQI score decreased in the intervention group compared with the control group. Among the several PSQI subscales, the score for subjective sleep quality in particular decreased. In other words, breathing relaxation using Hugvie markedly improved sleep quality. The improvement in sleep was already apparent two weeks after the intervention began.},
  day      = {10},
  doi      = {10.1007/s11325-023-02858-5},
  month    = jul,
  pages    = {1-11},
  url      = {https://link.springer.com/article/10.1007/s11325-023-02858-5},
  keywords = {Sleep quality, Breathing relaxation, Huggable human-shaped device, Hugvie, Adverse childhood experience},
}
Takashi Minato, Kurima Sakai, Takahisa Uchida, Hiroshi Ishiguro, "A study of interactive robot architecture through the practical implementation of conversational android", Frontiers in Robotics and AI, vol. 9, no. 905030, pp. 1-25, October, 2022.
Abstract: This study shows an autonomous android robot that can have a natural daily dialogue with humans. A dialogue system for daily dialogue differs from a task-oriented dialogue system in that it is not given a clear purpose or the necessary information. That is, it needs to generate utterances in situations where there is no clear request from humans. Therefore, to continue a dialogue with consistent content, it is necessary to fundamentally change the design policy of dialogue management compared with existing dialogue systems. The purpose of our study is to constructively identify a dialogue system architecture for realizing daily dialogue by implementing an autonomous dialogue robot capable of natural daily dialogue. We defined the android's desires necessary for daily dialogue and a dialogue management system in which the android changes its internal (mental) states in accordance with its desires and its partner's behavior and chooses a dialogue topic suitable for the current situation. In the experiment, the developed android could continue a daily dialogue for about 10 minutes in a scene where the robot and its partner met for the first time. Moreover, a multimodal Turing test showed that half of the participants felt that the android was remotely controlled to some degree, that is, that the android's behavior was humanlike. This result suggests that the system construction method assumed in this study is an effective approach to realizing daily dialogue, and we discuss the system architecture for daily dialogue.
BibTeX:
@Article{Minato2022,
  author   = {Takashi Minato and Kurima Sakai and Takahisa Uchida and Hiroshi Ishiguro},
  journal  = {Frontiers in Robotics and AI},
  title    = {A study of interactive robot architecture through the practical implementation of conversational android},
  year     = {2022},
  abstract = {This study shows an autonomous android robot that can have a natural daily dialogue with humans. A dialogue system for daily dialogue differs from a task-oriented dialogue system in that it is not given a clear purpose or the necessary information. That is, it needs to generate utterances in situations where there is no clear request from humans. Therefore, to continue a dialogue with consistent content, it is necessary to fundamentally change the design policy of dialogue management compared with existing dialogue systems. The purpose of our study is to constructively identify a dialogue system architecture for realizing daily dialogue by implementing an autonomous dialogue robot capable of natural daily dialogue. We defined the android's desires necessary for daily dialogue and a dialogue management system in which the android changes its internal (mental) states in accordance with its desires and its partner's behavior and chooses a dialogue topic suitable for the current situation. In the experiment, the developed android could continue a daily dialogue for about 10 minutes in a scene where the robot and its partner met for the first time. Moreover, a multimodal Turing test showed that half of the participants felt that the android was remotely controlled to some degree, that is, that the android's behavior was humanlike. This result suggests that the system construction method assumed in this study is an effective approach to realizing daily dialogue, and we discuss the system architecture for daily dialogue.},
  day      = {11},
  doi      = {10.3389/frobt.2022.905030},
  month    = oct,
  number   = {905030},
  pages    = {1-25},
  url      = {https://www.frontiersin.org/articles/10.3389/frobt.2022.905030/full},
  volume   = {9},
  keywords = {conversational robot, android, daily dialogue, multimodal turing test, architecture},
}
Yoshiki Ohira, Takahisa Uchida, Takashi Minato, Hiroshi Ishiguro, "A Dialogue System That Models User's Opinion Based on Information Content", Multimodal Technologies and Interaction, vol. 6, Issue 10, no. 91, pp. 1-33, October, 2022.
Abstract: When designing rule-based dialogue systems, the need for the creation of an elaborate design by the designer is a challenge. One way to reduce the cost of creating content is to generate utterances from data collected in an objective and reproducible manner. This study focuses on rule-based dialogue systems using survey data and, more specifically, on opinion dialogue in which the system models the user. In the field of opinion dialogue, there has been little study on the topic of transition methods for modeling users while maintaining their motivation to engage in dialogue. To model them, we adopted information content. Our contribution includes the design of a rule-based dialogue system that does not require an elaborate design. We also reported an appropriate topic transition method based on information content. This is confirmed by the influence of the user's personality characteristics. The content of the questions gives the user a sense of the system's intention to understand them. We also reported the possibility that the system's rational intention contributes to the user's motivation to engage in dialogue with the system.
BibTeX:
@Article{Ohira2022,
  author   = {Yoshiki Ohira and Takahisa Uchida and Takashi Minato and Hiroshi Ishiguro},
  journal  = {Multimodal Technologies and Interaction},
  title    = {A Dialogue System That Models User's Opinion Based on Information Content},
  year     = {2022},
  abstract = {When designing rule-based dialogue systems, the need for the creation of an elaborate design by the designer is a challenge. One way to reduce the cost of creating content is to generate utterances from data collected in an objective and reproducible manner. This study focuses on rule-based dialogue systems using survey data and, more specifically, on opinion dialogue in which the system models the user. In the field of opinion dialogue, there has been little study on the topic of transition methods for modeling users while maintaining their motivation to engage in dialogue. To model them, we adopted information content. Our contribution includes the design of a rule-based dialogue system that does not require an elaborate design. We also reported an appropriate topic transition method based on information content. This is confirmed by the influence of the user's personality characteristics. The content of the questions gives the user a sense of the system's intention to understand them. We also reported the possibility that the system's rational intention contributes to the user's motivation to engage in dialogue with the system.},
  day      = {13},
  doi      = {10.3390/mti6100091},
  month    = oct,
  number   = {91},
  pages    = {1-33},
  url      = {https://www.mdpi.com/2414-4088/6/10/91},
  volume   = {6, Issue 10},
  keywords = {opinion model; user modeling; information content; dialogue strategy; dialogue system},
}
Hidenobu Sumioka, Jim Torresen, Masahiro Shiomi, Liang-Kung Chen, Atsushi Nakazawa, "Editorial: Interaction in robot-assistive elderly care", Frontiers in Robotics and AI, pp. 1-3, September, 2022.
Abstract: This Research Topic focuses on scientific and technical advances in methods, models, techniques, algorithms, and interaction design developed to understand and facilitate verbal and non-verbal interaction between older people and caregivers/artificial systems. In this collection containing seven peer-reviewed articles, the studies can be divided into two categories.
BibTeX:
@Article{Sumioka2022,
  author    = {Hidenobu Sumioka and Jim Torresen and Masahiro Shiomi and Liang-Kung Chen and Atsushi Nakazawa},
  journal   = {Frontiers in Robotics and AI},
  title     = {Editorial: Interaction in robot-assistive elderly care},
  year      = {2022},
  abstract  = {This Research Topic focuses on scientific and technical advances in methods, models, techniques, algorithms, and interaction design developed to understand and facilitate verbal and non-verbal interaction between older people and caregivers/artificial systems. In this collection containing seven peer-reviewed articles, the studies can be divided into two categories.},
  day       = {29},
  doi       = {10.3389/frobt.2022.1020103},
  month     = sep,
  pages     = {1-3},
  url       = {https://www.frontiersin.org/articles/10.3389/frobt.2022.1020103/full},
}
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "An Improved CycleGAN-based Emotional Speech Conversion Model by Augmenting Receptive Field with Transformer", Speech Communication, vol. 144, pp. 110-121, September, 2022.
Abstract: Emotional voice conversion (EVC) is a task that converts the spectrogram and prosody of speech to a target emotion. Recently, researchers have leveraged deep learning methods to improve the performance of EVC, such as the deep neural network (DNN), sequence-to-sequence model (seq2seq), long short-term memory network (LSTM), and convolutional neural network (CNN), as well as their combinations with the attention mechanism. However, these methods often suffer from instability problems such as mispronunciations and skipped phonemes, because the model fails to capture temporal intra-relationships among a wide range of frames, which results in unnatural speech and discontinuous emotional expression. To enhance the ability to capture intra-relationships among frames by augmenting the receptive field of the model, we explored the power of the transformer in this study. Specifically, we proposed a CycleGAN-based model with a transformer and investigated its ability in the EVC task. In the training procedure, we adopted curriculum learning to gradually increase the frame length so that the model can progress from short segments to the entire speech. The proposed method was evaluated on a Japanese emotional speech dataset and compared to widely used EVC baselines (ACVAE, CycleGAN) with objective and subjective evaluations. The results show that our proposed model is able to convert emotion with higher emotional strength, quality, and naturalness.
BibTeX:
@Article{Fu2022a,
  author   = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  journal  = {Speech Communication},
  title    = {An Improved CycleGAN-based Emotional Speech Conversion Model by Augmenting Receptive Field with Transformer},
  year     = {2022},
  abstract = {Emotional voice conversion (EVC) is a task that converts the spectrogram and prosody of speech to a target emotion. Recently, researchers have leveraged deep learning methods to improve the performance of EVC, such as the deep neural network (DNN), sequence-to-sequence model (seq2seq), long short-term memory network (LSTM), and convolutional neural network (CNN), as well as their combinations with the attention mechanism. However, these methods often suffer from instability problems such as mispronunciations and skipped phonemes, because the model fails to capture temporal intra-relationships among a wide range of frames, which results in unnatural speech and discontinuous emotional expression. To enhance the ability to capture intra-relationships among frames by augmenting the receptive field of the model, we explored the power of the transformer in this study. Specifically, we proposed a CycleGAN-based model with a transformer and investigated its ability in the EVC task. In the training procedure, we adopted curriculum learning to gradually increase the frame length so that the model can progress from short segments to the entire speech. The proposed method was evaluated on a Japanese emotional speech dataset and compared to widely used EVC baselines (ACVAE, CycleGAN) with objective and subjective evaluations. The results show that our proposed model is able to convert emotion with higher emotional strength, quality, and naturalness.},
  day      = {20},
  doi      = {10.1016/j.specom.2022.09.002},
  month    = sep,
  pages    = {110-121},
  url      = {https://www.sciencedirect.com/science/article/abs/pii/S0167639322001224?via=ihub},
  volume   = {144},
  keywords = {Emotional voice conversion, CycleGAN, Transformer, Temporal dependency},
}
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "An Adversarial Training Based Speech Emotion Classifier with Isolated Gaussian Regularization", IEEE Transactions on Affective Computing, vol. 14, no. 8, April, 2022.
Abstract: Speaker individual bias may cause emotion-related features to form clusters with irregular borders (non-Gaussian distributions), making the model sensitive to local irregularities of pattern distributions and resulting in overfitting to the in-domain dataset. This problem may cause a decrease in validation scores in cross-domain (i.e., speaker-independent, channel-variant) implementations. To mitigate this problem, in this paper we propose an adversarial training-based classifier that regularizes the distribution of latent representations and smooths the boundaries among different categories. In the regularization phase, we mapped the representations into isolated Gaussian distributions in an unsupervised manner to improve the discriminative ability of the latent representations. Moreover, we adopted multi-instance learning by dividing speech into a bag of segments to capture the most salient part for presenting an emotion. The model was evaluated on the IEMOCAP and MELD datasets with in-corpus speaker-independent settings. In addition, we investigated the accuracy with cross-corpus speaker-independent settings to simulate the channel-variant condition. In the experiment, we compared the proposed model not only with baseline models but also with different configurations of our model. The results show that the proposed model is competitive with the baselines in both in-corpus and cross-corpus validation.
BibTeX:
@Article{Fu2022,
  author   = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  journal  = {IEEE Transactions on Affective Computing},
  title    = {An Adversarial Training Based Speech Emotion Classifier with Isolated Gaussian Regularization},
  year     = {2022},
  abstract = {Speaker individual bias may cause emotion-related features to form clusters with irregular borders (non-Gaussian distributions), making the model sensitive to local irregularities of pattern distributions and resulting in overfitting to the in-domain dataset. This problem may cause a decrease in validation scores in cross-domain (i.e., speaker-independent, channel-variant) implementations. To mitigate this problem, in this paper we propose an adversarial training-based classifier that regularizes the distribution of latent representations and smooths the boundaries among different categories. In the regularization phase, we mapped the representations into isolated Gaussian distributions in an unsupervised manner to improve the discriminative ability of the latent representations. Moreover, we adopted multi-instance learning by dividing speech into a bag of segments to capture the most salient part for presenting an emotion. The model was evaluated on the IEMOCAP and MELD datasets with in-corpus speaker-independent settings. In addition, we investigated the accuracy with cross-corpus speaker-independent settings to simulate the channel-variant condition. In the experiment, we compared the proposed model not only with baseline models but also with different configurations of our model. The results show that the proposed model is competitive with the baselines in both in-corpus and cross-corpus validation.},
  day      = {21},
  doi      = {10.1109/TAFFC.2022.3169091},
  month    = apr,
  number   = {8},
  url      = {https://ieeexplore.ieee.org/document/9761736},
  volume   = {14},
  keywords = {Speech emotion recognition, Adversarial training, Regularization},
}
Takuto Akiyoshi, Junya Nakanishi, Hiroshi Ishiguro, Hidenobu Sumioka, Masahiro Shiomi, "A Robot that Encourages Self-Disclosure to Reduce Anger Mood", IEEE Robotics and Automation Letters (RA-L), vol. 6, Issue 4, pp. 7925-7932, August, 2021.
Abstract: One essential role of social robots is supporting human mental health through interaction with people. In this study, we focused on making people's moods more positive through conversations about their problems, as our first step toward achieving a robot that cares about mental health. We employed the column method, a typical stress-coping technique in Japan, and designed conversational contents for a robot. We implemented conversational functions based on the column method for a social robot as well as a self-schema estimation function using conversational data, and proposed conversational strategies to support awareness of users' self-schemas and automatic thoughts, which are related to mental health support. We experimentally evaluated our system's effectiveness and found that participants who used it with our proposed conversational strategies made more self-disclosures and experienced less anger than those who did not. Unfortunately, the strategies did not significantly increase the performance of the self-schema estimation function.
BibTeX:
@Article{Akiyoshi2021,
  author   = {Takuto Akiyoshi and Junya Nakanishi and Hiroshi Ishiguro and Hidenobu Sumioka and Masahiro Shiomi},
  journal  = {IEEE Robotics and Automation Letters (RA-L)},
  title    = {A Robot that Encourages Self-Disclosure to Reduce Anger Mood},
  year     = {2021},
  abstract = {One essential role of social robots is supporting human mental health through interaction with people. In this study, we focused on making people's moods more positive through conversations about their problems, as our first step toward achieving a robot that cares about mental health. We employed the column method, a typical stress-coping technique in Japan, and designed conversational contents for a robot. We implemented conversational functions based on the column method for a social robot as well as a self-schema estimation function using conversational data, and proposed conversational strategies to support awareness of users' self-schemas and automatic thoughts, which are related to mental health support. We experimentally evaluated our system's effectiveness and found that participants who used it with our proposed conversational strategies made more self-disclosures and experienced less anger than those who did not. Unfortunately, the strategies did not significantly increase the performance of the self-schema estimation function.},
  day      = {6},
  doi      = {10.1109/LRA.2021.3102326},
  month    = aug,
  pages    = {7925-7932},
  url      = {https://ieeexplore.ieee.org/document/9508832},
  volume   = {6, Issue 4},
  comment  = {(The contents of this paper were also selected by IROS2021 Program Committee for presentation at the Conference)},
  keywords = {Human-robot interaction, Stress coping},
}
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Yuichiro Yoshikawa, Takamasa Iio, Hiroshi Ishiguro, "Using an Android Robot to Improve Social Connectedness by Sharing Recent Experiences of Group Members in Human-Robot Conversations", IEEE Robotics and Automation Letters (RA-L), vol. 6, Issue 4, pp. 6670-6677, July, 2021.
Abstract: Social connectedness is vital for developing group cohesion and strengthening belongingness. However, with the accelerating pace of modern life, people have fewer opportunities to participate in group-building activities. Furthermore, owing to the teleworking and quarantine requirements necessitated by the COVID-19 pandemic, the social connectedness of group members may become weak. To address this issue, in this study, we used an android robot to conduct daily conversations and to act as an intermediary to increase intra-group connectedness. Specifically, we constructed an android robot system for collecting and sharing recent member-related experiences. The system has a chatbot function based on BERT and a memory function with a neural-network-based dialog action analysis model. We conducted a 3-day human-robot conversation experiment to verify the effectiveness of the proposed system. The results of a questionnaire-based evaluation and empirical analysis demonstrate that the proposed system can increase the familiarity and closeness of group members. This suggests that the proposed method is useful for enhancing social connectedness. Moreover, it can improve the closeness of the user-robot relation, as well as the performance of robots in conducting conversations with people.
BibTeX:
@Article{Fu2021b,
  author   = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Yuichiro Yoshikawa and Takamasa Iio and Hiroshi Ishiguro},
  title    = {Using an Android Robot to Improve Social Connectedness by Sharing Recent Experiences of Group Members in Human-Robot Conversations},
  journal  = {IEEE Robotics and Automation Letters (RA-L)},
  year     = {2021},
  volume   = {6, Issue 4},
  pages    = {6670-6677},
  month    = jul,
  abstract = {Social connectedness is vital for developing group cohesion and strengthening belongingness. However, with the accelerating pace of modern life, people have fewer opportunities to participate in group-building activities. Furthermore, owing to the teleworking and quarantine requirements necessitated by the COVID-19 pandemic, the social connectedness of group members may become weak. To address this issue, in this study, we used an android robot to conduct daily conversations and to act as an intermediary to increase intra-group connectedness. Specifically, we constructed an android robot system for collecting and sharing recent member-related experiences. The system has a chatbot function based on BERT and a memory function with a neural-network-based dialog action analysis model. We conducted a 3-day human-robot conversation experiment to verify the effectiveness of the proposed system. The results of a questionnaire-based evaluation and empirical analysis demonstrate that the proposed system can increase the familiarity and closeness of group members. This suggests that the proposed method is useful for enhancing social connectedness. Moreover, it can improve the closeness of the user-robot relation, as well as the performance of robots in conducting conversations with people.},
  day      = {7},
  url      = {https://ieeexplore.ieee.org/document/9477165},
  doi      = {10.1109/LRA.2021.3094779},
  comment  = {(The contents of this paper were also selected by IROS2021 Program Committee for presentation at the Conference)},
  keywords = {Robots, Databases, Chatbot, COVID-19, Training, Teleworking, Robot sensing system},
}
Hidenobu Sumioka, Hirokazu Kumazaki, Taro Muramatsu, Yuichiro Yoshikawa, Hiroshi Ishiguro, Haruhiro Higashida, Teruko Yuhi, Masaru Mimura, "A huggable device can reduce the stress of calling an unfamiliar person on the phone for individuals with ASD", PLOS ONE, vol. 16, no. 7, pp. 1-14, July, 2021.
Abstract: Individuals with autism spectrum disorders (ASD) are often not comfortable with calling unfamiliar people on a mobile phone. “Hugvie”, a pillow with a human-like shape, was designed to provide users with the tactile sensation of hugging a person during phone conversations to improve their positive feelings (e.g., comfort and trust) toward phone conversation partners. The primary aim of this study is to examine whether physical contact by hugging a Hugvie can reduce the stress of calling an unfamiliar person on the phone. In this study, 24 individuals with ASD participated. After a phone conversation using only a mobile phone or a mobile phone plus Hugvie, all participants completed questionnaires on their self-confidence in talking on the phone. In addition, participants provided salivary cortisol samples four times each day. Our analysis showed a significant effect of the communication medium, indicating that individuals with ASD who talked on the phone with an unfamiliar person while hugging a Hugvie had stronger self-confidence and lower stress than those who did not use Hugvie. Given the results of this study, we recommend that huggable devices be used as adjunctive tools to support individuals with ASD when they call unfamiliar people on mobile phones.
BibTeX:
@Article{Sumioka2021d,
  author   = {Hidenobu Sumioka and Hirokazu Kumazaki and Taro Muramatsu and Yuichiro Yoshikawa and Hiroshi Ishiguro and Haruhiro Higashida and Teruko Yuhi and Masaru Mimura},
  journal  = {PLOS ONE},
  title    = {A huggable device can reduce the stress of calling an unfamiliar person on the phone for individuals with ASD},
  year     = {2021},
  abstract = {Individuals with autism spectrum disorders (ASD) are often not comfortable with calling unfamiliar people on a mobile phone. “Hugvie”, a pillow with a human-like shape, was designed to provide users with the tactile sensation of hugging a person during phone conversations to improve their positive feelings (e.g., comfort and trust) toward phone conversation partners. The primary aim of this study is to examine whether physical contact by hugging a Hugvie can reduce the stress of calling an unfamiliar person on the phone. In this study, 24 individuals with ASD participated. After a phone conversation using only a mobile phone or a mobile phone plus Hugvie, all participants completed questionnaires on their self-confidence in talking on the phone. In addition, participants provided salivary cortisol samples four times each day. Our analysis showed a significant effect of the communication medium, indicating that individuals with ASD who talked on the phone with an unfamiliar person while hugging a Hugvie had stronger self-confidence and lower stress than those who did not use Hugvie. Given the results of this study, we recommend that huggable devices be used as adjunctive tools to support individuals with ASD when they call unfamiliar people on mobile phones.},
  day      = {23},
  doi      = {10.1371/journal.pone.0254675},
  month    = jul,
  number   = {7},
  pages    = {1-14},
  url      = {https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0254675},
  volume   = {16},
  keywords = {autism spectrum disorders, tactile, huggable device, self-confidence, cortisol},
}
Chinenye Augustine Ajibo, Carlos Toshinori Ishi, Hiroshi Ishiguro, "Advocating Attitudinal Change Through Android Robot's Intention-Based Expressive Behaviors: Toward WHO COVID-19 Guidelines Adherence", IEEE Robotics and Automation Letters (RA-L), vol. 6, no. Issue 4, pp. 6521-6528, July, 2021.
Abstract: Motivated by the fact that some human emotional expressions promote affiliating functions such as signaling, social change, and support, all of which have been established as providing social benefits, we investigated how these behaviors can be extended to Human-Robot Interaction (HRI) scenarios. We explored how to furnish an android robot with socially motivated expressions geared toward eliciting adherence to COVID-19 guidelines. We analyzed how different behaviors associated with social expressions in such situations occur in Human-Human Interaction (HHI) and designed a scenario where a robot utilizes context-inspired behaviors (polite, gentle, displeased, and angry) to enforce social compliance. We then implemented these behaviors in an android robot and subjectively evaluated how effectively it expressed them and how they were perceived in terms of their appropriateness, effectiveness, and tendency to enforce social compliance to COVID-19 guidelines. We also considered how the subjects' sense of values regarding compliance awareness would affect the robot's behavior impressions. Our evaluation results indicated that participants generally preferred polite behaviors by a robot, although participants with different levels of compliance awareness manifested different trends toward appropriateness and effectiveness for social compliance enforcement through negative expressions by the robot.
BibTeX:
@Article{Ajibo2021,
  author   = {Chinenye Augustine Ajibo and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  journal  = {IEEE Robotics and Automation Letters (RA-L)},
  title    = {Advocating Attitudinal Change Through Android Robot's Intention-Based Expressive Behaviors: Toward WHO COVID-19 Guidelines Adherence},
  year     = {2021},
  abstract = {Motivated by the fact that some human emotional expressions promote affiliating functions such as signaling, social change, and support, all of which have been established as providing social benefits, we investigated how these behaviors can be extended to Human-Robot Interaction (HRI) scenarios. We explored how to furnish an android robot with socially motivated expressions geared toward eliciting adherence to COVID-19 guidelines. We analyzed how different behaviors associated with social expressions in such situations occur in Human-Human Interaction (HHI) and designed a scenario where a robot utilizes context-inspired behaviors (polite, gentle, displeased, and angry) to enforce social compliance. We then implemented these behaviors in an android robot and subjectively evaluated how effectively it expressed them and how they were perceived in terms of their appropriateness, effectiveness, and tendency to enforce social compliance to COVID-19 guidelines. We also considered how the subjects' sense of values regarding compliance awareness would affect the robot's behavior impressions. Our evaluation results indicated that participants generally preferred polite behaviors by a robot, although participants with different levels of compliance awareness manifested different trends toward appropriateness and effectiveness for social compliance enforcement through negative expressions by the robot.},
  day      = {7},
  doi      = {10.1109/LRA.2021.3094783},
  month    = jul,
  number   = {4},
  pages    = {6521-6528},
  url      = {https://ieeexplore.ieee.org/document/9476976},
  volume   = {6},
  comment  = {(The contents of this paper were also selected by IROS2021 Program Committee for presentation at the Conference)},
  keywords = {Guidelines, COVID-19, Robot sensing system, Pandemics, Task analysis, Human-robot interaction, Faces},
}
Hidenobu Sumioka, Masahiro Shiomi, Miwako Honda, Atsushi Nakazawa, "Technical Challenges for Smooth Interaction With Seniors With Dementia: Lessons From Humanitude™", Frontiers in Robotics and AI, vol. 8, no. 650906, pp. 1-14, June, 2021.
Abstract: Due to cognitive and socio-emotional decline and mental diseases, senior citizens, especially people with dementia (PwD), struggle to interact smoothly with their caregivers. Therefore, various care techniques have been proposed to develop good relationships with seniors. Among them, Humanitude is one promising technique that provides caregivers with useful interaction skills to improve their relationships with PwD, from four perspectives: face-to-face interaction, verbal communication, touch interaction, and helping care receivers stand up (physical interaction). Regardless of advances in elderly care techniques, since current social robots interact with seniors in the same manner as they do with younger adults, they lack several important functions. For example, Humanitude emphasizes the importance of interaction at a relatively intimate distance to facilitate communication with seniors. Unfortunately, few studies have developed an interaction model for clinical care communication. In this paper, we discuss the current challenges to develop a social robot that can smoothly interact with PwDs and overview the interaction skills used in Humanitude as well as the existing technologies.
BibTeX:
@Article{Sumioka2021,
  author   = {Hidenobu Sumioka and Masahiro Shiomi and Miwako Honda and Atsushi Nakazawa},
  journal  = {Frontiers in Robotics and AI},
  title    = {Technical Challenges for Smooth Interaction With Seniors With Dementia: Lessons From Humanitude™},
  year     = {2021},
  abstract = {Due to cognitive and socio-emotional decline and mental diseases, senior citizens, especially people with dementia (PwD), struggle to interact smoothly with their caregivers. Therefore, various care techniques have been proposed to develop good relationships with seniors. Among them, Humanitude is one promising technique that provides caregivers with useful interaction skills to improve their relationships with PwD, from four perspectives: face-to-face interaction, verbal communication, touch interaction, and helping care receivers stand up (physical interaction). Regardless of advances in elderly care techniques, since current social robots interact with seniors in the same manner as they do with younger adults, they lack several important functions. For example, Humanitude emphasizes the importance of interaction at a relatively intimate distance to facilitate communication with seniors. Unfortunately, few studies have developed an interaction model for clinical care communication. In this paper, we discuss the current challenges to develop a social robot that can smoothly interact with PwDs and overview the interaction skills used in Humanitude as well as the existing technologies.},
  day      = {2},
  doi      = {10.3389/frobt.2021.650906},
  month    = jun,
  number   = {650906},
  pages    = {1-14},
  url      = {https://www.frontiersin.org/articles/10.3389/frobt.2021.650906/full},
  volume   = {8},
  keywords = {Humanitude, dementia care, social robot, human-robot interaction, skill evaluation, dementia},
}
Hidenobu Sumioka, Nobuo Yamato, Masahiro Shiomi, Hiroshi Ishiguro, "A Minimal Design of a Human Infant Presence: A Case Study Toward Interactive Doll Therapy for Older Adults With Dementia", Frontiers in Robotics and AI, vol. 8, no. 633378, pp. 1-12, June, 2021.
Abstract: We introduce a minimal design approach to manufacture an infant-like robot for interactive doll therapy that provides emotional interactions for older people with dementia. Our approach stimulates their imaginations and then facilitates positive engagement with the robot by just expressing the most basic elements of humanlike features. Based on this approach, we developed HIRO, a baby-sized robot with an abstract body representation and no facial features. The recorded voice of a real human infant emitted by robots enhances the robot’s human-likeness and facilitates positive interaction between older adults and the robot. Although we did not find any significant difference between HIRO and an infant-like robot with a smiling face, a field study showed that HIRO was accepted by older adults with dementia and facilitated positive interaction by stimulating their imagination. We also discuss the importance of a minimal design approach in elderly care during post–COVID-19 world.
BibTeX:
@Article{Sumioka2021a,
  author   = {Hidenobu Sumioka and Nobuo Yamato and Masahiro Shiomi and Hiroshi Ishiguro},
  title    = {A Minimal Design of a Human Infant Presence: A Case Study Toward Interactive Doll Therapy for Older Adults With Dementia},
  journal  = {Frontiers in Robotics and AI},
  year     = {2021},
  volume   = {8},
  number   = {633378},
  pages    = {1-12},
  month    = jun,
  abstract = {We introduce a minimal design approach to manufacture an infant-like robot for interactive doll therapy that provides emotional interactions for older people with dementia. Our approach stimulates their imaginations and then facilitates positive engagement with the robot by just expressing the most basic elements of humanlike features. Based on this approach, we developed HIRO, a baby-sized robot with an abstract body representation and no facial features. The recorded voice of a real human infant emitted by robots enhances the robot’s human-likeness and facilitates positive interaction between older adults and the robot. Although we did not find any significant difference between HIRO and an infant-like robot with a smiling face, a field study showed that HIRO was accepted by older adults with dementia and facilitated positive interaction by stimulating their imagination. We also discuss the importance of a minimal design approach in elderly care during post–COVID-19 world.},
  day      = {17},
  url      = {https://www.frontiersin.org/articles/10.3389/frobt.2021.633378/full},
  doi      = {10.3389/frobt.2021.633378},
}
Takahisa Uchida, Takashi Minato, Yutaka Nakamura, Yuichiro Yoshikawa, Hiroshi Ishiguro, "Female-type Android's Drive to Quickly Understand a User's Concept of Preferences Stimulates Dialogue Satisfaction: Dialogue Strategies for Modeling User's Concept of Preferences", International Journal of Social Robotics (IJSR), January, 2021.
Abstract: This research develops a conversational robot that stimulates users’ dialogue satisfaction and motivation in non-task-oriented dialogues that include opinion and/or preference exchanges. One way to improve user satisfaction and motivation is by demonstrating the robot’s ability to understand user opinions. In this paper, we explore a method that efficiently obtains the concept of user preferences: likes and dislikes. The concept is acquired by complementing a small amount of user preference data observed in dialogues. As a method for efficient collection, we propose a dialogue strategy that creates utterances with the largest expected complementation. Our experimental results with a female-type android robot suggest that the proposed strategy efficiently obtained user preferences and enhanced dialogue satisfaction. In addition, the strength of user motivation (i.e., long-term willingness to communicate with the android) is only positively correlated with the android’s willingness to understand. Our results not only show the effectiveness of our proposed strategy but also suggest a design theory for dialogue robots to stimulate dialogue motivation, although the current results are derived only from a female-type android.
BibTeX:
@Article{Uchida2021,
  author   = {Takahisa Uchida and Takashi Minato and Yutaka Nakamura and Yuichiro Yoshikawa and Hiroshi Ishiguro},
  journal  = {International Journal of Social Robotics (IJSR)},
  title    = {Female-type Android's Drive to Quickly Understand a User's Concept of Preferences Stimulates Dialogue Satisfaction: Dialogue Strategies for Modeling User's Concept of Preferences},
  year     = {2021},
  abstract = {This research develops a conversational robot that stimulates users’ dialogue satisfaction and motivation in non-task-oriented dialogues that include opinion and/or preference exchanges. One way to improve user satisfaction and motivation is by demonstrating the robot’s ability to understand user opinions. In this paper, we explore a method that efficiently obtains the concept of user preferences: likes and dislikes. The concept is acquired by complementing a small amount of user preference data observed in dialogues. As a method for efficient collection, we propose a dialogue strategy that creates utterances with the largest expected complementation. Our experimental results with a female-type android robot suggest that the proposed strategy efficiently obtained user preferences and enhanced dialogue satisfaction. In addition, the strength of user motivation (i.e., long-term willingness to communicate with the android) is only positively correlated with the android’s willingness to understand. Our results not only show the effectiveness of our proposed strategy but also suggest a design theory for dialogue robots to stimulate dialogue motivation, although the current results are derived only from a female-type android.},
  day      = {7},
  doi      = {10.1007/s12369-020-00731-z},
  month    = jan,
  url      = {https://www.springer.com/journal/12369/},
}
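The strategy above picks the question whose answer is expected to complement the largest number of unknown preference entries. A toy Python sketch of that selection criterion follows; the similarity matrix, propagation rule, and threshold are our own illustrative assumptions, not the paper's implementation:

# Toy sketch of a complementation-driven query strategy (illustrative only).
# Known likes/dislikes propagate to similar items; the robot asks about the
# item whose answer is expected to complement the most unknown entries.
import numpy as np

rng = np.random.default_rng(0)
n_items = 8
sim = rng.random((n_items, n_items))
sim = (sim + sim.T) / 2                     # symmetric item-item similarity (assumed given)
np.fill_diagonal(sim, 1.0)
known = {0: +1, 3: -1}                      # observed preferences: like = +1, dislike = -1

def complemented(prefs, threshold=0.7):
    """Preferences recoverable from the observed ones via similar items."""
    out = dict(prefs)
    for i in range(n_items):
        if i in out:
            continue
        for j, v in prefs.items():
            if sim[i, j] >= threshold:      # similar enough: copy the label over
                out[i] = v
                break
    return out

def expected_gain(candidate):
    """Expected newly complemented entries if the robot asks about `candidate`."""
    base = len(complemented(known))
    gains = [len(complemented({**known, candidate: ans})) - base for ans in (+1, -1)]
    return sum(gains) / 2                   # both answers assumed equally likely

unknown = [i for i in range(n_items) if i not in known]
print("ask about item", max(unknown, key=expected_gain))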
Bowen Wu, Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, "Modeling the Conditional Distribution of Co-speech Upper Body Gesture jointly using Conditional-GAN and Unrolled-GAN", MDPI Electronics Special Issue "Human Computer Interaction and Its Future", vol. 10, Issue 3, no. 228, January, 2021.
Abstract: Co-speech gesture is a crucial non-verbal modality for humans to express ideas. Social agents also need such capability to be more human-like and comprehensive. This work aims to model the distribution of gesture conditioned on human speech features for the generation, instead of finding an injective mapping function from speech to gesture. We propose a novel conditional GAN-based generative model to not only realize the conversion from speech to gesture but also to approximate the distribution of gesture conditioned on speech through parameterization. Objective evaluation and user studies show that the proposed model outperforms the existing deterministic model, indicating that generative models can approximate the real patterns of co-speech gestures more than the existing deterministic model. Our result suggests that it is critical to consider the nature of randomness when modeling co-speech gestures.
BibTeX:
@Article{Wu2020a,
  author   = {Bowen Wu and Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro},
  journal  = {MDPI Electronics Special Issue "Human Computer Interaction and Its Future"},
  title    = {Modeling the Conditional Distribution of Co-speech Upper Body Gesture jointly using Conditional-GAN and Unrolled-GAN},
  year     = {2021},
  abstract = {Co-speech gesture is a crucial non-verbal modality for humans to express ideas. Social agents also need such capability to be more human-like and comprehensive. This work aims to model the distribution of gesture conditioned on human speech features for the generation, instead of finding an injective mapping function from speech to gesture. We propose a novel conditional GAN-based generative model to not only realize the conversion from speech to gesture but also to approximate the distribution of gesture conditioned on speech through parameterization. Objective evaluation and user studies show that the proposed model outperforms the existing deterministic model, indicating that generative models can approximate the real patterns of co-speech gestures more than the existing deterministic model. Our result suggests that it is critical to consider the nature of randomness when modeling co-speech gestures.},
  day      = {20},
  doi      = {10.3390/electronics10030228},
  month    = jan,
  number   = {228},
  url      = {https://www.mdpi.com/2079-9292/10/3/228},
  volume   = {10, Issue 3},
  keywords = {Gesture generation; social robots; generative model; neural network; deep learning},
}
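A minimal PyTorch sketch of the conditional-GAN core described above: a generator maps speech features plus noise to a pose, and a discriminator judges (speech, pose) pairs. All dimensions, network sizes, and the training loop are illustrative assumptions, and the unrolled discriminator updates of the full method are omitted:

# Minimal conditional-GAN sketch for speech-conditioned gesture generation
# (illustrative; the paper's unrolled-GAN discriminator updates are omitted).
import torch
import torch.nn as nn

SPEECH_DIM, NOISE_DIM, POSE_DIM = 64, 16, 30   # assumed feature sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPEECH_DIM + NOISE_DIM, 128), nn.ReLU(),
            nn.Linear(128, POSE_DIM))
    def forward(self, speech, z):
        return self.net(torch.cat([speech, z], dim=-1))    # pose conditioned on speech

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPEECH_DIM + POSE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1))
    def forward(self, speech, pose):
        return self.net(torch.cat([speech, pose], dim=-1))  # real/fake logit

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

speech = torch.randn(32, SPEECH_DIM)            # stand-in speech features
real_pose = torch.randn(32, POSE_DIM)           # stand-in gesture targets

# One discriminator step: real pairs vs. generated pairs.
fake_pose = G(speech, torch.randn(32, NOISE_DIM))
loss_d = bce(D(speech, real_pose), torch.ones(32, 1)) + \
         bce(D(speech, fake_pose.detach()), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# One generator step: produce poses the discriminator accepts as real.
loss_g = bce(D(speech, G(speech, torch.randn(32, NOISE_DIM))), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

Sampling different noise vectors z for the same speech input is what lets the generator represent a distribution of plausible gestures rather than a single deterministic mapping.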
Chinenye Augustine Ajibo, Carlos Toshinori Ishi, Ryusuke Mikata, Chaoran Liu, Hiroshi Ishiguro, "Analysis of Anger Motion Expression and Evaluation in Android Robot", Advanced Robotics, vol. 34, Issue 24, pp. 1581-1590, December, 2020.
Abstract: Recent studies in human–human interaction (HHI) have revealed the propensity of negative emotional expression to initiate affiliating functions which are beneficial to the expresser and also help fostering cordiality and closeness amongst interlocutors during conversation. Effort in human–robot interaction has also been devoted to furnish robots with the expression of both positive and negative emotions. However, only a few have considered body gestures in context with the dialogue act functions conveyed by the emotional utterances. This study aims on furnishing robots with humanlike negative emotional expression, specifically anger-based body gestures roused by the utterance context. In this regard, we adopted a multimodal HHI corpus for the study, and then analyzed and established predominant gestures types and dialogue acts associated with anger-based utterances in HHI. Based on the analysis results, we implemented these gesture types in an android robot, and carried out a subjective evaluation to investigate their effects on the perception of anger expression in utterances with different dialogue act functions. Results showed significant effects of the presence of gesture on the anger degree perception. Findings from this study also revealed that the functional content of anger-based utterances plays a significant role in the choice of the gesture accompanying such utterances.
BibTeX:
@Article{Ajibo2020a,
  author   = {Chinenye Augustine Ajibo and Carlos Toshinori Ishi and Ryusuke Mikata and Chaoran Liu and Hiroshi Ishiguro},
  title    = {Analysis of Anger Motion Expression and Evaluation in Android Robot},
  journal  = {Advanced Robotics},
  year     = {2020},
  volume   = {34, Issue 24},
  pages    = {1581-1590},
  month    = dec,
  abstract = {Recent studies in human–human interaction (HHI) have revealed the propensity of negative emotional expression to initiate affiliating functions which are beneficial to the expresser and also help fostering cordiality and closeness amongst interlocutors during conversation. Effort in human–robot interaction has also been devoted to furnish robots with the expression of both positive and negative emotions. However, only a few have considered body gestures in context with the dialogue act functions conveyed by the emotional utterances. This study aims on furnishing robots with humanlike negative emotional expression, specifically anger-based body gestures roused by the utterance context. In this regard, we adopted a multimodal HHI corpus for the study, and then analyzed and established predominant gestures types and dialogue acts associated with anger-based utterances in HHI. Based on the analysis results, we implemented these gesture types in an android robot, and carried out a subjective evaluation to investigate their effects on the perception of anger expression in utterances with different dialogue act functions. Results showed significant effects of the presence of gesture on the anger degree perception. Findings from this study also revealed that the functional content of anger-based utterances plays a significant role in the choice of the gesture accompanying such utterances.},
  day      = {8},
  url      = {https://www.tandfonline.com/doi/full/10.1080/01691864.2020.1855244},
  doi      = {10.1080/01691864.2020.1855244},
  keywords = {Anger emotion; gesture and speech; android robot; human–robot interaction},
}
Jiaqi Shi, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "Skeleton-Based Emotion Recognition Based on Two-Stream Self-Attention Enhanced Spatial-Temporal Graph Convolutional Network", Sensors, vol. 21, Issue 1, no. 205, pp. 1-16, December, 2020.
Abstract: Emotion recognition has drawn consistent attention from researchers recently. Although gesture modality plays an important role in expressing emotion, it is seldom considered in the field of emotion recognition. A key reason is the scarcity of labeled data containing 3D skeleton data. Existing gesture-based emotion recognition methods using deep learning are on convolutional neural networks or recurrent neural networks, without explicitly considering the spatial connection between joints. In this work, we applied a pose estimation based method to extract 3D skeleton coordinates for IEMOCAP database. We propose a self-attention enhanced spatial temporal graph convolutional network for skeleton-based emotion recognition, in which the spatial convolutional part models the skeletal structure of body as a static graph, and the self-attention part dynamically constructs more connections between the joints and provides supplementary information. Our experiment demonstrates that the proposed model significantly outperforms other models and that the features of the extracted skeleton data improve the performance of multimodal emotion recognition.
BibTeX:
@Article{Shi2020,
  author   = {Jiaqi Shi and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  journal  = {Sensors},
  title    = {Skeleton-Based Emotion Recognition Based on Two-Stream Self-Attention Enhanced Spatial-Temporal Graph Convolutional Network},
  year     = {2020},
  abstract = {Emotion recognition has drawn consistent attention from researchers recently. Although gesture modality plays an important role in expressing emotion, it is seldom considered in the field of emotion recognition. A key reason is the scarcity of labeled data containing 3D skeleton data. Existing gesture-based emotion recognition methods using deep learning are on convolutional neural networks or recurrent neural networks, without explicitly considering the spatial connection between joints. In this work, we applied a pose estimation based method to extract 3D skeleton coordinates for IEMOCAP database. We propose a self-attention enhanced spatial temporal graph convolutional network for skeleton-based emotion recognition, in which the spatial convolutional part models the skeletal structure of body as a static graph, and the self-attention part dynamically constructs more connections between the joints and provides supplementary information. Our experiment demonstrates that the proposed model significantly outperforms other models and that the features of the extracted skeleton data improve the performance of multimodal emotion recognition.},
  day      = {30},
  doi      = {10.3390/s21010205},
  month    = dec,
  number   = {205},
  pages    = {1-16},
  url      = {https://www.mdpi.com/1424-8220/21/1/205},
  volume   = {21, Issue 1},
  keywords = {Emotion recognition; Gesture; Skeleton; Graph convolutional networks; Self-attention},
}
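The two-stream layer described above, a convolution over the fixed skeletal graph plus a self-attention stream that adds dynamic joint-to-joint connections, might be sketched as follows; joint count, channel sizes, and the additive fusion are assumptions:

# Sketch of one spatial layer: fixed-skeleton graph convolution fused with a
# self-attention stream over joints (illustrative reading of the two streams).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttentionGCN(nn.Module):
    def __init__(self, in_ch, out_ch, adj):
        super().__init__()
        self.register_buffer("adj", adj)     # fixed skeletal adjacency (J x J)
        self.gcn = nn.Linear(in_ch, out_ch)  # static-graph stream
        self.q = nn.Linear(in_ch, out_ch)    # attention stream: learns extra
        self.k = nn.Linear(in_ch, out_ch)    # joint-joint connections
        self.v = nn.Linear(in_ch, out_ch)

    def forward(self, x):                    # x: (batch, joints, channels)
        static = torch.matmul(self.adj, self.gcn(x))   # aggregate over skeleton edges
        q, k = self.q(x), self.k(x)
        attn = F.softmax(q @ k.transpose(1, 2) / k.size(-1) ** 0.5, dim=-1)
        dynamic = attn @ self.v(x)                     # dynamically weighted joints
        return F.relu(static + dynamic)                # fuse both streams

J = 17                                       # e.g., a COCO-style joint count
adj = torch.eye(J)                           # stand-in adjacency with self-loops
layer = SpatialAttentionGCN(3, 64, adj)
out = layer(torch.randn(8, J, 3))            # 8 frames of 3-D joint coordinates
print(out.shape)                             # torch.Size([8, 17, 64])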
Carlos T. Ishi, Ryusuke Mikata, Hiroshi Ishiguro, "Person-directed pointing gestures and inter-personal relationship: Expression of politeness to friendliness by android robots", IEEE Robotics and Automation Letters, vol. 5, Issue 4, pp. 6081-6088, October, 2020.
Abstract: Pointing at a person is usually deemed to be impolite. However, several different forms of person-directed pointing gestures commonly appear in casual dialogue interactions. In this study, we first analyzed pointing gestures in human-human dialogue interactions and observed different trends in the use of gesture types, based on the inter-personal relationships between dialogue partners. Then we conducted multiple subjective experiments by systematically creating behaviors in an android robot to investigate the effects of different types of pointing gestures on the impressions of its behaviors. Several factors were included: pointing gesture motion types (hand shapes, such as an open palm or an extended index finger, hand orientation, and motion direction), language types (formal or colloquial), gesture speeds, and gesture hold duration. Our evaluation results indicated that impressions of polite or casual are affected by the analyzed factors, and a behavior’s appropriateness depends on the inter-personal relationship with the dialogue partner.
BibTeX:
@Article{Ishi2020b,
  author   = {Carlos T. Ishi and Ryusuke Mikata and Hiroshi Ishiguro},
  journal  = {IEEE Robotics and Automation Letters},
  title    = {Person-directed pointing gestures and inter-personal relationship: Expression of politeness to friendliness by android robots},
  year     = {2020},
  abstract = {Pointing at a person is usually deemed to be impolite. However, several different forms of person-directed pointing gestures commonly appear in casual dialogue interactions. In this study, we first analyzed pointing gestures in human-human dialogue interactions and observed different trends in the use of gesture types, based on the inter-personal relationships between dialogue partners. Then we conducted multiple subjective experiments by systematically creating behaviors in an android robot to investigate the effects of different types of pointing gestures on the impressions of its behaviors. Several factors were included: pointing gesture motion types (hand shapes, such as an open palm or an extended index finger, hand orientation, and motion direction), language types (formal or colloquial), gesture speeds, and gesture hold duration. Our evaluation results indicated that impressions of polite or casual are affected by the analyzed factors, and a behavior’s appropriateness depends on the inter-personal relationship with the dialogue partner.},
  day      = {1},
  doi      = {10.1109/LRA.2020.3011354},
  month    = oct,
  pages    = {6081-6088},
  url      = {https://ieeexplore.ieee.org/document/9146747},
  volume   = {5, Issue 4},
  comment  = {(The contents of this paper were also selected by IROS2020 Program Committee for presentation at the Conference)},
  keywords = {Pointing gestures, politeness, motion types, inter-personal relationship, android robots},
}
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "Multi-modality Emotion Recognition Model with GAT-based Multi-head Inter-modality Attention", Sensors, vol. 20, Issue 17, no. 4894, pp. 1-15, August, 2020.
Abstract: Emotion recognition has been gaining increasing attention in recent years due to its applications on artificial agents. In order to achieve a good performance on this task, numerous research have been conducted on the multi-modality emotion recognition model for leveraging the different strengths of each modality. However, there still remains a research question of what is the appropriate way to fuse the information from different modalities. In this paper, we not only proposed some strategies, such as audio sample augmentation, an emotion-oriented encoder-decoder, to improve the performance of emotion recognition, but also discussed an inter-modality decision level fusion method based on graph attention network (GAT). Compared to the baseline, our model improves the weighted average F1-score from 64.18% to 68.31% and weighted average accuracy from 65.25% to 69.88%.
BibTeX:
@Article{Fu2020,
  author   = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  journal  = {Sensors},
  title    = {Multi-modality Emotion Recognition Model with GAT-based Multi-head Inter-modality Attention},
  year     = {2020},
  abstract = {Emotion recognition has been gaining increasing attention in recent years due to its applications on artificial agents. In order to achieve a good performance on this task, numerous research have been conducted on the multi-modality emotion recognition model for leveraging the different strengths of each modality. However, there still remains a research question of what is the appropriate way to fuse the information from different modalities. In this paper, we not only proposed some strategies, such as audio sample augmentation, an emotion-oriented encoder-decoder, to improve the performance of emotion recognition, but also discussed an inter-modality decision level fusion method based on graph attention network (GAT). Compared to the baseline, our model improves the weighted average F1-score from 64.18% to 68.31% and weighted average accuracy from 65.25% to 69.88%.},
  day      = {29},
  doi      = {10.3390/s20174894},
  month    = aug,
  number   = {4894},
  pages    = {1-15},
  url      = {https://www.mdpi.com/1424-8220/20/17/4894/htm},
  volume   = {20, Issue 17},
  keywords = {emotion recognition, multi-modality, graph attention network},
}
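A loose sketch of the decision-level fusion idea: each modality's emotion logits become a node in a fully connected graph, and multi-head attention weighs the cross-modal evidence. Note that PyTorch's generic multi-head attention stands in here for the paper's GAT formulation, and all sizes are assumptions:

# Sketch of GAT-style decision-level fusion over modality nodes
# (multi-head attention used as a stand-in for the GAT operator).
import torch
import torch.nn as nn

N_EMOTIONS, N_HEADS = 4, 2

class ModalityFusion(nn.Module):
    def __init__(self):
        super().__init__()
        # fully connected graph over the three modality nodes
        self.attn = nn.MultiheadAttention(N_EMOTIONS, N_HEADS, batch_first=True)
        self.out = nn.Linear(N_EMOTIONS, N_EMOTIONS)

    def forward(self, audio, text, visual):            # each: (batch, N_EMOTIONS)
        nodes = torch.stack([audio, text, visual], 1)  # (batch, 3 nodes, logits)
        fused, weights = self.attn(nodes, nodes, nodes)
        return self.out(fused.mean(dim=1)), weights    # pooled class scores

fusion = ModalityFusion()
a, t, v = (torch.randn(16, N_EMOTIONS) for _ in range(3))
logits, w = fusion(a, t, v)
print(logits.shape)                                    # torch.Size([16, 4])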
Takahisa Uchida, Hideyuki Takahashi, Midori Ban, Jiro Shimaya, Takashi Minato, Kohei Ogawa, Yuichiro Yoshikawa, Hiroshi Ishiguro, "Japanese Young Women Did not Discriminate between Robots and Humans as Listeners for Their Self-Disclosure -Pilot Study-", Multimodal Technologies and Interaction, vol. 4, Issue 3, no. 35, pp. 1-16, June, 2020.
Abstract: Disclosing personal matters to other individuals often contributes to the maintenance of our mental health and social bonding. However, in face-to-face situations, it can be difficult to prompt others to self-disclose because people often feel embarrassed disclosing personal matters to others. Although artificial agents without strong social pressure for listeners to induce self-disclosure is a promising engineering method that can be applied in daily stress management and reduce depression, gender difference is known to make a drastic difference of the attitude toward robots. We hypothesized that, as compared to men, women tend to prefer robots as a listener for their self-disclosure. The experimental results that are based on questionnaires and the actual self-disclosure behavior indicate that men preferred to self-disclose to the human listener, while women did not discriminate between robots and humans as listeners for their self-disclosure in the willingness and the amount of self-disclosure. This also suggests that the gender difference needs to be considered when robots are used as a self-disclosure listener.
BibTeX:
@Article{Uchida2020,
  author   = {Takahisa Uchida and Hideyuki Takahashi and Midori Ban and Jiro Shimaya and Takashi Minato and Kohei Ogawa and Yuichiro Yoshikawa and Hiroshi Ishiguro},
  title    = {Japanese Young Women Did not Discriminate between Robots and Humans as Listeners for Their Self-Disclosure -Pilot Study-},
  journal  = {Multimodal Technologies and Interaction},
  year     = {2020},
  volume   = {4, Issue 3},
  number   = {35},
  pages    = {1-16},
  month    = jun,
  abstract = {Disclosing personal matters to other individuals often contributes to the maintenance of our mental health and social bonding. However, in face-to-face situations, it can be difficult to prompt others to self-disclose because people often feel embarrassed disclosing personal matters to others. Although artificial agents without strong social pressure for listeners to induce self-disclosure is a promising engineering method that can be applied in daily stress management and reduce depression, gender difference is known to make a drastic difference of the attitude toward robots. We hypothesized that, as compared to men, women tend to prefer robots as a listener for their self-disclosure. The experimental results that are based on questionnaires and the actual self-disclosure behavior indicate that men preferred to self-disclose to the human listener, while women did not discriminate between robots and humans as listeners for their self-disclosure in the willingness and the amount of self-disclosure. This also suggests that the gender difference needs to be considered when robots are used as a self-disclosure listener.},
  day      = {30},
  url      = {https://www.mdpi.com/2414-4088/4/3/35},
  doi      = {10.3390/mti4030035},
  keywords = {self-disclosure; gender difference; conversational robot},
}
Soheil Keshmiri, Masahiro Shiomi, Kodai Shatani, Takashi Minato, Hiroshi Ishiguro, "Critical Examination of the Parametric Approaches to Analysis of the Non-Verbal Human Behaviour: a Case Study in Facial Pre-Touch Interaction", Applied Sciences, vol. 10, Issue 11, no. 3817, pp. 1-15, May, 2020.
Abstract: A prevailing assumption in many behavioral studies is the underlying normal distribution of the data under investigation. In this regard, although it appears plausible to presume a certain degree of similarity among individuals, this presumption does not necessarily warrant such simplifying assumptions as average or normally distributed human behavioral responses. In the present study, we examine the extent of such assumptions by considering the case of human–human touch interaction in which individuals signal their face area pre-touch distance boundaries. We then use these pre-touch distances along with their respective azimuth and elevation angles around the face area and perform three types of regression-based analyses to estimate a generalized facial pre-touch distance boundary. First, we use a Gaussian processes regression to evaluate whether assumption of normal distribution in participants’ reactions warrants a reliable estimate of this boundary. Second, we apply a support vector regression (SVR) to determine whether estimating this space by minimizing the orthogonal distance between participants’ pre-touch data and its corresponding pre-touch boundary can yield a better result. Third, we use ordinary regression to validate the utility of a non-parametric regressor with a simple regularization criterion in estimating such a pre-touch space. In addition, we compare these models with the scenarios in which a fixed boundary distance (i.e., a spherical boundary) is adopted. We show that within the context of facial pre-touch interaction, normal distribution does not capture the variability that is exhibited by human subjects during such non-verbal interaction. We also provide evidence that such interactions can be more adequately estimated by considering the individuals’ variable behavior and preferences through such estimation strategies as ordinary regression that solely relies on the distribution of their observed behavior which may not necessarily follow a parametric distribution.
BibTeX:
@Article{Keshmiri2020c,
  author   = {Soheil Keshmiri and Masahiro Shiomi and Kodai Shatani and Takashi Minato and Hiroshi Ishiguro},
  journal  = {Applied Sciences},
  title    = {Critical Examination of the Parametric Approaches to Analysis of the Non-Verbal Human Behaviour: a Case Study in Facial Pre-Touch Interaction},
  year     = {2020},
  abstract = {A prevailing assumption in many behavioral studies is the underlying normal distribution of the data under investigation. In this regard, although it appears plausible to presume a certain degree of similarity among individuals, this presumption does not necessarily warrant such simplifying assumptions as average or normally distributed human behavioral responses. In the present study, we examine the extent of such assumptions by considering the case of human–human touch interaction in which individuals signal their face area pre-touch distance boundaries. We then use these pre-touch distances along with their respective azimuth and elevation angles around the face area and perform three types of regression-based analyses to estimate a generalized facial pre-touch distance boundary. First, we use a Gaussian processes regression to evaluate whether assumption of normal distribution in participants’ reactions warrants a reliable estimate of this boundary. Second, we apply a support vector regression (SVR) to determine whether estimating this space by minimizing the orthogonal distance between participants’ pre-touch data and its corresponding pre-touch boundary can yield a better result. Third, we use ordinary regression to validate the utility of a non-parametric regressor with a simple regularization criterion in estimating such a pre-touch space. In addition, we compare these models with the scenarios in which a fixed boundary distance (i.e., a spherical boundary) is adopted. We show that within the context of facial pre-touch interaction, normal distribution does not capture the variability that is exhibited by human subjects during such non-verbal interaction. We also provide evidence that such interactions can be more adequately estimated by considering the individuals’ variable behavior and preferences through such estimation strategies as ordinary regression that solely relies on the distribution of their observed behavior which may not necessarily follow a parametric distribution.},
  day      = {30},
  doi      = {10.3390/app10113817},
  month    = may,
  number   = {3817},
  pages    = {1-15},
  url      = {https://www.mdpi.com/2076-3417/10/11/3817},
  volume   = {10, Issue 11},
  keywords = {physical interaction; physical pre-touch distance; parametric analysis; non-parametric analysis; non-verbal behavior},
}
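The three regression families compared above can be outlined with scikit-learn. The synthetic angles-to-distance data below stand in for real pre-touch measurements, and a k-nearest-neighbors regressor is used as a simple distribution-free stand-in for the paper's ordinary (non-parametric) regression:

# Sketch comparing regressor families on synthetic pre-touch data
# (azimuth/elevation -> distance); data, kernels, and parameters are made up.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform([-90, -45], [90, 45], size=(200, 2))   # azimuth, elevation (degrees)
y = 0.25 + 0.05 * np.cos(np.radians(X[:, 0])) + rng.normal(0, 0.03, 200)  # distance (m)

models = {
    "GPR (normality assumption)": GaussianProcessRegressor(alpha=1e-3),
    "SVR (margin-based fit)": SVR(kernel="rbf", C=1.0, epsilon=0.01),
    "k-NN (distribution-free)": KNeighborsRegressor(n_neighbors=5),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {r2:.3f}")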
Liang-Yu Chen, Hidenobu Sumioka, Li-Ju Ke, Masahiro Shiomi, Liang-Kung Chen, "Effects of Teleoperated Humanoid Robot Application in Older Adults with Neurocognitive Disorders in Taiwan: A Report of Three Cases", Aging Medicine and Healthcare, Full Universe Integrated Marketing Limited, pp. 67-71, May, 2020.
Abstract: Rising prevalence of major neurocognitive disorders (NCDs) is associated with a great variety of care needs and care stress for caregivers and families. A holistic care pathway to empower person-centered care is recommended, and non-pharmacological strategies are prioritized to manage neuropsychiatric symptoms (NPS) of people with NCDs. However, limited formal services, shortage of manpower, and unpleasant experiences related to NPS of these patients often discourage caregivers and cause the care stress and psychological burnout. Telenoid, a teleoperated humanoid robot, is a new technology that aims to improve the quality of life and to reduce the severity of NPS for persons with major NCDs by facilitating meaningful connection and social engagement. Herein, we presented 3 cases with major NCDs in a day care center in Taiwan who experienced interaction with the Telenoid. Overall, neither fear nor distressed emotional responses were observed during their conversations, nor was there any worsening of delusion or hallucination after interaction with Telenoid. The severity of NCDs seemed to affect the verbal communication and the attention during conversation with Telenoid. Other factors, such as hearing impairment, insomnia, and environmental stimuli, may also hinder the efficacy of Telenoid in long-term care settings. Further studies with proper study design are needed to evaluate the effects of Telenoid application on older adults with major NCDs.
BibTeX:
@Article{Chen2020,
  author    = {Liang-Yu Chen and Hidenobu Sumioka and Li-Ju Ke and Masahiro Shiomi and Liang-Kung Chen},
  journal   = {Aging Medicine and Healthcare},
  title     = {Effects of Teleoperated Humanoid Robot Application in Older Adults with Neurocognitive Disorders in Taiwan: A Report of Three Cases},
  year      = {2020},
  abstract  = {Rising prevalence of major neurocognitive disorders (NCDs) is associated with a great variety of care needs and care stress for caregivers and families. A holistic care pathway to empower person-centered care is recommended, and non-pharmacological strategies are prioritized to manage neuropsychiatric symptoms (NPS) of people with NCDs. However, limited formal services, shortage of manpower, and unpleasant experiences related to NPS of these patients often discourage caregivers and cause the care stress and psychological burnout. Telenoid, a teleoperated humanoid robot, is a new technology that aims to improve the quality of life and to reduce the severity of NPS for persons with major NCDs by facilitating meaningful connection and social engagement. Herein, we presented 3 cases with major NCDs in a day care center in Taiwan who experienced interaction with the Telenoid. Overall, neither fear nor distressed emotional responses were observed during their conversations, nor was there any worsening of delusion or hallucination after interaction with Telenoid. The severity of NCDs seemed to affect the verbal communication and the attention during conversation with Telenoid. Other factors, such as hearing impairment, insomnia, and environmental stimuli, may also hinder the efficacy of Telenoid in long-term care settings. Further studies with proper study design are needed to evaluate the effects of Telenoid application on older adults with major NCDs.},
  day       = {27},
  doi       = {10.33879/AMH.2020.066-2001.003},
  month     = may,
  pages     = {67-71},
  url       = {https://www.agingmedhealthc.com/?p=21602},
  booktitle = {Aging Medicine and Healthcare},
  editor    = {Asian Association for Frailty and Sarcopenia and Taiwan Association for Integrated Care},
  keywords  = {Dementia, neurocognitive disorder, neuropsychiatric symptom, Telenoid, uncanny valley},
  publisher = {Full Universe Integrated Marketing Limited},
}
Junya Nakanishi, Hidenobu Sumioka, Hiroshi Ishiguro, "Mediated Hugs Modulates Impression of Hearsay Information", Advanced Robotics, vol. 34, Issue 12, pp. 781-788, May, 2020.
Abstract: Although it is perceivable that direct interpersonal touch affects recipient's impression of touch provider as well as the information relating to the provider alike, its utility in mediated interpersonal touch remains unclear to date. In this article, we report the alleviating effect of mediated interpersonal touch on social judgment. In particular, we show that mediated hug with a remote person modulates the impression of the hearsay information about an absentee. In our experiment, participants rate their impressions as well as note down their recall of information about a third person. We communicate this information through either a speaker or a huggable medium. Our results show that mediated hug reduces the negative inferences in the recalls of information about a target person. Furthermore, they suggest the potential that the mediated communication offers in moderating the spread of negative information in human community via mediated hug.
BibTeX:
@Article{Nakanishi2020,
  author   = {Junya Nakanishi and Hidenobu Sumioka and Hiroshi Ishiguro},
  journal  = {Advanced Robotics},
  title    = {Mediated Hugs Modulates Impression of Hearsay Information},
  year     = {2020},
  abstract = {Although it is perceivable that direct interpersonal touch affects recipient's impression of touch provider as well as the information relating to the provider alike, its utility in mediated interpersonal touch remains unclear to date. In this article, we report the alleviating effect of mediated interpersonal touch on social judgment. In particular, we show that mediated hug with a remote person modulates the impression of the hearsay information about an absentee. In our experiment, participants rate their impressions as well as note down their recall of information about a third person. We communicate this information through either a speaker or a huggable medium. Our results show that mediated hug reduces the negative inferences in the recalls of information about a target person. Furthermore, they suggest the potential that the mediated communication offers in moderating the spread of negative information in human community via mediated hug.},
  day      = {6},
  doi      = {10.1080/01691864.2020.1760933},
  month    = may,
  pages    = {781-788},
  url      = {https://www.tandfonline.com/doi/full/10.1080/01691864.2020.1760933},
  volume   = {34, Issue 12},
  keywords = {Interpersonal touch, mediated touch, huggable communication media, impression bias, hearsay information, stress reduction},
}
Soheil Keshmiri, Masahiro Shiomi, Hidenobu Sumioka, Takashi Minato, Hiroshi Ishiguro, "Gentle Versus Strong Touch Classification: Preliminary Results, Challenges, and Potentials", Sensors, vol. 20, Issue 11, no. 3033, pp. 1-22, May, 2020.
Abstract: Touch plays a crucial role in humans’ nonverbal social and affective communication. It then comes as no surprise to observe a considerable effort that has been placed on devising methodologies for automated touch classification. For instance, such an ability allows for the use of smart touch sensors in such real-life application domains as socially-assistive robots and embodied telecommunication. In fact, touch classification literature represents an undeniably progressive result. However, these results are limited in two important ways. First, they are mostly based on overall (i.e., average) accuracy of different classifiers. As a result, they fall short in providing an insight on performance of these approaches as per different types of touch. Second, they do not consider the same type of touch with different level of strength (e.g., gentle versus strong touch). This is certainly an important factor that deserves investigating since the intensity of a touch can utterly transform its meaning (e.g., from an affectionate gesture to a sign of punishment). The current study provides a preliminary investigation of these shortcomings by considering the accuracy of a number of classifiers for both, within- (i.e., same type of touch with differing strengths) and between-touch (i.e., different types of touch) classifications. Our results help verify the strength and shortcoming of different machine learning algorithms for touch classification. They also highlight some of the challenges whose solution concepts can pave the path for integration of touch sensors in such application domains as human–robot interaction (HRI).
BibTeX:
@Article{Keshmiri2020d,
  author   = {Soheil Keshmiri and Masahiro Shiomi and Hidenobu Sumioka and Takashi Minato and Hiroshi Ishiguro},
  journal  = {Sensors},
  title    = {Gentle Versus Strong Touch Classification: Preliminary Results, Challenges, and Potentials},
  year     = {2020},
  abstract = {Touch plays a crucial role in humans’ nonverbal social and affective communication. It then comes as no surprise to observe a considerable effort that has been placed on devising methodologies for automated touch classification. For instance, such an ability allows for the use of smart touch sensors in such real-life application domains as socially-assistive robots and embodied telecommunication. In fact, touch classification literature represents an undeniably progressive result. However, these results are limited in two important ways. First, they are mostly based on overall (i.e., average) accuracy of different classifiers. As a result, they fall short in providing an insight on performance of these approaches as per different types of touch. Second, they do not consider the same type of touch with different level of strength (e.g., gentle versus strong touch). This is certainly an important factor that deserves investigating since the intensity of a touch can utterly transform its meaning (e.g., from an affectionate gesture to a sign of punishment). The current study provides a preliminary investigation of these shortcomings by considering the accuracy of a number of classifiers for both, within- (i.e., same type of touch with differing strengths) and between-touch (i.e., different types of touch) classifications. Our results help verify the strength and shortcoming of different machine learning algorithms for touch classification. They also highlight some of the challenges whose solution concepts can pave the path for integration of touch sensors in such application domains as human–robot interaction (HRI).},
  day      = {27},
  doi      = {10.3390/s20113033},
  month    = may,
  number   = {3033},
  pages    = {1-22},
  url      = {https://www.mdpi.com/1424-8220/20/11/3033},
  volume   = {20, Issue 11},
  keywords = {physical interaction; touch classification; human–agent physical interaction},
}
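The evaluation style the paper argues for, per-touch-type scores rather than a single overall accuracy, looks like this in scikit-learn; the features and the four gentle/strong touch classes below are synthetic stand-ins:

# Sketch of per-touch-type evaluation across several classifiers
# (made-up features; class names are illustrative stand-ins).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# classes: 0 gentle-pat, 1 strong-pat, 2 gentle-press, 3 strong-press
X = rng.normal(size=(400, 12)) + np.repeat(np.arange(4), 100)[:, None] * 0.5
y = np.repeat(np.arange(4), 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for clf in (LogisticRegression(max_iter=1000), SVC(), RandomForestClassifier()):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__)
    # per-class precision/recall exposes within-touch (gentle vs. strong)
    # confusions that a single overall accuracy number hides
    print(classification_report(y_te, clf.predict(X_te),
          target_names=["gentle-pat", "strong-pat", "gentle-press", "strong-press"]))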
Soheil Keshmiri, Maryam Alimardani, Masahiro Shiomi, Hidenobu Sumioka, Hiroshi Ishiguro, Kazuo Hiraki, "Higher hypnotic suggestibility is associated with the lower EEG signal variability in theta, alpha, and beta frequency bands", PLOS ONE, vol. 15, no. 4, pp. 1-20, April, 2020.
Abstract: Variation of information in the firing rate of neural population, as reflected in different frequency bands of electroencephalographic (EEG) time series, provides direct evidence for change in neural responses of the brain to hypnotic suggestibility. However, realization of an effective biomarker for spiking behaviour of neural population proves to be an elusive subject matter with its impact evident in highly contrasting results in the literature. In this article, we took an information-theoretic stance on analysis of the EEG time series of the brain activity during hypnotic suggestions, thereby capturing the variability in pattern of brain neural activity in terms of its information content. For this purpose, we utilized differential entropy (DE, i.e., the average information content in a continuous time series) of theta, alpha, and beta frequency bands of fourteen-channel EEG time series recordings that pertain to the brain neural responses of twelve carefully selected high and low hypnotically suggestible individuals. Our results show that the higher hypnotic suggestibility is associated with a significantly lower variability in information content of theta, alpha, and beta frequencies. Moreover, they indicate that such a lower variability is accompanied by a significantly higher functional connectivity (FC, a measure of spatiotemporal synchronization) in the parietal and the parieto-occipital regions in the case of theta and alpha frequency bands and a non-significantly lower FC in the central region’s beta frequency band. Our results contribute to the field in two ways. First, they identify the applicability of DE as a unifying measure to reproduce the similar observations that are separately reported through adaptation of different hypnotic biomarkers in the literature. Second, they extend these previous findings that were based on neutral hypnosis (i.e., a hypnotic procedure that involves no specific suggestions other than those for becoming hypnotized) to the case of hypnotic suggestions, thereby identifying their presence as a potential signature of hypnotic experience.
BibTeX:
@Article{Keshmiri2020b,
  author   = {Soheil Keshmiri and Maryam Alimardani and Masahiro Shiomi and Hidenobu Sumioka and Hiroshi Ishiguro and Kazuo Hiraki},
  title    = {Higher hypnotic suggestibility is associated with the lower EEG signal variability in theta, alpha, and beta frequency bands},
  journal  = {PLOS ONE},
  year     = {2020},
  volume   = {15},
  number   = {4},
  pages    = {1-20},
  month    = apr,
  abstract = {Variation of information in the firing rate of neural population, as reflected in different frequency bands of electroencephalographic (EEG) time series, provides direct evidence for change in neural responses of the brain to hypnotic suggestibility. However, realization of an effective biomarker for spiking behaviour of neural population proves to be an elusive subject matter with its impact evident in highly contrasting results in the literature. In this article, we took an information-theoretic stance on analysis of the EEG time series of the brain activity during hypnotic suggestions, thereby capturing the variability in pattern of brain neural activity in terms of its information content. For this purpose, we utilized differential entropy (DE, i.e., the average information content in a continuous time series) of theta, alpha, and beta frequency bands of fourteen-channel EEG time series recordings that pertain to the brain neural responses of twelve carefully selected high and low hypnotically suggestible individuals. Our results show that the higher hypnotic suggestibility is associated with a significantly lower variability in information content of theta, alpha, and beta frequencies. Moreover, they indicate that such a lower variability is accompanied by a significantly higher functional connectivity (FC, a measure of spatiotemporal synchronization) in the parietal and the parieto-occipital regions in the case of theta and alpha frequency bands and a non-significantly lower FC in the central region’s beta frequency band. Our results contribute to the field in two ways. First, they identify the applicability of DE as a unifying measure to reproduce the similar observations that are separately reported through adaptation of different hypnotic biomarkers in the literature. Second, they extend these previous findings that were based on neutral hypnosis (i.e., a hypnotic procedure that involves no specific suggestions other than those for becoming hypnotized) to the case of hypnotic suggestions, thereby identifying their presence as a potential signature of hypnotic experience.},
  day      = {9},
  url      = {https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0230853},
  doi      = {10.1371/journal.pone.0230853},
}
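Differential entropy has a closed form under a Gaussian assumption, DE = 0.5 ln(2*pi*e*sigma^2), which is the form commonly used for band-filtered EEG. A short sketch; the sampling rate, band edges, and filter order are assumptions:

# Sketch of band-wise differential entropy (DE) under a Gaussian assumption.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128                                    # sampling rate (Hz), assumed
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def differential_entropy(x):
    # DE of a Gaussian signal: 0.5 * ln(2 * pi * e * variance)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

rng = np.random.default_rng(3)
eeg = rng.normal(size=14 * FS * 10).reshape(14, -1)   # 14 channels, 10 s stand-in

for band, (lo, hi) in BANDS.items():
    b, a = butter(4, [lo, hi], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, eeg, axis=1)            # isolate the band
    de = np.apply_along_axis(differential_entropy, 1, filtered)
    print(band, "mean DE:", de.mean().round(3))

Variability in DE across time windows, rather than the raw DE values, is the quantity the study relates to hypnotic suggestibility.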
Nobuhiro Jinnai, Hidenobu Sumioka, Takashi Minato, Hiroshi Ishiguro, "Multi-modal Interaction through Anthropomorphically Designed Communication Medium to Enhance the Self-Disclosures of Personal Information", Journal of Robotics and Mechatronics, vol. 32, no. 1, pp. 76-85, February, 2020.
Abstract: Although current communication media facilitate the interaction among individuals, researchers have warned that human relationships constructed by these media tend to lack the level of intimacy acquired through face-to-face communications. In this paper, we investigate how long-term use of humanlike communication media affects the development of intimate relationships between human users. We examine changes in the relationship between individuals while having conversation with each other through humanlike communication media or mobile phones for about a month. The intimacy of their relationship was evaluated with the amount of self-disclosure of personal information. The result shows that a communication medium with humanlike appearance and soft material significantly increases the total amount of self-disclosure through the experiment, compared with typical mobile phone. The amount of self-disclosure showed cyclic variation through the experiment in humanlike communication media condition. Furthermore, we discuss a possible underlying mechanism of this effect from misattribution of a feeling caused by intimate distance with the medium to a conversation partner.
BibTeX:
@Article{Jinnai2020,
  author   = {Nobuhiro Jinnai and Hidenobu Sumioka and Takashi Minato and Hiroshi Ishiguro},
  journal  = {Journal of Robotics and Mechatronics},
  title    = {Multi-modal Interaction through Anthropomorphically Designed Communication Medium to Enhance the Self-Disclosures of Personal Information},
  year     = {2020},
  abstract = {Although current communication media facilitate the interaction among individuals, researchers have warned that human relationships constructed by these media tend to lack the level of intimacy acquired through face-to-face communications. In this paper, we investigate how long-term use of humanlike communication media affects the development of intimate relationships between human users. We examine changes in the relationship between individuals while having conversation with each other through humanlike communication media or mobile phones for about a month. The intimacy of their relationship was evaluated with the amount of self-disclosure of personal information. The result shows that a communication medium with humanlike appearance and soft material significantly increases the total amount of self-disclosure through the experiment, compared with typical mobile phone. The amount of self-disclosure showed cyclic variation through the experiment in humanlike communication media condition. Furthermore, we discuss a possible underlying mechanism of this effect from misattribution of a feeling caused by intimate distance with the medium to a conversation partner.},
  day      = {20},
  doi      = {10.20965/jrm.2020.p0076},
  month    = feb,
  number   = {1},
  pages    = {76-85},
  url      = {https://www.fujipress.jp/jrm/rb_ja/},
  volume   = {32},
  keywords = {social presence, mediated social interaction, human relationship},
}
Soheil Keshmiri, Masahiro Shiomi, Hiroshi Ishiguro, "Emergence of the Affect from the Variation in the Whole-Brain Flow of Information", Brain Sciences, vol. 10, Issue 1, no. 8, pp. 1-32, January, 2020.
Abstract: Over the past few decades, the quest for discovering the brain substrates of the affect to understand the underlying neural basis of the human’s emotions has resulted in substantial and yet contrasting results. Whereas some point at distinct and independent brain systems for the Positive and Negative affects, others propose the presence of flexible brain regions. In this respect, there are two factors that are common among these previous studies. First, they all focused on the change in brain activation, thereby neglecting the findings that indicate that the stimuli with equivalent sensory and behavioral processing demands may not necessarily result in differential brain activation. Second, they did not take into consideration the brain regional interactivity and the findings that identify that the signals from individual cortical neurons are shared across multiple areas and thus concurrently contribute to multiple functional pathways. To address these limitations, we performed Granger causal analysis on the electroencephalography (EEG) recordings of the human subjects who watched movie clips that elicited Negative, Neutral, and Positive affects. This allowed us to look beyond the brain regional activation in isolation to investigate whether the brain regional interactivity can provide further insights for understanding the neural substrates of the affect. Our results indicated that the differential affect states emerged from subtle variation in information flow of the brain cortical regions that were in both hemispheres. They also showed that these regions that were rather common between affect states than distinct to a specific affect were characterized with both short- as well as long-range information flow. This provided evidence for the presence of simultaneous integration and differentiation in the brain functioning that leads to the emergence of different affects. These results are in line with the findings on the presence of intrinsic large-scale interacting brain networks that underlie the production of psychological events. These findings can help advance our understanding of the neural basis of the human’s emotions by identifying the signatures of differential affect in subtle variation that occurs in the whole-brain cortical flow of information.
BibTeX:
@Article{Keshmiri2020,
  author   = {Soheil Keshmiri and Masahiro Shiomi and Hiroshi Ishiguro},
  journal  = {Brain Sciences},
  title    = {Emergence of the Affect from the Variation in the Whole-Brain Flow of Information},
  year     = {2020},
  abstract = {Over the past few decades, the quest for discovering the brain substrates of the affect to understand the underlying neural basis of human emotions has resulted in substantial and yet contrasting results. Whereas some point at distinct and independent brain systems for the Positive and Negative affects, others propose the presence of flexible brain regions. In this respect, there are two factors that are common among these previous studies. First, they all focused on the change in brain activation, thereby neglecting the findings that indicate that the stimuli with equivalent sensory and behavioral processing demands may not necessarily result in differential brain activation. Second, they did not take into consideration the brain regional interactivity and the findings that identify that the signals from individual cortical neurons are shared across multiple areas and thus concurrently contribute to multiple functional pathways. To address these limitations, we performed Granger causal analysis on the electroencephalography (EEG) recordings of the human subjects who watched movie clips that elicited Negative, Neutral, and Positive affects. This allowed us to look beyond the brain regional activation in isolation to investigate whether the brain regional interactivity can provide further insights for understanding the neural substrates of the affect. Our results indicated that the differential affect states emerged from subtle variation in information flow of the brain cortical regions that were in both hemispheres. They also showed that these regions, which were common between affect states rather than distinct to a specific affect, were characterized by both short- and long-range information flow. This provided evidence for the presence of simultaneous integration and differentiation in the brain functioning that leads to the emergence of different affects. These results are in line with the findings on the presence of intrinsic large-scale interacting brain networks that underlie the production of psychological events. These findings can help advance our understanding of the neural basis of human emotions by identifying the signatures of differential affect in subtle variation that occurs in the whole-brain cortical flow of information.},
  day      = {1},
  doi      = {10.3390/brainsci10010008},
  month    = jan,
  number   = {8},
  pages    = {1-32},
  url      = {https://www.mdpi.com/2076-3425/10/1/8},
  volume   = {10, Issue 1},
  keywords = {Granger causality; functional connectivity; information flow; affect; brain signal variability},
}
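Note: the Brain Sciences entry above applies Granger-causal analysis to multi-channel EEG, but no code accompanies the record. As a rough, hypothetical illustration of the core test (not the authors' pipeline; the synthetic series, channel pairing, and lag choice are assumptions), a pairwise Granger test can be run with statsmodels:

  import numpy as np
  from statsmodels.tsa.stattools import grangercausalitytests

  rng = np.random.default_rng(0)
  n = 2000
  # Two synthetic "EEG channel" series; y depends on lagged x by construction.
  x = rng.standard_normal(n)
  y = np.zeros(n)
  for t in range(1, n):
      y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + 0.1 * rng.standard_normal()

  # Column order matters: the test asks whether the 2nd column Granger-causes the 1st.
  res = grangercausalitytests(np.column_stack([y, x]), maxlag=2, verbose=False)
  for lag, (tests, _) in res.items():
      f_stat, p_value, _, _ = tests["ssr_ftest"]
      print(f"lag {lag}: F = {f_stat:.1f}, p = {p_value:.3g}")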
内田貴久, 港隆史, 石黒浩, "コミュニケーションロボットは人間と同等な主観を持つべきか", 日本ロボット学会誌(RSJ), vol. 39, no. 1, pp. 34-38, January, 2020.
Abstract: In recent years, communication robots have been spreading into our daily lives. To communicate with people, it is important for robots to have humanlike capabilities, and as recent advances in speech recognition and language understanding show, the functions required for communication keep improving. How humanlike, then, must a robot's capabilities be for it to communicate with people smoothly and richly? One thing that surfaces in our everyday dialogue is the individual's subjective opinion. This paper first introduces research that philosophically investigates whether people can imagine (attribute) subjective experience in robots. We then report the authors' studies on how attributing subjective opinions to a robot affects communication with it. Finally, building on these studies, we discuss whether communication robots should have the same subjectivity as humans.
BibTeX:
@Article{内田貴久2020a,
  author   = {内田貴久 and 港隆史 and 石黒浩},
  journal  = {日本ロボット学会誌(RSJ)},
  title    = {コミュニケーションロボットは人間と同等な主観を持つべきか},
  year     = {2020},
  abstract = {近年,コミュニケーションロボットが我々の生活に浸透しつつある.人とコミュニケーションを行うために,ロボットは人間のような機能を持つことが重要である.昨今の音声認識技術や言語理解技術などの発展に見られるように,コミュニケーションに必要な機能はますます向上している.では,どこまで人間のような機能を有すれば,ロボットは人とのコミュニケーションを円滑に,そして豊かに行うことができるのであろうか.我々の日常対話で顕在化するものの一つに,個人の主観的な意見がある.本稿では,ロボットが主観的な経験を持つと想像できる(帰属する)かという問いに対して哲学的に調査・考察した研究を紹介する.次に,ロボットに対する主観的な意見を帰属することが,ロボットとのコミュニケーションにどのような影響を与えるのかについて,筆者らが行った研究を報告する.そして最後に,これらの研究をふまえ,コミュニケーションロボットは人間と同等な主観を持つべきかという問いに関して議論する.},
  day      = {15},
  doi      = {10.7210/jrsj.39.34},
  etitle   = {Should Communication Robots Have the Same Subjectivity as Humans?},
  month    = jan,
  number   = {1},
  pages    = {34-38},
  url      = {https://www.rsj.or.jp/pub/jrsj/about.html},
  volume   = {39},
  keywords = {communication robot, dialogue robot, subjectivity},
}
Xiqian Zheng, Masahiro Shiomi, Takashi Minato, Hiroshi Ishiguro, "What Kinds of Robot's Touch Will Match Expressed Emotions?", IEEE Robotics and Automation Letters (RA-L), vol. 5, Issue 1, pp. 127-134, January, 2020.
Abstract: This study investigated the effects of touch characteristics that change the strength and the naturalness of the emotions perceived by people in human-robot touch interaction with an android robot that has a feminine, human-like appearance. Past studies on human-robot touch interaction focused on understanding what kinds of human touches convey emotion to robots; the robot's touch characteristics that can affect people's perceived emotions have received less focus. In this study, we concentrated on three touch characteristics (length, type, and part) based on arousal/valence perspectives, and their effects on the perceived strength/naturalness of a commonly used emotion in human-robot interaction, i.e., happiness, and its counterpart emotion (i.e., sadness), borrowing Ekman's definitions. Our results showed that the touch length and its type are useful to change the perceived strengths and the naturalness of the expressed emotions based on the arousal/valence perspective, although the touch part did not fit such perspective assumptions. Finally, our results suggest that a brief pat and a longer contact by the fingers are better combinations to express happy and sad emotions with our robot. Since we only used a female android, we discuss future work with a male humanoid robot and/or a robot whose appearance is less humanoid.
BibTeX:
@Article{Zheng2019a,
  author   = {Xiqian Zheng and Masahiro Shiomi and Takashi Minato and Hiroshi Ishiguro},
  journal  = {IEEE Robotics and Automation Letters (RA-L)},
  title    = {What Kinds of Robot's Touch Will Match Expressed Emotions?},
  year     = {2020},
  abstract = {This study investigated the effects of touch characteristics that change the strength and the naturalness of the emotions perceived by people in human-robot touch interaction with an android robot that has a feminine, human-like appearance. Past studies on human-robot touch interaction focused on understanding what kinds of human touches convey emotion to robots; the robot's touch characteristics that can affect people's perceived emotions have received less focus. In this study, we concentrated on three touch characteristics (length, type, and part) based on arousal/valence perspectives, and their effects on the perceived strength/naturalness of a commonly used emotion in human-robot interaction, i.e., happiness, and its counterpart emotion (i.e., sadness), borrowing Ekman's definitions. Our results showed that the touch length and its type are useful to change the perceived strengths and the naturalness of the expressed emotions based on the arousal/valence perspective, although the touch part did not fit such perspective assumptions. Finally, our results suggest that a brief pat and a longer contact by the fingers are better combinations to express happy and sad emotions with our robot. Since we only used a female android, we discuss future work with a male humanoid robot and/or a robot whose appearance is less humanoid.},
  doi      = {10.1109/LRA.2019.2947010},
  month    = jan,
  pages    = {127-134},
  url      = {https://ieeexplore.ieee.org/document/8865356?source=authoralert},
  volume   = {5, Issue 1},
  comment  = {(The contents of this paper were also selected by Humanoids 2019 Program Committee for presentation at the Conference)},
}
Soheil Keshmiri, Masahiro Shiomi, Hiroshi Ishiguro, "Entropy of the Multi-Channel EEG Recordings Identifies the Distributed Signatures of Negative, Neutral and Positive Affect in Whole-Brain Variability", Entropy, vol. 21, Issue 12, no. 1228, pp. 1-25, December, 2019.
Abstract: Individuals’ ability to express their subjective experiences in terms of such attributes as pleasant/unpleasant or positive/negative feelings forms a fundamental property of their affect and emotion. However, neuroscientific findings on the underlying neural substrates of the affect appear to be inconclusive with some reporting the presence of distinct and independent brain systems and others identifying flexible and distributed brain regions. A common theme among these studies is the focus on the change in brain activation. As a result, they do not take into account the findings that indicate the brain activation and its information content does not necessarily modulate and that the stimuli with equivalent sensory and behavioural processing demands may not necessarily result in differential brain activation. In this article, we take a different stance on the analysis of the differential effect of the negative, neutral and positive affect on the brain functioning in which we look into the whole-brain variability: that is the change in the brain information processing measured in multiple distributed regions. For this purpose, we compute the entropy of the multi-channel EEG recordings of individuals who watched movie clips with differing affect. Our results suggest that the whole-brain variability significantly differentiates between the negative, neutral and positive affect. They also indicate that although some brain regions contribute more to such differences, it is the whole-brain variational pattern that results in their significantly above chance level prediction. These results imply that although the underlying brain substrates for negative, neutral and positive affect exhibit quantitatively differing degrees of variability, their differences are rather subtly encoded in the whole-brain variational patterns that are distributed across its entire activity.
BibTeX:
@Article{Keshmiri2019l,
  author   = {Soheil Keshmiri and Masahiro Shiomi and Hiroshi Ishiguro},
  journal  = {Entropy},
  title    = {Entropy of the Multi-Channel EEG Recordings Identifies the Distributed Signatures of Negative, Neutral and Positive Affect in Whole-Brain Variability},
  year     = {2019},
  abstract = {Individuals’ ability to express their subjective experiences in terms of such attributes as pleasant/unpleasant or positive/negative feelings forms a fundamental property of their affect and emotion. However, neuroscientific findings on the underlying neural substrates of the affect appear to be inconclusive with some reporting the presence of distinct and independent brain systems and others identifying flexible and distributed brain regions. A common theme among these studies is the focus on the change in brain activation. As a result, they do not take into account the findings that indicate the brain activation and its information content does not necessarily modulate and that the stimuli with equivalent sensory and behavioural processing demands may not necessarily result in differential brain activation. In this article, we take a different stance on the analysis of the differential effect of the negative, neutral and positive affect on the brain functioning in which we look into the whole-brain variability: that is the change in the brain information processing measured in multiple distributed regions. For this purpose, we compute the entropy of the multi-channel EEG recordings of individuals who watched movie clips with differing affect. Our results suggest that the whole-brain variability significantly differentiates between the negative, neutral and positive affect. They also indicate that although some brain regions contribute more to such differences, it is the whole-brain variational pattern that results in their significantly above chance level prediction. These results imply that although the underlying brain substrates for negative, neutral and positive affect exhibit quantitatively differing degrees of variability, their differences are rather subtly encoded in the whole-brain variational patterns that are distributed across its entire activity.},
  day      = {16},
  doi      = {10.3390/e21121228},
  month    = dec,
  number   = {1228},
  pages    = {1-25},
  url      = {https://www.mdpi.com/1099-4300/21/12/1228/htm},
  volume   = {21, Issue 12},
  keywords = {entropy; differential entropy; affect; brain variability},
}
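Note: the Entropy entry above computes entropy over multi-channel EEG; the exact estimator is described in the paper and is not reproduced here. A common entropy feature in EEG affect studies is the per-channel differential entropy under a Gaussian assumption, sketched below on synthetic placeholder data:

  import numpy as np

  def gaussian_differential_entropy(channel):
      # Differential entropy of a Gaussian fit to the samples, in nats:
      # h = 0.5 * ln(2 * pi * e * sigma^2).
      return 0.5 * np.log(2.0 * np.pi * np.e * np.var(channel))

  # Synthetic stand-in for a 32-channel EEG segment (channels x samples).
  eeg = np.random.default_rng(1).standard_normal((32, 5000))
  features = np.array([gaussian_differential_entropy(ch) for ch in eeg])
  print(features.shape)  # (32,) -> one entropy value per channel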
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Masahiro Shiomi, Hiroshi Ishiguro, "Information Content of Prefrontal Cortex Activity Quantifies the Difficulty of Narrated Stories", Scientific Reports, vol. 9, no. 17959, November, 2019.
Abstract: The ability to realize the individuals' impressions during the verbal communication can enable social robots to play a significant role in facilitating our social interactions in such areas as child education and elderly care. However, such impressions are highly subjective and internalized and therefore cannot be easily comprehended through behavioural observations. Although brain-machine interface suggests the utility of the brain information in human-robot interaction, previous studies did not consider its potential for estimating the internal impressions during verbal communication. In this article, we introduce a novel approach to estimation of the individuals' perceived difficulty of stories using their prefrontal cortex activity. We demonstrate the robustness of our approach by showing its comparable performance in in-person, humanoid, speaker, and video-chat settings. Our results contribute to the field of socially assistive robotics by taking a step toward enabling robots to determine their human companions' perceived difficulty of conversations to sustain their communication by adapting to individuals' pace and interest in response to conversational nuances and complexity. They also verify the use of brain information to complement the behavioural-based study of a robotic theory of mind through critical investigation of its implications in humans' neurological responses while interacting with their synthetic companions.
BibTeX:
@Article{Keshmiri2019g,
  author   = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Masahiro Shiomi and Hiroshi Ishiguro},
  journal  = {Scientific Reports},
  title    = {Information Content of Prefrontal Cortex Activity Quantifies the Difficulty of Narrated Stories},
  year     = {2019},
  abstract = {The ability to realize the individuals' impressions during the verbal communication can enable social robots to play a significant role in facilitating our social interactions in such areas as child education and elderly care. However, such impressions are highly subjective and internalized and therefore cannot be easily comprehended through behavioural observations. Although brain-machine interface suggests the utility of the brain information in human-robot interaction, previous studies did not consider its potential for estimating the internal impressions during verbal communication. In this article, we introduce a novel approach to estimation of the individuals' perceived difficulty of stories using their prefrontal cortex activity. We demonstrate the robustness of our approach by showing its comparable performance in in-person, humanoid, speaker, and video-chat settings. Our results contribute to the field of socially assistive robotics by taking a step toward enabling robots to determine their human companions' perceived difficulty of conversations to sustain their communication by adapting to individuals' pace and interest in response to conversational nuances and complexity. They also verify the use of brain information to complement the behavioural-based study of a robotic theory of mind through critical investigation of its implications in humans' neurological responses while interacting with their synthetic companions.},
  day      = {29},
  doi      = {10.1038/s41598-019-54280-1},
  month    = nov,
  number   = {17959},
  url      = {https://www.nature.com/articles/s41598-019-54280-1},
  volume   = {9},
}
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Hiroshi Ishiguro, "Older People Prefrontal Cortex Activation Estimates Their Perceived Difficulty of a Humanoid-Mediated Conversation", IEEE Robotics and Automation Letters (RA-L), vol. 4, Issue 4, pp. 4108-4115, October, 2019.
Abstract: In this article, we extend our recent results on prediction of the older people’s perceived difficulty of verbal communication during a humanoid-mediated storytelling experiment to the case of a longitudinal conversation that was conducted over a four-week period and included a battery of conversational topics. For this purpose, we used our model that estimates the older people’s perceived difficulty by mapping their prefrontal cortex (PFC) activity during the verbal communication onto fine-grained cluster spaces of a working memory (WM) task that induces loads on the human PFC through modulation of its difficulty level. This enables us to differentially quantify the observed changes in PFC activity during the conversation based on the difficulty level of the WM task. We show that such a quantification forms a reliable basis for learning the PFC activation patterns in response to conversational contents. Our results indicate the ability of our model for predicting the older people’s perceived difficulty of a wide range of humanoid-mediated tele-conversations, regardless of their type, topic, and duration.
BibTeX:
@Article{Keshmiri2019d,
  author   = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Hiroshi Ishiguro},
  journal  = {IEEE Robotics and Automation Letters (RA-L)},
  title    = {Older People Prefrontal Cortex Activation Estimates Their Perceived Difficulty of a Humanoid-Mediated Conversation},
  year     = {2019},
  abstract = {In this article, we extend our recent results on prediction of the older people’s perceived difficulty of verbal communication during a humanoid-mediated storytelling experiment to the case of a longitudinal conversation that was conducted over a four-week period and included a battery of conversational topics. For this purpose, we used our model that estimates the older people’s perceived difficulty by mapping their prefrontal cortex (PFC) activity during the verbal communication onto fine-grained cluster spaces of a working memory (WM) task that induces loads on the human PFC through modulation of its difficulty level. This enables us to differentially quantify the observed changes in PFC activity during the conversation based on the difficulty level of the WM task. We show that such a quantification forms a reliable basis for learning the PFC activation patterns in response to conversational contents. Our results indicate the ability of our model for predicting the older people’s perceived difficulty of a wide range of humanoid-mediated tele-conversations, regardless of their type, topic, and duration.},
  doi      = {10.1109/LRA.2019.2930495},
  month    = oct,
  pages    = {4108-4115},
  url      = {https://ieeexplore.ieee.org/document/8769897},
  volume   = {4, Issue 4},
  comment  = {(The contents of this paper were also selected by IROS2019 Program Committee for presentation at the Conference)},
}
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Hiroshi Ishiguro, "Decoding the Perceived Difficulty of Communicated Contents by Older People: Toward Conversational Robot-Assistive Elderly Care", IEEE Robotics and Automation Letters (RA-L), vol. 4, Issue 4, pp. 3263-3269, October, 2019.
Abstract: In this study, we propose a semi-supervised learning model for decoding of the perceived difficulty of communicated content by older people. Our model is based on mapping of the older people’s prefrontal cortex (PFC) activity during their verbal communication onto fine-grained cluster spaces of a working memory (WM) task that induces loads on the human PFC through modulation of its difficulty level. This allows for differential quantification of the observed changes in pattern of PFC activation during verbal communication with respect to the difficulty level of the WM task. We show that such a quantification establishes a reliable basis for categorization and subsequently learning of the PFC responses to more naturalistic contents such as story comprehension. Our contribution is to present evidence on the effectiveness of our method for estimation of the older people’s perceived difficulty of the communicated contents during an online storytelling scenario.
BibTeX:
@Article{Keshmiri2019c,
  author   = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Hiroshi Ishiguro},
  journal  = {IEEE Robotics and Automation Letters (RA-L)},
  title    = {Decoding the Perceived Difficulty of Communicated Contents by Older People: Toward Conversational Robot-Assistive Elderly Care},
  year     = {2019},
  abstract = {In this study, we propose a semi-supervised learning model for decoding of the perceived difficulty of communicated content by older people. Our model is based on mapping of the older people’s prefrontal cortex (PFC) activity during their verbal communication onto fine-grained cluster spaces of a working memory (WM) task that induces loads on the human PFC through modulation of its difficulty level. This allows for differential quantification of the observed changes in pattern of PFC activation during verbal communication with respect to the difficulty level of the WM task. We show that such a quantification establishes a reliable basis for categorization and subsequently learning of the PFC responses to more naturalistic contents such as story comprehension. Our contribution is to present evidence on the effectiveness of our method for estimation of the older people’s perceived difficulty of the communicated contents during an online storytelling scenario.},
  doi      = {10.1109/LRA.2019.2925732},
  month    = oct,
  pages    = {3263-3269},
  url      = {https://ieeexplore.ieee.org/abstract/document/8750900},
  volume   = {4, Issue 4},
  comment  = {(The contents of this paper were also selected by IROS2019 Program Committee for presentation at the Conference)},
}
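Note: the two RA-L entries above share one mechanism: PFC activity recorded during conversation is mapped onto a fine-grained cluster space learned from a working-memory (WM) task whose difficulty level is known. A minimal sketch of that mapping, with hypothetical feature matrices, cluster count, and difficulty levels standing in for the authors' data and model:

  import numpy as np
  from sklearn.cluster import KMeans

  rng = np.random.default_rng(2)
  # Hypothetical features: mean PFC activation per WM-task trial, recorded at
  # three known difficulty levels (0 = easy, 1 = medium, 2 = hard).
  wm_features = rng.standard_normal((300, 16))
  wm_difficulty = rng.integers(0, 3, size=300)

  # Fine-grained cluster space of the WM task.
  km = KMeans(n_clusters=12, n_init=10, random_state=0).fit(wm_features)

  # Tag each cluster with the majority difficulty level of its trials.
  cluster_difficulty = np.array([
      np.bincount(wm_difficulty[km.labels_ == c], minlength=3).argmax()
      for c in range(km.n_clusters)
  ])

  # Map PFC activity recorded during conversation onto the WM clusters to
  # obtain a proxy for the perceived difficulty of each segment.
  conversation = rng.standard_normal((20, 16))
  print(cluster_difficulty[km.predict(conversation)])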
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Hiroshi Ishiguro, "Differential Effect of the Physical Embodiment on the Prefrontal Cortex Activity as Quantified by Its Entropy", Entropy, vol. 21, Issue 9, no. 875, pp. 1-26, September, 2019.
Abstract: Computer-mediated-communication (CMC) research suggests that unembodied media can surpass in-person communication due to their utility to bypass the nonverbal components of verbal communication such as physical presence and facial expressions. However, recent results on communicative humanoids suggest the importance of the physical embodiment of conversational partners. These contradictory findings are strengthened by the fact that almost all of these results are based on the subjective assessments of the behavioural impacts of these systems. To investigate these opposing views of the potential role of the embodiment during communication, we compare the effect of a physically embodied medium that is remotely controlled by a human operator with such unembodied media as telephones and video-chat systems on the frontal brain activity of human subjects, given the pivotal role of this region in social cognition and verbal comprehension. Our results provide evidence that communicating through a physically embodied medium affects the frontal brain activity of humans whose patterns potentially resemble those of in-person communication. These findings argue for the significance of embodiment in naturalistic scenarios of social interaction, such as storytelling and verbal comprehension, and the potential application of brain information as a promising sensory gateway in the characterization of behavioural responses in human-robot interaction.
BibTeX:
@Article{Keshmiri2019i,
  author   = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Hiroshi Ishiguro},
  title    = {Differential Effect of the Physical Embodiment on the Prefrontal Cortex Activity as Quantified by Its Entropy},
  journal  = {Entropy},
  year     = {2019},
  volume   = {21, Issue 9},
  number   = {875},
  pages    = {1-26},
  month    = sep,
  abstract = {Computer-mediated-communication (CMC) research suggests that unembodied media can surpass in-person communication due to their utility to bypass the nonverbal components of verbal communication such as physical presence and facial expressions. However, recent results on communicative humanoids suggest the importance of the physical embodiment of conversational partners. These contradictory findings are strengthened by the fact that almost all of these results are based on the subjective assessments of the behavioural impacts of these systems. To investigate these opposing views of the potential role of the embodiment during communication, we compare the effect of a physically embodied medium that is remotely controlled by a human operator with such unembodied media as telephones and video-chat systems on the frontal brain activity of human subjects, given the pivotal role of this region in social cognition and verbal comprehension. Our results provide evidence that communicating through a physically embodied medium affects the frontal brain activity of humans whose patterns potentially resemble those of in-person communication. These findings argue for the significance of embodiment in naturalistic scenarios of social interaction, such as storytelling and verbal comprehension, and the potential application of brain information as a promising sensory gateway in the characterization of behavioural responses in human-robot interaction.},
  day      = {8},
  url      = {https://www.mdpi.com/1099-4300/21/9/875},
  doi      = {10.3390/e21090875},
  keywords = {differential entropy; embodied media; tele-communication; humanoid; prefrontal cortex},
}
Soheil Keshmiri, Masahiro Shiomi, Kodai Shatani, Takashi Minato, Hiroshi Ishiguro, "Facial Pre-Touch Space Differentiates the Level of Openness Among Individuals", Scientific Reports, vol. 9, no. 11924, August, 2019.
Abstract: Social and cognitive psychology provide a rich map of our personality landscape. What appears to be unexplored is the correspondence between these findings and our behavioural responses during day-to-day life interaction. In this article, we utilize cluster analysis to show that the individuals’ facial pre-touch space can be divided into three well-defined subspaces and that within the first two immediate clusters around the face area such distance information significantly correlates with their openness in the five-factor model (FFM). In these two clusters, we also identify that the individuals’ facial pre-touch space can predict their level of openness, further categorized into six distinct levels, with well above-chance accuracy. Our results suggest that such personality factors as openness are not only reflected in individuals’ behavioural responses but also that these responses allow for a fine-grained categorization of individuals’ personality.
BibTeX:
@Article{Keshmiri2019h,
  author   = {Soheil Keshmiri and Masahiro Shiomi and Kodai Shatani and Takashi Minato and Hiroshi Ishiguro},
  title    = {Facial Pre-Touch Space Differentiates the Level of Openness Among Individuals},
  journal  = {Scientific Reports},
  year     = {2019},
  volume   = {9},
  number   = {11924},
  month    = aug,
  abstract = {Social and cognitive psychology provide a rich map of our personality landscape. What appears to be unexplored is the correspondence between these findings and our behavioural responses during day-to-day life interaction. In this article, we utilize cluster analysis to show that the individuals’ facial pre-touch space can be divided into three well-defined subspaces and that within the first two immediate clusters around the face area such distance information significantly correlates with their openness in the five-factor model (FFM). In these two clusters, we also identify that the individuals’ facial pre-touch space can predict their level of openness, further categorized into six distinct levels, with well above-chance accuracy. Our results suggest that such personality factors as openness are not only reflected in individuals’ behavioural responses but also that these responses allow for a fine-grained categorization of individuals’ personality.},
  day      = {15},
  url      = {https://www.nature.com/articles/s41598-019-48481-x},
  doi      = {10.1038/s41598-019-48481-x},
}
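Note: the Scientific Reports entry above rests on a simple analysis pattern: cluster the facial pre-touch distances into subspaces, then correlate distance with openness within each cluster. A toy sketch with synthetic distances and openness scores (the cluster count of three follows the paper; everything else is a placeholder):

  import numpy as np
  from scipy import stats
  from sklearn.cluster import KMeans

  rng = np.random.default_rng(3)
  # Hypothetical data: comfortable pre-touch distance (cm) per approach trial,
  # plus each participant's openness score from a five-factor inventory.
  distances = rng.gamma(shape=4.0, scale=10.0, size=(60, 1))
  openness = rng.uniform(1, 5, size=60)

  # Partition the pre-touch space into three subspaces.
  km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(distances)

  # Correlate distance with openness within each cluster.
  for c in range(3):
      mask = km.labels_ == c
      if mask.sum() > 2:
          r, p = stats.pearsonr(distances[mask, 0], openness[mask])
          print(f"cluster {c}: r = {r:.2f}, p = {p:.3f}")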
Hidenobu Sumioka, Soheil Keshmiri, Hiroshi Ishiguro, "Information-theoretic investigation of impact of huggable communication medium on prefrontal brain activation", Advanced Robotics, vol. 33, Issue 19, pp. 1019-1029, August, 2019.
Abstract: This paper examines the effect of mediated hugs that are achieved with a huggable communication medium on the brain activities of users during conversations. We measured their brain activities with functional near-infrared spectroscopy (NIRS) and evaluated them with two information theoretic measures: permutation entropy, an indicator of relaxation, and multiscale entropy, which captures complexity in brain activation at multiple time scales. We first verify the influence of lip movements on brain activities during conversation and then compare brain activities during tele-conversation through a huggable communication medium with a mobile phone. Our analysis of NIRS signals shows that mediated hugs decrease permutation entropy and increase multiscale entropy. These results suggest that touch interaction through a mediated hug induces a relaxed state in our brain but increases complex patterns of brain activation.
BibTeX:
@Article{Sumioka2019h,
  author   = {Hidenobu Sumioka and Soheil Keshmiri and Hiroshi Ishiguro},
  journal  = {Advanced Robotics},
  title    = {Information-theoretic investigation of impact of huggable communication medium on prefrontal brain activation},
  year     = {2019},
  abstract = {This paper examines the effect of mediated hugs that are achieved with a huggable communication medium on the brain activities of users during conversations. We measured their brain activities with functional near-infrared spectroscopy (NIRS) and evaluated them with two information theoretic measures: permutation entropy, an indicator of relaxation, and multiscale entropy, which captures complexity in brain activation at multiple time scales. We first verify the influence of lip movements on brain activities during conversation and then compare brain activities during tele-conversation through a huggable communication medium with a mobile phone. Our analysis of NIRS signals shows that mediated hugs decrease permutation entropy and increase multiscale entropy. These results suggest that touch interaction through a mediated hug induces a relaxed state in our brain but increases complex patterns of brain activation.},
  day      = {12},
  doi      = {10.1080/01691864.2019.1652114},
  month    = aug,
  pages    = {1019-1029},
  url      = {https://www.tandfonline.com/doi/abs/10.1080/01691864.2019.1652114},
  volume   = {33, Issue 19},
  keywords = {Mediated hug, huggable communication, telecommunication, information theory, permutation entropy, multiscale entropy analysis},
}
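Note: permutation entropy, used in the entry above as an indicator of relaxation, has a compact definition: the Shannon entropy of the ordinal patterns in a time series, normalized by log(order!). A minimal, unoptimized implementation on a synthetic test signal:

  import numpy as np
  from math import factorial

  def permutation_entropy(x, order=3, delay=1):
      # Count ordinal patterns of length `order` and return their normalized
      # Shannon entropy (0 = fully regular, 1 = maximally irregular).
      n = len(x) - (order - 1) * delay
      patterns = {}
      for i in range(n):
          key = tuple(np.argsort(x[i : i + order * delay : delay]))
          patterns[key] = patterns.get(key, 0) + 1
      p = np.array(list(patterns.values()), dtype=float) / n
      return -np.sum(p * np.log(p)) / np.log(factorial(order))

  rng = np.random.default_rng(4)
  signal = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * rng.standard_normal(1000)
  print(permutation_entropy(signal))  # near 0 for regular, near 1 for noisy signals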
Malcolm Doering, Phoebe Liu, Dylan F. Glas, Takayuki Kanda, Dana Kulić, Hiroshi Ishiguro, "Curiosity did not kill the robot: A curiosity-based learning system for a shopkeeper robot", ACM Transactions on Human-Robot Interaction (THRI), vol. 8, Issue 3, no. 15, pp. 1-24, July, 2019.
Abstract: Learning from human interaction data is a promising approach for developing robot interaction logic, but behaviors learned only from offline data simply represent the most frequent interaction patterns in the training data, without any adaptation for individual differences. We developed a robot that incorporates both data-driven and interactive learning. Our robot first learns high-level dialog and spatial behavior patterns from offline examples of human-human interaction. Then, during live interactions, it chooses among appropriate actions according to its curiosity about the customer's expected behavior, continually updating its predictive model to learn and adapt to each individual. In a user study, we found that participants thought the curious robot was significantly more humanlike with respect to repetitiveness and diversity of behavior, more interesting, and better overall in comparison to a non-curious robot.
BibTeX:
@Article{Doering2019,
  author   = {Malcolm Doering and Phoebe Liu and Dylan F. Glas and Takayuki Kanda and Dana Kulić and Hiroshi Ishiguro},
  journal  = {ACM Transactions on Human-Robot Interaction (THRI)},
  title    = {Curiosity did not kill the robot: A curiosity-based learning system for a shopkeeper robot},
  year     = {2019},
  abstract = {Learning from human interaction data is a promising approach for developing robot interaction logic, but behaviors learned only from offline data simply represent the most frequent interaction patterns in the training data, without any adaptation for individual differences. We developed a robot that incorporates both data-driven and interactive learning. Our robot first learns high-level dialog and spatial behavior patterns from offline examples of human-human interaction. Then, during live interactions, it chooses among appropriate actions according to its curiosity about the customer's expected behavior, continually updating its predictive model to learn and adapt to each individual. In a user study, we found that participants thought the curious robot was significantly more humanlike with respect to repetitiveness and diversity of behavior, more interesting, and better overall in comparison to a non-curious robot.},
  day      = {23},
  doi      = {10.1145/3326462},
  month    = jul,
  number   = {15},
  pages    = {1-24},
  url      = {https://dl.acm.org/citation.cfm?id=3326462},
  volume   = {8, Issue 3},
}
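Note: the curiosity mechanism above chooses actions according to how uncertain the learned model is about the customer's expected behavior. One simple reading of that idea, with invented action names and outcome distributions, is to pick the action whose predicted outcome distribution has the highest entropy:

  import numpy as np

  def entropy(p):
      p = np.asarray(p, dtype=float)
      p = p[p > 0]
      return -np.sum(p * np.log(p))

  # Hypothetical: for each candidate robot action, the predictive model gives
  # a distribution over the customer's possible next behaviors.
  predicted = {
      "greet":       [0.90, 0.05, 0.05],  # outcome already well predicted
      "recommend_A": [0.40, 0.30, 0.30],  # uncertain -> high curiosity value
      "ask_need":    [0.60, 0.30, 0.10],
  }

  # Curiosity-driven choice: prefer the action whose outcome the model is
  # least certain about, so each interaction is also a learning opportunity.
  print(max(predicted, key=lambda a: entropy(predicted[a])))  # recommend_A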
Chaoran Liu, Carlos Ishi, Hiroshi Ishiguro, "Probabilistic nod generation model based on speech and estimated utterance categories", Advanced Robotics, vol. 33, Issue 15-16, pp. 731-741, May, 2019.
Abstract: We proposed and evaluated a probabilistic model that generates nod motions based on utterance categories estimated from the speech input. The model comprises two main blocks. In the first block, dialogue act-related categories are estimated from the input speech. Considering the correlations between dialogue acts and head motions, the utterances are classified into three categories having distinct nod distributions. Linguistic information extracted from the input speech is fed to a cluster of classifiers which are combined to estimate the utterance categories. In the second block, nod motion parameters are generated based on the categories estimated by the classifiers. The nod motion parameters are represented as probability distribution functions (PDFs) inferred from human motion data. By using speech energy features, the parameters are sampled from the PDFs belonging to the estimated categories. The effectiveness of the proposed model was evaluated using an android robot, through subjective experiments. Experiment results indicated that the motions generated by our proposed approach are considered more natural than those of a previous model using fixed nod shapes and hand-labeled utterance categories.
BibTeX:
@Article{Liu2019a,
  author   = {Chaoran Liu and Carlos Ishi and Hiroshi Ishiguro},
  title    = {Probabilistic nod generation model based on speech and estimated utterance categories},
  journal  = {Advanced Robotics},
  year     = {2019},
  volume   = {33, Issue 15-16},
  pages    = {731-741},
  month    = may,
  issn     = {0169-1864},
  abstract = {We proposed and evaluated a probabilistic model that generates nod motions based on utterance categories estimated from the speech input. The model comprises two main blocks. In the first block, dialogue act-related categories are estimated from the input speech. Considering the correlations between dialogue acts and head motions, the utterances are classified into three categories having distinct nod distributions. Linguistic information extracted from the input speech is fed to a cluster of classifiers which are combined to estimate the utterance categories. In the second block, nod motion parameters are generated based on the categories estimated by the classifiers. The nod motion parameters are represented as probability distribution functions (PDFs) inferred from human motion data. By using speech energy features, the parameters are sampled from the PDFs belonging to the estimated categories. The effectiveness of the proposed model was evaluated using an android robot, through subjective experiments. Experiment results indicated that the motions generated by our proposed approach are considered more natural than those of a previous model using fixed nod shapes and hand-labeled utterance categories.},
  day      = {4},
  url      = {https://www.tandfonline.com/doi/full/10.1080/01691864.2019.1610063},
  doi      = {10.1080/01691864.2019.1610063},
  keywords = {Nod, motion generation, SVM, humanoid robot},
}
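Note: the generation block described above samples nod parameters from per-category probability distributions and couples them to speech energy. A toy sketch of that second block; the category names, Gaussian parameters, and energy scaling are illustrative assumptions, not the PDFs fitted in the paper:

  import numpy as np

  rng = np.random.default_rng(5)

  # Hypothetical per-category nod-parameter distributions as (mean, std):
  # depth in degrees, duration in seconds.
  nod_pdfs = {
      "statement":   {"depth": (8.0, 2.0),  "duration": (0.40, 0.08)},
      "question":    {"depth": (4.0, 1.5),  "duration": (0.30, 0.06)},
      "backchannel": {"depth": (12.0, 3.0), "duration": (0.50, 0.10)},
  }

  def generate_nod(category, speech_energy):
      # Sample from the PDFs of the estimated utterance category; speech
      # energy scales nod depth as a stand-in for prosodic coupling.
      mu_d, sd_d = nod_pdfs[category]["depth"]
      mu_t, sd_t = nod_pdfs[category]["duration"]
      depth = max(0.0, rng.normal(mu_d, sd_d)) * (0.5 + speech_energy)
      duration = max(0.1, rng.normal(mu_t, sd_t))
      return depth, duration

  print(generate_nod("backchannel", speech_energy=0.7))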
Chaoran Liu, Carlos Ishi, Hiroshi Ishiguro, "Auditory Scene Reproduction for Tele-operated Robot Systems", Advanced Robotics, vol. 33, Issue 7-8, pp. 415-423, April, 2019.
Abstract: In a tele-operated robot environment, reproducing auditory scenes and conveying 3D spatial information of sound sources are essential to give operators a more realistic sense of presence. In this paper, we propose a tele-presence robot system that enables reproduction and manipulation of auditory scenes. The system is driven by 3D information about where targeted human voices are speaking, matched with the operator's head orientation. We employed multiple microphone arrays and human tracking technologies to localize and separate voices around a robot. On the operator side, separated sound sources are rendered using head-related transfer functions (HRTF) according to the sound sources' spatial positions and the operator's head orientation, which is tracked in real time. Two-party and three-party interaction experiments indicated that the proposed system yields significantly higher accuracy in perceiving the direction of sounds and gains higher subjective scores in sense of presence and listenability, compared to a baseline system which uses stereo binaural sounds obtained by two microphones located at the humanoid robot's ears.
BibTeX:
@Article{Liu2019,
  author   = {Chaoran Liu and Carlos Ishi and Hiroshi Ishiguro},
  title    = {Auditory Scene Reproduction for Tele-operated Robot Systems},
  journal  = {Advanced Robotics},
  year     = {2019},
  volume   = {33, Issue 7-8},
  pages    = {415-423},
  month    = apr,
  issn     = {0169-1864},
  abstract = {In a tele-operated robot environment, reproducing auditory scenes and conveying 3D spatial information of sound sources are essential to give operators a more realistic sense of presence. In this paper, we propose a tele-presence robot system that enables reproduction and manipulation of auditory scenes. The system is driven by 3D information about where targeted human voices are speaking, matched with the operator's head orientation. We employed multiple microphone arrays and human tracking technologies to localize and separate voices around a robot. On the operator side, separated sound sources are rendered using head-related transfer functions (HRTF) according to the sound sources' spatial positions and the operator's head orientation, which is tracked in real time. Two-party and three-party interaction experiments indicated that the proposed system yields significantly higher accuracy in perceiving the direction of sounds and gains higher subjective scores in sense of presence and listenability, compared to a baseline system which uses stereo binaural sounds obtained by two microphones located at the humanoid robot's ears.},
  day      = {2},
  url      = {https://www.tandfonline.com/doi/full/10.1080/01691864.2019.1599729},
  doi      = {10.1080/01691864.2019.1599729},
  keywords = {Human–robot interaction, HRTF, sound source localization, beamforming},
}
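Note: the operator-side rendering step above amounts to convolving each separated voice with the head-related impulse responses (HRIRs) selected for its direction relative to the tracked head orientation. A minimal sketch; the random placeholder HRIRs stand in for a real HRTF database lookup by azimuth:

  import numpy as np

  def render_binaural(source, hrir_left, hrir_right):
      # Convolve a separated monaural source with direction-dependent HRIRs
      # to produce a binaural (2-channel) signal for the operator's headset.
      return np.column_stack([
          np.convolve(source, hrir_left),
          np.convolve(source, hrir_right),
      ])

  rng = np.random.default_rng(6)
  voice = rng.standard_normal(16000)        # 1 s of separated speech at 16 kHz
  hrir_l = rng.standard_normal(128) * 0.01  # placeholders; a real system would
  hrir_r = rng.standard_normal(128) * 0.01  # look HRIRs up per source azimuth
  print(render_binaural(voice, hrir_l, hrir_r).shape)  # (16127, 2)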
Takahisa Uchida, Takashi Minato, Tora Koyama, Hiroshi Ishiguro, "Who Is Responsible for a Dialogue Breakdown? An Error Recovery Strategy That Promotes Cooperative Intentions From Humans by Mutual Attribution of Responsibility in Human-Robot Dialogues", Frontiers in Robotics and AI, vol. 6, Article 29, pp. 1-11, April, 2019.
Abstract: We propose a strategy with which conversational android robots can handle dialogue breakdowns. For smooth human-robot conversations, we must not only improve a robot's dialogue capability but also elicit cooperative intentions from users for avoiding and recovering from dialogue breakdowns. A cooperative intention can be encouraged if users recognize their own responsibility for breakdowns. If the robot always blames users, however, they will quickly become less cooperative and lose their motivation to continue a discussion. This paper hypothesizes that for smooth dialogues, the robot and the users must share the responsibility based on psychological reciprocity. In other words, the robot should alternately attribute the responsibility to itself and to the users. We proposed a dialogue strategy for recovering from dialogue breakdowns based on the hypothesis and experimentally verified it with an android. The experimental result shows that the proposed method made the participants aware of their share of the responsibility of the dialogue breakdowns without reducing their motivation, even though the number of dialogue breakdowns was not statistically reduced compared with a control condition. This suggests that the proposed method effectively elicited cooperative intentions from users during dialogues.
BibTeX:
@Article{Uchida2019a,
  author   = {Takahisa Uchida and Takashi Minato and Tora Koyama and Hiroshi Ishiguro},
  title    = {Who Is Responsible for a Dialogue Breakdown? An Error Recovery Strategy That Promotes Cooperative Intentions From Humans by Mutual Attribution of Responsibility in Human-Robot Dialogues},
  journal  = {Frontiers in Robotics and AI},
  year     = {2019},
  volume   = {6, Article 29},
  pages    = {1-11},
  month    = apr,
  abstract = {We propose a strategy with which conversational android robots can handle dialogue breakdowns. For smooth human-robot conversations, we must not only improve a robot's dialogue capability but also elicit cooperative intentions from users for avoiding and recovering from dialogue breakdowns. A cooperative intention can be encouraged if users recognize their own responsibility for breakdowns. If the robot always blames users, however, they will quickly become less cooperative and lose their motivation to continue a discussion. This paper hypothesizes that for smooth dialogues, the robot and the users must share the responsibility based on psychological reciprocity. In other words, the robot should alternately attribute the responsibility to itself and to the users. We proposed a dialogue strategy for recovering from dialogue breakdowns based on the hypothesis and experimentally verified it with an android. The experimental result shows that the proposed method made the participants aware of their share of the responsibility of the dialogue breakdowns without reducing their motivation, even though the number of dialogue breakdowns was not statistically reduced compared with a control condition. This suggests that the proposed method effectively elicited cooperative intentions from users during dialogues.},
  day      = {24},
  url      = {https://www.frontiersin.org/articles/10.3389/frobt.2019.00029/full},
  doi      = {10.3389/frobt.2019.00029},
}
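Note: behaviorally, the proposed strategy above is simple: after each breakdown, alternate whether the robot attributes the failure to itself or to the user, so that responsibility stays shared. A toy sketch with invented recovery utterances:

  from itertools import cycle

  # Alternate the attribution target after each detected dialogue breakdown.
  attribution = cycle(["self", "user"])

  def recovery_utterance():
      if next(attribution) == "self":
          return "Sorry, I didn't phrase that well. Let me say it differently."
      return "Sorry, I couldn't follow that. Could you rephrase it?"

  for _ in range(4):
      print(recovery_utterance())  # blames itself, then the user, alternating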
Soheil Keshmiri, Hidenobu Sumioka, Masataka Okubo, Hiroshi Ishiguro, "An Information-Theoretic Approach to Quantitative Analysis of the Correspondence Between Skin Blood Flow and Functional Near-Infrared Spectroscopy Measurement in Prefrontal Cortex Activity", Frontiers in Neuroscience, vol. 13, February, 2019.
Abstract: The effect of skin blood flow (SBF) on functional near-infrared spectroscopy (fNIRS) measurement of cortical activity has proven to be an elusive subject, with divided stances in the neuroscientific literature on its extent. Whereas some report a non-significant influence on fNIRS time series of cortical activity, others consider its impact misleading, even detrimental, in analysis of the brain activity as measured by fNIRS. This situation is further escalated by the fact that almost all analytical studies are based on comparison with functional Magnetic Resonance Imaging (fMRI). In this article, we pinpoint the lack of perspective in previous studies on preservation of information content of resulting fNIRS time series once the SBF is attenuated. In doing so, we propose information-theoretic criteria to quantify the necessary and sufficient conditions for SBF attenuation such that the information content of frontal brain activity in resulting fNIRS time series is preserved. We verify these criteria through evaluation of their utility in comparative analysis of principal component (PCA) and independent component (ICA) SBF attenuation algorithms. Our contributions are 2-fold. First, we show that mere reduction of SBF influence on fNIRS time series of frontal activity is insufficient to warrant preservation of cortical activity information. Second, we empirically justify a higher fidelity of PCA-based algorithm in preservation of the frontal activity's information content in comparison with ICA-based approach. Our results suggest that combination of the first two principal components of PCA-based algorithm results in most efficient SBF attenuation while preserving maximum frontal activity's information. These results contribute to the field by presenting a systematic approach to quantification of the SBF as an interfering process during fNIRS measurement, thereby drawing an informed conclusion on this debate. Furthermore, they provide evidence for a reliable choice among existing SBF attenuation algorithms and their inconclusive number of components, thereby ensuring minimum loss of cortical information during the SBF attenuation process.
BibTeX:
@Article{Keshmirie,
  author   = {Soheil Keshmiri and Hidenobu Sumioka and Masataka Okubo and Hiroshi Ishiguro},
  title    = {An Information-Theoretic Approach to Quantitative Analysis of the Correspondence Between Skin Blood Flow and Functional Near-Infrared Spectroscopy Measurement in Prefrontal Cortex Activity},
  journal  = {Frontiers in Neuroscience},
  year     = {2019},
  volume   = {13},
  month    = feb,
  abstract = {The effect of skin blood flow (SBF) on functional near-infrared spectroscopy (fNIRS) measurement of cortical activity has proven to be an elusive subject, with divided stances in the neuroscientific literature on its extent. Whereas some report a non-significant influence on fNIRS time series of cortical activity, others consider its impact misleading, even detrimental, in analysis of the brain activity as measured by fNIRS. This situation is further escalated by the fact that almost all analytical studies are based on comparison with functional Magnetic Resonance Imaging (fMRI). In this article, we pinpoint the lack of perspective in previous studies on preservation of information content of resulting fNIRS time series once the SBF is attenuated. In doing so, we propose information-theoretic criteria to quantify the necessary and sufficient conditions for SBF attenuation such that the information content of frontal brain activity in resulting fNIRS time series is preserved. We verify these criteria through evaluation of their utility in comparative analysis of principal component (PCA) and independent component (ICA) SBF attenuation algorithms. Our contributions are 2-fold. First, we show that mere reduction of SBF influence on fNIRS time series of frontal activity is insufficient to warrant preservation of cortical activity information. Second, we empirically justify a higher fidelity of PCA-based algorithm in preservation of the frontal activity's information content in comparison with ICA-based approach. Our results suggest that combination of the first two principal components of PCA-based algorithm results in most efficient SBF attenuation while preserving maximum frontal activity's information. These results contribute to the field by presenting a systematic approach to quantification of the SBF as an interfering process during fNIRS measurement, thereby drawing an informed conclusion on this debate. Furthermore, they provide evidence for a reliable choice among existing SBF attenuation algorithms and their inconclusive number of components, thereby ensuring minimum loss of cortical information during the SBF attenuation process.},
  day      = {15},
  url      = {https://www.frontiersin.org/articles/10.3389/fnins.2019.00079/full},
  doi      = {10.3389/fnins.2019.00079},
}
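Note: the PCA-based attenuation favored above removes the leading principal components, on the assumption that skin blood flow is a global signal that dominates them. A minimal sketch with synthetic fNIRS data and the paper's suggested removal of the first two components:

  import numpy as np
  from sklearn.decomposition import PCA

  rng = np.random.default_rng(7)
  # Hypothetical fNIRS recording (samples x channels) with a shared,
  # SBF-like drift added to every channel.
  fnirs = rng.standard_normal((3000, 22))
  fnirs += np.outer(np.sin(np.linspace(0, 30, 3000)), np.ones(22))

  pca = PCA().fit(fnirs)
  scores = pca.transform(fnirs)

  # Zero the first two components, then reconstruct the channels.
  scores[:, :2] = 0.0
  cleaned = pca.inverse_transform(scores)
  print(cleaned.shape)  # (3000, 22), with the global component attenuated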
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Hiroshi Ishiguro, "Multiscale Entropy Quantifies the Differential Effect of the Medium Embodiment on Older Adults Prefrontal Cortex during the Story Comprehension: A Comparative Analysis", Entropy, vol. 21, Issue 2, pp. 1-16, February, 2019.
Abstract: Today's communication media impact and transform virtually every aspect of our daily communication, and yet the extent of their embodiment in our brain is unexplored. Investigation of this topic becomes more crucial, considering the rapid advances in such fields as socially assistive robotics that envision the intelligent and interactive media that provide assistance through social means. In this article, we utilize the multiscale entropy (MSE) to investigate the effect of physical embodiment on older people’s prefrontal cortex (PFC) activity while listening to the stories. We provide evidence that physical embodiment induces a significant increase in MSE of the older people’s PFC activity and that such a shift in dynamics of their PFC activation significantly reflects their perceived feeling of fatigue. Our results benefit the researchers in age-related cognitive function and rehabilitation that seek the use of these media in robot-assistive cognitive training of the older people. In addition, they offer complementary information to the field of human-robot interaction via providing evidence that the use of MSE can enable the interactive learning algorithms to utilize the brain’s activation patterns as feedback for improving their level of interactivity, thereby forming a stepping stone toward a rich and usable human mental model.
BibTeX:
@Article{Keshmiri2019,
  author   = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Hiroshi Ishiguro},
  title    = {Multiscale Entropy Quantifies the Differential Effect of the Medium Embodiment on Older Adults Prefrontal Cortex during the Story Comprehension: A Comparative Analysis},
  journal  = {Entropy},
  year     = {2019},
  volume   = {21, Issue 2},
  pages    = {1-16},
  month    = feb,
  abstract = {Today's communication media impact and transform virtually every aspect of our daily communication, and yet the extent of their embodiment in our brain is unexplored. Investigation of this topic becomes more crucial, considering the rapid advances in such fields as socially assistive robotics that envision the intelligent and interactive media that provide assistance through social means. In this article, we utilize the multiscale entropy (MSE) to investigate the effect of physical embodiment on older people’s prefrontal cortex (PFC) activity while listening to the stories. We provide evidence that physical embodiment induces a significant increase in MSE of the older people’s PFC activity and that such a shift in dynamics of their PFC activation significantly reflects their perceived feeling of fatigue. Our results benefit the researchers in age-related cognitive function and rehabilitation that seek the use of these media in robot-assistive cognitive training of the older people. In addition, they offer complementary information to the field of human-robot interaction via providing evidence that the use of MSE can enable the interactive learning algorithms to utilize the brain’s activation patterns as feedback for improving their level of interactivity, thereby forming a stepping stone toward a rich and usable human mental model.},
  day      = {19},
  url      = {https://www.mdpi.com/1099-4300/21/2/199},
  doi      = {10.3390/e21020199},
  keywords = {multiscale entropy; embodied media; tele-communication; humanoid; prefrontal cortex},
}
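Note: multiscale entropy (MSE), used throughout the entries above, coarse-grains a signal by non-overlapping averaging and computes sample entropy at each scale. A compact, unoptimized implementation; the tolerance (0.2 x std) and scale range are conventional defaults, not values from the paper:

  import numpy as np

  def sample_entropy(x, m=2, r_factor=0.2):
      # SampEn: negative log of the conditional probability that sequences
      # matching for m points (within tolerance r) also match for m + 1.
      x = np.asarray(x, dtype=float)
      r = r_factor * np.std(x)
      def match_pairs(length):
          templates = np.array([x[i:i + length] for i in range(len(x) - length)])
          d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
          return (np.sum(d <= r) - len(templates)) / 2  # exclude self-matches
      b, a = match_pairs(m), match_pairs(m + 1)
      return -np.log(a / b) if a > 0 and b > 0 else np.inf

  def multiscale_entropy(x, scales=range(1, 6)):
      out = []
      for s in scales:
          n = len(x) // s
          coarse = np.asarray(x[: n * s]).reshape(n, s).mean(axis=1)
          out.append(sample_entropy(coarse))
      return out

  sig = np.random.default_rng(8).standard_normal(1000)
  print(multiscale_entropy(sig))  # one SampEn value per time scale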
Malcolm Doering, Dylan F. Glas, Hiroshi Ishiguro, "Modeling Interaction Structure for Robot Imitation Learning of Human Social Behavior", IEEE Transactions on Human-Machine Systems, February, 2019.
Abstract: We present an unsupervised, learning-by-imitation technique for learning social robot interaction behaviors from noisy, human-human interaction data full of natural linguistic variation. In particular, our proposed system learns the space of common actions for a given domain, important contextual features relating to the interaction structure, and a set of human-readable rules for generating appropriate behaviors. We demonstrated our technique on a travel agent scenario where the robot learns to play the role of the travel agent while communicating with human customers. In this domain, we demonstrate how modeling the interaction structure can be used to resolve the often ambiguous customer speech. We introduce a novel clustering algorithm to automatically discover the interaction structure based on action co-occurrence frequency, revealing the topics of conversation. We then train a topic state estimator to determine the topic of conversation at runtime so the robot may present information pertaining to the correct topic. In a human-robot evaluation, our proposed system significantly outperformed a nearest-neighbor baseline technique in both subjective and objective evaluations. In particular, participants found that the proposed system was easier to understand, provided more information, and required less effort to interact with. Furthermore, we found that incorporation of the topic state into prediction significantly improved performance when responding to ambiguous questions.
BibTeX:
@Article{Doering2019a,
  author   = {Malcolm Doering and Dylan F. Glas and Hiroshi Ishiguro},
  journal  = {IEEE Transactions on Human-Machine Systems},
  title    = {Modeling Interaction Structure for Robot Imitation Learning of Human Social Behavior},
  year     = {2019},
  abstract = {We present an unsupervised, learning-by-imitation technique for learning social robot interaction behaviors from noisy, human-human interaction data full of natural linguistic variation. In particular, our proposed system learns the space of common actions for a given domain, important contextual features relating to the interaction structure, and a set of human-readable rules for generating appropriate behaviors. We demonstrated our technique on a travel agent scenario where the robot learns to play the role of the travel agent while communicating with human customers. In this domain, we demonstrate how modeling the interaction structure can be used to resolve the often ambiguous customer speech. We introduce a novel clustering algorithm to automatically discover the interaction structure based on action co-occurrence frequency, revealing the topics of conversation. We then train a topic state estimator to determine the topic of conversation at runtime so the robot may present information pertaining to the correct topic. In a human-robot evaluation, our proposed system significantly outperformed a nearest-neighbor baseline technique in both subjective and objective evaluations. In particular, participants found that the proposed system was easier to understand, provided more information, and required less effort to interact with. Furthermore, we found that incorporation of the topic state into prediction significantly improved performance when responding to ambiguous questions.},
  day      = {26},
  doi      = {10.1109/THMS.2019.2895753},
  month    = feb,
  url      = {https://ieeexplore.ieee.org/document/8653359},
}
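Note: the interaction-structure discovery step above clusters actions by how often they co-occur within interactions. The clustering algorithm itself is the authors' own; the sketch below uses standard hierarchical clustering on a co-occurrence-derived distance matrix only as a stand-in, with a synthetic occurrence log:

  import numpy as np
  from scipy.cluster.hierarchy import fcluster, linkage
  from scipy.spatial.distance import squareform

  rng = np.random.default_rng(9)
  # Hypothetical log: which of 12 actions occurred in each of 200 interactions.
  occurrences = rng.integers(0, 2, size=(200, 12))

  # Action-by-action co-occurrence frequency, turned into a distance matrix.
  co = (occurrences.T @ occurrences).astype(float)
  co /= co.max()
  dist = 1.0 - co
  np.fill_diagonal(dist, 0.0)

  # Group actions that tend to co-occur into "topics of conversation".
  condensed = squareform(dist, checks=False)
  topics = fcluster(linkage(condensed, method="average"), t=3, criterion="maxclust")
  print(topics)  # one topic label per action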
Carlos T. Ishi, Takashi Minato, Hiroshi Ishiguro, "Analysis and generation of laughter motions, and evaluation in an android robot", APSIPA Transactions on Signal and Information Processing, vol. 8, no. e6, pp. 1-10, January, 2019.
Abstract: Laughter commonly occurs in daily interactions; it is not simply related to funny situations but also expresses attitudes, serving important social functions in communication. The aim of the present work is to generate natural laughter motions in a humanoid robot, since miscommunication may be caused by a mismatch between the audio and visual modalities, especially in laughter events. We used a multimodal dialogue database and analyzed facial, head, and body motion during laughing speech. Based on the analysis results of human behaviors during laughing speech, we proposed a motion generation method given the speech signal and the laughing speech intervals. Subjective experiments were conducted with our android robot by generating five different motion types, considering several modalities. Evaluation results showed the effectiveness of controlling different parts of the face, head, and upper body (eyelid narrowing, lip corner/cheek raising, eye blinking, head motion, and upper body motion control).
BibTeX:
@Article{Ishi2019,
  author   = {Carlos T. Ishi and Takashi Minato and Hiroshi Ishiguro},
  title    = {Analysis and generation of laughter motions, and evaluation in an android robot},
  journal  = {APSIPA Transactions on Signal and Information Processing},
  year     = {2019},
  volume   = {8},
  number   = {e6},
  pages    = {1-10},
  month    = jan,
  abstract = {Laughter commonly occurs in daily interactions; it is not simply related to funny situations but also expresses attitudes, serving important social functions in communication. The aim of the present work is to generate natural laughter motions in a humanoid robot, since miscommunication may be caused by a mismatch between the audio and visual modalities, especially in laughter events. We used a multimodal dialogue database and analyzed facial, head, and body motion during laughing speech. Based on the analysis results of human behaviors during laughing speech, we proposed a motion generation method given the speech signal and the laughing speech intervals. Subjective experiments were conducted with our android robot by generating five different motion types, considering several modalities. Evaluation results showed the effectiveness of controlling different parts of the face, head, and upper body (eyelid narrowing, lip corner/cheek raising, eye blinking, head motion, and upper body motion control).},
  day      = {25},
  url      = {https://www.cambridge.org/core/journals/apsipa-transactions-on-signal-and-information-processing/article/analysis-and-generation-of-laughter-motions-and-evaluation-in-an-android-robot/353D071416BDE0536FDB4E5B86696175},
  doi      = {10.1017/ATSIP.2018.32},
}
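As a rough sketch of the generation idea (driving facial and body actuators from the speech signal only inside annotated laughing-speech intervals), consider the Python toy below; the actuator names, gains, and smoothing are invented for illustration and are not the paper's model:

import numpy as np

def laughter_motion_targets(speech_power, laugh_intervals, dt=0.05):
    """Scale facial/upper-body targets by smoothed speech power, but only
    inside the annotated laughing-speech intervals (given in seconds)."""
    t = np.arange(len(speech_power)) * dt
    in_laugh = np.zeros(len(speech_power), dtype=bool)
    for start, end in laugh_intervals:
        in_laugh |= (t >= start) & (t < end)
    p = np.convolve(speech_power, np.ones(5) / 5.0, mode="same")  # de-jitter
    p = p / (p.max() + 1e-9)
    return {
        "eyelid_narrowing": np.where(in_laugh, 0.6 * p, 0.0),
        "lip_corner_raise": np.where(in_laugh, 0.8 * p, 0.1),
        "head_pitch": np.where(in_laugh, 0.3 * p, 0.0),
    }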
Ryuji Yamazaki, Shuichi Nishio, Hiroshi Ishiguro, Hiroko Kase, "Use of Robotic Media as Persuasive Technology and Its Ethical Implications in Care Settings", Journal of Philosophy and Ethics in Health Care and Medicine, no. 12, pp. 45-58, December, 2018.
Abstract: Communication support for older adults has become a growing need, and as assistive technologies, robotic media are expected to facilitate social interactions in both verbal and nonverbal ways. Focusing on dementia care, we look into two studies exploring the potential of robotic media to promote changes in subjectivity in older adults with behavioral and psychological symptoms of dementia (BPSD). Furthermore, we investigate the conditions that might facilitate such media’s use in therapeutic improvement. Based on case studies in dementia care, this paper aims to investigate the potential and conditions that allow robotic media to mediate changes in human subjects. The case studies indicate that those with dementia become open and prosocial through robotic intervention and that, by setting suitable conversational topics, their reactions can be elicited efficiently. Previous studies also noted the need to consider both the positive and negative aspects of using robotic media. With social robots being developed as persuasive agents, users have difficulty controlling the information flow, and thus when personal data are handled, ethical concerns arise. The ethical implication is that persuasive technology puts human autonomy at risk. Finally, we discuss the ethical implications and the effects on emotions and behaviors of applying persuasive robotic media in care settings.
BibTeX:
@Article{Yamazaki2018,
  author   = {Ryuji Yamazaki and Shuichi Nishio and Hiroshi Ishiguro and Hiroko Kase},
  title    = {Use of Robotic Media as Persuasive Technology and Its Ethical Implications in Care Settings},
  journal  = {Journal of Philosophy and Ethics in Health Care and Medicine},
  year     = {2018},
  number   = {12},
  pages    = {45-58},
  month    = dec,
  abstract = {Communication support for older adults has become a growing need, and as assistive technologies, robotic media are expected to facilitate social interactions in both verbal and nonverbal ways. Focusing on dementia care, we look into two studies exploring the potential of robotic media to promote changes in subjectivity in older adults with behavioral and psychological symptoms of dementia (BPSD). Furthermore, we investigate the conditions that might facilitate such media’s use in therapeutic improvement. Based on case studies in dementia care, this paper aims to investigate the potential and conditions that allow robotic media to mediate changes in human subjects. The case studies indicate that those with dementia become open and prosocial through robotic intervention and that, by setting suitable conversational topics, their reactions can be elicited efficiently. Previous studies also noted the need to consider both the positive and negative aspects of using robotic media. With social robots being developed as persuasive agents, users have difficulty controlling the information flow, and thus when personal data are handled, ethical concerns arise. The ethical implication is that persuasive technology puts human autonomy at risk. Finally, we discuss the ethical implications and the effects on emotions and behaviors of applying persuasive robotic media in care settings.},
  url      = {http://itetsu.jp/main/wp-content/uploads/2019/03/PEHCM12-yamazaki.pdf},
}
Rosario Sorbello, Carmelo Cali, Salvatore Tramonte, Shuichi Nishio, Hiroshi Ishiguro, Antonio Chella, "A Cognitive Model of Trust for Biological and Artificial Humanoid Robots", Procedia Computer Science, vol. 145, pp. 526-532, December, 2018.
Abstract: This paper presents a model of trust for biological and artificial humanoid robots and agents as an antecedent condition of interaction. We discuss the cognitive engines of social perception that account for the units on which agents operate and the rules they follow when they bestow trust and assess trustworthiness. We propose that this structural information is the domain of the model. The model represents it in terms of modular cognitive structures connected by a parallel architecture. Finally, we give a preliminary formalization of the model in the mathematical framework of I/O automata for future computational and human-humanoid applications.
BibTeX:
@Article{Sorbello2018b,
  author   = {Rosario Sorbello and Carmelo Cali and Salvatore Tramonte and Shuichi Nishio and Hiroshi Ishiguro and Antonio Chella},
  title    = {A Cognitive Model of Trust for Biological and Artificial Humanoid Robots},
  journal  = {Procedia Computer Science},
  year     = {2018},
  volume   = {145},
  pages    = {526-532},
  month    = Dec,
  abstract = {This paper presents a model of trust for biological and artificial humanoid robots and agents as an antecedent condition of interaction. We discuss the cognitive engines of social perception that account for the units on which agents operate and the rules they follow when they bestow trust and assess trustworthiness. We propose that this structural information is the domain of the model. The model represents it in terms of modular cognitive structures connected by a parallel architecture. Finally, we give a preliminary formalization of the model in the mathematical framework of I/O automata for future computational and human-humanoid applications.},
  day      = {11},
  url      = {https://www.sciencedirect.com/science/article/pii/S1877050918324050},
  doi      = {10.1016/j.procs.2018.11.117},
}
Junya Nakanishi, Hidenobu Sumioka, Hiroshi Ishiguro, "A huggable communication medium can provide a sustained listening support for students with special needs in a classroom", Computers in Human Behavior, vol. 93, pp. 106-113, October, 2018.
Abstract: Poor listening ability has been a serious problem for students with a wide range of developmental disabilities. We administered a memory test to students with special needs in a typical listening situation and in a situation with a huggable communication medium, called Hugvie, to evaluate how well the students could listen to others at morning meetings. The results showed that listening via Hugvie improved their memory scores for information provided by teachers. In particular, the memories of distracted students with emotional difficulties tended to improve greatly. Notably, the improvement was maintained for three months. Moreover, the students' perceptions and impressions of Hugvie were favorable for long-term use.
BibTeX:
@Article{Nakanishi2018a,
  author   = {Junya Nakanishi and Hidenobu Sumioka and Hiroshi Ishiguro},
  title    = {A huggable communication medium can provide a sustained listening support for students with special needs in a classroom},
  journal  = {Computers in Human Behavior},
  year     = {2018},
  volume   = {93},
  pages    = {106-113},
  month    = oct,
  abstract = {Poor listening ability has been a serious problem for students with a wide range of developmental disabilities. We administered a memory test to students with special needs in a typical listening situation and in a situation with a huggable communication medium, called Hugvie, to evaluate how well the students could listen to others at morning meetings. The results showed that listening via Hugvie improved their memory scores for information provided by teachers. In particular, the memories of distracted students with emotional difficulties tended to improve greatly. Notably, the improvement was maintained for three months. Moreover, the students' perceptions and impressions of Hugvie were favorable for long-term use.},
  day      = {3},
  url      = {https://www.journals.elsevier.com/computers-in-human-behavior},
  doi      = {10.1016/j.chb.2018.10.008},
}
Carlos Ishi, Daichi Machiyashiki, Ryusuke Mikata, Hiroshi Ishiguro, "A speech-driven hand gesture generation method and evaluation in android robots", IEEE Robotics and Automation Letters (RA-L), vol. 3, no. 4, pp. 3757-3764, July, 2018.
Abstract: Hand gestures commonly occur in daily dialogue interactions and have important functions in communication. We first analyzed multimodal human-human dialogue data and found relations between the occurrence of hand gestures and dialogue act categories. We also conducted a clustering analysis on gesture motion data and associated text information with the gesture motion clusters through gesture function categories. Using the analysis results, we proposed a speech-driven gesture generation method that takes text, prosody, and dialogue act information into account. We then implemented hand motion control on an android robot and evaluated the effectiveness of the proposed gesture generation method through subjective experiments. The gesture motions generated by the proposed method were judged to be relatively natural even under the robot hardware constraints.
BibTeX:
@Article{Ishi2018e,
  author   = {Carlos Ishi and Daichi Machiyashiki and Ryusuke Mikata and Hiroshi Ishiguro},
  title    = {A speech-driven hand gesture generation method and evaluation in android robots},
  journal  = {IEEE Robotics and Automation Letters (RA-L)},
  year     = {2018},
  volume   = {3},
  number   = {4},
  pages    = {3757-3764},
  month    = jul,
  abstract = {Hand gestures commonly occur in daily dialogue interactions and have important functions in communication. We first analyzed multimodal human-human dialogue data and found relations between the occurrence of hand gestures and dialogue act categories. We also conducted a clustering analysis on gesture motion data and associated text information with the gesture motion clusters through gesture function categories. Using the analysis results, we proposed a speech-driven gesture generation method that takes text, prosody, and dialogue act information into account. We then implemented hand motion control on an android robot and evaluated the effectiveness of the proposed gesture generation method through subjective experiments. The gesture motions generated by the proposed method were judged to be relatively natural even under the robot hardware constraints.},
  day      = {16},
  url      = {https://ieeexplore.ieee.org/document/8411101},
  doi      = {10.1109/LRA.2018.2856281},
  comment  = {(The contents of this paper were also selected by IROS2018 Program Committee for presentation at the Conference)},
  keywords = {Android robots, Emotion, Hand Gesture, Motion generation, Speech-driven},
}
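The mapping from speech to gesture described in this abstract can be pictured as a lookup from dialogue act and text keywords to a gesture motion cluster. The Python fragment below is a deliberately naive stand-in (the acts, keywords, and cluster names are hypothetical; the paper learns such associations from data):

def select_gesture(dialogue_act, text):
    """Toy dialogue-act/keyword lookup into gesture motion clusters."""
    words = text.lower().split()
    if dialogue_act == "question":
        return "palm_up_open_hand"
    if any(w in words for w in ("this", "that", "here", "there")):
        return "deictic_point"
    if any(w in words for w in ("big", "small", "round", "long")):
        return "iconic_shape"
    return "beat_gesture" if dialogue_act == "statement" else "rest_position"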
Carlos T. Ishi, Chaoran Liu, Jani Even, Norihiro Hagita, "A sound-selective hearing support system using environment sensor network", Acoustical Science and Technology, vol. 39, no. 4, pp. 287-294, July, 2018.
Abstract: We have developed a sound-selective hearing support system that makes use of an environment sensor network, so that individual target and anti-target sound sources in the environment can be selected and the spatial information of the target sound sources can be reconstructed. The performance of the selective sound separation module was evaluated under different noise conditions. Results showed that signal-to-noise ratios of around 15 dB could be achieved by the proposed system for a condition of 65 dB babble noise plus directional music noise. Subjective intelligibility tests were conducted in the same noise condition. For words with high familiarity, intelligibility rates increased from 67% to 90% for normal-hearing subjects and from 50% to 70% for elderly subjects when the proposed system was applied.
BibTeX:
@Article{Ishi2018d,
  author   = {Carlos T. Ishi and Chaoran Liu and Jani Even and Norihiro Hagita},
  title    = {A sound-selective hearing support system using environment sensor network},
  journal  = {Acoustical Science and Technology},
  year     = {2018},
  volume   = {39},
  number   = {4},
  pages    = {287-294},
  month    = Jul,
  abstract = {We have developed a sound-selective hearing support system that makes use of an environment sensor network, so that individual target and anti-target sound sources in the environment can be selected and the spatial information of the target sound sources can be reconstructed. The performance of the selective sound separation module was evaluated under different noise conditions. Results showed that signal-to-noise ratios of around 15 dB could be achieved by the proposed system for a condition of 65 dB babble noise plus directional music noise. Subjective intelligibility tests were conducted in the same noise condition. For words with high familiarity, intelligibility rates increased from 67% to 90% for normal-hearing subjects and from 50% to 70% for elderly subjects when the proposed system was applied.},
  day      = {1},
  url      = {https://www.jstage.jst.go.jp/article/ast/39/4/39_E1757/_article/-char/en},
  doi      = {10.1250/ast.39.287},
}
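For reference, the signal-to-noise ratios quoted above follow the standard decibel definition; a minimal Python helper (variable names assumed, not from the paper) makes the arithmetic explicit. Note that 15 dB corresponds to the separated target carrying about 32 times the power of the residual noise:

import numpy as np

def snr_db(target, residual_noise):
    """SNR in decibels: 10 * log10(signal power / noise power)."""
    p_signal = np.mean(np.square(target))
    p_noise = np.mean(np.square(residual_noise))
    return 10.0 * np.log10(p_signal / p_noise)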
Masahiro Shiomi, Kodai Shatani, Takashi Minato, Hiroshi Ishiguro, "How Should a Robot React Before People's Touch?: Modeling a Pre-Touch Reaction Distance for a Robot's Face", IEEE Robotics and Automation Letters (RA-L), pp. 3773-3780, July, 2018.
Abstract: This study addresses pre-touch reaction distance effects in human-robot touch interaction with an android named ERICA, which has a feminine, human-like appearance. Past studies on human-robot interaction, which enabled social robots to react to being touched by developing several sensing systems and designing reaction behaviors, focused on after-touch situations; before-touch situations have received less attention. In this study, we collected data to investigate the minimum comfortable distance to another's touch by observing a data set of human-human touch interactions, modeled its distance relationships, and implemented the model on our robot. We experimentally investigated with participants the effectiveness of the modeled minimum comfortable distance to being touched. Our experimental results showed that participants highly evaluated a robot that reacts to being touched based on the modeled minimum comfortable distance.
BibTeX:
@Article{Shiomi2018a,
  author   = {Masahiro Shiomi and Kodai Shatani and Takashi Minato and Hiroshi Ishiguro},
  journal  = {IEEE Robotics and Automation Letters (RA-L)},
  title    = {How Should a Robot React Before People's Touch?: Modeling a Pre-Touch Reaction Distance for a Robot's Face},
  year     = {2018},
  abstract = {This study addresses pre-touch reaction distance effects in human-robot touch interaction with an android named ERICA, which has a feminine, human-like appearance. Past studies on human-robot interaction, which enabled social robots to react to being touched by developing several sensing systems and designing reaction behaviors, focused on after-touch situations; before-touch situations have received less attention. In this study, we collected data to investigate the minimum comfortable distance to another's touch by observing a data set of human-human touch interactions, modeled its distance relationships, and implemented the model on our robot. We experimentally investigated with participants the effectiveness of the modeled minimum comfortable distance to being touched. Our experimental results showed that participants highly evaluated a robot that reacts to being touched based on the modeled minimum comfortable distance.},
  day      = {16},
  doi      = {10.1109/LRA.2018.2856303},
  month    = jul,
  pages    = {3773-3780},
  url      = {https://ieeexplore.ieee.org/document/8411337},
  comment  = {(The contents of this paper were also selected by IROS2018 Program Committee for presentation at the Conference)},
}
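A minimal sketch of the runtime side of such a model: the robot triggers its reaction behavior once an approaching hand crosses the modeled minimum comfortable distance. The threshold value and names below are hypothetical, not the distances reported in the paper:

def should_react(hand_xyz, face_xyz, comfort_distance_m=0.25):
    """True once the tracked hand is closer to the face than the modeled
    minimum comfortable distance (threshold in metres, hypothetical)."""
    dx, dy, dz = (h - f for h, f in zip(hand_xyz, face_xyz))
    return (dx * dx + dy * dy + dz * dz) ** 0.5 < comfort_distance_m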
Christian Penaloza, Shuichi Nishio, "BMI control of a third arm for multitasking", Science Robotics, vol. 3, no. 20, July, 2018.
Abstract: Brain-machine interface (BMI) systems have been widely studied to allow people with motor paralysis conditions to control assistive robotic devices that replace or recover lost function but not to extend the capabilities of healthy users. We report an experiment in which healthy participants were able to extend their capabilities by using a noninvasive BMI to control a human-like robotic arm and achieve multitasking. Experimental results demonstrate that participants were able to reliably control the robotic arm with the BMI to perform a goal-oriented task while simultaneously using their own arms to do a different task. This outcome opens possibilities to explore future human body augmentation applications for healthy people that not only enhance their capability to perform a particular task but also extend their physical capabilities to perform multiple tasks simultaneously.
BibTeX:
@Article{Penaloza2018a,
  author   = {Christian Penaloza and Shuichi Nishio},
  title    = {BMI control of a third arm for multitasking},
  journal  = {Science Robotics},
  year     = {2018},
  volume   = {3},
  number   = {20},
  month    = Jul,
  abstract = {Brain-machine interface (BMI) systems have been widely studied to allow people with motor paralysis conditions to control assistive robotic devices that replace or recover lost function but not to extend the capabilities of healthy users. We report an experiment in which healthy participants were able to extend their capabilities by using a noninvasive BMI to control a human-like robotic arm and achieve multitasking. Experimental results demonstrate that participants were able to reliably control the robotic arm with the BMI to perform a goal-oriented task while simultaneously using their own arms to do a different task. This outcome opens possibilities to explore future human body augmentation applications for healthy people that not only enhance their capability to perform a particular task but also extend their physical capabilities to perform multiple tasks simultaneously.},
  day      = {25},
  url      = {http://www.geminoid.jp/misc/scirobotics.aat1228.html},
  doi      = {10.1126/scirobotics.aat1228},
}
Soheil Keshmiri, Hidenobu Sumioka, Junya Nakanishi, Hiroshi Ishiguro, "Bodily-Contact Communication Medium Induces Relaxed Mode of Brain Activity While Increasing Its Dynamical Complexity: A Pilot Study", Frontiers in Psychology, vol. 9, no. 1192, July, 2018.
Abstract: We present the results of an analysis of the effect of a bodily-contact communication medium on the brain activity of individuals during verbal communication. Our results suggest that communicated content mediated through such a device has a significant effect on the electroencephalogram (EEG) time series of human subjects. Specifically, we find a significant reduction in the overall power of the individuals' EEG signals. This observation, which is supported by an analysis of the permutation entropy (PE) of the EEG time series of the participants' brain activity, suggests a positive effect of such a medium on stress relief and the induced sense of relaxation. Additionally, multiscale entropy (MSE) analysis of our data implies that such a medium increases the level of complexity exhibited by the participants' EEG time series, suggesting their sustained sense of involvement in the course of communication. These findings, which are in accord with results reported in cognitive neuroscience research, suggest that the use of such a medium can be beneficial as a complementary step in the treatment of developmental disorders, the attentiveness of schoolchildren, and early child development, as well as in scenarios where intimate physical interaction over distance is desirable (e.g., distance-parenting).
BibTeX:
@Article{Keshmiri2018b,
  author   = {Soheil Keshmiri and Hidenobu Sumioka and Junya Nakanishi and Hiroshi Ishiguro},
  title    = {Bodily-Contact Communication Medium Induces Relaxed Mode of Brain Activity While Increasing Its Dynamical Complexity: A Pilot Study},
  journal  = {Frontiers in Psychology},
  year     = {2018},
  volume   = {9},
  number   = {1192},
  month    = Jul,
  abstract = {We present the results of an analysis of the effect of a bodily-contact communication medium on the brain activity of individuals during verbal communication. Our results suggest that communicated content mediated through such a device has a significant effect on the electroencephalogram (EEG) time series of human subjects. Specifically, we find a significant reduction in the overall power of the individuals' EEG signals. This observation, which is supported by an analysis of the permutation entropy (PE) of the EEG time series of the participants' brain activity, suggests a positive effect of such a medium on stress relief and the induced sense of relaxation. Additionally, multiscale entropy (MSE) analysis of our data implies that such a medium increases the level of complexity exhibited by the participants' EEG time series, suggesting their sustained sense of involvement in the course of communication. These findings, which are in accord with results reported in cognitive neuroscience research, suggest that the use of such a medium can be beneficial as a complementary step in the treatment of developmental disorders, the attentiveness of schoolchildren, and early child development, as well as in scenarios where intimate physical interaction over distance is desirable (e.g., distance-parenting).},
  day      = {9},
  url      = {https://www.frontiersin.org/articles/10.3389/fpsyg.2018.01192/full},
  doi      = {10.3389/fpsyg.2018.01192},
}
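Permutation entropy, one of the two measures used above, is simple enough to state in a few lines. The following Python sketch implements the standard Bandt-Pompe estimator, normalized to [0, 1]; the parameters are typical defaults, not necessarily those used in the study:

import math
import numpy as np

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D series: count ordinal
    patterns of length `order`, then take the Shannon entropy."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    patterns = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
        patterns[pattern] = patterns.get(pattern, 0) + 1
    probs = np.array(list(patterns.values()), dtype=float) / n
    entropy = -np.sum(probs * np.log2(probs))
    return entropy / math.log2(math.factorial(order))  # 1.0 = fully irregular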
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Hiroshi Ishiguro, "Differential Entropy Preserves Variational Information of Near-Infrared Spectroscopy Time Series Associated with Working Memory", Frontiers in Neuroinformatics, vol. 12, June, 2018.
Abstract: Neuroscience research shows a growing interest in the application of Near-Infrared Spectroscopy (NIRS) to the analysis and decoding of the brain activity of human subjects. Given the correlation observed between the Blood Oxygenation Level Dependent (BOLD) responses exhibited by functional Magnetic Resonance Imaging (fMRI) time series and the hemoglobin oxy/deoxygenation captured by NIRS, linear models play a central role in these applications. This, in turn, has led to the adoption of feature extraction strategies that are well-suited to data exhibiting a high degree of linearity, namely the slope and the mean, as well as their combination, to summarize the informational content of NIRS time series. In this article, we demonstrate that these features are suboptimal in capturing the variational information of NIRS data, limiting the reliability and adequacy of the conclusions drawn from their results. Alternatively, we propose the linear estimate of the differential entropy of these time series as a natural representation of such information. We provide evidence for our claim through a comparative analysis of the application of these features to NIRS data from several working memory tasks as well as naturalistic conversational stimuli.
BibTeX:
@Article{Keshmiri2018a,
  author   = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Hiroshi Ishiguro},
  title    = {Differential Entropy Preserves Variational Information of Near-Infrared Spectroscopy Time Series Associated with Working Memory},
  journal  = {Frontiers in Neuroinformatics},
  year     = {2018},
  volume   = {12},
  month    = Jun,
  abstract = {Neuroscience research shows a growing interest in the application of Near-Infrared Spectroscopy (NIRS) to the analysis and decoding of the brain activity of human subjects. Given the correlation observed between the Blood Oxygenation Level Dependent (BOLD) responses exhibited by functional Magnetic Resonance Imaging (fMRI) time series and the hemoglobin oxy/deoxygenation captured by NIRS, linear models play a central role in these applications. This, in turn, has led to the adoption of feature extraction strategies that are well-suited to data exhibiting a high degree of linearity, namely the slope and the mean, as well as their combination, to summarize the informational content of NIRS time series. In this article, we demonstrate that these features are suboptimal in capturing the variational information of NIRS data, limiting the reliability and adequacy of the conclusions drawn from their results. Alternatively, we propose the linear estimate of the differential entropy of these time series as a natural representation of such information. We provide evidence for our claim through a comparative analysis of the application of these features to NIRS data from several working memory tasks as well as naturalistic conversational stimuli.},
  url      = {https://www.frontiersin.org/articles/10.3389/fninf.2018.00033/full},
  doi      = {10.3389/fninf.2018.00033},
}
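Under a Gaussian assumption, a linear estimate of differential entropy reduces to a closed form in the sample variance, which is why it can replace mean/slope features at negligible cost. A one-function Python sketch follows; the Gaussian assumption is ours for illustration, and the paper's exact estimator may differ:

import numpy as np

def gaussian_differential_entropy(x):
    """Differential entropy under a Gaussian model:
    H = 0.5 * ln(2 * pi * e * var(x)), in nats."""
    return 0.5 * np.log(2.0 * np.pi * np.e * np.var(x, ddof=1))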
Hiroaki Hatano, Cheng Chao Song, Carlos T. Ishi, Makiko Matsuda, "Automatic evaluation of accentuation of Japanese read speech", Digital Resources for Learning Japanese, pp. 1-10, June, 2018.
Abstract: Japanese is a typical mora-timed language with lexical pitch accent (Beckman 1986, Kubozono 1996, McCawley 1978). A mora is a segmental unit of sound with a relatively constant duration. Phonologically, the accent's location corresponds to the mora before the pitch drop (Haraguchi 1999), and its location is arbitrary. For learners of Japanese, such prosodic characteristics make realizing correct word accents difficult. Incorrect pitch accents cause misunderstanding of word meaning and lead to unnatural-sounding speech in non-native Japanese speakers (Isomura 1996, Toda 2003). The acquisition of pitch accents is critical for Japanese language learners (A 2015). Although students often express a desire to learn Japanese pronunciation including accents, such practice is rare in Japanese education (Fujiwara and Negishi 2005, Tago and Isomura 2014). The main reason is that the priority of teaching pronunciation is relatively low, and many teachers lack the confidence to evaluate the accents of learners. Non-native Japanese-language teachers in their own countries share these tendencies. Much effort has been devoted to acoustic-based evaluation of Japanese accentuation. However, most work has focused on word-level accent evaluation. If learners of Japanese were given a chance to participate in activities such as speech contests, their scripts might contain a large variety of words. We believe that a text-independent evaluation system is required for Japanese accents. Our research investigates a text-independent automatic evaluation method for Japanese accentuation based on acoustic features.
BibTeX:
@Article{Hatano2018,
  author   = {Hiroaki Hatano and Cheng Chao Song and Carlos T. Ishi and Makiko Matsuda},
  title    = {Automatic evaluation of accentuation of Japanese read speech},
  journal  = {Digital Resources for Learning Japanese},
  year     = {2018},
  pages    = {1-10},
  month    = jun,
  issn     = {2283-8910},
  abstract = {Japanese is a typical mora-timed language with lexical pitch accent (Beckman 1986, Kubozono 1996, McCawley 1978). A mora is a segmental unit of sound with a relatively constant duration. Phonologically, the accent's location corresponds to the mora before the pitch drop (Haraguchi 1999), and its location is arbitrary. For learners of Japanese, such prosodic characteristics make realizing correct word accents difficult. Incorrect pitch accents cause misunderstanding of word meaning and lead to unnatural-sounding speech in non-native Japanese speakers (Isomura 1996, Toda 2003). The acquisition of pitch accents is critical for Japanese language learners (A 2015). Although students often express a desire to learn Japanese pronunciation including accents, such practice is rare in Japanese education (Fujiwara and Negishi 2005, Tago and Isomura 2014). The main reason is that the priority of teaching pronunciation is relatively low, and many teachers lack the confidence to evaluate the accents of learners. Non-native Japanese-language teachers in their own countries share these tendencies. Much effort has been devoted to acoustic-based evaluation of Japanese accentuation. However, most work has focused on word-level accent evaluation. If learners of Japanese were given a chance to participate in activities such as speech contests, their scripts might contain a large variety of words. We believe that a text-independent evaluation system is required for Japanese accents. Our research investigates a text-independent automatic evaluation method for Japanese accentuation based on acoustic features.},
  day      = {5},
  url      = {https://www.digibup.com/products/digital-resources},
}
Abdelkader Nasreddine Belkacem, Shuichi Nishio, Takafumi Suzuki, Hiroshi Ishiguro, Masayuki Hirata, "Neuromagnetic decoding of simultaneous bilateral hand movements for multidimensional brain-machine interfaces", IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 6, pp. 1301-1310, May, 2018.
Abstract: To provide multidimensional control, we describe the first reported decoding of bilateral hand movements by using single-trial magnetoencephalography signals as a new approach to enhance a user's ability to interact with a complex environment through a multidimensional brain-machine interface. Ten healthy participants performed or imagined four types of bilateral hand movements during neuromagnetic measurements. By applying a support vector machine (SVM) method to classify the four movements regarding the sensor data obtained from the sensorimotor area, we found the mean accuracy of a two-class classification using the amplitudes of neuromagnetic fields to be particularly suitable for real-time applications, with accuracies comparable to those obtained in previous studies involving unilateral movement. The sensor data from over the sensorimotor cortex showed discriminative time-series waveforms and time-frequency maps in the bilateral hemispheres according to the four tasks. Furthermore, we used four-class classification algorithms based on the SVM method to decode all types of bilateral movements. Our results provided further proof that the slow components of neuromagnetic fields carry sufficient neural information to classify even bilateral hand movements and demonstrated the potential utility of decoding bilateral movements for engineering purposes such as multidimensional motor control.
BibTeX:
@Article{Belkacem2018d,
  author   = {Abdelkader Nasreddine Belkacem and Shuichi Nishio and Takafumi Suzuki and Hiroshi Ishiguro and Masayuki Hirata},
  title    = {Neuromagnetic decoding of simultaneous bilateral hand movements for multidimensional brain-machine interfaces},
  journal  = {IEEE Transactions on Neural Systems and Rehabilitation Engineering},
  year     = {2018},
  volume   = {26},
  number   = {6},
  pages    = {1301-1310},
  month    = May,
  abstract = {To provide multidimensional control, we describe the first reported decoding of bilateral hand movements by using single-trial magnetoencephalography signals as a new approach to enhance a user's ability to interact with a complex environment through a multidimensional brain-machine interface. Ten healthy participants performed or imagined four types of bilateral hand movements during neuromagnetic measurements. By applying a support vector machine (SVM) method to classify the four movements regarding the sensor data obtained from the sensorimotor area, we found the mean accuracy of a two-class classification using the amplitudes of neuromagnetic fields to be particularly suitable for real-time applications, with accuracies comparable to those obtained in previous studies involving unilateral movement. The sensor data from over the sensorimotor cortex showed discriminative time-series waveforms and time-frequency maps in the bilateral hemispheres according to the four tasks. Furthermore, we used four-class classification algorithms based on the SVM method to decode all types of bilateral movements. Our results provided further proof that the slow components of neuromagnetic fields carry sufficient neural information to classify even bilateral hand movements and demonstrated the potential utility of decoding bilateral movements for engineering purposes such as multidimensional motor control.},
  day      = {15},
  url      = {https://ieeexplore.ieee.org/document/8359204},
  doi      = {10.1109/TNSRE.2018.2837003},
}
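The decoding pipeline described above (sensor amplitudes in, movement class out, via an SVM) can be sketched with standard tooling. The snippet below uses scikit-learn on random stand-in data of plausible shape; it illustrates the classification setup only, not the paper's preprocessing or results:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in data: 80 trials x 20 sensors x 100 time samples, flattened.
X = rng.normal(size=(80, 20, 100)).reshape(80, -1)
y = rng.integers(0, 2, size=80)  # two of the four bilateral-movement classes

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())  # ~chance here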
Jakub Złotowski, Hidenobu Sumioka, Friederike Eyssel, Shuichi Nishio, Christoph Bartneck, Hiroshi Ishiguro, "Model of Dual Anthropomorphism: The Relationship Between the Media Equation Effect and Implicit Anthropomorphism", International Journal of Social Robotics, pp. 1-14, April, 2018.
Abstract: Anthropomorphism, the attribution of humanlike characteristics to nonhuman entities, may result from a dual process: first, a fast and intuitive (Type 1) process permits an object to be quickly classified as humanlike and results in implicit anthropomorphism. Second, a reflective (Type 2) process may moderate the initial judgment based on conscious effort and result in explicit anthropomorphism. In this study, we manipulated both participants’ motivation for Type 2 processing and a robot’s emotionality to investigate the role of Type 1 versus Type 2 processing in forming judgments about the robot Robovie R2. We did so by having participants play the “Jeopardy!” game with the robot. Subsequently, we directly and indirectly measured anthropomorphism by administering self-report measures and a priming task, respectively. Furthermore, we measured treatment of the robot as a social actor to establish its relation with implicit and explicit anthropomorphism. The results suggested that the model of dual anthropomorphism can explain when responses are likely to reflect judgments based on Type 1 and Type 2 processes. Moreover, we showed that the social treatment of a robot, as described by the Media Equation theory, is related to implicit, but not explicit, anthropomorphism.
BibTeX:
@Article{Zlotowski2018,
  author   = {Jakub Złotowski and Hidenobu Sumioka and Friederike Eyssel and Shuichi Nishio and Christoph Bartneck and Hiroshi Ishiguro},
  title    = {Model of Dual Anthropomorphism: The Relationship Between the Media Equation Effect and Implicit Anthropomorphism},
  journal  = {International Journal of Social Robotics},
  year     = {2018},
  pages    = {1-14},
  month    = Apr,
  abstract = {Anthropomorphism, the attribution of humanlike characteristics to nonhuman entities, may result from a dual process: first, a fast and intuitive (Type 1) process permits an object to be quickly classified as humanlike and results in implicit anthropomorphism. Second, a reflective (Type 2) process may moderate the initial judgment based on conscious effort and result in explicit anthropomorphism. In this study, we manipulated both participants’ motivation for Type 2 processing and a robot’s emotionality to investigate the role of Type 1 versus Type 2 processing in forming judgments about the robot Robovie R2. We did so by having participants play the “Jeopardy!” game with the robot. Subsequently, we directly and indirectly measured anthropomorphism by administering self-report measures and a priming task, respectively. Furthermore, we measured treatment of the robot as a social actor to establish its relation with implicit and explicit anthropomorphism. The results suggested that the model of dual anthropomorphism can explain when responses are likely to reflect judgments based on Type 1 and Type 2 processes. Moreover, we showed that the social treatment of a robot, as described by the Media Equation theory, is related to implicit, but not explicit, anthropomorphism.},
  day      = {4},
  url      = {https://link.springer.com/article/10.1007/s12369-018-0476-5},
  doi      = {10.1007/s12369-018-0476-5},
}
Christian Penaloza, Maryam Alimardani, Shuichi Nishio, "Android Feedback-based Training modulates Sensorimotor Rhythms during Motor Imagery", IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 3, pp. 666-674, March, 2018.
Abstract: EEG-based brain-computer interface (BCI) systems have demonstrated potential to assist patients with devastating motor paralysis conditions. However, there is great interest in shifting the BCI trend towards applications aimed at healthy users. Although BCI operation depends on technological factors (i.e., the EEG pattern classification algorithm) and human factors (i.e., how well the person is able to generate good-quality EEG patterns), the latter are the least investigated. In order to control a motor imagery-based BCI, the user needs to learn to modulate his/her sensorimotor brain rhythms by practicing motor imagery using a classical training protocol with abstract visual feedback. In this paper, we investigate a different BCI training protocol using a human-like android robot (Geminoid HI-2) to provide realistic visual feedback. The proposed training protocol addresses deficiencies of the classical approach and takes advantage of able-bodied users' capabilities. Experimental results suggest that android feedback-based BCI training improves the modulation of sensorimotor rhythms during a motor imagery task. Moreover, we discuss how the body ownership transfer illusion towards the android might affect the modulation of event-related desynchronization/synchronization (ERD/ERS) activity.
BibTeX:
@Article{Penaloza2018,
  author   = {Christian Penaloza and Maryam Alimardani and Shuichi Nishio},
  title    = {Android Feedback-based Training modulates Sensorimotor Rhythms during Motor Imagery},
  journal  = {IEEE Transactions on Neural Systems and Rehabilitation Engineering},
  year     = {2018},
  volume   = {26},
  number   = {3},
  pages    = {666-674},
  month    = Mar,
  abstract = {EEG-based brain-computer interface (BCI) systems have demonstrated potential to assist patients with devastating motor paralysis conditions. However, there is great interest in shifting the BCI trend towards applications aimed at healthy users. Although BCI operation depends on technological factors (i.e., the EEG pattern classification algorithm) and human factors (i.e., how well the person is able to generate good-quality EEG patterns), the latter are the least investigated. In order to control a motor imagery-based BCI, the user needs to learn to modulate his/her sensorimotor brain rhythms by practicing motor imagery using a classical training protocol with abstract visual feedback. In this paper, we investigate a different BCI training protocol using a human-like android robot (Geminoid HI-2) to provide realistic visual feedback. The proposed training protocol addresses deficiencies of the classical approach and takes advantage of able-bodied users' capabilities. Experimental results suggest that android feedback-based BCI training improves the modulation of sensorimotor rhythms during a motor imagery task. Moreover, we discuss how the body ownership transfer illusion towards the android might affect the modulation of event-related desynchronization/synchronization (ERD/ERS) activity.},
  url      = {http://ieeexplore.ieee.org/document/8255672/},
  doi      = {10.1109/TNSRE.2018.2792481},
}
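ERD/ERS, the quantity the training protocol aims to modulate, has a standard definition: the percentage change of band power in a task window relative to a baseline window. A minimal Python helper (names assumed, not from the paper) makes this concrete:

import numpy as np

def erd_ers_percent(band_power_task, band_power_baseline):
    """Negative values indicate desynchronization (ERD), positive values
    synchronization (ERS), relative to the baseline band power."""
    baseline = np.mean(band_power_baseline)
    return 100.0 * (np.mean(band_power_task) - baseline) / baseline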
Carlos T. Ishi, Jun Arai, "Periodicity, spectral and electroglottographic analyses of pressed voice in expressive speech", Acoustical Science and Technology, vol. 39, no. 2, pp. 101-108, March, 2018.
Abstract: Pressed voice is a type of voice quality produced by pressing/straining the vocal folds, which often appears in Japanese conversational speech when expressing paralinguistic information related to emotional or attitudinal behaviors of the speaker. With the aim of clarifying the acoustic and physiological features involved in pressed voice production, in the present work, acoustic and electroglottographic (EGG) analyses were conducted on pressed voice segments extracted from the spontaneous dialogue speech of several speakers. Periodicity analysis indicated that pressed voice is usually accompanied by creaky or harsh voices, with irregularities in periodicity, but can also be accompanied by periodic voices with fundamental frequencies in the range of modal phonation. A spectral measure, H1'-A1', was proposed for characterizing pressed voice segments, which commonly have little or no harmonicity. Vocal fold vibratory pattern analysis from the EGG signals revealed that most pressed voice segments are characterized by glottal pulses with closed intervals longer than open intervals on average, regardless of periodicity.
BibTeX:
@Article{Ishi2018,
  author   = {Carlos T. Ishi and Jun Arai},
  title    = {Periodicity, spectral and electroglottographic analyses of pressed voice in expressive speech},
  journal  = {Acoustical Science and Technology},
  year     = {2018},
  volume   = {39},
  number   = {2},
  pages    = {101-108},
  month    = Mar,
  abstract = {Pressed voice is a type of voice quality produced by pressing/straining the vocal folds, which often appears in Japanese conversational speech when expressing paralinguistic information related to emotional or attitudinal behaviors of the speaker. With the aim of clarifying the acoustic and physiological features involved in pressed voice production, in the present work, acoustic and electroglottographic (EGG) analyses were conducted on pressed voice segments extracted from the spontaneous dialogue speech of several speakers. Periodicity analysis indicated that pressed voice is usually accompanied by creaky or harsh voices, with irregularities in periodicity, but can also be accompanied by periodic voices with fundamental frequencies in the range of modal phonation. A spectral measure, H1'-A1', was proposed for characterizing pressed voice segments, which commonly have little or no harmonicity. Vocal fold vibratory pattern analysis from the EGG signals revealed that most pressed voice segments are characterized by glottal pulses with closed intervals longer than open intervals on average, regardless of periodicity.},
  day      = {1},
  url      = {https://www.jstage.jst.go.jp/article/ast/39/2/39_E1732/_article},
  doi      = {10.1250/ast.39.101},
  file     = {Ishi2018.pdf:pdf/Ishi2018.pdf:PDF},
}
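The spectral measure named above compares the amplitude near the first harmonic (H1, at F0) with the amplitude near the first formant (A1). The Python sketch below computes the uncorrected difference from a magnitude spectrum in dB; the paper's H1'-A1' additionally applies corrections not shown here, and F0/F1 estimation is assumed to be done elsewhere:

import numpy as np

def h1_minus_a1_db(spectrum_db, freqs_hz, f0_hz, f1_hz):
    """Uncorrected H1 - A1 (dB): spectral amplitude at the bin nearest F0
    minus the amplitude at the bin nearest F1."""
    h1 = spectrum_db[np.argmin(np.abs(freqs_hz - f0_hz))]
    a1 = spectrum_db[np.argmin(np.abs(freqs_hz - f1_hz))]
    return h1 - a1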
Rosario Sorbello, Salvatore Tramonte, Carmelo Cali, Marcello Giardina, Shuichi Nishio, Hiroshi Ishiguro, Antonio Chella, "Embodied responses to musical experience detected by human bio-feedback brain features in a Geminoid augmented architecture", Biologically Inspired Cognitive Architectures, vol. 23, pp. 19-26, January, 2018.
Abstract: This paper presents the conceptual framework for a study of musical experience and the associated architecture centred on Human-Humanoid Interaction (HHI). On the grounds of the theoretical and experimental literature on the biological foundation of music, the grammar of music perception, and the perception and feeling of emotions in music hearing, we argue that music cognition is specific and that it is realized by a cognitive capacity for music that consists of conceptual and affective constituents. We discuss the relationship between such constituents that enables understanding, that is, extracting meaning from music at the different levels of the organization of sounds that are felt as bearers of affects and emotions. To account for the way such cognitive mechanisms are realized in music hearing and extended to movements and gestures, we bring in the construct of tensions and of music experience as a cognitive frame. Finally, we describe the principled approach to the design and the architecture of a BCI-controlled robotic system that can be employed to map and specify the constituents of the cognitive capacity for music as well as to simulate their contribution to the understanding of musical meaning in the context of music experience by displaying it through the Geminoid robot movements.
BibTeX:
@Article{Sorbello2018,
  author   = {Rosario Sorbello and Salvatore Tramonte and Carmelo Cali and Marcello Giardina and Shuichi Nishio and Hiroshi Ishiguro and Antonio Chella},
  title    = {Embodied responses to musical experience detected by human bio-feedback brain features in a Geminoid augmented architecture},
  journal  = {Biologically Inspired Cognitive Architectures},
  year     = {2018},
  volume   = {23},
  pages    = {19-26},
  month    = Jan,
  abstract = {This paper presents the conceptual framework for a study of musical experience and the associated architecture centred on Human-Humanoid Interaction (HHI). On the grounds of the theoretical and experimental literature on the biological foundation of music, the grammar of music perception, and the perception and feeling of emotions in music hearing, we argue that music cognition is specific and that it is realized by a cognitive capacity for music that consists of conceptual and affective constituents. We discuss the relationship between such constituents that enables understanding, that is, extracting meaning from music at the different levels of the organization of sounds that are felt as bearers of affects and emotions. To account for the way such cognitive mechanisms are realized in music hearing and extended to movements and gestures, we bring in the construct of tensions and of music experience as a cognitive frame. Finally, we describe the principled approach to the design and the architecture of a BCI-controlled robotic system that can be employed to map and specify the constituents of the cognitive capacity for music as well as to simulate their contribution to the understanding of musical meaning in the context of music experience by displaying it through the Geminoid robot movements.},
  url      = {https://www.sciencedirect.com/science/article/pii/S2212683X17301044},
  doi      = {10.1016/j.bica.2018.01.001},
}
Rosario Sorbello, Salvatore Tramonte, Carmelo Cali, Marcello Giardina, Shuichi Nishio, Hiroshi Ishiguro, Antonio Chella, "An android architecture for bio-inspired honest signalling in Human-Humanoid Interaction", Biologically Inspired Cognitive Architectures, vol. 23, pp. 27-34, January, 2018.
Abstract: This paper outlines an augmented robotic architecture to study the conditions of successful Human-Humanoid Interaction (HHI). The architecture is designed as a testable model generator for interaction centred on the ability to emit, display, and detect honest signals. First, we overview the biological theory in which the concept of honest signals has been put forward, in order to assess its explanatory power. We reconstruct the application of the concept of honest signalling in accounting for interaction in strategic contexts and in laying bare the foundation for automated social metrics. We describe the modules of the architecture, which is intended to implement the concept of honest signalling in connection with a refinement provided by delivering the sense of co-presence in a shared environment. Finally, an analysis of honest signals, in terms of body postures, exhibited by participants during a preliminary experiment with the Geminoid HI-1 is provided.
BibTeX:
@Article{Sorbello2018a,
  author   = {Rosario Sorbello and Salvatore Tramonte and Carmelo Cali and Marcello Giardina and Shuichi Nishio and Hiroshi Ishiguro and Antonio Chella},
  title    = {An android architecture for bio-inspired honest signalling in Human-Humanoid Interaction},
  journal  = {Biologically Inspired Cognitive Architectures},
  year     = {2018},
  volume   = {23},
  pages    = {27-34},
  month    = Jan,
  abstract = {This paper outlines an augmented robotic architecture to study the conditions of successful Human-Humanoid Interaction (HHI). The architecture is designed as a testable model generator for interaction centred on the ability to emit, display, and detect honest signals. First, we overview the biological theory in which the concept of honest signals has been put forward, in order to assess its explanatory power. We reconstruct the application of the concept of honest signalling in accounting for interaction in strategic contexts and in laying bare the foundation for automated social metrics. We describe the modules of the architecture, which is intended to implement the concept of honest signalling in connection with a refinement provided by delivering the sense of co-presence in a shared environment. Finally, an analysis of honest signals, in terms of body postures, exhibited by participants during a preliminary experiment with the Geminoid HI-1 is provided.},
  url      = {https://www.sciencedirect.com/science/article/pii/S2212683X17301032},
  doi      = {10.1016/j.bica.2017.12.001},
}
Takashi Ikeda, Masayuki Hirata, Masashi Kasaki, Maryam Alimardani, Kojiro Matsushita, Tomoyuki Yamamoto, Shuichi Nishio, Hiroshi Ishiguro, "Subthalamic nucleus detects unnatural android movement", Scientific Reports, vol. 7, no. 17851, December, 2017.
Abstract: An android, i.e., a realistic humanoid robot with human-like capabilities, may induce an uncanny feeling in human observers. The uncanny feeling about an android has two main causes: its appearance and movement. The uncanny feeling about an android increases when its appearance is almost human-like but its movement is not fully natural or comparable to human movement. Even if an android has human-like flexible joints, its slightly jerky movements cause a human observer to detect subtle unnaturalness in them. However, the neural mechanism underlying the detection of unnatural movements remains unclear. We conducted an fMRI experiment to compare the observation of an android and the observation of a human on which the android is modelled, and we found differences in the activation pattern of the brain regions that are responsible for the production of smooth and natural movement. More specifically, we found that the visual observation of the android, compared with that of the human model, caused greater activation in the subthalamic nucleus (STN). When the android's slightly jerky movements are visually observed, the STN detects their subtle unnaturalness. This finding suggests that the detection of unnatural movements is attributed to an error signal resulting from a mismatch between a visual input and an internal model for smooth movement.
BibTeX:
@Article{Ikeda2017,
  author   = {Takashi Ikeda and Masayuki Hirata and Masashi Kasaki and Maryam Alimardani and Kojiro Matsushita and Tomoyuki Yamamoto and Shuichi Nishio and Hiroshi Ishiguro},
  title    = {Subthalamic nucleus detects unnatural android movement},
  journal  = {Scientific Reports},
  year     = {2017},
  volume   = {7},
  number   = {17851},
  month    = Dec,
  abstract = {An android, i.e., a realistic humanoid robot with human-like capabilities, may induce an uncanny feeling in human observers. The uncanny feeling about an android has two main causes: its appearance and movement. The uncanny feeling about an android increases when its appearance is almost human-like but its movement is not fully natural or comparable to human movement. Even if an android has human-like flexible joints, its slightly jerky movements cause a human observer to detect subtle unnaturalness in them. However, the neural mechanism underlying the detection of unnatural movements remains unclear. We conducted an fMRI experiment to compare the observation of an android and the observation of a human on which the android is modelled, and we found differences in the activation pattern of the brain regions that are responsible for the production of smooth and natural movement. More specifically, we found that the visual observation of the android, compared with that of the human model, caused greater activation in the subthalamic nucleus (STN). When the android's slightly jerky movements are visually observed, the STN detects their subtle unnaturalness. This finding suggests that the detection of unnatural movements is attributed to an error signal resulting from a mismatch between a visual input and an internal model for smooth movement.},
  day      = {19},
  url      = {https://www.nature.com/articles/s41598-017-17849-2},
  doi      = {10.1038/s41598-017-17849-2},
}
Kurima Sakai, Takashi Minato, Carlos T. Ishi, Hiroshi Ishiguro, "Novel Speech Motion Generation by Modelling Dynamics of Human Speech Production", Frontiers in Robotics and AI, vol. 4, no. 49, pp. 1-14, October, 2017.
Abstract: We have developed a method to automatically generate humanlike trunk motions based on speech (i.e., the neck and waist motions involved in speech) for a conversational android from its speech in real time. To generate humanlike movements, a mechanical limitation of the android (i.e., its limited number of joints) needs to be compensated for in order to express an emotional type of motion. By expressly presenting the synchronization of speech and motion in the android, the method enables us to compensate for its mechanical limitations. Moreover, the motion can be modulated to express emotions by tuning the parameters of the dynamical model. The method is based on a spring-damper dynamical model driven by voice features to simulate a human's trunk movement involved in speech. In contrast to existing methods based on machine learning, our system can easily modulate the motions generated from speech patterns because the model's parameters correspond to muscle stiffness. The experimental results show that the android motions generated by our model can be perceived as more natural, and thus motivate users to talk with the android more, compared with a system that simply copies human motions. In addition, it is possible to make the model generate emotional speech motions by tuning its parameters.
BibTeX:
@Article{Sakai2017,
  author   = {Kurima Sakai and Takashi Minato and Carlos T. Ishi and Hiroshi Ishiguro},
  title    = {Novel Speech Motion Generation by Modelling Dynamics of Human Speech Production},
  journal  = {Frontiers in Robotics and AI},
  year     = {2017},
  volume   = {4},
  number   = {49},
  pages    = {1-14},
  month    = Oct,
  abstract = {We have developed a method to automatically generate humanlike trunk motions (i.e., the neck and waist motions involved in speech) for a conversational android from its speech in real time. To generate humanlike movements, a mechanical limitation of the android (i.e., its limited number of joints) needs to be compensated for in order to express an emotional type of motion. By explicitly presenting the synchronization of speech and motion in the android, the method enables us to compensate for its mechanical limitations. Moreover, the motion can be modulated to express emotions by tuning the parameters of the dynamical model. The method is based on a spring-damper dynamical model, driven by voice features, that simulates the trunk movement involved in human speech. In contrast to existing methods based on machine learning, our system can easily modulate the generated motions for different speech patterns because the model's parameters correspond to muscle stiffness. The experimental results show that the android motions generated by our model are perceived as more natural, and motivate users to talk with the android more, than those of a system that simply copies human motions. In addition, the model can generate emotional speech motions by tuning its parameters.},
  day      = {27},
  url      = {http://journal.frontiersin.org/article/10.3389/frobt.2017.00049/full},
  doi      = {10.3389/frobt.2017.00049},
  file     = {Sakai2017.pdf:pdf/Sakai2017.pdf:PDF},
}
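The spring-damper idea in the abstract above lends itself to a compact illustration: a second-order system driven by a voice feature such as an amplitude envelope, with stiffness and damping as the tunable "muscle" parameters. A minimal Python sketch; the model form, parameter values, and envelope are assumptions for illustration, not the authors' implementation:

import numpy as np

def speech_driven_motion(envelope, dt=0.01, k=40.0, c=8.0, gain=1.0):
    # Integrate x'' = -k*x - c*x' + gain*u(t) with explicit Euler,
    # where u(t) is a voice feature (here, an amplitude envelope).
    # k (stiffness) and c (damping) are the "muscle" parameters that
    # would be tuned to modulate the expressed emotion.
    x, v, out = 0.0, 0.0, []
    for u in envelope:
        a = -k * x - c * v + gain * u
        v += a * dt
        x += v * dt
        out.append(x)
    return np.array(out)

# Toy envelope: two bursts of vocal energy ("syllables").
t = np.arange(0.0, 2.0, 0.01)
envelope = np.exp(-(t - 0.5)**2 / 0.005) + np.exp(-(t - 1.2)**2 / 0.005)
neck_neutral = speech_driven_motion(envelope)                 # calm setting
neck_excited = speech_driven_motion(envelope, k=80.0, c=4.0)  # stiffer, livelier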
Hideyuki Takahashi, Midori Ban, Hirotaka Osawa, Junya Nakanishi, Hidenobu Sumioka, Hiroshi Ishiguro, "Huggable communication medium maintains level of trust during conversation game", Frontiers in Psychology, vol. 8, no. 1862, pp. 1-8, October, 2017.
Abstract: The present research is based on the hypothesis that using Hugvie maintains users' level of trust toward their conversation partners in situations prone to suspicion. The level of trust felt toward other remote game players was compared between participants using Hugvie and those using a basic communication device while playing a modified version of Werewolf, a conversation-based game, designed to evaluate trust. Although there are always winners and losers in the regular version of Werewolf, the rules were modified to generate a possible scenario in which no enemy was present among the players and all players would win if they trusted each other. We examined the effect of using Hugvie while playing Werewolf on players' level of trust toward each other and our results demonstrated that in those using Hugvie, the level of trust toward other players was maintained.
BibTeX:
@Article{Takahashi2017,
  author   = {Hideyuki Takahashi and Midori Ban and Hirotaka Osawa and Junya Nakanishi and Hidenobu Sumioka and Hiroshi Ishiguro},
  title    = {Huggable communication medium maintains level of trust during conversation game},
  journal  = {Frontiers in Psychology},
  year     = {2017},
  volume   = {8},
  number   = {1862},
  pages    = {1-8},
  month    = oct,
  abstract = {The present research is based on the hypothesis that using Hugvie maintains users' level of trust toward their conversation partners in situations prone to suspicion. The level of trust felt toward other remote game players was compared between participants using Hugvie and those using a basic communication device while playing a modified version of Werewolf, a conversation-based game, designed to evaluate trust. Although there are always winners and losers in the regular version of Werewolf, the rules were modified to generate a possible scenario in which no enemy was present among the players and all players would win if they trusted each other. We examined the effect of using Hugvie while playing Werewolf on players' level of trust toward each other and our results demonstrated that in those using Hugvie, the level of trust toward other players was maintained.},
  day      = {25},
  url      = {https://www.frontiersin.org/journals/psychology},
  doi      = {10.3389/fpsyg.2017.01862},
}
Carlos T. Ishi, Takashi Minato, Hiroshi Ishiguro, "Motion analysis in vocalized surprise expressions and motion generation in android robots", IEEE Robotics and Automation Letters (RA-L), vol. 2, no. 3, pp. 1748-1784, July, 2017.
Abstract: Surprise expressions often occur in dialogue interactions, and they are often accompanied by verbal interjectional utterances. We are dealing with the challenge of generating natural human-like motions during speech in android robots that have a highly human-like appearance. In this study, we focus on the analysis and motion generation of vocalized surprise expression. We first analyze facial, head and body motions during vocalized surprise appearing in human-human dialogue interactions. Analysis results indicate differences in the motion types for different types of surprise expression as well as different degrees of surprise expression. Consequently, we propose motion-generation methods based on the analysis results and evaluate the different modalities (eyebrows/eyelids, head and body torso) and different motion control levels for the proposed method. This work is carried out through subjective experiments. Evaluation results indicate the importance of each modality in the perception of surprise degree, naturalness, and the spontaneous vs. intentional expression of surprise.
BibTeX:
@Article{Ishi2017d,
  author   = {Carlos T. Ishi and Takashi Minato and Hiroshi Ishiguro},
  journal  = {IEEE Robotics and Automation Letters (RA-L)},
  title    = {Motion analysis in vocalized surprise expressions and motion generation in android robots},
  year     = {2017},
  abstract = {Surprise expressions often occur in dialogue interactions, and they are often accompanied by verbal interjectional utterances. We are dealing with the challenge of generating natural human-like motions during speech in android robots that have a highly human-like appearance. In this study, we focus on the analysis and motion generation of vocalized surprise expression. We first analyze facial, head and body motions during vocalized surprise appearing in human-human dialogue interactions. Analysis results indicate differences in the motion types for different types of surprise expression as well as different degrees of surprise expression. Consequently, we propose motion-generation methods based on the analysis results and evaluate the different modalities (eyebrows/eyelids, head and body torso) and different motion control levels for the proposed method. This work is carried out through subjective experiments. Evaluation results indicate the importance of each modality in the perception of surprise degree, naturalness, and the spontaneous vs. intentional expression of surprise.},
  day      = {17},
  doi      = {10.1109/LRA.2017.2700941},
  month    = Jul,
  pages    = {1748-1784},
  url      = {http://www.ieee-ras.org/publications/ra-l},
  volume   = {2},
  number   = {3},
  comment  = {(The contents of this paper were also selected by IROS2017 Program Committee for presentation at the Conference)},
  file     = {Ishi2017d.pdf:pdf/Ishi2017d.pdf:PDF},
}
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Hiroshi Ishiguro, "A Non-parametric Approach to the Overall Estimate of Cognitive Load Using NIRS Time Series", Frontiers in Human Neuroscience, vol. 11, no. 15, pp. 1-14, February, 2017.
Abstract: We present a non-parametric approach to prediction of the n-back (n ∈ {1, 2}) task as a proxy measure of mental workload using Near Infrared Spectroscopy (NIRS) data. In particular, we focus on measuring the mental workload through hemodynamic responses in the brain induced by these tasks, thereby realizing the potential that they offer for its detection in real world scenarios (e.g., the difficulty of a conversation). Our approach takes advantage of the intrinsic linearity that is inherent in the components of the NIRS time series to adopt a one-step regression strategy. We demonstrate the correctness of our approach through mathematical analysis. Furthermore, we study the performance of our model in an inter-subject setting in contrast with state-of-the-art techniques in the literature to show a significant improvement on prediction of these tasks (82.50 and 86.40% for female and male participants, respectively). Moreover, our empirical analysis suggests a gender difference effect on the performance of the classifiers (with male data exhibiting a higher non-linearity), along with left-lateralized activation in both genders, with higher specificity in females.
BibTeX:
@Article{Keshmiri2017b,
  author   = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Hiroshi Ishiguro},
  title    = {A Non-parametric Approach to the Overall Estimate of Cognitive Load Using NIRS Time Series},
  journal  = {Frontiers in Human Neuroscience},
  year     = {2017},
  volume   = {11},
  number   = {15},
  pages    = {1-14},
  month    = Feb,
  abstract = {We present a non-parametric approach to prediction of the n-back (n ∈ {1, 2}) task as a proxy measure of mental workload using Near Infrared Spectroscopy (NIRS) data. In particular, we focus on measuring the mental workload through hemodynamic responses in the brain induced by these tasks, thereby realizing the potential that they offer for its detection in real world scenarios (e.g., the difficulty of a conversation). Our approach takes advantage of the intrinsic linearity that is inherent in the components of the NIRS time series to adopt a one-step regression strategy. We demonstrate the correctness of our approach through mathematical analysis. Furthermore, we study the performance of our model in an inter-subject setting in contrast with state-of-the-art techniques in the literature to show a significant improvement on prediction of these tasks (82.50 and 86.40% for female and male participants, respectively). Moreover, our empirical analysis suggests a gender difference effect on the performance of the classifiers (with male data exhibiting a higher non-linearity), along with left-lateralized activation in both genders, with higher specificity in females.},
  url      = {http://journal.frontiersin.org/article/10.3389/fnhum.2017.00015/full},
  doi      = {10.3389/fnhum.2017.00015},
  file     = {Keshmiri2017b.pdf:pdf/Keshmiri2017b.pdf:PDF},
}
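To make the one-step regression strategy above concrete: summarize each trial of the NIRS time series into a feature vector, fit a single linear regression against the n-back level, and round the prediction to a class. A minimal Python sketch on synthetic data; the feature extraction and array shapes are assumptions, and only the one-step idea follows the abstract:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 16))            # stand-in per-trial NIRS features
y = rng.integers(1, 3, size=80)          # n-back level: 1 or 2

reg = LinearRegression().fit(X, y)       # the single regression step
pred = np.clip(np.rint(reg.predict(X)), 1, 2).astype(int)
print("agreement with labels:", (pred == y).mean())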
Phoebe Liu, Dylan F. Glas, Takayuki Kanda, Hiroshi Ishiguro, Norihiro Hagita, "A Model for Generating Socially-Appropriate Deictic Behaviors Towards People", International Journal of Social Robotics, vol. 9, no. 1, pp. 33-49, January, 2017.
Abstract: Pointing behaviors are essential in enabling social robots to communicate about a particular object, person, or space. Yet, pointing to a person can be considered rude in many cultures, and as robots collaborate with humans in increasingly diverse environments, they will need to effectively refer to people in a socially-appropriate way. We confirmed in an empirical study that although people would point precisely to an object to indicate where it is, they were reluctant to do so when pointing to another person. We propose a model for selecting utterance and pointing behaviors towards people in terms of a balance between understandability and social appropriateness. Calibrating our proposed model based on empirical human behavior, we developed a system able to autonomously select among six deictic behaviors and execute them on a humanoid robot. We evaluated the system in an experiment in a shopping mall, and the results show that the robot's deictic behavior was perceived by both the listener and the referent as more polite, more natural, and better overall when using our model, as compared with a model considering understandability alone.
BibTeX:
@Article{Liu2017a,
  author   = {Phoebe Liu and Dylan F. Glas and Takayuki Kanda and Hiroshi Ishiguro and Norihiro Hagita},
  title    = {A Model for Generating Socially-Appropriate Deictic Behaviors Towards People},
  journal  = {International Journal of Social Robotics},
  year     = {2017},
  volume   = {9},
  number   = {1},
  pages    = {33-49},
  month    = Jan,
  abstract = {Pointing behaviors are essential in enabling social robots to communicate about a particular object, person, or space. Yet, pointing to a person can be considered rude in many cultures, and as robots collaborate with humans in increasingly diverse environments, they will need to effectively refer to people in a socially-appropriate way. We confirmed in an empirical study that although people would point precisely to an object to indicate where it is, they were reluctant to do so when pointing to another person. We propose a model for selecting utterance and pointing behaviors towards people in terms of a balance between understandability and social appropriateness. Calibrating our proposed model based on empirical human behavior, we developed a system able to autonomously select among six deictic behaviors and execute them on a humanoid robot. We evaluated the system in an experiment in a shopping mall, and the results show that the robot's deictic behavior was perceived by both the listener and the referent as more polite, more natural, and better overall when using our model, as compared with a model considering understandability alone.},
  url      = {http://link.springer.com/article/10.1007%2Fs12369-016-0348-9},
  doi      = {10.1007/s12369-016-0348-9},
  file     = {Liu2017a.pdf:pdf/Liu2017a.pdf:PDF},
}
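One plausible reading of the understandability/appropriateness balance described above is a simple utility maximization over candidate behaviors. The behavior names and scores below are invented for illustration; the paper calibrates its model from empirical human behavior:

# Candidate deictic behaviors: (understandability, social cost).
behaviors = {
    "precise_point":   (0.95, 0.80),
    "open_hand_point": (0.85, 0.40),
    "gaze_and_nod":    (0.60, 0.10),
    "verbal_only":     (0.40, 0.05),
}

def select_behavior(w_social):
    # Maximize understandability minus a weighted social penalty.
    return max(behaviors, key=lambda b: behaviors[b][0] - w_social * behaviors[b][1])

print(select_behavior(0.2))   # low penalty weight  -> "precise_point"
print(select_behavior(1.0))   # high penalty weight -> "gaze_and_nod"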
Jakub Zlotowski, Hidenobu Sumioka, Shuichi Nishio, Dylan F. Glas, Christoph Bartneck, Hiroshi Ishiguro, "Appearance of a Robot Affects the Impact of its Behaviour on Perceived Trustworthiness and Empathy", Paladyn, Journal of Behavioral Robotics, vol. 7, no. 1, pp. 55-66, December, 2016.
Abstract: An increasing number of companion robots have started reaching the public in recent years. These robots vary in their appearance and behavior. Since these two factors can have an impact on lasting human-robot relationships, it is important to understand their effect for companion robots. We conducted an experiment that evaluated the impact of a robot's appearance and its behaviour in repeated interactions on its perceived empathy, trustworthiness and the anxiety experienced by a human. The results indicate that a highly humanlike robot is perceived as less trustworthy and empathic than a more machinelike robot. Moreover, negative behaviour of a machinelike robot reduces its trustworthiness and perceived empathy more strongly than that of a highly humanlike robot. In addition, we found that a robot which disapproves of what a human says can induce anxiety about its communication capabilities. Our findings suggest that more machinelike robots can be more suitable as companions than highly humanlike robots. Moreover, a robot disagreeing with a human interaction partner should be able to provide feedback on its understanding of the partner's message in order to reduce her anxiety.
BibTeX:
@Article{Zlotowski2016a,
  author   = {Jakub Zlotowski and Hidenobu Sumioka and Shuichi Nishio and Dylan F. Glas and Christoph Bartneck and Hiroshi Ishiguro},
  title    = {Appearance of a Robot Affects the Impact of its Behaviour on Perceived Trustworthiness and Empathy},
  journal  = {Paladyn, Journal of Behavioral Robotics},
  year     = {2016},
  volume   = {7},
  number   = {1},
  pages    = {55-66},
  month    = Dec,
  abstract = {An increasing number of companion robots have started reaching the public in recent years. These robots vary in their appearance and behavior. Since these two factors can have an impact on lasting human-robot relationships, it is important to understand their effect for companion robots. We conducted an experiment that evaluated the impact of a robot's appearance and its behaviour in repeated interactions on its perceived empathy, trustworthiness and the anxiety experienced by a human. The results indicate that a highly humanlike robot is perceived as less trustworthy and empathic than a more machinelike robot. Moreover, negative behaviour of a machinelike robot reduces its trustworthiness and perceived empathy more strongly than that of a highly humanlike robot. In addition, we found that a robot which disapproves of what a human says can induce anxiety about its communication capabilities. Our findings suggest that more machinelike robots can be more suitable as companions than highly humanlike robots. Moreover, a robot disagreeing with a human interaction partner should be able to provide feedback on its understanding of the partner's message in order to reduce her anxiety.},
  url      = {https://www.degruyter.com/view/j/pjbr.2016.7.issue-1/pjbr-2016-0005/pjbr-2016-0005.xml},
  file     = {Zlotowski2016a.pdf:pdf/Zlotowski2016a.pdf:PDF},
}
Jani Even, Jonas Furrer, Yoichi Morales, Carlos T. Ishi, Norihiro Hagita, "Probabilistic 3D Mapping of Sound-Emitting Structures Based on Acoustic Ray Casting", IEEE Transactions on Robotics (T-RO), vol. 33, no. 2, pp. 333-345, December, 2016.
Abstract: This paper presents a two-step framework for creating a 3D sound map with a mobile robot. The first step creates a geometric map that describes the environment. The second step adds the acoustic information to the geometric map. The resulting sound map gives the probability of emitting sound for every structure in the environment. This paper focuses on the second step. The method uses acoustic ray casting to accumulate, in a probabilistic manner, the acoustic information gathered by a mobile robot equipped with a microphone array. First, the method transforms the acoustic power received from a set of directions into likelihoods of sound presence in those directions. Then, using an estimate of the robot's pose, the acoustic ray casting procedure transfers these likelihoods to the structures in the geometric map. Finally, the probability of each structure emitting sound is updated to take the new likelihoods into account. Experimental results show that the sound maps are accurate, as it was possible to localize sound sources in 3D with an average error of 0.1 m, and practical, as different types of environments were mapped.
BibTeX:
@Article{Even2016a,
  author   = {Jani Even and Jonas Furrer and Yoichi Morales and Carlos T. Ishi and Norihiro Hagita},
  title    = {Probabilistic 3D Mapping of Sound-Emitting Structures Based on Acoustic Ray Casting},
  journal  = {IEEE Transactions on Robotics (T-RO)},
  year     = {2016},
  volume   = {33},
  number   = {2},
  pages    = {333-345},
  month    = Dec,
  abstract = {This paper presents a two-step framework for creating a 3D sound map with a mobile robot. The first step creates a geometric map that describes the environment. The second step adds the acoustic information to the geometric map. The resulting sound map gives the probability of emitting sound for every structure in the environment. This paper focuses on the second step. The method uses acoustic ray casting to accumulate, in a probabilistic manner, the acoustic information gathered by a mobile robot equipped with a microphone array. First, the method transforms the acoustic power received from a set of directions into likelihoods of sound presence in those directions. Then, using an estimate of the robot's pose, the acoustic ray casting procedure transfers these likelihoods to the structures in the geometric map. Finally, the probability of each structure emitting sound is updated to take the new likelihoods into account. Experimental results show that the sound maps are accurate, as it was possible to localize sound sources in 3D with an average error of 0.1 m, and practical, as different types of environments were mapped.},
  url      = {http://ieeexplore.ieee.org/document/7790815/},
  doi      = {10.1109/TRO.2016.2630053},
  file     = {Even2016a.pdf:pdf/Even2016a.pdf:PDF},
}
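The probabilistic accumulation step described above can be illustrated with the standard log-odds update used in occupancy mapping: each scan's per-structure likelihood of sound presence nudges that structure's emission probability. A minimal Python sketch that assumes the acoustic ray casting has already produced per-scan likelihoods (the real framework derives them from microphone-array power readings):

import math

def logodds(p):
    return math.log(p / (1.0 - p))

def fuse(p_prior, scan_likelihoods, p0=0.5):
    # Standard log-odds evidence accumulation; p0 is the per-scan
    # neutral likelihood (0.5 = "no information"), whose log-odds is 0.
    l = logodds(p_prior) + sum(logodds(z) - logodds(p0) for z in scan_likelihoods)
    return 1.0 / (1.0 + math.exp(-l))

# Three scans whose rays hit the structure with growing confidence.
print(round(fuse(0.5, [0.6, 0.7, 0.8]), 3))   # -> 0.933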
Dylan F. Glas, Kanae Wada, Masahiro Shiomi, Takayuki Kanda, Hiroshi Ishiguro, Norihiro Hagita, "Personal Greetings: Personalizing Robot Utterances Based on Novelty of Observed Behavior", International Journal of Social Robotics, November, 2016.
Abstract: One challenge in creating conversational service robots is how to reproduce the kind of individual recognition and attention that a human can provide. We believe that interactions can be made to seem warmer and more humanlike by using sensors to observe a person's behavior or appearance over time, and programming the robot to comment when a novel feature, such as a new hairstyle, is observed. To create a system capable of recognizing such novelty, we collected one month of training data from customers in a shopping mall and recorded features of people's visits, such as time of day and group size. We then trained SVM classifiers to identify each feature as novel, typical, or neither, based on the inputs of a human coder, and we trained an additional classifier to choose an appropriate topic for a personalized greeting. An utterance generator was developed to generate text for the robot to speak, based on the selected topic and sensor data. A cross-validation analysis showed that the trained classifiers could reproduce human novelty judgments with 88% accuracy and topic selection with 93% accuracy. We then deployed a teleoperated robot using this system to greet customers in a shopping mall for three weeks, and we present an example interaction and results from interviews showing that customers appreciated the robot's personalized greetings and felt a sense of familiarity with the robot.
BibTeX:
@Article{Glas2016c,
  author   = {Dylan F. Glas and Kanae Wada and Masahiro Shiomi and Takayuki Kanda and Hiroshi Ishiguro and Norihiro Hagita},
  title    = {Personal Greetings: Personalizing Robot Utterances Based on Novelty of Observed Behavior},
  journal  = {International Journal of Social Robotics},
  year     = {2016},
  month    = Nov,
  abstract = {One challenge in creating conversational service robots is how to reproduce the kind of individual recognition and attention that a human can provide. We believe that interactions can be made to seem warmer and more humanlike by using sensors to observe a person's behavior or appearance over time, and programming the robot to comment when a novel feature, such as a new hairstyle, is observed. To create a system capable of recognizing such novelty, we collected one month of training data from customers in a shopping mall and recorded features of people's visits, such as time of day and group size. We then trained SVM classifiers to identify each feature as novel, typical, or neither, based on the inputs of a human coder, and we trained an additional classifier to choose an appropriate topic for a personalized greeting. An utterance generator was developed to generate text for the robot to speak, based on the selected topic and sensor data. A cross-validation analysis showed that the trained classifiers could reproduce human novelty judgments with 88% accuracy and topic selection with 93% accuracy. We then deployed a teleoperated robot using this system to greet customers in a shopping mall for three weeks, and we present an example interaction and results from interviews showing that customers appreciated the robot's personalized greetings and felt a sense of familiarity with the robot.},
  url      = {http://link.springer.com/article/10.1007/s12369-016-0385-4},
  doi      = {10.1007/s12369-016-0385-4},
  file     = {Glas2016c.pdf:pdf/Glas2016c.pdf:PDF},
}
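The classifier pipeline in the abstract above, reduced to its skeleton: one SVM decides whether a visit's features are novel, typical, or neither, and another picks a greeting topic. The features, encodings, and labels below are synthetic placeholders; the real system was trained on a month of human-coded shopping-mall data:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Toy visit features: [hour_of_day, group_size, days_since_last_visit].
X = np.column_stack([rng.integers(10, 21, 200),
                     rng.integers(1, 5, 200),
                     rng.integers(0, 30, 200)]).astype(float)
novelty = rng.integers(0, 3, 200)   # 0 = neither, 1 = typical, 2 = novel
topic   = rng.integers(0, 4, 200)   # greeting-topic id

novelty_clf = SVC(kernel="rbf").fit(X, novelty)
topic_clf   = SVC(kernel="rbf").fit(X, topic)

visit = np.array([[18.0, 3.0, 1.0]])     # evening, group of 3, came yesterday
if novelty_clf.predict(visit)[0] == 2:   # novel feature observed
    print("greet with topic", topic_clf.predict(visit)[0])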
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "Removal of proprioception by BCI raises a stronger body ownership illusion in control of a humanlike robot", Scientific Reports, vol. 6, no. 33514, September, 2016.
Abstract: Body ownership illusions provide evidence that our sense of self is not coherent and can be extended to non-body objects. Studying these illusions gives us practical tools to understand the brain mechanisms that underlie body recognition and the experience of self. We previously introduced an illusion of body ownership transfer (BOT) for operators of a very humanlike robot. This sensation of owning the robot's body was confirmed when operators controlled the robot either by performing the desired motion with their body (motion-control) or by employing a brain-computer interface (BCI) that translated motor imagery commands to robot movement (BCI-control). The interesting observation during BCI-control was that the illusion could be induced even with a noticeable delay in the BCI system. Temporal discrepancy has always shown critical weakening effects on body ownership illusions. However, the delay-robustness of BOT during BCI-control raised a question about the interaction between proprioceptive inputs and delayed visual feedback in agency-driven illusions. In this work, we compared the intensity of the BOT illusion for operators in two conditions: motion-control and BCI-control. Our results revealed a significantly stronger BOT illusion for BCI-control. This finding highlights BCI's potential to induce stronger agency-driven illusions by building a direct communication channel between the brain and the controlled body, thereby removing awareness from the subject's own body.
BibTeX:
@Article{Alimardani2016,
  author          = {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Removal of proprioception by BCI raises a stronger body ownership illusion in control of a humanlike robot},
  journal         = {Scientific Reports},
  year            = {2016},
  volume          = {6},
  number          = {33514},
  month           = Sep,
  abstract        = {Body ownership illusions provide evidence that our sense of self is not coherent and can be extended to non-body objects. Studying these illusions gives us practical tools to understand the brain mechanisms that underlie body recognition and the experience of self. We previously introduced an illusion of body ownership transfer (BOT) for operators of a very humanlike robot. This sensation of owning the robot's body was confirmed when operators controlled the robot either by performing the desired motion with their body (motion-control) or by employing a brain-computer interface (BCI) that translated motor imagery commands to robot movement (BCI-control). The interesting observation during BCI-control was that the illusion could be induced even with a noticeable delay in the BCI system. Temporal discrepancy has always shown critical weakening effects on body ownership illusions. However, the delay-robustness of BOT during BCI-control raised a question about the interaction between proprioceptive inputs and delayed visual feedback in agency-driven illusions. In this work, we compared the intensity of the BOT illusion for operators in two conditions: motion-control and BCI-control. Our results revealed a significantly stronger BOT illusion for BCI-control. This finding highlights BCI's potential to induce stronger agency-driven illusions by building a direct communication channel between the brain and the controlled body, thereby removing awareness from the subject's own body.},
  url             = {http://www.nature.com/articles/srep33514},
  doi             = {10.1038/srep33514},
  file            = {Alimardani2016.pdf:pdf/Alimardani2016.pdf:PDF},
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "The Importance of Visual Feedback Design in BCIs; from Embodiment to Motor Imagery Learning", PLOS ONE, pp. 1-17, September, 2016.
Abstract: Brain-computer interfaces (BCIs) have been developed and implemented in many areas as a new communication channel between the human brain and external devices. Despite their rapid growth and broad popularity, inaccurate performance and the cost of user training are still the main issues that prevent their application outside research and clinical environments. We previously introduced a BCI system for the control of a very humanlike android that could raise a sense of embodiment and agency in operators, only by their imagining a movement (motor imagery) and watching the robot perform it. Using the same setup, we further discovered that positively biased feedback on subjects' performance both increased their sensation of embodiment and improved their motor imagery skills in a short period. In this work, we studied the shared mechanism between the experience of embodiment and motor imagery. We compared the trend of motor imagery learning when two groups of subjects BCI-operated different-looking robots: a very humanlike android's hands and a pair of metallic grippers. Although our experiments did not show a significant difference in learning between the two groups within one session, the android group revealed better motor imagery skills in the follow-up session, when both groups repeated the task using the non-humanlike gripper. This result shows that motor imagery skills learnt during BCI-operation of humanlike hands are more robust to time and visual feedback changes. We discuss the role of embodiment and the mirror neuron system in this outcome and propose the application of androids for efficient BCI training.
BibTeX:
@Article{Alimardani2016a,
  author          = {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {The Importance of Visual Feedback Design in BCIs; from Embodiment to Motor Imagery Learning},
  journal         = {PLOS ONE},
  year            = {2016},
  pages           = {1-17},
  month           = Sep,
  abstract        = {Brain-computer interfaces (BCIs) have been developed and implemented in many areas as a new communication channel between the human brain and external devices. Despite their rapid growth and broad popularity, inaccurate performance and the cost of user training are still the main issues that prevent their application outside research and clinical environments. We previously introduced a BCI system for the control of a very humanlike android that could raise a sense of embodiment and agency in operators, only by their imagining a movement (motor imagery) and watching the robot perform it. Using the same setup, we further discovered that positively biased feedback on subjects' performance both increased their sensation of embodiment and improved their motor imagery skills in a short period. In this work, we studied the shared mechanism between the experience of embodiment and motor imagery. We compared the trend of motor imagery learning when two groups of subjects BCI-operated different-looking robots: a very humanlike android's hands and a pair of metallic grippers. Although our experiments did not show a significant difference in learning between the two groups within one session, the android group revealed better motor imagery skills in the follow-up session, when both groups repeated the task using the non-humanlike gripper. This result shows that motor imagery skills learnt during BCI-operation of humanlike hands are more robust to time and visual feedback changes. We discuss the role of embodiment and the mirror neuron system in this outcome and propose the application of androids for efficient BCI training.},
  day             = {6},
  url             = {http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0161945},
  doi             = {10.1371/journal.pone.0161945},
  file            = {Alimardani2016a.pdf:pdf/Alimardani2016a.pdf:PDF},
}
Phoebe Liu, Dylan F. Glas, Takayuki Kanda, Hiroshi Ishiguro, Norihiro Hagita, "Data-driven HRI: Learning social behaviors by example from human-human interaction", IEEE Transactions on Robotics, vol. 32, no. 4, pp. 988-1008, August, 2016.
Abstract: Recent studies in human-robot interaction (HRI) have investigated ways to harness the power of the crowd for the purpose of creating robot interaction logic through games and teleoperation interfaces. Sensor networks capable of observing human-human interactions in the real world provide a potentially valuable and scalable source of interaction data that can be used for designing robot behavior. To that end, we present here a fully-automated method for reproducing observed real-world social interactions with a robot. The proposed method includes techniques for characterizing the speech and locomotion observed in training interactions, using clustering to identify typical behavior elements and identifying spatial formations using established HRI proxemics models. Behavior logic is learned based on discretized actions captured from the sensor data stream, using a naive Bayesian classifier. Finally, we propose techniques for reproducing robot speech and locomotion behaviors in a robust way, despite the natural variation of human behaviors and the large amount of sensor noise present in speech recognition. We show our technique in use, training a robot to play the role of a shop clerk in a simple camera shop scenario, and we demonstrate through a comparison experiment that our techniques successfully enabled the generation of socially-appropriate speech and locomotion behavior. Notably, the performance of our technique in terms of correct behavior selection was found to be higher than the success rate of speech recognition, indicating its robustness to sensor noise.
BibTeX:
@Article{Liu2016d,
  author   = {Phoebe Liu and Dylan F. Glas and Takayuki Kanda and Hiroshi Ishiguro and Norihiro Hagita},
  title    = {Data-driven HRI: Learning social behaviors by example from human-human interaction},
  journal  = {IEEE Transactions on Robotics},
  year     = {2016},
  volume   = {32},
  number   = {4},
  pages    = {988-1008},
  month    = Aug,
  abstract = {Recent studies in human-robot interaction (HRI) have investigated ways to harness the power of the crowd for the purpose of creating robot interaction logic through games and teleoperation interfaces. Sensor networks capable of observing human-human interactions in the real world provide a potentially valuable and scalable source of interaction data that can be used for designing robot behavior. To that end, we present here a fully-automated method for reproducing observed real-world social interactions with a robot. The proposed method includes techniques for characterizing the speech and locomotion observed in training interactions, using clustering to identify typical behavior elements and identifying spatial formations using established HRI proxemics models. Behavior logic is learned based on discretized actions captured from the sensor data stream, using a naive Bayesian classifier. Finally, we propose techniques for reproducing robot speech and locomotion behaviors in a robust way, despite the natural variation of human behaviors and the large amount of sensor noise present in speech recognition. We show our technique in use, training a robot to play the role of a shop clerk in a simple camera shop scenario, and we demonstrate through a comparison experiment that our techniques successfully enabled the generation of socially-appropriate speech and locomotion behavior. Notably, the performance of our technique in terms of correct behavior selection was found to be higher than the success rate of speech recognition, indicating its robustness to sensor noise.},
  url      = {http://ieeexplore.ieee.org/document/7539621/},
  file     = {Liu2016d.pdf:pdf/Liu2016d.pdf:PDF},
}
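The behavior-learning step described above reduces to a discrete classification problem: map a discretized interaction state to the clerk action most often observed in the human-human data. A minimal naive Bayes sketch in Python; the state encoding and action set are invented stand-ins for the paper's clustered speech and motion elements:

import numpy as np
from sklearn.naive_bayes import MultinomialNB

# One-hot state: [speech=greeting, speech=question, speech=farewell,
#                 formation=shop_entrance, formation=service_counter].
X = np.array([
    [1, 0, 0, 1, 0],   # customer greets at the entrance
    [1, 0, 0, 1, 0],
    [0, 1, 0, 0, 1],   # customer asks a question at the counter
    [0, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],   # customer says goodbye at the entrance
])
y = np.array([0, 0, 1, 1, 2])   # 0 = greet back, 1 = explain product, 2 = see off

clf = MultinomialNB().fit(X, y)
state = np.array([[0, 1, 0, 0, 1]])             # question at the counter
print("robot action:", clf.predict(state)[0])   # -> 1 (explain product)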
Kaiko Kuwamura, Shuichi Nishio, Shinichi Sato, "Can We Talk through a Robot As if Face-to-Face? Long-Term Fieldwork Using Teleoperated Robot for Seniors with Alzheimer's Disease", Frontiers in Psychology, vol. 7, no. 1066, pp. 1-13, July, 2016.
Abstract: This work presents a case study on fieldwork in a group home for the elderly with dementia using a teleoperated robot called Telenoid. We compared Telenoid-mediated and face-to-face conditions with three residents with Alzheimer's disease (AD). The result indicates that two of the three residents with moderate AD showed a positive reaction to Telenoid. Both became less nervous while communicating with Telenoid from the time they were first introduced to it. Moreover, they started to use more body gestures in the face-to-face condition and more physical interactions in the Telenoid-mediated condition. In this work, we present all the results and discuss the possibilities of using Telenoid as a tool to provide opportunities for seniors to communicate over the long term.
BibTeX:
@Article{Kuwamura2016a,
  author          = {Kaiko Kuwamura and Shuichi Nishio and Shinichi Sato},
  title           = {Can We Talk through a Robot As if Face-to-Face? Long-Term Fieldwork Using Teleoperated Robot for Seniors with Alzheimer's Disease},
  journal         = {Frontiers in Psychology},
  year            = {2016},
  volume          = {7},
  number          = {1066},
  pages           = {1-13},
  month           = Jul,
  abstract        = {This work presents a case study on fieldwork in a group home for the elderly with dementia using a teleoperated robot called Telenoid. We compared Telenoid-mediated and face-to-face conditions with three residents with Alzheimer's disease (AD). The result indicates that two of the three residents with moderate AD showed a positive reaction to Telenoid. Both became less nervous while communicating with Telenoid from the time they were first introduced to it. Moreover, they started to use more body gestures in the face-to-face condition and more physical interactions in the Telenoid-mediated condition. In this work, we present all the results and discuss the possibilities of using Telenoid as a tool to provide opportunities for seniors to communicate over the long term.},
  day             = {19},
  url             = {http://journal.frontiersin.org/article/10.3389/fpsyg.2016.01066},
  doi             = {10.3389/fpsyg.2016.01066},
  file            = {Kuwamura2016a.pdf:pdf/Kuwamura2016a.pdf:PDF},
  keywords        = {Elderly care robot, Teleoperated robot, Alzheimer's disease, Elderly care facility, Gerontology},
}
Junya Nakanishi, Hidenobu Sumioka, Hiroshi Ishiguro, "Impact of Mediated Intimate Interaction on Education: A Huggable Communication Medium that Encourages Listening", Frontiers in Psychology, section Human-Media Interaction, vol. 7, no. 510, pp. 1-10, April, 2016.
Abstract: In this paper, we propose the introduction of human-like communication media as a proxy for teachers to support the listening of children in school education. Three case studies are presented on storytime fieldwork for children using our huggable communication medium called Hugvie, through which children are encouraged to concentrate on listening by intimate interaction between children and storytellers. We investigate the effect of Hugvie on children's listening and how they and their teachers react to it through observations and interviews. Our results suggest that Hugvie increased the number of children who concentrated on listening to a story and was welcomed by almost all the children and educators. We also discuss improvement and research issues to introduce huggable communication media into classrooms, potential applications, and their contributions to other education situations through improved listening.
BibTeX:
@Article{Nakanishi2016,
  author   = {Junya Nakanishi and Hidenobu Sumioka and Hiroshi Ishiguro},
  title    = {Impact of Mediated Intimate Interaction on Education: A Huggable Communication Medium that Encourages Listening},
  journal  = {Frontiers in Psychology, section Human-Media Interaction},
  year     = {2016},
  volume   = {7},
  number   = {510},
  pages    = {1-10},
  month    = Apr,
  abstract = {In this paper, we propose the introduction of human-like communication media as a proxy for teachers to support the listening of children in school education. Three case studies are presented on storytime fieldwork for children using our huggable communication medium called Hugvie, through which children are encouraged to concentrate on listening by intimate interaction between children and storytellers. We investigate the effect of Hugvie on children's listening and how they and their teachers react to it through observations and interviews. Our results suggest that Hugvie increased the number of children who concentrated on listening to a story and was welcomed by almost all the children and educators. We also discuss improvement and research issues to introduce huggable communication media into classrooms, potential applications, and their contributions to other education situations through improved listening.},
  day      = {19},
  url      = {http://journal.frontiersin.org/article/10.3389/fpsyg.2016.00510},
  doi      = {10.3389/fpsyg.2016.00510},
  file     = {Nakanishi2016.pdf:pdf/Nakanishi2016.pdf:PDF},
}
Ryuji Yamazaki, Louise Christensen, Kate Skov, Chi-Chih Chang, Malene F. Damholdt, Hidenobu Sumioka, Shuichi Nishio, Hiroshi Ishiguro, "Intimacy in Phone Conversations: Anxiety Reduction for Danish Seniors with Hugvie", Frontiers in Psychology, vol. 7, no. 537, April, 2016.
Abstract: There is a lack of physical contact in current telecommunications such as text messaging and Internet access. To challenge this limitation and re-embody telecommunication, researchers have attempted to introduce tactile stimulation to media and have developed huggable devices. Previous experiments in Japan showed that a huggable communication technology, i.e., Hugvie, decreased the stress level of its female users. In the present experiment in Denmark, we aim to investigate (i) whether Hugvie can decrease stress cross-culturally (Japanese vs. Danish participants), (ii) whether gender plays a role in this psychological effect (stress reduction), and (iii) whether there is a preference for this type of communication technology (Hugvie vs. a regular telephone). Twenty-nine healthy elderly people participated (15 female and 14 male, M = 64.52 years, SD = 5.67) in Jutland, Denmark. The participants filled out questionnaires including the State-Trait Anxiety Inventory, NEO Five-Factor Inventory (NEO-FFI), and Beck's Depression Inventory, had a 15-minute conversation via phone or Hugvie, and were interviewed afterward. They spoke with an unknown person of the opposite gender during the conversation; the same two conversation partners were used throughout the experiment, and the Phone and Hugvie groups were equally balanced. There was no baseline difference between the Hugvie and Phone groups on age or anxiety or depression scores. In the Hugvie group, there was a statistically significant reduction in state anxiety after meeting Hugvie (p = 0.013). The change in state anxiety for the Hugvie group was positively correlated with openness (r = 0.532, p = 0.041) as measured by the NEO-FFI. This indicates that openness to experience may increase the chances of an anxiety reduction from being with Hugvie. Based on these results, we see that personality may affect participants' engagement with and benefits from Hugvie. We discuss the implications of the results and further elaborations.
BibTeX:
@Article{Yamazaki2016,
  author   = {Ryuji Yamazaki and Louise Christensen and Kate Skov and Chi-Chih Chang and Malene F. Damholdt and Hidenobu Sumioka and Shuichi Nishio and Hiroshi Ishiguro},
  title    = {Intimacy in Phone Conversations: Anxiety Reduction for Danish Seniors with Hugvie},
  journal  = {Frontiers in Psychology},
  year     = {2016},
  volume   = {7},
  number   = {537},
  month    = Apr,
  abstract = {There is a lack of physical contact in current telecommunications such as text messaging and Internet access. To challenge this limitation and re-embody telecommunication, researchers have attempted to introduce tactile stimulation to media and have developed huggable devices. Previous experiments in Japan showed that a huggable communication technology, i.e., Hugvie, decreased the stress level of its female users. In the present experiment in Denmark, we aim to investigate (i) whether Hugvie can decrease stress cross-culturally (Japanese vs. Danish participants), (ii) whether gender plays a role in this psychological effect (stress reduction), and (iii) whether there is a preference for this type of communication technology (Hugvie vs. a regular telephone). Twenty-nine healthy elderly people participated (15 female and 14 male, M = 64.52 years, SD = 5.67) in Jutland, Denmark. The participants filled out questionnaires including the State-Trait Anxiety Inventory, NEO Five-Factor Inventory (NEO-FFI), and Beck's Depression Inventory, had a 15-minute conversation via phone or Hugvie, and were interviewed afterward. They spoke with an unknown person of the opposite gender during the conversation; the same two conversation partners were used throughout the experiment, and the Phone and Hugvie groups were equally balanced. There was no baseline difference between the Hugvie and Phone groups on age or anxiety or depression scores. In the Hugvie group, there was a statistically significant reduction in state anxiety after meeting Hugvie (p = 0.013). The change in state anxiety for the Hugvie group was positively correlated with openness (r = 0.532, p = 0.041) as measured by the NEO-FFI. This indicates that openness to experience may increase the chances of an anxiety reduction from being with Hugvie. Based on these results, we see that personality may affect participants' engagement with and benefits from Hugvie. We discuss the implications of the results and further elaborations.},
  url      = {http://journal.frontiersin.org/researchtopic/investigating-human-nature-and-communication-through-robots-3705},
  doi      = {10.3389/fpsyg.2016.00537},
  file     = {Yamazaki2016.pdf:pdf/Yamazaki2016.pdf:PDF},
}
Kaiko Kuwamura, Takashi Minato, Shuichi Nishio, Hiroshi Ishiguro, "Inconsistency of Personality Evaluation Caused by Appearance Gap in Robotic Telecommunication", Interaction Studies, vol. 16, no. 2, pp. 249-271, November, 2015.
Abstract: In this paper, we discuss the problem of the appearance of teleoperated robots that are used as telecommunication media. Teleoperated robots have a physical existence that increases the feeling of copresence, compared with recent communication media such as cellphones and video chat. However, their appearance is fixed, for example, a stuffed bear or an image displayed on a monitor. Since people can infer their partner's personality merely from their appearance, a teleoperated robot whose appearance differs from the operator's might project a personality that conflicts with the operator's original personality. We compared the appearances of three communication media (a nonhuman-like appearance robot, a human-like appearance robot, and video chat) and found that, with respect to this appearance gap, the human-like appearance robot prevented confusion better than the nonhuman-like appearance robot or video chat and also conveyed an atmosphere appropriate to the operator.
BibTeX:
@Article{Kuwamura2013a,
  author          = {Kaiko Kuwamura and Takashi Minato and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Inconsistency of Personality Evaluation Caused by Appearance Gap in Robotic Telecommunication},
  journal         = {Interaction Studies},
  year            = {2015},
  volume          = {16},
  number          = {2},
  pages           = {249-271},
  month           = NOV,
  abstract        = {In this paper, we discuss the problem of the appearance of teleoperated robots that are used as telecommunication media. Teleoperated robots have a physical existence that increases the feeling of copresence, compared with recent communication media such as cellphones and video chat. However, their appearance is fixed, for example, a stuffed bear or an image displayed on a monitor. Since people can infer their partner's personality merely from their appearance, a teleoperated robot whose appearance differs from the operator's might project a personality that conflicts with the operator's original personality. We compared the appearances of three communication media (a nonhuman-like appearance robot, a human-like appearance robot, and video chat) and found that, with respect to this appearance gap, the human-like appearance robot prevented confusion better than the nonhuman-like appearance robot or video chat and also conveyed an atmosphere appropriate to the operator.},
  file            = {Kuwamura2013a.pdf:pdf/Kuwamura2013a.pdf:PDF},
  keywords        = {teleoperated android; telecommunication; robot; appearance; personality},
}
Malene F. Damholdt, Marco Nørskov, Ryuji Yamazaki, Raul Hakli, Catharina V. Hansen, Christina Vestergaard, Johanna Seibt, "Attitudinal Change in Elderly Citizens Toward Social Robots: The Role of Personality Traits and Beliefs About Robot Functionality", Frontiers in Psychology, vol. 6, no. 1701, November, 2015.
Abstract: Attitudes toward robots influence the tendency to accept or reject robotic devices. Thus it is important to investigate whether and how attitudes toward robots can change. In this pilot study we investigate attitudinal changes in elderly citizens toward a tele-operated robot in relation to three parameters: (i) the information provided about robot functionality, (ii) the number of encounters, (iii) personality type. Fourteen elderly residents at a rehabilitation center participated. Pre-encounter attitudes toward robots, anthropomorphic thinking, and personality were assessed. Thereafter the participants interacted with a tele-operated robot (Telenoid) during their lunch (c. 30 min.) for up to 3 days. Half of the participants were informed that the robot was tele-operated (IC) whilst the other half were naïve to its functioning (UC). Post-encounter assessments of attitudes toward robots and anthropomorphic thinking were undertaken to assess change. Attitudes toward robots were assessed with a new generic 35-item questionnaire (attitudes toward social robots scale: ASOR-5), offering a differentiated conceptualization of the conditions for social interaction. There was no significant difference between the IC and UC groups in attitude change toward robots, though trends were observed. Personality was correlated with some tendencies for attitude changes; Extraversion correlated with positive attitude changes to intimate-personal relatedness with the robot (r = 0.619) and to psychological relatedness (r = 0.581), whilst Neuroticism correlated negatively (r = -0.582) with mental relatedness with the robot. The results tentatively suggest that neither information about functionality nor direct repeated encounters are pivotal in changing attitudes toward robots in elderly citizens. This may reflect a cognitive congruence bias where the robot is experienced in congruence with initial attitudes, or it may support action-based explanations of cognitive dissonance reduction, given that robots, unlike computers, are not yet perceived as action targets. Specific personality traits may be indicators of attitude change relating to specific domains of social interaction. Implications and future directions are discussed.
BibTeX:
@Article{Damholdt2015,
  author   = {Malene F. Damholdt and Marco Nørskov and Ryuji Yamazaki and Raul Hakli and Catharina V. Hansen and Christina Vestergaard and Johanna Seibt},
  title    = {Attitudinal Change in Elderly Citizens Toward Social Robots: The Role of Personality Traits and Beliefs About Robot Functionality},
  journal  = {Frontiers in Psychology},
  year     = {2015},
  volume   = {6},
  number   = {1701},
  month    = Nov,
  abstract = {Attitudes toward robots influence the tendency to accept or reject robotic devices. Thus it is important to investigate whether and how attitudes toward robots can change. In this pilot study we investigate attitudinal changes in elderly citizens toward a tele-operated robot in relation to three parameters: (i) the information provided about robot functionality, (ii) the number of encounters, (iii) personality type. Fourteen elderly residents at a rehabilitation center participated. Pre-encounter attitudes toward robots, anthropomorphic thinking, and personality were assessed. Thereafter the participants interacted with a tele-operated robot (Telenoid) during their lunch (c. 30 min.) for up to 3 days. Half of the participants were informed that the robot was tele-operated (IC) whilst the other half were naïve to its functioning (UC). Post-encounter assessments of attitudes toward robots and anthropomorphic thinking were undertaken to assess change. Attitudes toward robots were assessed with a new generic 35-item questionnaire (attitudes toward social robots scale: ASOR-5), offering a differentiated conceptualization of the conditions for social interaction. There was no significant difference between the IC and UC groups in attitude change toward robots, though trends were observed. Personality was correlated with some tendencies for attitude changes; Extraversion correlated with positive attitude changes to intimate-personal relatedness with the robot (r = 0.619) and to psychological relatedness (r = 0.581), whilst Neuroticism correlated negatively (r = -0.582) with mental relatedness with the robot. The results tentatively suggest that neither information about functionality nor direct repeated encounters are pivotal in changing attitudes toward robots in elderly citizens. This may reflect a cognitive congruence bias where the robot is experienced in congruence with initial attitudes, or it may support action-based explanations of cognitive dissonance reduction, given that robots, unlike computers, are not yet perceived as action targets. Specific personality traits may be indicators of attitude change relating to specific domains of social interaction. Implications and future directions are discussed.},
  url      = {http://journal.frontiersin.org/researchtopic/investigating-human-nature-and-communication-through-robots-3705},
  doi      = {10.3389/fpsyg.2015.01701},
  file     = {Damholdt2015.pdf:pdf/Damholdt2015.pdf:PDF},
}
Jakub Zlotowski, Hidenobu Sumioka, Shuichi Nishio, Dylan Glas, Christoph Bartneck, Hiroshi Ishiguro, "Persistence of the Uncanny Valley: the Influence of Repeated Interactions and a Robot's Attitude on Its Perception", Frontiers in Psychology, June, 2015.
Abstract: The uncanny valley theory proposed by Mori has been heavily investigated in recent years by researchers from various fields. However, the videos and images used in these studies did not permit any human interaction with the uncanny objects. Therefore, in the field of human-robot interaction it is still unclear whether, and how, an uncanny-looking robot will have an impact on an interaction. In this paper we describe an exploratory empirical study that involved repeated interactions with robots that differed in embodiment and in their attitude towards a human. We found that both investigated components of uncanniness (likeability and eeriness) can be affected by an interaction with a robot. Likeability of a robot was mainly affected by its attitude, and this effect was especially prominent for a machine-like robot. On the other hand, repeated interaction alone was sufficient to reduce eeriness irrespective of a robot's embodiment. As a result, we urge other researchers to investigate Mori's theory in studies that involve actual human-robot interaction in order to fully understand the changing nature of this phenomenon.
BibTeX:
@Article{Zlotowski,
  author   = {Jakub Zlotowski and Hidenobu Sumioka and Shuichi Nishio and Dylan Glas and Christoph Bartneck and Hiroshi Ishiguro},
  title    = {Persistence of the Uncanny Valley: the Influence of Repeated Interactions and a Robot's Attitude on Its Perception},
  journal  = {Frontiers in Psychology},
  year     = {2015},
  month    = JUN,
  abstract = {The uncanny valley theory proposed by Mori has been heavily investigated in recent years by researchers from various fields. However, the videos and images used in these studies did not permit any human interaction with the uncanny objects. Therefore, in the field of human-robot interaction it is still unclear whether, and how, an uncanny-looking robot will have an impact on an interaction. In this paper we describe an exploratory empirical study that involved repeated interactions with robots that differed in embodiment and in their attitude towards a human. We found that both investigated components of uncanniness (likeability and eeriness) can be affected by an interaction with a robot. Likeability of a robot was mainly affected by its attitude, and this effect was especially prominent for a machine-like robot. On the other hand, repeated interaction alone was sufficient to reduce eeriness irrespective of a robot's embodiment. As a result, we urge other researchers to investigate Mori's theory in studies that involve actual human-robot interaction in order to fully understand the changing nature of this phenomenon.},
  url      = {http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00883/abstract},
  doi      = {10.3389/fpsyg.2015.00883},
  file     = {Jakub2014a.pdf:pdf/Jakub2014a.pdf:PDF},
}
Martin Cooney, Shuichi Nishio, Hiroshi Ishiguro, "Importance of Touch for Conveying Affection in a Multimodal Interaction with a Small Humanoid Robot", International Journal of Humanoid Robotics, vol. 12, no. 1, pp. 1550002 (22 pages), 2015.
Abstract: To be accepted as a part of our everyday lives, companion robots will require the capability to recognize people's behavior and respond appropriately. In the current work, we investigated which characteristics of behavior could be used by a small humanoid robot to recognize when a human is seeking to convey affection. A main challenge in doing so was that human social norms are complex, comprising behavior which exhibits high spatiotemporal variance, consists of multiple channels and can express different meanings. To deal with this difficulty, we adopted a combined approach in which we analyzed free interactions and also asked participants to rate short video-clips depicting human-robot interaction. As a result, we are able to present a wide range of findings related to the current topic, including the fundamental role (prevalence, affectionate impact, and motivations) of actions, channels, and modalities; the effects of posture and a robot's behavior; expected reactions; and the contributions of modalities in complementary and conflicting configurations. This article extends the existing literature by identifying some useful multimodal affectionate cues which can be leveraged by a robot during interactions; we aim to use the acquired knowledge in a small humanoid robot to provide affection during play toward improving quality of life for lonely persons.
BibTeX:
@Article{Cooney2013b,
  author          = {Martin Cooney and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Importance of Touch for Conveying Affection in a Multimodal Interaction with a Small Humanoid Robot},
  journal         = {International Journal of Humanoid Robotics},
  year            = {2015},
  volume          = {12},
  number          = {1},
  pages           = {1550002 (22 pages)},
  abstract        = {To be accepted as a part of our everyday lives, companion robots will require the capability to recognize people's behavior and respond appropriately. In the current work, we investigated which characteristics of behavior could be used by a small humanoid robot to recognize when a human is seeking to convey affection. A main challenge in doing so was that human social norms are complex, comprising behavior which exhibits high spatiotemporal variance, consists of multiple channels and can express different meanings. To deal with this difficulty, we adopted a combined approach in which we analyzed free interactions and also asked participants to rate short video-clips depicting human-robot interaction. As a result, we are able to present a wide range of findings related to the current topic, including the fundamental role (prevalence, affectionate impact, and motivations) of actions, channels, and modalities; the effects of posture and a robot's behavior; expected reactions; and the contributions of modalities in complementary and conflicting configurations. This article extends the existing literature by identifying some useful multimodal affectionate cues which can be leveraged by a robot during interactions; we aim to use the acquired knowledge in a small humanoid robot to provide affection during play toward improving quality of life for lonely persons.},
  doi             = {10.1142/S0219843615500024},
  file            = {Cooney2014a.pdf:pdf/Cooney2014a.pdf:PDF},
  keywords        = {Affection; multi-modal; play; small humanoid robot; human-robot interaction},
}
Martin Cooney, Shuichi Nishio, Hiroshi Ishiguro, "Affectionate Interaction with a Small Humanoid Robot Capable of Recognizing Social Touch Behavior", ACM Transactions on Interactive Intelligent Systems, vol. 4, no. 4, pp. 32, December, 2014.
Abstract: Activity recognition, involving a capability to automatically recognize people's behavior and its underlying significance, will play a crucial role in facilitating the integration of interactive robotic artifacts into everyday human environments. In particular, social intelligence in recognizing affectionate behavior will offer value by allowing companion robots to bond meaningfully with persons involved. The current article addresses the issue of designing an affectionate haptic interaction between a person and a companion robot by a) furthering understanding of how people's attempts to communicate affection to a robot through touch can be recognized, and b) exploring how a small humanoid robot can behave in conjunction with such touches to elicit affection. We report on an experiment conducted to gain insight into how people perceive three fundamental interactive strategies in which a robot is either always highly affectionate, appropriately affectionate, or superficially unaffectionate (emphasizing positivity, contingency, and challenge respectively). Results provide insight into the structure of affectionate interaction between humans and humanoid robots—underlining the importance of an interaction design expressing sincerity, liking, stability and variation—and suggest the usefulness of novel modalities such as warmth and cold.
BibTeX:
@Article{Cooney2014c,
  author          = {Martin Cooney and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Affectionate Interaction with a Small Humanoid Robot Capable of Recognizing Social Touch Behavior},
  journal         = {{ACM} Transactions on Interactive Intelligent Systems},
  year            = {2014},
  volume          = {4},
  number          = {4},
  pages           = {32},
  month           = Dec,
  abstract        = {Activity recognition, involving a capability to automatically recognize people's behavior and its underlying significance, will play a crucial role in facilitating the integration of interactive robotic artifacts into everyday human environments. In particular, social intelligence in recognizing affectionate behavior will offer value by allowing companion robots to bond meaningfully with persons involved. The current article addresses the issue of designing an affectionate haptic interaction between a person and a companion robot by a) furthering understanding of how people's attempts to communicate affection to a robot through touch can be recognized, and b) exploring how a small humanoid robot can behave in conjunction with such touches to elicit affection. We report on an experiment conducted to gain insight into how people perceive three fundamental interactive strategies in which a robot is either always highly affectionate, appropriately affectionate, or superficially unaffectionate (emphasizing positivity, contingency, and challenge respectively). Results provide insight into the structure of affectionate interaction between humans and humanoid robots—underlining the importance of an interaction design expressing sincerity, liking, stability and variation—and suggest the usefulness of novel modalities such as warmth and cold.},
  url             = {http://dl.acm.org/citation.cfm?doid=2688469.2685395},
  doi             = {10.1145/2685395},
  file            = {Cooney2014b.pdf:pdf/Cooney2014b.pdf:PDF},
  keywords        = {human-robot interaction; activity recognition; small humanoid companion robot; affectionate touch behavior; intelligent systems},
}
Rosario Sorbello, Antonio Chella, Carmelo Cali, Marcello Giardina, Shuichi Nishio, Hiroshi Ishiguro, "Telenoid Android Robot as an Embodied Perceptual Social Regulation Medium Engaging Natural Human-Humanoid Interaction", Robotics and Autonomous Systems Journal, vol. 62, no. 9, pp. 1329-1341, September, 2014.
Abstract: The present paper aims to validate our research on Human-Humanoid Interaction (HHI) using the minimalist humanoid robot Telenoid. We conducted a human-robot interaction test with 142 young people who had no prior interaction experience with this robot. The main goal is the analysis of two social dimensions ("Perception" and "Believability") useful for increasing the natural behaviour between users and Telenoid. We administered our custom questionnaire to human subjects in association with a well-defined experimental setting ("ordinary and goal-guided task"). A thorough analysis of the questionnaires was carried out, and reliability and internal consistency in the correlations between the multiple items were calculated. Our experimental results show that perceptual behavior and believability, as implicit social competences, could improve the meaningfulness and natural feel of human-humanoid interaction in everyday-life task-driven activities. Telenoid is perceived by human beings as an autonomous cooperative agent for a shared environment.
BibTeX:
@Article{Sorbello2013a,
  author   = {Rosario Sorbello and Antonio Chella and Carmelo Cali and Marcello Giardina and Shuichi Nishio and Hiroshi Ishiguro},
  title    = {Telenoid Android Robot as an Embodied Perceptual Social Regulation Medium Engaging Natural Human-Humanoid Interaction},
  journal  = {Robotics and Autonomous Systems Journal},
  year     = {2014},
  volume   = {62},
  number   = {9},
  pages    = {1329-1341},
  month    = Sep,
  abstract = {The present paper aims to validate our research on Human-Humanoid Interaction (HHI) using the minimalist humanoid robot Telenoid. We conducted a human-robot interaction test with 142 young people who had no prior interaction experience with this robot. The main goal is the analysis of two social dimensions ("Perception" and "Believability") useful for increasing the natural behaviour between users and Telenoid. We administered our custom questionnaire to human subjects in association with a well-defined experimental setting ("ordinary and goal-guided task"). A thorough analysis of the questionnaires was carried out, and reliability and internal consistency in the correlations between the multiple items were calculated. Our experimental results show that perceptual behavior and believability, as implicit social competences, could improve the meaningfulness and natural feel of human-humanoid interaction in everyday-life task-driven activities. Telenoid is perceived by human beings as an autonomous cooperative agent for a shared environment.},
  url      = {http://www.sciencedirect.com/science/article/pii/S092188901400061X},
  doi      = {10.1016/j.robot.2014.03.017},
  file     = {Sorbello2013a.pdf:pdf/Sorbello2013a.pdf:PDF},
  keywords = {Telenoid; Geminoid; Social Robot; Human-Humanoid Robot Interaction},
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "Effect of biased feedback on motor imagery learning in BCI-teleoperation system", Frontiers in Systems Neuroscience, vol. 8, no. 52, April, 2014.
Abstract: Feedback design is an important issue in motor imagery BCI systems. Nevertheless, to date it has not been reported how feedback presentation can optimize co-adaptation between a human brain and such systems. This paper assesses the effect of realistic visual feedback on users' BCI performance and motor imagery skills. We previously developed a tele-operation system for a pair of humanlike robotic hands and showed that BCI control of such hands along with first-person perspective visual feedback of movements can arouse a sense of embodiment in the operators. In the first stage of this study, we found that the intensity of this ownership illusion was associated with feedback presentation and subjects' performance during BCI motion control. In the second stage, we probed the effect of positive and negative feedback bias on subjects' BCI performance and motor imagery skills. Although the subject-specific classifier, which was set up at the beginning of the experiment, detected no significant change in the subjects' online performance, evaluation of brain activity patterns revealed that subjects' self-regulation of motor imagery features improved due to a positive bias of feedback and a possible occurrence of ownership illusion. Our findings suggest that in general training protocols for BCIs, manipulation of feedback can play an important role in the optimization of subjects' motor imagery skills.
BibTeX:
@Article{Alimardani2014a,
  author          = {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Effect of biased feedback on motor imagery learning in BCI-teleoperation system},
  journal         = {Frontiers in Systems Neuroscience},
  year            = {2014},
  volume          = {8},
  number          = {52},
  month           = Apr,
  abstract        = {Feedback design is an important issue in motor imagery BCI systems. Nevertheless, to date it has not been reported how feedback presentation can optimize co-adaptation between a human brain and such systems. This paper assesses the effect of realistic visual feedback on users' BCI performance and motor imagery skills. We previously developed a tele-operation system for a pair of humanlike robotic hands and showed that BCI control of such hands along with first-person perspective visual feedback of movements can arouse a sense of embodiment in the operators. In the first stage of this study, we found that the intensity of this ownership illusion was associated with feedback presentation and subjects' performance during BCI motion control. In the second stage, we probed the effect of positive and negative feedback bias on subjects' BCI performance and motor imagery skills. Although the subject-specific classifier, which was set up at the beginning of the experiment, detected no significant change in the subjects' online performance, evaluation of brain activity patterns revealed that subjects' self-regulation of motor imagery features improved due to a positive bias of feedback and a possible occurrence of ownership illusion. Our findings suggest that in general training protocols for BCIs, manipulation of feedback can play an important role in the optimization of subjects' motor imagery skills.},
  url             = {http://journal.frontiersin.org/Journal/10.3389/fnsys.2014.00052/full},
  doi             = {10.3389/fnsys.2014.00052},
  file            = {Alimardani2014a.pdf:pdf/Alimardani2014a.pdf:PDF},
  keywords        = {body ownership illusion; BCI-teleoperation; motor imagery learning; feedback effect; training},
}
Kaiko Kuwamura, Kurima Sakai, Takashi Minato, Shuichi Nishio, Hiroshi Ishiguro, "Hugvie: communication device for encouraging good relationship through the act of hugging", Lovotics, vol. 1, no. 1, pp. 10000104, February, 2014.
Abstract: In this paper, we introduce a communication device which encourages users to establish a good relationship with others. We designed the device so that it allows users to virtually hug a person at a remote site through the medium. We report that when a participant talks to his communication partner during their first encounter while hugging the communication medium, he mistakenly feels as if they are establishing a good relationship and that he is being loved rather than just being liked. From this result, we discuss Active Co-Presence, a new method to enhance the co-presence of remote people through active behavior.
BibTeX:
@Article{Kuwamura2014a,
  author          = {Kaiko Kuwamura and Kurima Sakai and Takashi Minato and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Hugvie: communication device for encouraging good relationship through the act of hugging},
  journal         = {Lovotics},
  year            = {2014},
  volume          = {1},
  number          = {1},
  pages           = {10000104},
  month           = Feb,
  abstract        = {In this paper, we introduce a communication device which encourages users to establish a good relationship with others. We designed the device so that it allows users to virtually hug a person at a remote site through the medium. We report that when a participant talks to his communication partner during their first encounter while hugging the communication medium, he mistakenly feels as if they are establishing a good relationship and that he is being loved rather than just being liked. From this result, we discuss Active Co-Presence, a new method to enhance the co-presence of remote people through active behavior.},
  url             = {http://www.omicsonline.com/open-access/hugvie_communication_device_for_encouraging_good_relationship_through_the_act_of_hugging.pdf?aid=24445},
  doi             = {10.4172/2090-9888.10000104},
  file            = {Kuwamura2014a.pdf:pdf/Kuwamura2014a.pdf:PDF},
  keywords        = {hug; co-presence; telecommunication},
}
Astrid M. von der Pütten, Nicole C. Krämer, Christian Becker-Asano, Kohei Ogawa, Shuichi Nishio, Hiroshi Ishiguro, "The Uncanny in the Wild. Analysis of Unscripted Human-Android Interaction in the Field.", International Journal of Social Robotics, vol. 6, no. 1, pp. 67-83, January, 2014.
Abstract: Against the background of the uncanny valley hypothesis, we investigated in a quasi-experimental observational field study how people react towards an android robot in a natural environment, depending on the behavior displayed by the robot (still vs. moving). We present data on unscripted interactions between humans and the android robot "Geminoid HI-1" in an Austrian public café and subsequent interviews. Data were analyzed with regard to the participants' nonverbal behavior (e.g. attention paid to the robot, proximity). We found that participants' behavior towards the android robot as well as their interview answers were influenced by the behavior the robot displayed. In addition, we found huge inter-individual differences in the participants' behavior. Implications for the uncanny valley and research on social human–robot interactions are discussed.
BibTeX:
@Article{Putten2011b,
  author          = {Astrid M. von der P\"{u}tten and Nicole C. Kr\"{a}mer and Christian Becker-Asano and Kohei Ogawa and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {The Uncanny in the Wild. Analysis of Unscripted Human-Android Interaction in the Field.},
  journal         = {International Journal of Social Robotics},
  year            = {2014},
  volume          = {6},
  number          = {1},
  pages           = {67-83},
  month           = Jan,
  abstract        = {Against the background of the uncanny valley hypothesis, we investigated in a quasi-experimental observational field study how people react towards an android robot in a natural environment, depending on the behavior displayed by the robot (still vs. moving). We present data on unscripted interactions between humans and the android robot "Geminoid HI-1" in an Austrian public café and subsequent interviews. Data were analyzed with regard to the participants' nonverbal behavior (e.g. attention paid to the robot, proximity). We found that participants' behavior towards the android robot as well as their interview answers were influenced by the behavior the robot displayed. In addition, we found huge inter-individual differences in the participants' behavior. Implications for the uncanny valley and research on social human–robot interactions are discussed.},
  url             = {http://link.springer.com/article/10.1007/s12369-013-0198-7},
  doi             = {10.1007/s12369-013-0198-7},
  file            = {Putten2011b.pdf:pdf/Putten2011b.pdf:PDF},
  keywords        = {human-robot interaction; field study; observation; multimodal evaluation of human interaction with robots; Uncanny Valley},
}
Ryuji Yamazaki, Shuichi Nishio, Hiroshi Ishiguro, Marco Nørskov, Nobu Ishiguro, Giuseppe Balistreri, "Acceptability of a Teleoperated Android by Senior Citizens in Danish Society: A Case Study on the Application of an Embodied Communication Medium to Home Care", International Journal of Social Robotics, vol. 6, no. 3, pp. 429-442, 2014.
Abstract: We explore the potential of teleoperated androids, which are embodied telecommunication media with humanlike appearances. By conducting field experiments, we investigated how Telenoid, a teleoperated android designed as a minimalistic human, affects people in the real world when it is employed to express telepresence and a sense of 'being there'. Our exploratory study focused on the social aspects of the android robot, which might facilitate communication between the elderly and Telenoid's operator. This new way of creating social relationships can be used to solve a problem in society: the social isolation of senior citizens. This isolation has become a major issue even in Denmark, which is known as one of the countries with the most advanced welfare systems. After asking elderly people to use Telenoid in their homes, we found that the elderly, with or without dementia, showed positive attitudes toward Telenoid and imaginatively developed various dialogue strategies. Their positivity and strong attachment to its minimalistic human design were cross-culturally shared in Denmark and Japan. Contrary to the negative reactions of non-users in media reports, our result suggests that teleoperated androids can be accepted by the elderly as a kind of universal design medium for social inclusion.
BibTeX:
@Article{Yamazaki2013a,
  author          = {Ryuji Yamazaki and Shuichi Nishio and Hiroshi Ishiguro and Marco N{\o}rskov and Nobu Ishiguro and Giuseppe Balistreri},
  title           = {Acceptability of a Teleoperated Android by Senior Citizens in Danish Society: A Case Study on the Application of an Embodied Communication Medium to Home Care},
  journal         = {International Journal of Social Robotics},
  year            = {2014},
  volume          = {6},
  number          = {3},
  pages           = {429-442},
  abstract        = {We explore the potential of teleoperated androids, which are embodied telecommunication media with humanlike appearances. By conducting field experiments, we investigated how Telenoid, a teleoperated android designed as a minimalistic human, affects people in the real world when it is employed to express telepresence and a sense of 'being there'. Our exploratory study focused on the social aspects of the android robot, which might facilitate communication between the elderly and Telenoid's operator. This new way of creating social relationships can be used to solve a problem in society: the social isolation of senior citizens. This isolation has become a major issue even in Denmark, which is known as one of the countries with the most advanced welfare systems. After asking elderly people to use Telenoid in their homes, we found that the elderly, with or without dementia, showed positive attitudes toward Telenoid and imaginatively developed various dialogue strategies. Their positivity and strong attachment to its minimalistic human design were cross-culturally shared in Denmark and Japan. Contrary to the negative reactions of non-users in media reports, our result suggests that teleoperated androids can be accepted by the elderly as a kind of universal design medium for social inclusion.},
  doi             = {10.1007/s12369-014-0247-x},
  file            = {Yamazaki2013a.pdf:pdf/Yamazaki2013a.pdf:PDF},
  keywords        = {teleoperated android; minimal design; embodied communication; social isolation; elderly care; social acceptance},
}
Hidenobu Sumioka, Shuichi Nishio, Takashi Minato, Ryuji Yamazaki, Hiroshi Ishiguro, "Minimal human design approach for sonzai-kan media: investigation of a feeling of human presence", Cognitive Computation, vol. 6, no. 4, pp. 760-774, 2014.
Abstract: Even though human-like robotic media give the feeling of being with others and positively affect our physical and mental health, scant research has addressed how much information about a person should be reproduced to enhance the feeling of a human presence. We call this feeling sonzai-kan, which is a Japanese phrase that means the feeling of a presence. We propose a minimal design approach for exploring the requirements to enhance this feeling and hypothesize that it is enhanced if information is presented from at least two different modalities. In this approach, the exploration is conducted by designing sonzai-kan media through exploratory research with the media, their evaluations, and the development of their systems. In this paper, we give an overview of our current work with Telenoid, a teleoperated android designed with our approach, to illustrate how we explore the requirements and how such media impact our quality of life. We discuss the potential advantages of our approach for forging positive social relationships and designing an autonomous agent with minimal cognitive architecture.
BibTeX:
@Article{Sumioka2013e,
  author          = {Hidenobu Sumioka and Shuichi Nishio and Takashi Minato and Ryuji Yamazaki and Hiroshi Ishiguro},
  title           = {Minimal human design approach for sonzai-kan media: investigation of a feeling of human presence},
  journal         = {Cognitive Computation},
  year            = {2014},
  volume          = {6},
  number          = {4},
  pages           = {760-774},
  abstract        = {Even though human-like robotic media give the feeling of being with others and positively affect our physical and mental health, scant research has addressed how much information about a person should be reproduced to enhance the feeling of a human presence. We call this feeling sonzai-kan, which is a Japanese phrase that means the feeling of a presence. We propose a minimal design approach for exploring the requirements to enhance this feeling and hypothesize that it is enhanced if information is presented from at least two different modalities. In this approach, the exploration is conducted by designing sonzai-kan media through exploratory research with the media, their evaluations, and the development of their systems. In this paper, we give an overview of our current work with Telenoid, a teleoperated android designed with our approach, to illustrate how we explore the requirements and how such media impact our quality of life. We discuss the potential advantages of our approach for forging positive social relationships and designing an autonomous agent with minimal cognitive architecture.},
  url             = {http://link.springer.com/article/10.1007%2Fs12559-014-9270-3},
  doi             = {10.1007/s12559-014-9270-3},
  file            = {Sumioka2014.pdf:pdf/Sumioka2014.pdf:PDF},
  keywords        = {Human–robot Interaction; Minimal design; Elderly care; Android science},
}
Kurima Sakai, Hidenobu Sumioka, Takashi Minato, Shuichi Nishio, Hiroshi Ishiguro, "Motion Design of Interactive Small Humanoid Robot with Visual Illusion", International Journal of Innovative Computing, Information and Control, vol. 9, no. 12, pp. 4725-4736, December, 2013.
Abstract: This paper presents a novel method to express motions of a small human-like robotic avatar that can be a portable communication medium: a user can talk with another person while feeling the other's presence anytime, anywhere. The human-like robotic avatar is expected to express human-like movements; however, there are technical and cost problems in implementing actuators in the small body. The method is to induce illusory motion of the robot's extremities with blinking lights. This idea needs only Light Emitting Diodes (LEDs) and avoids the above problems. This paper presents the design of an LED blinking pattern to induce an illusory nodding motion of Elfoid, which is a hand-held tele-operated humanoid robot. A psychological experiment shows that the illusory nodding motion gives a better impression to people than a symbolic blinking pattern. This result suggests that even the illusory motion of a robotic avatar can improve tele-communications.
BibTeX:
@Article{Sakai2013,
  author          = {Kurima Sakai and Hidenobu Sumioka and Takashi Minato and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Motion Design of Interactive Small Humanoid Robot with Visual Illusion},
  journal         = {International Journal of Innovative Computing, Information and Control},
  year            = {2013},
  volume          = {9},
  number          = {12},
  pages           = {4725-4736},
  month           = Dec,
  abstract        = {This paper presents a novel method to express motions of a small human-like robotic avatar that can be a portable communication medium: a user can talk with another person while feeling the other's presence anytime, anywhere. The human-like robotic avatar is expected to express human-like movements; however, there are technical and cost problems in implementing actuators in the small body. The method is to induce illusory motion of the robot's extremities with blinking lights. This idea needs only Light Emitting Diodes (LEDs) and avoids the above problems. This paper presents the design of an LED blinking pattern to induce an illusory nodding motion of Elfoid, which is a hand-held tele-operated humanoid robot. A psychological experiment shows that the illusory nodding motion gives a better impression to people than a symbolic blinking pattern. This result suggests that even the illusory motion of a robotic avatar can improve tele-communications.},
  url             = {http://www.ijicic.org/apchi12-275.pdf},
  file            = {Sakai2013.pdf:pdf/Sakai2013.pdf:PDF},
  keywords        = {Tele-communication; Nonverbal communication; Portable robot avatar; Visual illusion of motion},
}
Martin Cooney, Shuichi Nishio, Hiroshi Ishiguro, "Designing Robots for Well-being: Theoretical Background and Visual Scenes of Affectionate Play with a Small Humanoid Robot", Lovotics, November, 2013.
Abstract: Social well-being, referring to a subjectively perceived long-term state of happiness, life satisfaction, health, and other prosperity afforded by social interactions, is increasingly being employed to rate the success of human social systems. Although short-term changes in well-being can be difficult to measure directly, two important determinants can be assessed: perceived enjoyment and affection from relationships. The current article chronicles our work over several years toward achieving enjoyable and affectionate interactions with robots, with the aim of contributing to perception of social well-being in interacting persons. Emphasis has been placed on both describing in detail the theoretical basis underlying our work, and relating the story of each of several designs from idea to evaluation in a visual fashion. For the latter, we trace the course of designing four different robotic artifacts intended to further our understanding of how to provide enjoyment, elicit affection, and realize one specific scenario for affectionate play. As a result, by describing (a) how perceived enjoyment and affection contribute to social well-being, and (b) how a small humanoid robot can proactively engage in enjoyable and affectionate play—recognizing people's behavior and leveraging this knowledge—the current article informs the design of companion robots intended to facilitate a perception of social well-being in interacting persons during affectionate play.
BibTeX:
@Article{Cooney2013d,
  author          = {Martin Cooney and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Designing Robots for Well-being: Theoretical Background and Visual Scenes of Affectionate Play with a Small Humanoid Robot},
  journal         = {Lovotics},
  year            = {2013},
  month           = Nov,
  abstract        = {Social well-being, referring to a subjectively perceived long-term state of happiness, life satisfaction, health, and other prosperity afforded by social interactions, is increasingly being employed to rate the success of human social systems. Although short-term changes in well-being can be difficult to measure directly, two important determinants can be assessed: perceived enjoyment and affection from relationships. The current article chronicles our work over several years toward achieving enjoyable and affectionate interactions with robots, with the aim of contributing to perception of social well-being in interacting persons. Emphasis has been placed on both describing in detail the theoretical basis underlying our work, and relating the story of each of several designs from idea to evaluation in a visual fashion. For the latter, we trace the course of designing four different robotic artifacts intended to further our understanding of how to provide enjoyment, elicit affection, and realize one specific scenario for affectionate play. As a result, by describing (a) how perceived enjoyment and affection contribute to social well-being, and (b) how a small humanoid robot can proactively engage in enjoyable and affectionate play—recognizing people's behavior and leveraging this knowledge—the current article informs the design of companion robots intended to facilitate a perception of social well-being in interacting persons during affectionate play.},
  url             = {http://www.omicsonline.com/open-access/designing_robots_for_well_being_theoretical_background_and_visual.pdf?aid=24444},
  doi             = {10.4172/2090-9888.1000101},
  file            = {Cooney2013d.pdf:pdf/Cooney2013d.pdf:PDF},
  keywords        = {Human-robot interaction; well-being; enjoyment; affection; recognizing typical behavior; small humanoid robot},
}
Hidenobu Sumioka, Aya Nakae, Ryota Kanai, Hiroshi Ishiguro, "Huggable communication medium decreases cortisol levels", Scientific Reports, vol. 3, no. 3034, October, 2013.
Abstract: Interpersonal touch is a fundamental component of social interactions because it can mitigate physical and psychological distress. To reproduce the psychological and physiological effects associated with interpersonal touch, interest is growing in introducing tactile sensations to communication devices. However, it remains unknown whether physical contact with such devices can produce objectively measurable endocrine effects like real interpersonal touching can. We directly tested this possibility by examining changes in the stress hormone cortisol before and after a conversation with a huggable communication device. Participants had 15-minute conversations with a remote partner that were carried out either with a huggable human-shaped device or with a mobile phone. Our experiment revealed a significant reduction in cortisol levels for those who had conversations with the huggable device. Our approach of evaluating communication media with biological markers suggests new design directions for interpersonal communication media to improve social support systems in modern highly networked societies.
BibTeX:
@Article{Sumioka2013d,
  author          = {Hidenobu Sumioka and Aya Nakae and Ryota Kanai and Hiroshi Ishiguro},
  title           = {Huggable communication medium decreases cortisol levels},
  journal         = {Scientific Reports},
  year            = {2013},
  volume          = {3},
  number          = {3034},
  month           = Oct,
  abstract        = {Interpersonal touch is a fundamental component of social interactions because it can mitigate physical and psychological distress. To reproduce the psychological and physiological effects associated with interpersonal touch, interest is growing in introducing tactile sensations to communication devices. However, it remains unknown whether physical contact with such devices can produce objectively measurable endocrine effects like real interpersonal touching can. We directly tested this possibility by examining changes in the stress hormone cortisol before and after a conversation with a huggable communication device. Participants had 15-minute conversations with a remote partner that were carried out either with a huggable human-shaped device or with a mobile phone. Our experiment revealed a significant reduction in cortisol levels for those who had conversations with the huggable device. Our approach of evaluating communication media with biological markers suggests new design directions for interpersonal communication media to improve social support systems in modern highly networked societies.},
  url             = {http://www.nature.com/srep/2013/131023/srep03034/full/srep03034.html},
  doi             = {10.1038/srep03034},
  file            = {Sumioka2013d.pdf:pdf/Sumioka2013d.pdf:PDF},
}
Martin Cooney, Takayuki Kanda, Aris Alissandrakis, Hiroshi Ishiguro, "Designing Enjoyable Motion-Based Play Interactions with a Small Humanoid Robot", International Journal of Social Robotics, vol. 6, pp. 173-193, September, 2013.
Abstract: Robots designed to co-exist with humans in domestic and public environments should be capable of interacting with people in an enjoyable fashion in order to be socially accepted. In this research, we seek to set up a small humanoid robot with the capability to provide enjoyment to people who pick up the robot and play with it by hugging, shaking and moving the robot in various ways. Inertial sensors inside a robot can capture how the robot's body is moved when people perform such full-body gestures. It is unclear how a robot can recognize what people do during play, and how such knowledge can be used to provide enjoyment. People's behavior is complex, and naïve designs for a robot's behavior based only on intuitive knowledge from previous designs may lead to failed interactions. To solve these problems, we model people's behavior using typical full-body gestures observed in free interaction trials, and devise an interaction design based on avoiding typical failures observed in play sessions with a naïve version of our robot. The interaction design is completed by investigating how a robot can provide reward and itself suggest ways to play during an interaction. We then verify experimentally that our design can be used to provide enjoyment during a playful interaction. By describing the process of how a small humanoid robot can be designed to provide enjoyment, we seek to move one step closer to realizing companion robots which can be successfully integrated into human society.
BibTeX:
@Article{Cooney2013,
  author          = {Martin Cooney and Takayuki Kanda and Aris Alissandrakis and Hiroshi Ishiguro},
  title           = {Designing Enjoyable Motion-Based Play Interactions with a Small Humanoid Robot},
  journal         = {International Journal of Social Robotics},
  year            = {2013},
  volume          = {6},
  pages           = {173-193},
  month           = Sep,
  abstract        = {Robots designed to co-exist with humans in domestic and public environments should be capable of interacting with people in an enjoyable fashion in order to be socially accepted. In this research, we seek to set up a small humanoid robot with the capability to provide enjoyment to people who pick up the robot and play with it by hugging, shaking and moving the robot in various ways. Inertial sensors inside a robot can capture how the robot's body is moved when people perform such full-body gestures. It is unclear how a robot can recognize what people do during play, and how such knowledge can be used to provide enjoyment. People's behavior is complex, and na\"{i}ve designs for a robot's behavior based only on intuitive knowledge from previous designs may lead to failed interactions. To solve these problems, we model people's behavior using typical full-body gestures observed in free interaction trials, and devise an interaction design based on avoiding typical failures observed in play sessions with a na\"{i}ve version of our robot. The interaction design is completed by investigating how a robot can provide reward and itself suggest ways to play during an interaction. We then verify experimentally that our design can be used to provide enjoyment during a playful interaction. By describing the process of how a small humanoid robot can be designed to provide enjoyment, we seek to move one step closer to realizing companion robots which can be successfully integrated into human society.},
  url             = {http://link.springer.com/article/10.1007%2Fs12369-013-0212-0},
  doi             = {10.1007/s12369-013-0212-0},
  file            = {Cooney2013.pdf:pdf/Cooney2013.pdf:PDF},
  keywords        = {Interaction design for enjoyment; Playful human-robot interaction; Full-body gesture recognition; Inertial sensing; Small humanoid robot},
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "Humanlike robot hands controlled by brain activity arouse illusion of ownership in operators", Scientific Reports, vol. 3, no. 2396, August, 2013.
Abstract: Operators of a pair of robotic hands report ownership of those hands when they hold an image of a grasp motion and watch the robot perform it. We present a novel body ownership illusion that is induced by merely watching and controlling a robot's motions through a brain-machine interface. In past studies, body ownership illusions were induced by the correlation of such sensory inputs as vision, touch and proprioception. However, in the presented illusion none of the mentioned sensations are integrated except vision. Our results show that during BMI-operation of robotic hands, the interaction between motor commands and visual feedback of the intended motions is adequate to incorporate the non-body limbs into one's own body. Our discussion focuses on the role of proprioceptive information in the mechanism of agency-driven illusions. We believe that our findings will contribute to the improvement of tele-presence systems in which operators incorporate BMI-operated robots into their body representations.
BibTeX:
@Article{Alimardani2013,
  author          = {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Humanlike robot hands controlled by brain activity arouse illusion of ownership in operators},
  journal         = {Scientific Reports},
  year            = {2013},
  volume          = {3},
  number          = {2396},
  month           = Aug,
  abstract        = {Operators of a pair of robotic hands report ownership of those hands when they hold an image of a grasp motion and watch the robot perform it. We present a novel body ownership illusion that is induced by merely watching and controlling a robot's motions through a brain-machine interface. In past studies, body ownership illusions were induced by the correlation of such sensory inputs as vision, touch and proprioception. However, in the presented illusion none of the mentioned sensations are integrated except vision. Our results show that during BMI-operation of robotic hands, the interaction between motor commands and visual feedback of the intended motions is adequate to incorporate the non-body limbs into one's own body. Our discussion focuses on the role of proprioceptive information in the mechanism of agency-driven illusions. We believe that our findings will contribute to the improvement of tele-presence systems in which operators incorporate BMI-operated robots into their body representations.},
  day             = {9},
  url             = {http://www.nature.com/srep/2013/130809/srep02396/full/srep02396.html},
  doi             = {10.1038/srep02396},
  file            = {alimardani2013a.pdf:pdf/alimardani2013a.pdf:PDF},
}
Shuichi Nishio, Koichi Taura, Hidenobu Sumioka, Hiroshi Ishiguro, "Teleoperated Android Robot as Emotion Regulation Media", International Journal of Social Robotics, vol. 5, no. 4, pp. 563-573, July, 2013.
Abstract: In this paper, we experimentally examined whether changes in the facial expressions of teleoperated androids could affect and regulate operators' emotions, based on the facial feedback theory of emotion and the phenomenon of body ownership transfer to the robot. Twenty-six Japanese participants had conversations with an experimenter based on a situation in which participants feel anger, and, during the conversation, the android's facial expression changed according to a pre-programmed scheme. The results showed that facial feedback from the android did occur. Moreover, by comparing two groups of participants, one operating the robot and the other not operating it, we found that this facial feedback from the android robot occurred only when participants operated the robot; furthermore, when an operator could effectively operate the robot, his/her emotional state was strongly affected by changes in the robot's facial expression.
BibTeX:
@Article{Nishio2013a,
  author          = {Shuichi Nishio and Koichi Taura and Hidenobu Sumioka and Hiroshi Ishiguro},
  title           = {Teleoperated Android Robot as Emotion Regulation Media},
  journal         = {International Journal of Social Robotics},
  year            = {2013},
  volume          = {5},
  number          = {4},
  pages           = {563-573},
  month           = Jul,
  abstract        = {In this paper, we experimentally examined whether changes in the facial expressions of teleoperated androids could affect and regulate operators' emotions, based on the facial feedback theory of emotion and the phenomenon of body ownership transfer to the robot. Twenty-six Japanese participants had conversations with an experimenter based on a situation in which participants feel anger, and, during the conversation, the android's facial expression changed according to a pre-programmed scheme. The results showed that facial feedback from the android did occur. Moreover, by comparing two groups of participants, one operating the robot and the other not operating it, we found that this facial feedback from the android robot occurred only when participants operated the robot; furthermore, when an operator could effectively operate the robot, his/her emotional state was strongly affected by changes in the robot's facial expression.},
  url             = {http://link.springer.com/article/10.1007%2Fs12369-013-0201-3},
  doi             = {10.1007/s12369-013-0201-3},
  file            = {Nishio2013a.pdf:pdf/Nishio2013a.pdf:PDF},
  keywords        = {Teleoperated android robot; Emotion regulation; Facial feedback hypothesis; Body ownership transfer},
}
Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, Norihiro Hagita, "Generation of Nodding, Head Tilting and Gazing for Human-Robot Speech Interaction", International Journal of Humanoid Robotics, vol. 10, no. 1, pp. 1350009(1-19), April, 2013.
Abstract: Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, "Geminoid F"; a typical humanoid robot with fewer facial degrees of freedom, "Robovie R2"; and a robot with a 3-axis rotatable neck and movable lips, "Telenoid R2"). Analysis of subjective scores shows that the proposed model, including head tilting and nodding, can generate head motion with increased naturalness compared to nodding only or directly mapping people's original motions without gaze information. We also find that an upward motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a consequence, we verify that our generation model performs equally well to directly mapping people's original motions with gaze information in terms of perceived naturalness.
BibTeX:
@Article{Liu2012a,
  author          = {Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {Generation of Nodding, Head Tilting and Gazing for Human-Robot Speech Interaction},
  journal         = {International Journal of Humanoid Robotics},
  year            = {2013},
  volume          = {10},
  number          = {1},
  pages           = {1350009(1-19)},
  month           = Apr,
  abstract        = {Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, "Geminoid F"; a typical humanoid robot with fewer facial degrees of freedom, "Robovie R2"; and a robot with a 3-axis rotatable neck and movable lips, "Telenoid R2"). Analysis of subjective scores shows that the proposed model, including head tilting and nodding, can generate head motion with increased naturalness compared to nodding only or directly mapping people's original motions without gaze information. We also find that an upward motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a consequence, we verify that our generation model performs equally well to directly mapping people's original motions with gaze information in terms of perceived naturalness.},
  day             = {2},
  url             = {http://www.worldscientific.com/doi/abs/10.1142/S0219843613500096},
  doi             = {10.1142/S0219843613500096},
  file            = {Liu2012a.pdf:pdf/Liu2012a.pdf:PDF},
  keywords        = {Head motion; dialogue acts; gazing; motion generation},
}
Ryuji Yamazaki, Shuichi Nishio, Kohei Ogawa, Kohei Matsumura, Takashi Minato, Hiroshi Ishiguro, Tsutomu Fujinami, Masaru Nishikawa, "Promoting Socialization of Schoolchildren Using a Teleoperated Android: An Interaction Study", International Journal of Humanoid Robotics, vol. 10, no. 1, pp. 1350007(1-25), April, 2013.
Abstract: Our research focuses on the social aspects of teleoperated androids as new media for human relationships and explores how they can contribute and encourage people to associate with others. We introduced Telenoid, a teleoperated android with a minimalistic human design, to elementary school classrooms to see how children respond to it. We found that Telenoid encourages children to work cooperatively and facilitates communication with senior citizens with dementia. Children differentiated their roles spontaneously and cooperatively participated in group work. In another class, we applied Telenoid to remote communication between schoolchildren and assisted living residents. The children felt relaxed about continuing their conversations with the elderly and positively participated in them. The results suggest that limited functionality may facilitate cooperation among participants, and varied embodiments may promote the learning process of the association with others, even those who are unfamiliar. We propose a teleoperated android as an educational tool to promote socialization.
BibTeX:
@Article{Yamazaki2012e,
  author          = {Ryuji Yamazaki and Shuichi Nishio and Kohei Ogawa and Kohei Matsumura and Takashi Minato and Hiroshi Ishiguro and Tsutomu Fujinami and Masaru Nishikawa},
  title           = {Promoting Socialization of Schoolchildren Using a Teleoperated Android: An Interaction Study},
  journal         = {International Journal of Humanoid Robotics},
  year            = {2013},
  volume          = {10},
  number          = {1},
  pages           = {1350007(1-25)},
  month           = Apr,
  abstract        = {Our research focuses on the social aspects of teleoperated androids as new media for human relationships and explores how they can contribute and encourage people to associate with others. We introduced Telenoid, a teleoperated android with a minimalistic human design, to elementary school classrooms to see how children respond to it. We found that Telenoid encourages children to work cooperatively and facilitates communication with senior citizens with dementia. Children differentiated their roles spontaneously and cooperatively participated in group work. In another class, we applied Telenoid to remote communication between schoolchildren and assisted living residents. The children felt relaxed about continuing their conversations with the elderly and positively participated in them. The results suggest that limited functionality may facilitate cooperation among participants, and varied embodiments may promote the learning process of the association with others, even those who are unfamiliar. We propose a teleoperated android as an educational tool to promote socialization.},
  day             = {2},
  url             = {http://www.worldscientific.com/doi/abs/10.1142/S0219843613500072},
  doi             = {10.1142/S0219843613500072},
  file            = {Yamazaki2012e.pdf:pdf/Yamazaki2012e.pdf:PDF},
  keywords        = {Telecommunication; android robot; minimal design; cooperation; role differentiation; inter-generational relationship; embodied communication; teleoperation; socialization},
}
Carlos T. Ishi, Hiroshi Ishiguro, Norihiro Hagita, "Analysis of relationship between head motion events and speech in dialogue conversations", Speech Communication, Special issue on Gesture and speech in interaction, pp. 233-243, 2013.
Abstract: Head motion naturally occurs in synchrony with speech and may convey paralinguistic information (such as intentions, attitudes and emotions) in dialogue communication. With the aim of verifying the relationship between head motion and several types of linguistic, paralinguistic and prosodic information conveyed by speech utterances, analyses were conducted on motion-captured data of multiple speakers during natural dialogue conversations. Although most past works tried to relate head motion to prosodic features, our analysis results first indicated that head motion was more directly related to dialogue act functions rather than to prosodic features. Among the head motion types, nods occurred with the highest frequency during speech utterances, not only expressing dialogue acts of agreement or affirmation, but also appearing at the last syllable of phrases with strong phrase boundaries. Head shakes appeared mostly in phrases expressing negation, while head tilts appeared mostly in phrases expressing thinking, and in interjections expressing unexpectedness and denial. Speaker variability analyses indicated that the occurrence of head motion differs depending on the inter-personal relationship with the interlocutor and the speaker's emotional and attitudinal state. A clear increase in the occurrence rate of nods was observed when the dialogue partners do not have a close inter-personal relationship, and in situations where the speaker talks confidently, cheerfully, with enthusiasm, or expresses interest or sympathy to the interlocutor's talk.
BibTeX:
@Article{Ishi2013,
  author   = {Carlos T. Ishi and Hiroshi Ishiguro and Norihiro Hagita},
  title    = {Analysis of relationship between head motion events and speech in dialogue conversations},
  journal  = {Speech Communication, Special issue on Gesture and speech in interaction},
  year     = {2013},
  pages    = {233-243},
  abstract = {Head motion naturally occurs in synchrony with speech and may convey paralinguistic information (such as intentions, attitudes and emotions) in dialogue communication. With the aim of verifying the relationship between head motion and several types of linguistic, paralinguistic and prosodic information conveyed by speech utterances, analyses were conducted on motion-captured data of multiple speakers during natural dialogue conversations. Although most past works tried to relate head motion to prosodic features, our analysis results first indicated that head motion was more directly related to dialogue act functions rather than to prosodic features. Among the head motion types, nods occurred with the highest frequency during speech utterances, not only expressing dialogue acts of agreement or affirmation, but also appearing at the last syllable of phrases with strong phrase boundaries. Head shakes appeared mostly in phrases expressing negation, while head tilts appeared mostly in phrases expressing thinking, and in interjections expressing unexpectedness and denial. Speaker variability analyses indicated that the occurrence of head motion differs depending on the inter-personal relationship with the interlocutor and the speaker's emotional and attitudinal state. A clear increase in the occurrence rate of nods was observed when the dialogue partners do not have a close inter-personal relationship, and in situations where the speaker talks confidently, cheerfully, with enthusiasm, or expresses interest or sympathy to the interlocutor's talk.},
  file     = {Ishi2013.pdf:pdf/Ishi2013.pdf:PDF},
}
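As a rough illustration of the event detection such an analysis rests on, here is a minimal sketch of flagging nod-like dips in a motion-captured head-pitch track. This is not the authors' pipeline; the function name, sampling rate, sign convention, and thresholds are illustrative assumptions.

import numpy as np
from scipy.signal import find_peaks

def detect_nods(pitch_deg, fps=120.0, min_dip_deg=5.0, max_dur_s=1.0):
    """Flag nod candidates as brief downward dips in head pitch.
    pitch_deg: 1-D array of head pitch angles in degrees (down = negative,
    an assumed convention). Returns (start_s, end_s) pairs."""
    inverted = -pitch_deg  # a downward nod becomes a positive peak
    peaks, props = find_peaks(inverted,
                              prominence=min_dip_deg,
                              width=(1, int(max_dur_s * fps)))
    return [(left / fps, right / fps)
            for left, right in zip(props["left_ips"], props["right_ips"])]

Detected events could then be aligned with phrase-boundary or dialogue-act annotations to reproduce the kind of co-occurrence counts reported above.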
Kohei Ogawa, Shuichi Nishio, Kensuke Koda, Giuseppe Balistreri, Tetsuya Watanabe, Hiroshi Ishiguro, "Exploring the Natural Reaction of Young and Aged Person with Telenoid in a Real World", Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 15, no. 5, pp. 592-597, July, 2011.
Abstract: This paper describes two field tests conducted with shopping mall visitors and with aged persons, defined as those in their 70s to 90s. For both field tests, we used an android we developed, called Telenoid R1 or simply Telenoid. In both field tests, we interviewed participants about their impressions of Telenoid. The results of the shopping mall test showed that almost half of the interviewees felt negative toward Telenoid until they hugged it, after which their opinions became positive. The results of the other test showed that the majority of aged persons reported a positive opinion and, interestingly, all aged persons who interacted with Telenoid gave it a hug without any suggestion to do so. This suggests that older persons find Telenoid to be an acceptable medium for the elderly. Younger persons may also find Telenoid acceptable, seeing that visitors developed positive feelings toward the robot after giving it a hug. These results should prove valuable in our future work with androids.
BibTeX:
@Article{Ogawa2011,
  author          = {Kohei Ogawa and Shuichi Nishio and Kensuke Koda and Giuseppe Balistreri and Tetsuya Watanabe and Hiroshi Ishiguro},
  title           = {Exploring the Natural Reaction of Young and Aged Person with Telenoid in a Real World},
  journal         = {Journal of Advanced Computational Intelligence and Intelligent Informatics},
  year            = {2011},
  volume          = {15},
  number          = {5},
  pages           = {592--597},
  month           = Jul,
  abstract        = {This paper describes two field tests conducted with shopping mall visitors and with aged persons, defined as those in their 70s to 90s. For both field tests, we used an android we developed, called Telenoid R1 or simply Telenoid. In both field tests, we interviewed participants about their impressions of Telenoid. The results of the shopping mall test showed that almost half of the interviewees felt negative toward Telenoid until they hugged it, after which their opinions became positive. The results of the other test showed that the majority of aged persons reported a positive opinion and, interestingly, all aged persons who interacted with Telenoid gave it a hug without any suggestion to do so. This suggests that older persons find Telenoid to be an acceptable medium for the elderly. Younger persons may also find Telenoid acceptable, seeing that visitors developed positive feelings toward the robot after giving it a hug. These results should prove valuable in our future work with androids.},
  url             = {http://www.fujipress.jp/finder/xslt.php?mode=present&inputfile=JACII001500050012.xml},
  file            = {Ogawa2011.pdf:Ogawa2011.pdf:PDF},
  keywords        = {Telenoid; Geminoid; human robot interaction},
}
Shuichi Nishio, Hiroshi Ishiguro, "Attitude Change Induced by Different Appearances of Interaction Agents", International Journal of Machine Consciousness, vol. 3, no. 1, pp. 115-126, 2011.
Abstract: Human-robot interaction studies up to now have been limited to simple tasks such as route guidance or playing simple games. With the advance in robotic technologies, we are now at the stage to explore requirements for highly complicated tasks such as having human-like conversations. When robots start to play advanced roles in our lives such as in health care, attributes such as trust, reliance and persuasiveness will also be important. In this paper, we examine how the appearance of robots affects people's attitudes toward them. Past studies have shown that the appearance of robots is one of the elements that influences people's behavior. However, it is still unknown what effect appearance has when having serious conversations that require high-level activity. Participants were asked to have a discussion with tele-operated robots of various appearances such as an android with high similarity to a human or a humanoid robot that has human-like body parts. Through the discussion, the tele-operator tried to persuade the participants. We examined how appearance affects robots' persuasiveness as well as people's behavior and impression of robots. A possible contribution to machine consciousness research is also discussed.
BibTeX:
@Article{Nishio2011,
  author          = {Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Attitude Change Induced by Different Appearances of Interaction Agents},
  journal         = {International Journal of Machine Consciousness},
  year            = {2011},
  volume          = {3},
  number          = {1},
  pages           = {115--126},
  abstract        = {Human-robot interaction studies up to now have been limited to simple tasks such as route guidance or playing simple games. With the advance in robotic technologies, we are now at the stage to explore requirements for highly complicated tasks such as having human-like conversations. When robots start to play advanced roles in our lives such as in health care, attributes such as trust, reliance and persuasiveness will also be important. In this paper, we examine how the appearance of robots affects people's attitudes toward them. Past studies have shown that the appearance of robots is one of the elements that influences people's behavior. However, it is still unknown what effect appearance has when having serious conversations that require high-level activity. Participants were asked to have a discussion with tele-operated robots of various appearances such as an android with high similarity to a human or a humanoid robot that has human-like body parts. Through the discussion, the tele-operator tried to persuade the participants. We examined how appearance affects robots' persuasiveness as well as people's behavior and impression of robots. A possible contribution to machine consciousness research is also discussed.},
  url             = {http://www.worldscinet.com/ijmc/03/0301/S1793843011000637.html},
  doi             = {10.1142/S1793843011000637},
  file            = {Nishio2011.pdf:Nishio2011.pdf:PDF},
  keywords        = {Robot; appearance; interaction agents; human-robot interaction},
}
Christian Becker-Asano, Hiroshi Ishiguro, "Intercultural Differences in Decoding Facial Expressions of The Android Robot Geminoid F", Journal of Artificial Intelligence and Soft Computing Research, vol. 1, no. 3, pp. 215-231, 2011.
Abstract: As android robots become increasingly sophisticated in their technical as well as artistic design, their non-verbal expressiveness is getting closer to that of real humans. Accordingly, this paper presents the results of two online surveys designed to evaluate a female android's facial display of five basic emotions. Being interested in intercultural differences, we prepared both surveys in English, German, and Japanese, and we found not only that our designs of the emotional expressions “fearful” and “surprised” were often confused in general, but also that Japanese participants confused “angry” with “sad” more often than the German and English participants did. Although facial displays of the same emotions portrayed by the model person of Geminoid F achieved higher recognition rates overall, portraying fear was similarly difficult for her. Finally, from the analysis of the free responses that participants were invited to give, a number of further conclusions are drawn that help clarify how intercultural differences impact the interpretation of facial displays of an android's emotions.
BibTeX:
@Article{Becker-Asano2011,
  author          = {Christian Becker-Asano and Hiroshi Ishiguro},
  title           = {Intercultural Differences in Decoding Facial Expressions of The Android Robot Geminoid F},
  journal         = {Journal of Artificial Intelligence and Soft Computing Research},
  year            = {2011},
  volume          = {1},
  number          = {3},
  pages           = {215--231},
  abstract        = {As android robots become increasingly sophisticated in their technical as well as artistic design, their non-verbal expressiveness is getting closer to that of real humans. Accordingly, this paper presents the results of two online surveys designed to evaluate a female android's facial display of five basic emotions. Being interested in intercultural differences, we prepared both surveys in English, German, and Japanese, and we found not only that our designs of the emotional expressions “fearful” and “surprised” were often confused in general, but also that Japanese participants confused “angry” with “sad” more often than the German and English participants did. Although facial displays of the same emotions portrayed by the model person of Geminoid F achieved higher recognition rates overall, portraying fear was similarly difficult for her. Finally, from the analysis of the free responses that participants were invited to give, a number of further conclusions are drawn that help clarify how intercultural differences impact the interpretation of facial displays of an android's emotions.},
  url             = {http://jaiscr.eu/issues.aspx},
}
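The reported confusions (e.g., “fearful” mistaken for “surprised”) come from tallying which emotion each participant chose for each intended display. A minimal sketch of that tally follows; the (intended, chosen) data layout and the toy input are assumptions for illustration.

from collections import Counter

def confusion_rates(ratings):
    """ratings: iterable of (intended_emotion, chosen_emotion) pairs."""
    counts = Counter(ratings)
    intended = sorted({i for i, _ in counts})
    chosen = sorted({c for _, c in counts})
    for i in intended:
        total = sum(counts[(i, c)] for c in chosen)
        row = {c: round(counts[(i, c)] / total, 2)
               for c in chosen if counts[(i, c)]}
        print(i, row)  # per-intended-emotion choice rates

confusion_rates([("fearful", "surprised"), ("fearful", "fearful"),
                 ("angry", "sad"), ("angry", "angry")])  # toy input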
Takayuki Kanda, Shuichi Nishio, Hiroshi Ishiguro, Norihiro Hagita, "Interactive Humanoid Robots and Androids in Children's Lives", Children, Youth and Environments, vol. 19, no. 1, pp. 12-33, 2009.
Abstract: This paper provides insight into how recent progress in robotics could affect children's lives in the not-so-distant future. We describe two studies in which robots were presented to children in the context of their daily lives. The results of the first study, which was conducted in an elementary school with a mechanical-looking humanoid robot, showed that the robot affected children's behaviors, feelings, and even their friendships. The second study is a case study in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. The results showed that children gradually adapted to conversations with the geminoid and developed an awareness of the personality or presence of the person controlling the geminoid. These studies provide clues to the process of children's adaptation to interactions with robots and particularly how they start treating robots as intelligent beings.
BibTeX:
@Article{Kanda2009,
  author          = {Takayuki Kanda and Shuichi Nishio and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {Interactive Humanoid Robots and Androids in Children's Lives},
  journal         = {Children, Youth and Environments},
  year            = {2009},
  volume          = {19},
  number          = {1},
  pages           = {12--33},
  abstract        = {This paper provides insight into how recent progress in robotics could affect children's lives in the not-so-distant future. We describe two studies in which robots were presented to children in the context of their daily lives. The results of the first study, which was conducted in an elementary school with a mechanical-looking humanoid robot, showed that the robot affected children's behaviors, feelings, and even their friendships. The second study is a case study in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. The results showed that children gradually adapted to conversations with the geminoid and developed an awareness of the personality or presence of the person controlling the geminoid. These studies provide clues to the process of children's adaptation to interactions with robots and particularly how they start treating robots as intelligent beings.},
  file            = {Kanda2009.pdf:Kanda2009.pdf:PDF;19_1_02_HumanoidRobots.pdf:http\://www.colorado.edu/journals/cye/19_1/19_1_02_HumanoidRobots.pdf:PDF},
}
Hiroshi Ishiguro, Shuichi Nishio, "Building artificial humans to understand humans", Journal of Artificial Organs, vol. 10, no. 3, pp. 133-142, September, 2007.
Abstract: If we could build an android as a very humanlike robot, how would we humans distinguish a real human from an android? The answer to this question is not so easy. In human-android interaction, we cannot see the internal mechanism of the android, and thus we may simply believe that it is a human. This means that a human can be defined from two perspectives: one by organic mechanism and the other by appearance. Further, the current rapid progress in artificial organs makes this distinction confusing. The approach discussed in this article is to create artificial humans with humanlike appearances. The developed artificial humans, an android and a geminoid, can be used to improve understanding of humans through psychological and cognitive tests conducted using the artificial humans. We call this new approach to understanding humans android science.
BibTeX:
@Article{Ishiguro2007,
  author      = {Hiroshi Ishiguro and Shuichi Nishio},
  title       = {Building artificial humans to understand humans},
  journal     = {Journal of Artificial Organs},
  year        = {2007},
  volume      = {10},
  number      = {3},
  pages       = {133--142},
  month       = Sep,
  abstract    = {If we could build an android as a very humanlike robot, how would we humans distinguish a real human from an android? The answer to this question is not so easy. In human-android interaction, we cannot see the internal mechanism of the android, and thus we may simply believe that it is a human. This means that a human can be defined from two perspectives: one by organic mechanism and the other by appearance. Further, the current rapid progress in artificial organs makes this distinction confusing. The approach discussed in this article is to create artificial humans with humanlike appearances. The developed artificial humans, an android and a geminoid, can be used to improve understanding of humans through psychological and cognitive tests conducted using the artificial humans. We call this new approach to understanding humans android science.},
  url         = {http://www.springerlink.com/content/pmv076w723140244/},
  doi         = {10.1007/s10047-007-0381-4},
  file        = {Ishiguro2007.pdf:Ishiguro2007.pdf:PDF},
  institution = {{ATR} Intelligent Robotics and Communication Laboratories, Department of Adaptive Machine Systems, Osaka University, Osaka, Japan.},
  keywords    = {Behavior; Behavioral Sciences, methods; Cognitive Science, methods; Facial Expression; Female; Humans, anatomy /&/ histology/psychology; Male; Movement; Perception; Robotics, instrumentation/methods},
  medline-pst = {ppublish},
  pmid        = {17846711},
}
Shuichi Nishio, Hiroshi Ishiguro, Norihiro Hagita, "Can a Teleoperated Android Represent Personal Presence? - A Case Study with Children", Psychologia, vol. 50, no. 4, pp. 330-342, 2007.
Abstract: Our purpose is to investigate the key elements for representing personal presence, which is the sense of being with a certain individual. A case study is reported in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. Different responses to the geminoid and the original person are examined, especially concentrating on the case where the target child was the daughter of the geminoid source. Results showed that children gradually became adapted to conversation with the geminoid, but the operator's personal presence was not completely represented. Further research topics on the adaptation process to androids and on seeking the key elements of personal presence are discussed.
BibTeX:
@Article{Nishio2007,
  author          = {Shuichi Nishio and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {Can a Teleoperated Android Represent Personal Presence? - A Case Study with Children},
  journal         = {Psychologia},
  year            = {2007},
  volume          = {50},
  number          = {4},
  pages           = {330--342},
  abstract        = {Our purpose is to investigate the key elements for representing personal presence, which is the sense of being with a certain individual. A case study is reported in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. Different responses to the geminoid and the original person are examined, especially concentrating on the case where the target child was the daughter of the geminoid source. Results showed that children gradually became adapted to conversation with the geminoid, but the operator's personal presence was not completely represented. Further research topics on the adaptation process to androids and on seeking the key elements of personal presence are discussed.},
  url             = {http://www.jstage.jst.go.jp/article/psysoc/50/4/50_330/_article},
  doi             = {10.2117/psysoc.2007.330},
  file            = {Nishio2007.pdf:Nishio2007.pdf:PDF},
}
Reviewed Conference Papers
Hidenobu Sumioka, David Achanccaray, Javier Andreu-Perez, "Possible applications of bio-signals in an avatar-symbiotic society", In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024) Workshop on 'The Grand Challenge of Cybernetic Avatars: Dreams and Facts', Abu Dhabi, UAE, October, 2024.
Abstract: In this paper, we discuss possible uses of the biological signals of avatar operators, introducing two case studies.
BibTeX:
@InProceedings{Sumioka2024b,
  author    = {Hidenobu Sumioka and David Achanccaray and Javier Andreu-Perez},
  booktitle = {2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024) Workshop on 'The Grand Challenge of Cybernetic Avatars: Dreams and Facts'},
  title     = {Possible applications of bio-signals in an avatar-symbiotic society},
  year      = {2024},
  address   = {Abu Dhabi, UAE},
  day       = {14-18},
  month     = oct,
  url       = {https://dil.atr.jp/ITB/en/workshop-for-iros2024/},
  abstract  = {In this paper, we discuss possible uses of the biological signals of avatar operators, introducing two case studies.},
}
Kexin Wang, Carlos Ishi, Ryoko Hayashi, "A multimodal analysis of different types of laughter expression in conversational dialogues", In INTERSPEECH 2024, Kos Island, Greece, pp. 4673-4677, September, 2024.
BibTeX:
@InProceedings{Wang2024b,
  author    = {Kexin Wang and Carlos Ishi and Ryoko Hayashi},
  booktitle = {INTERSPEECH 2024},
  title     = {A multimodal analysis of different types of laughter expression in conversational dialogues},
  year      = {2024},
  address   = {Kos Island, Greece},
  day       = {1-5},
  doi       = {https://doi.org/10.21437/Interspeech.2024-782},
  month     = sep,
  pages     = {4673-4677},
  url       = {https://interspeech2024.org/},
  keywords  = {laughter, facial expression, body motion, gaze, laughter function},
}
David Achanccaray, Javier Andreu-Perez, Hidenobu Sumioka, "fNIRS-Based Neural Profile of Teleoperation Skills", In VIII Biennial Meeting of the Society for functional near-infrared spectroscopy (fNIRS2024), Birmingham, UK, September, 2024.
Abstract: Teleoperation conditions affect the operator's performance by altering his/her workload and mental state. Decoding the neural profile of teleoperation skills might help mitigate these effects, as the teleoperation interface could assist the operator based on the decoded profile. This work developed teleoperation experiments of social tasks based on questions and answers during a dyadic interaction between an operator and another individual. The operator's brain activity was recorded by an fNIRS device. Then, fNIRS features and performance metrics of 32 participants were analyzed for neural profiling. We found that the width of the hemoglobin oxygenation wave was greater in the high-performance participants.
BibTeX:
@InProceedings{Achanccaray2024c,
  author    = {David Achanccaray and Javier Andreu-Perez and Hidenobu Sumioka},
  booktitle = {VIII Biennial Meeting of the Society for functional near-infrared spectroscopy (fNIRS2024)},
  title     = {fNIRS-Based Neural Profile of Teleoperation Skills},
  year      = {2024},
  address   = {Birmingham, UK},
  day       = {11-15},
  month     = sep,
  url       = {https://fnirs2024.fnirs.org/},
  abstract  = {Teleoperation conditions affect the operator's performance by altering his/her workload and mental state. Decoding the neural profile of teleoperation skills might help mitigate these effects, as the teleoperation interface could assist the operator based on the decoded profile. This work developed teleoperation experiments of social tasks based on questions and answers during a dyadic interaction between an operator and another individual. The operator's brain activity was recorded by an fNIRS device. Then, fNIRS features and performance metrics of 32 participants were analyzed for neural profiling. We found that the width of the hemoglobin oxygenation wave was greater in the high-performance participants.},
}
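One plausible reading of the "width of the hemoglobin oxygenation wave" feature is the full width at half maximum (FWHM) of the trial-averaged HbO response; the sketch below computes that under an assumed sampling rate and a one-second baseline. It is an interpretation for illustration, not the authors' published code.

import numpy as np

def hbo_fwhm(hbo, fs=10.0):
    """FWHM (seconds) of a trial-averaged HbO response sampled at fs Hz."""
    x = hbo - hbo[:int(fs)].mean()   # baseline-correct on the first second
    peak = int(x.argmax())
    half = x[peak] / 2.0
    left, right = peak, peak
    while left > 0 and x[left] > half:             # walk out to the left
        left -= 1
    while right < len(x) - 1 and x[right] > half:  # and to the right
        right += 1
    return (right - left) / fs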
Houjian Guo, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "X-E-Speech: Joint Training Framework of Non-Autoregressive Cross-lingual Emotional Text-to-Speech and Voice Conversion", In INTERSPEECH 2024, Kos Island, Greece, pp. 4983-4987, September, 2024.
Abstract: Large language models (LLMs) have been widely used in cross-lingual and emotional speech synthesis, but they require extensive data and retain the drawbacks of previous autoregressive (AR) speech models, such as slow inference speed and lack of robustness and interpretation. In this paper, we propose a cross-lingual emotional speech generation model, X-E-Speech, which achieves the disentanglement of speaker style and cross-lingual content features by jointly training non-autoregressive (NAR) voice conversion (VC) and text-to-speech (TTS) models. For TTS, we freeze the style-related model components and fine-tune the content-related structures to enable cross-lingual emotional speech synthesis. For VC, we improve the emotion similarity between the generated results and the reference speech by introducing the similarity loss between content features for VC and text for TTS.
BibTeX:
@InProceedings{Guo2024,
  author    = {Houjian Guo and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {INTERSPEECH 2024},
  title     = {X-E-Speech: Joint Training Framework of Non-Autoregressive Cross-lingual Emotional Text-to-Speech and Voice Conversion},
  year      = {2024},
  address   = {Kos Island, Greece},
  day       = {1-5},
  doi       = {10.21437/Interspeech.2024-589},
  month     = sep,
  pages     = {4983-4987},
  url       = {https://interspeech2024.org/},
  abstract  = {Large language models (LLMs) have been widely used in cross-lingual and emotional speech synthesis, but they require extensive data and retain the drawbacks of previous autoregressive (AR) speech models, such as slow inference speed and lack of robustness and interpretation. In this paper, we propose a cross-lingual emotional speech generation model, X-E-Speech, which achieves the disentanglement of speaker style and cross-lingual content features by jointly training non-autoregressive (NAR) voice conversion (VC) and text-to-speech (TTS) models. For TTS, we freeze the style-related model components and fine-tune the content-related structures to enable cross-lingual emotional speech synthesis. For VC, we improve the emotion similarity between the generated results and the reference speech by introducing the similarity loss between content features for VC and text for TTS.},
  keywords  = {joint training, text-to-speech, voice conversion, cross-lingual, emotional},
}
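A minimal sketch of the joint objective described above: reconstruction terms for the NAR VC and TTS branches plus a similarity loss tying the VC content features to the TTS text features. The loss choices, alignment assumption, and weighting are illustrative, not the paper's exact formulation.

import torch.nn.functional as F

def joint_loss(vc_out, vc_target, tts_out, tts_target,
               vc_content, tts_text_feat, lambda_sim=1.0):
    loss_vc = F.l1_loss(vc_out, vc_target)     # VC reconstruction
    loss_tts = F.l1_loss(tts_out, tts_target)  # TTS reconstruction
    # Similarity loss between content features for VC and text features
    # for TTS (assumed already aligned to the same shape), encouraging a
    # shared, speaker-independent content space.
    loss_sim = F.mse_loss(vc_content, tts_text_feat)
    return loss_vc + loss_tts + lambda_sim * loss_sim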
Aya Nakae, Wei-Chuan Chang, Chie Kishimoto, Hani M Bu-Omer, Hidenobu Sumioka, "Endocrinological Investigation of Exercise Effects on Pain Using Group and Individual Instruction", In The International Association on the Study of Pain 2024 World Congress on Pain (IASP 2024), no. FR272, Amsterdam, Netherlands, August, 2024.
Abstract: Fourteen participants were recruited for this study. They answered the Revised NEO Personality Inventory (NEO PI-R) to evaluate their personality traits. Using ELISA methods, we measured five hormones, including cortisol and GH, in serum before, during, and after a 45-minute group or individual supervised exercise task. Participants received experimental heat stimulation and were asked to rate their pain continuously. After the heat stimulation, they answered the Short Form-McGill Pain Questionnaire-2 (SF-MPQ-2). Statistics were performed using JMP 17.0 with an appropriate one-tailed test and a significance level of 5%. To the best of our knowledge, this work might be the first study to investigate the relationship between personality traits, hormonal responses to training exercises, and individual differences in pain perception. The results implied a causal relationship between personality traits, especially neuroticism, and cortisol secretion via the HPA axis after training exercise, suggesting, therefore, an association between personality traits and pain perception. Moreover, individual differences in training effectiveness and motivation may be related to HPA-axis function, but it has been suggested that repeated intensive interventions, such as personal training, could bring the hormone-secretion pattern of individuals with poor training efficiency closer to that of more autonomously trained people. As a result, pain hypersensitivity, which is disadvantageous in painful conditions, may change for the better following regular training.
BibTeX:
@InProceedings{Nakae2024,
  author    = {Aya Nakae and Wei-Chuan Chang and Chie Kishimoto and Hani M Bu-Omer and Hidenobu Sumioka},
  booktitle = {The International Association on the Study of Pain 2024 World Congress on Pain (IASP 2024)},
  title     = {Endocrinological Investigation of Exercise Effects on Pain Using Group and Individual Instruction},
  year      = {2024},
  address   = {Amsterdam, Netherlands},
  day       = {5-9},
  month     = aug,
  number    = {FR272},
  url       = {https://posters.worldcongress2024.org/poster/endocrinological-investigation-of-exercise-effects-on-pain-using-group-and-individual-instruction/},
  abstract  = {Fourteen participants were recruited for this study. They answered the Revised NEO Personality Inventory (NEO PI-R) to evaluate their personality traits. Using ELISA methods, we measured five hormones, including cortisol and GH, in serum before, during, and after a 45-minute group or individual supervised exercise task. Participants received experimental heat stimulation and were asked to rate their pain continuously. After the heat stimulation, they answered the Short Form-McGill Pain Questionnaire-2 (SF-MPQ-2). Statistics were performed using JMP 17.0 with an appropriate one-tailed test and a significance level of 5%. To the best of our knowledge, this work might be the first study to investigate the relationship between personality traits, hormonal responses to training exercises, and individual differences in pain perception. The results implied a causal relationship between personality traits, especially neuroticism, and cortisol secretion via the HPA axis after training exercise, suggesting, therefore, an association between personality traits and pain perception. Moreover, individual differences in training effectiveness and motivation may be related to HPA-axis function, but it has been suggested that repeated intensive interventions, such as personal training, could bring the hormone-secretion pattern of individuals with poor training efficiency closer to that of more autonomously trained people. As a result, pain hypersensitivity, which is disadvantageous in painful conditions, may change for the better following regular training.},
}
Kexin Wang, Carlos Toshinori Ishi, Ryoko Hayashi, "Acoustic analysis of laughter bout in conversational dialogues", In Speech Prosody 2024 (SP2024), Leiden, The Netherlands, pp. 667-671, July, 2024.
Abstract: Previous studies suggest the existence of two distinct forms of laughter: mirthful/spontaneous laughter and social/intentional laughter. The current work aims to expand our understanding of the motives behind laughter and its functions in social conversation. About 1000 laughter bouts from 4 males and 4 females were extracted from multi-speaker conversation data, and the four predominant categories were used for acoustic analysis: mirthful, boosting, smoothing, and softening. Mirthful laughter and boosting laughter exhibit longer duration, higher mean F0, intensity and HNR, as well as lower H1-A1 than the other types, which suggests that laughter produced with positive emotion or attitude tends to be longer, higher in pitch, and tenser in voice quality. On the other hand, smoothing laughter and softening laughter displayed the opposite characteristics, which indicates that intentional laughter emitted to smooth the interaction or soften the atmosphere can, to some extent, be acoustically distinguished from laughter with positive emotions. This work provides evidence that laughter with different functions has different acoustic characteristics, which helps us understand what laughter means in dialogue.
BibTeX:
@InProceedings{Wang2024,
  author    = {Kexin Wang and Carlos Toshinori Ishi and Ryoko Hayashi},
  booktitle = {Speech Prosody 2024 (SP2024)},
  title     = {Acoustic analysis of laughter bout in conversational dialogues},
  year      = {2024},
  address   = {Leiden, The Netherlands},
  day       = {2-5},
  doi       = {10.21437/SpeechProsody.2024-135},
  month     = jul,
  pages     = {667-671},
  url       = {https://www.isca-archive.org/speechprosody_2024/wang24b_speechprosody.pdf},
  abstract  = {Previous studies suggest the existence of two distinct forms of laughter: mirthful/spontaneous laughter and social/intentional laughter. The current work aims to expand our understanding of the motives behind laughter and its functions in social conversation. About 1000 laughter bouts from 4 males and 4 females were extracted from multi-speaker conversation data, and the four predominant categories were used for acoustic analysis: mirthful, boosting, smoothing, and softening. Mirthful laughter and boosting laughter exhibit longer duration, higher mean F0, intensity and HNR, as well as lower H1-A1 than the other types, which suggests that laughter produced with positive emotion or attitude tends to be longer, higher in pitch, and tenser in voice quality. On the other hand, smoothing laughter and softening laughter displayed the opposite characteristics, which indicates that intentional laughter emitted to smooth the interaction or soften the atmosphere can, to some extent, be acoustically distinguished from laughter with positive emotions. This work provides evidence that laughter with different functions has different acoustic characteristics, which helps us understand what laughter means in dialogue.},
  keywords  = {laughter types, laughter functions, acoustic features, voice quality},
}
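A minimal sketch of per-bout feature extraction in the spirit of this analysis, computing duration, mean F0, and intensity for one segmented bout. librosa is an assumed tool choice; HNR and H1-A1 require a voice-quality toolkit (e.g., Praat) and are omitted here.

import numpy as np
import librosa

def bout_features(wav_path, start_s, end_s):
    y, sr = librosa.load(wav_path, sr=16000)
    bout = y[int(start_s * sr):int(end_s * sr)]
    f0, voiced_flag, _ = librosa.pyin(bout, fmin=75, fmax=600, sr=sr)
    rms = librosa.feature.rms(y=bout)[0]
    return {
        "duration_s": end_s - start_s,
        "f0_mean_hz": float(np.nanmean(f0)),  # mean over voiced frames only
        "intensity_db": float(20 * np.log10(rms.mean() + 1e-10)),
    }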
Kexin Wang, Carlos Toshinori Ishi, Ryoko Hayashi, "Preliminary analysis of facial expressions and body movements of four types oflaughter", In Laughter and Other Non-Verbal Vocalisations Workshop 2024, Belfast, United Kingdom, pp. pp.21-23, July, 2024.
Abstract: In the present work, we explored the facial expressions and body movements of different laughter types. 1806 laughter events were extracted from a multimodal dataset of three-party conversations and categorized into four types: mirthful, boosting, smoothing, and softening. The results showed that laughter performing different social functions is related to different visual expression patterns. Mirthful and boosting laughter showed a similar tendency to be accompanied by larger changes in facial expressions and body movements than smoothing and softening laughter, for example, stronger cheek raising, wider mouth opening, and a more apparent forward lean of the upper body. We also attempted to analyze the dynamic changes occurring within a laughter event. Our findings provide hints for the design of human-like conversational agents.
BibTeX:
@InProceedings{Wang2024a,
  author    = {Kexin Wang and Carlos Toshinori Ishi and Ryoko Hayashi},
  booktitle = {Laughter and Other Non-Verbal Vocalisations Workshop 2024},
  title     = {Preliminary analysis of facial expressions and body movements of four types of laughter},
  year      = {2024},
  address   = {Belfast, United Kingdom},
  day       = {16-17},
  month     = jul,
  pages     = {21-23},
  url       = {https://www.isca-archive.org/lw_2024/wang24_lw.pdf},
  abstract  = {In the present work, we explored the facial expressions and body movements of different laughter types. 1806 laughter events were extracted from a multimodal dataset of three-party conversations and categorized into four types: mirthful, boosting, smoothing, and softening. The results showed that laughter performing different social functions is related to different visual expression patterns. Mirthful and boosting laughter showed a similar tendency to be accompanied by larger changes in facial expressions and body movements than smoothing and softening laughter, for example, stronger cheek raising, wider mouth opening, and a more apparent forward lean of the upper body. We also attempted to analyze the dynamic changes occurring within a laughter event. Our findings provide hints for the design of human-like conversational agents.},
  keywords  = {laughter types, facial expression, body movement, spontaneous conversation},
}
Takehiro Hasegawa, Max Austin, Hidenobu Sumioka, Yasuo Kuniyoshi, Kohei Nakajima, "Takorobo V1: Towards Closed-Loop Body Driven Locomotion Processing", In The 2024 Conference on Artificial Life (ALIFE 2024), no. 81, Copenhagen, Denmark, pp. 1-9, July, 2024.
Abstract: The potential softness of robots that navigate the real world is frequently hamstrung by the requirement of rigid elements such as traditional computing technology. One avenue that may remedy this is the application of physical reservoir computing. It has been shown that by leveraging the underactuated nonlinear dynamics of soft mechanisms, complex computing tasks can be achieved. In this study we present a new octopus-inspired walking and swimming robot: Takorobo. Using its four soft sensory tentacles, we investigate the degree to which locomotion-significant tasks (including body motion prediction and direct actuator control) can be embedded into this robot. It was found that the robot was able to accurately compute its body motions and successfully implement direct closed-loop PRC control both on land and water for some control signals.
BibTeX:
@InProceedings{Hasegawa2024,
  author    = {Takehiro Hasegawa and Max Austin and Hidenobu Sumioka and Yasuo Kuniyoshi and Kohei Nakajima},
  booktitle = {The 2024 Conference on Artificial Life (ALIFE 2024)},
  title     = {Takorobo V1: Towards Closed-Loop Body Driven Locomotion Processing},
  year      = {2024},
  address   = {Copenhagen, Denmark},
  day       = {22-26},
  month     = jul,
  number    = {81},
  pages     = {1-9},
  url       = {https://2024.alife.org/detailed_program.html},
  abstract  = {The potential softness of robots that navigate the real world is frequently hamstrung by the requirement of rigid elements such as traditional computing technology. One avenue that may remedy this is the application of physical reservoir computing. It has been shown that by leveraging the underactuated nonlinear dynamics of soft mechanisms, complex computing tasks can be achieved. In this study we present a new octopus-inspired walking and swimming robot: Takorobo. Using its four soft sensory tentacles, we investigate the degree to which locomotion-significant tasks (including body motion prediction and direct actuator control) can be embedded into this robot. It was found that the robot was able to accurately compute its body motions and successfully implement direct closed-loop PRC control both on land and water for some control signals.},
}
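In physical reservoir computing, the only trained component is typically a linear readout over the reservoir states (here, the tentacle sensor time series). Below is a minimal sketch of that generic recipe with ridge regression, assuming a sensor matrix of shape (time, sensors); it is the textbook PRC readout, not the robot's actual controller.

import numpy as np

def train_readout(states, target, ridge=1e-4):
    """states: (T, N) sensor time series; target: (T,) signal to reproduce
    (e.g., a body-motion trace or a motor command)."""
    X = np.hstack([states, np.ones((len(states), 1))])  # add bias column
    # Closed-form ridge regression: w = (X^T X + rI)^{-1} X^T y
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ target)
    return w

def readout(states, w):
    X = np.hstack([states, np.ones((len(states), 1))])
    return X @ w  # predicted signal; feed back for closed-loop control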
David Achanccaray, Javier Andreu-Perez, Hidenobu Sumioka, "Neural Profiling of Teleoperator's Skills of Social Tasks", In 2024 IEEE International Conference on Robotics and Automation (ICRA2024) Workshop: Society of Avatar-Symbiosis through Social Field Experiments, パシフィコ横浜, 神奈川, May, 2024.
Abstract: Teleoperation conditions can affect the operator's performance by altering his/her workload and mental state. Decoding the neural profile of teleoperation skills might help mitigate this and allow the teleoperation interface to provide assistance. This work proposed teleoperation experiments of social tasks based on questions and answers during a dyadic interaction between an operator and another individual (questioner). The operator's brain activity was recorded by an fNIRS device during this interaction. Then, brain features and performance metrics of low- and high-performance participants were analyzed for neural profiling. We found that the width of the hemoglobin oxygenation (HbO) wave was an indicator of teleoperation performance: high-performance participants reached a greater HbO width.
BibTeX:
@InProceedings{Achanccaray2024a,
  author    = {David Achanccaray and Javier Andreu-Perez and Hidenobu Sumioka},
  booktitle = {2024 IEEE International Conference on Robotics and Automation (ICRA2024) Workshop: Society of Avatar-Symbiosis through Social Field Experiments},
  title     = {Neural Profiling of Teleoperator's Skills of Social Tasks},
  year      = {2024},
  address   = {Pacifico Yokohama, Kanagawa, Japan},
  day       = {13},
  month     = may,
  url       = {https://2024.ieee-icra.org/},
  abstract  = {Teleoperation conditions can affect the operator's performance by altering his/her workload and mental state. Decoding the neural profile of teleoperation skills might help mitigate this and allow the teleoperation interface to provide assistance. This work proposed teleoperation experiments of social tasks based on questions and answers during a dyadic interaction between an operator and another individual (questioner). The operator's brain activity was recorded by an fNIRS device during this interaction. Then, brain features and performance metrics of low- and high-performance participants were analyzed for neural profiling. We found that the width of the hemoglobin oxygenation (HbO) wave was an indicator of teleoperation performance: high-performance participants reached a greater HbO width.},
}
Houjian Guo, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "Using joint training speaker encoder with consitency loss to achieve cross-lingual voice conversion and expressive voice conversion", In 2023 IEEE Automatic Speech Recognition and Understanding WorkshopSearch formSearch (ASRU 2023), Taipen, Taiwan, December, 2023.
Abstract: Voice conversion systems have made significant advancements in terms of naturalness and similarity in common voice conversion tasks. However, their performance in more complex tasks such as cross-lingual voice conversion and expressive voice conversion remains imperfect. In this study, we propose a novel approach that combines a joint training speaker encoder and content features extracted from the cross-lingual speech recognition model Whisper to achieve high-quality cross-lingual voice conversion. Additionally, we introduce a speaker consistency loss to the joint encoder, which improves the similarity between the converted speech and the reference speech. To further explore the capabilities of the joint speaker encoder, we use the phonetic posteriorgram as the content feature, which enables the model to effectively reproduce both the speaker characteristics and the emotional aspects of the reference speech.
BibTeX:
@InProceedings{Guo2023a,
  author    = {Houjian Guo and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2023)},
  title     = {Using joint training speaker encoder with consistency loss to achieve cross-lingual voice conversion and expressive voice conversion},
  year      = {2023},
  address   = {Taipei, Taiwan},
  day       = {16-21},
  doi       = {10.48550/arXiv.2307.00393},
  month     = dec,
  number    = {979-8-3503-0689-7/23/},
  abstract  = {Voice conversion systems have made significant advancements in terms of naturalness and similarity in common voice conversion tasks. However, their performance in more complex tasks such as cross-lingual voice conversion and expressive voice conversion remains imperfect. In this study, we propose a novel approach that combines a joint training speaker encoder and content features extracted from the cross-lingual speech recognition model Whisper to achieve high-quality cross-lingual voice conversion. Additionally, we introduce a speaker consistency loss to the joint encoder, which improves the similarity between the converted speech and the reference speech. To further explore the capabilities of the joint speaker encoder, we use the phonetic posteriorgram as the content feature, which enables the model to effectively reproduce both the speaker characteristics and the emotional aspects of the reference speech.},
  keywords  = {cross-lingual voice conversion, expressive voice conversion, joint speaker encoder, speaker consistency loss},
}
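A minimal sketch of a speaker consistency loss of the kind described: the jointly trained speaker encoder should map the converted speech close to the reference speech in embedding space. The cosine formulation below is an assumption; the paper may use a different distance.

import torch.nn.functional as F

def speaker_consistency_loss(speaker_encoder, converted_mel, reference_mel):
    e_conv = speaker_encoder(converted_mel)  # (B, D) speaker embeddings
    e_ref = speaker_encoder(reference_mel)
    # 1 - cosine similarity, averaged over the batch
    return (1.0 - F.cosine_similarity(e_conv, e_ref, dim=-1)).mean()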
Houjian Guo, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "QuickVC: Any-to-many Voice Conversion Using Inverse Short-time Fourier Transform for Faster Conversion", In 2023 IEEE Automatic Speech Recognition and Understanding WorkshopSearch formSearch (ASRU 2023), no. 979-8-3503-0689-7/23/, Taipei, Taiwan, December, 2023.
Abstract: With the development of automatic speech recognition and text-to-speech technology, high-quality voice conversion can be achieved by extracting source content information and target speaker information to reconstruct waveforms. However, current methods still require improvement in terms of inference speed. In this study, we propose a lightweight VITS-based voice conversion model that uses the HuBERT-Soft model to extract content information features. Unlike the original VITS model, we use the inverse short-time Fourier transform to replace the most computationally expensive part. Through subjective and objective experiments on synthesized speech, the proposed model is shown to be capable of natural speech generation and to be very efficient at inference time. Experimental results show that our model can generate samples at over 5000 kHz on the 3090 GPU and over 250 kHz on the i9-10900K CPU, achieving faster speed in comparison to baseline methods using the same hardware configuration.
BibTeX:
@InProceedings{Guo2023,
  author    = {Houjian Guo and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2023)},
  title     = {QuickVC: Any-to-many Voice Conversion Using Inverse Short-time Fourier Transform for Faster Conversion},
  year      = {2023},
  address   = {Taipei, Taiwan},
  day       = {16-21},
  doi       = {10.48550/arXiv.2302.08296},
  month     = dec,
  number    = {979-8-3503-0689-7/23/},
  url       = {https://arxiv.org/abs/2302.08296},
  abstract  = {With the development of automatic speech recognition and text-to-speech technology, high-quality voice conversion can be achieved by extracting source content information and target speaker information to reconstruct waveforms. However, current methods still require improvement in terms of inference speed. In this study, we propose a lightweight VITS-based voice conversion model that uses the HuBERT-Soft model to extract content information features. Unlike the original VITS model, we use the inverse short-time Fourier transform to replace the most computationally expensive part. Through subjective and objective experiments on synthesized speech, the proposed model is shown to be capable of natural speech generation and to be very efficient at inference time. Experimental results show that our model can generate samples at over 5000 kHz on the 3090 GPU and over 250 kHz on the i9-10900K CPU, achieving faster speed in comparison to baseline methods using the same hardware configuration.},
  keywords  = {Voice conversion, lightweight model, inverse short-time Fourier transform},
}
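A minimal sketch of an iSTFT-based decoder head like the one substituted for the most computationally expensive part of the vocoder: the network predicts per-frame magnitude and phase, and torch.istft reconstructs the waveform. Shapes and FFT settings are illustrative assumptions.

import torch

def istft_head(magnitude, phase, n_fft=1024, hop_length=256):
    """magnitude, phase: (B, n_fft // 2 + 1, frames) model outputs."""
    spec = torch.polar(magnitude, phase)  # complex spectrogram
    window = torch.hann_window(n_fft, device=magnitude.device)
    return torch.istft(spec, n_fft=n_fft, hop_length=hop_length,
                       window=window)     # (B, samples) waveform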
David Achanccaray, Hidenobu Sumioka, "A Physiological Approach of Presence and VR Sickness in Simulated Teleoperated Social Tasks", In 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC), no. 979-8-3503-3702-0/23/, Maui, Hawaii, USA (online), pp. 4562-4567, October, 2023.
Abstract: The presence (or telepresence) feeling and virtual reality (VR) sickness affect the task execution in teleoperation. Most teleoperation works have assessed these concepts using objective (physiological signals) and subjective (questionnaires) measurements. However, these works did not include social tasks. To the best of our knowledge, there was no physiological approach in teleoperation of social tasks. We measured presence and VR sickness in a simulation of teleoperated social tasks by questionnaires and analyzed the correlation between their scores and multimodal biomarkers. The results showed some different correlations from the findings of non-teleoperation studies. These correlations were between presence and neural biomarkers in the frontal-central and central regions (for the beta and delta bands) and between VR sickness and brain biomarkers in the occipital region (for the alpha and beta bands) and the mean temperature. This work revealed significant correlations to support some biomarkers as predictors of the trend of presence and VR sickness in simulated teleoperated social tasks. These biomarkers might also be valid to predict the trend of telepresence and motion sickness in teleoperated social tasks in a remote environment.
BibTeX:
@InProceedings{Achanccaray2023b,
  author    = {David Achanccaray and Hidenobu Sumioka},
  booktitle = {2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC)},
  title     = {A Physiological Approach of Presence and VR Sickness in Simulated Teleoperated Social Tasks},
  year      = {2023},
  address   = {Maui, Hawaii, USA (online)},
  day       = {1-4},
  month     = oct,
  number    = {979-8-3503-3702-0/23/},
  pages     = {4562-4567},
  url       = {https://ieeesmc2023.org/},
  abstract  = {The presence (or telepresence) feeling and virtual reality (VR) sickness affect the task execution in teleoperation. Most teleoperation works have assessed these concepts using objective (physiological signals) and subjective (questionnaires) measurements. However, these works did not include social tasks. To the best of our knowledge, there was no physiological approach in teleoperation of social tasks. We measured presence and VR sickness in a simulation of teleoperated social tasks by questionnaires and analyzed the correlation between their scores and multimodal biomarkers. The results showed some different correlations from the findings of non-teleoperation studies. These correlations were between presence and neural biomarkers in the frontal-central and central regions (for the beta and delta bands) and between VR sickness and brain biomarkers in the occipital region (for the alpha and beta bands) and the mean temperature. This work revealed significant correlations to support some biomarkers as predictors of the trend of presence and VR sickness in simulated teleoperated social tasks. These biomarkers might also be valid to predict the trend of telepresence and motion sickness in teleoperated social tasks in a remote environment.},
  keywords  = {Teleoperation, Social tasks, Presence, VR sickness, Biomarkers, Virtual reality},
}
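Biomarker-questionnaire relationships of this kind can be probed with simple correlations of per-participant values; below is a minimal sketch using Spearman's rho with stand-in random data (clearly not the study's measurements, and the study's choice of correlation statistic is not assumed here).

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
alpha_power = rng.normal(size=20)                    # stand-in biomarker values
ssq_score = 0.5 * alpha_power + rng.normal(size=20)  # stand-in VR-sickness scores
rho, p = spearmanr(alpha_power, ssq_score)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")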
Jiaqi Shi, Chaoran Liu, Carlos Toshinori Ishi, Bowen Wu, Hiroshi Ishiguro, "Recognizing Real-World Intentions using A Multimodal Deep Learning Approach with Spatial-Temporal Graph Convolutional Networks", In The 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023), no. 978-1-6654-9190-7/23/, Detroit, Michigan, USA, pp. 3819-3826, October, 2023.
Abstract: Identifying intentions is a critical task for comprehending the actions of others, anticipating their future behavior, and making informed decisions. However, it is challenging to recognize intentions due to the uncertainty of future human activities and the complex influence factors. In this work, we explore methods for recognizing the intentions underlying human behaviors in the real world, aiming to boost intelligent systems' ability to recognize potential intentions and understand human behaviors. We collect data containing real-world human behaviors before using a hand dispenser and a temperature scanner at the building entrance. These data are processed and labeled into intention categories. A questionnaire is conducted to survey the human ability to infer the intentions of others. Skeleton data and image features are extracted, inspired by the answers to the questionnaire. For skeleton-based intention recognition, we propose a spatial-temporal graph convolutional network that performs graph convolutions on both part-based graphs and adaptive graphs, which achieves the best performance compared with baseline models on the same task. A deep-learning-based method using multimodal features is proposed to automatically infer intentions, which is demonstrated in the experiment to accurately predict intentions based on past behaviors, significantly outperforming humans.
BibTeX:
@InProceedings{Shi2023,
  author    = {Jiaqi Shi and Chaoran Liu and Carlos Toshinori Ishi and Bowen Wu and Hiroshi Ishiguro},
  booktitle = {The 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023)},
  title     = {Recognizing Real-World Intentions using A Multimodal Deep Learning Approach with Spatial-Temporal Graph Convolutional Networks},
  year      = {2023},
  address   = {Detroit, Michigan, USA},
  day       = {1-5},
  doi       = {10.1109/IROS55552.2023.10341981},
  month     = oct,
  number    = {978-1-6654-9190-7/23/},
  pages     = {3819-3826},
  url       = {https://ieeexplore.ieee.org/document/10341981},
  abstract  = {Identifying intentions is a critical task for comprehending the actions of others, anticipating their future behavior, and making informed decisions. However, it is challenging to recognize intentions due to the uncertainty of future human activities and the complex influence factors. In this work, we explore methods for recognizing the intentions underlying human behaviors in the real world, aiming to boost intelligent systems' ability to recognize potential intentions and understand human behaviors. We collect data containing real-world human behaviors before using a hand dispenser and a temperature scanner at the building entrance. These data are processed and labeled into intention categories. A questionnaire is conducted to survey the human ability to infer the intentions of others. Skeleton data and image features are extracted, inspired by the answers to the questionnaire. For skeleton-based intention recognition, we propose a spatial-temporal graph convolutional network that performs graph convolutions on both part-based graphs and adaptive graphs, which achieves the best performance compared with baseline models on the same task. A deep-learning-based method using multimodal features is proposed to automatically infer intentions, which is demonstrated in the experiment to accurately predict intentions based on past behaviors, significantly outperforming humans.},
}
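A minimal sketch of one spatial-temporal graph-convolution block of the family this model builds on: a 1x1 convolution transforms per-joint features, a normalized skeleton adjacency mixes joints, and a temporal convolution mixes neighbouring frames. The paper's part-based and adaptive graphs are omitted; all sizes are illustrative.

import torch
import torch.nn as nn

class STGCNBlock(nn.Module):
    def __init__(self, in_ch, out_ch, adjacency, t_kernel=9):
        super().__init__()
        self.register_buffer("A", adjacency)  # (V, V), row-normalized
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.temporal = nn.Conv2d(out_ch, out_ch, kernel_size=(t_kernel, 1),
                                  padding=(t_kernel // 2, 0))
        self.relu = nn.ReLU()

    def forward(self, x):                      # x: (B, C, T, V)
        x = self.spatial(x)                    # per-joint feature transform
        x = torch.einsum("bctv,vw->bctw", x, self.A)  # mix joints via graph
        return self.relu(self.temporal(x))     # mix neighbouring frames

# Toy usage with a 5-joint chain skeleton:
# A = torch.eye(5) + torch.diag(torch.ones(4), 1) + torch.diag(torch.ones(4), -1)
# block = STGCNBlock(3, 64, A / A.sum(1, keepdim=True))
# out = block(torch.randn(2, 3, 30, 5))        # -> (2, 64, 30, 5)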
David Achanccaray, Hidenobu Sumioka, "Analysis of Physiological Response of Attention and Stress States in Teleoperation Performance of Social Tasks", In 45th Annual International Conference of the IEEE Engineering in Medicine and Biology Society(EMBC2023), Sydney, Australia, July, 2023.
Abstract: Some studies have addressed monitoring mental states by analyzing physiological responses in robot teleoperation for traditional applications such as inspection and exploration; however, to the best of our knowledge, no study has analyzed the physiological response during teleoperated social tasks. We analyzed the physiological response of attention and stress mental states by computing the correlation between multimodal biomarkers and performance, the pleasure-arousal scale, and workload. Physiological data were recorded during simulated teleoperated social tasks designed to induce mental states such as normal, attention, and stress. The results showed that task performance and workload subscales achieved moderate correlations with some multimodal biomarkers. The correlations depended on the induced state. The cognitive workload was related to brain biomarkers of attention in the frontal and frontal-central regions. These regions are close to the frontopolar region, which is commonly reported in attentional studies. Thus, some multimodal biomarkers of attention and stress mental states could monitor or predict metrics related to performance in the teleoperation of social tasks.
BibTeX:
@InProceedings{Achanccaray2023a,
  author    = {David Achanccaray and Hidenobu Sumioka},
  booktitle = {45th Annual International Conference of the IEEE Engineering in Medicine and Biology Society(EMBC2023)},
  title     = {Analysis of Physiological Response of Attention and Stress States in Teleoperation Performance of Social Tasks},
  year      = {2023},
  address   = {Sydney, Australia},
  day       = {24-27},
  month     = jul,
  url       = {https://embc.embs.org/2023/},
  abstract  = {Some studies have addressed monitoring mental states by analyzing physiological responses in robot teleoperation for traditional applications such as inspection and exploration; however, to the best of our knowledge, no study has analyzed the physiological response during teleoperated social tasks. We analyzed the physiological response of attention and stress mental states by computing the correlation between multimodal biomarkers and performance, the pleasure-arousal scale, and workload. Physiological data were recorded during simulated teleoperated social tasks designed to induce mental states such as normal, attention, and stress. The results showed that task performance and workload subscales achieved moderate correlations with some multimodal biomarkers. The correlations depended on the induced state. The cognitive workload was related to brain biomarkers of attention in the frontal and frontal-central regions. These regions are close to the frontopolar region, which is commonly reported in attentional studies. Thus, some multimodal biomarkers of attention and stress mental states could monitor or predict metrics related to performance in the teleoperation of social tasks.},
}
Changzeng Fu, Zhenghan Chen, Jiaqi Shi, Bowen Wu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "HAG: Hierarchical Attention with Graph Network for Dialogue Act Classification in Conversation", In 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2023), Rhodes Island, Greece, pp. 1-5, June, 2023.
Abstract: The prediction of dialogue act (DA) labels at the utterance level in conversations can be treated as a sequence labeling problem, which requires context- and speaker-aware semantic comprehension, especially for Japanese. In this study, we propose a hierarchical attention with graph neural network (HAG) model that considers the contextual interconnections as well as the semantics carried by the sentence itself. Concretely, the model uses long short-term memory networks (LSTMs) to perform context-aware encoding within a dialogue window. Then, we construct the context graph by aggregating the neighboring utterances. Subsequently, a speaker feature transformation is executed with a graph attention network (GAT) to calculate the interconnections, while a context-level feature selection is performed with a gated graph convolutional network (GatedGCN) to select the salient utterances that contribute to the DA classification. Finally, we merge the representations of the different levels and conduct classification with two dense layers. We evaluate the proposed model on the Japanese dialogue act dataset (JPS-DA). The experimental results show that our method outperforms the baselines.
BibTeX:
@InProceedings{Fu2023,
  author    = {Changzeng Fu and Zhenghan Chen and Jiaqi Shi and Bowen Wu and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {2023 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2023)},
  title     = {HAG: Hierarchical Attention with Graph Network for Dialogue Act Classification in Conversation},
  year      = {2023},
  address   = {Rhodes Island, Greece},
  day       = {4-9},
  doi       = {10.1109/ICASSP49357.2023.10096805},
  month     = jun,
  pages     = {1-5},
  url       = {https://ieeexplore.ieee.org/document/10096805/authors#authors},
  abstract  = {The prediction of dialogue act (DA) labels at the utterance level in conversations can be treated as a sequence labeling problem, which requires context- and speaker-aware semantic comprehension, especially for Japanese. In this study, we proposed a hierarchical attention with graph neural network (HAG) to consider the contextual interconnections as well as the semantics carried by the sentence itself. Concretely, the model uses long short-term memory networks (LSTMs) to perform context-aware encoding within a dialogue window. Then, we construct the context graph by aggregating the neighboring utterances. Subsequently, a speaker feature transformation is executed with a graph attention network (GAT) to calculate the interconnections, while context-level feature selection is performed with a gated graph convolutional network (GatedGCN) to select the salient utterances that contribute to the DA classification. Finally, we merge the representations of the different levels and conduct classification with two dense layers. We evaluate the proposed model on a Japanese dialogue act dataset (JPS-DA). The experimental results show that our method outperforms the baselines.},
  keywords  = {Semantics, Oral communication, Logic gates, Signal processing, Feature extraction, Graph neural networks, Encoding},
}
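The architecture sketched in the abstract (utterance-level LSTM encoding followed by graph-based context aggregation) can be illustrated in miniature. The toy model below is only a sketch of the hierarchical idea: it replaces the paper's GAT and GatedGCN layers with plain multi-head attention over the utterances in a window, and all sizes are invented.

import torch
import torch.nn as nn

class TinyDAClassifier(nn.Module):
    """Utterance-level LSTM encoder + context attention (HAG-like sketch)."""
    def __init__(self, vocab=1000, emb=64, hid=64, n_acts=10):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.utt_lstm = nn.LSTM(emb, hid, batch_first=True)   # words -> utterance vector
        self.ctx_attn = nn.MultiheadAttention(hid, 4, batch_first=True)
        self.out = nn.Linear(2 * hid, n_acts)

    def forward(self, dialogue):                 # (n_utts, max_words) token ids
        _, (h, _) = self.utt_lstm(self.embed(dialogue))
        utts = h[-1].unsqueeze(0)                # (1, n_utts, hid)
        ctx, _ = self.ctx_attn(utts, utts, utts) # utterances attend to their neighbours
        return self.out(torch.cat([utts, ctx], dim=-1)).squeeze(0)

logits = TinyDAClassifier()(torch.randint(0, 1000, (8, 12)))
print(logits.shape)                              # (8 utterances, 10 dialogue-act logits)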
David Achanccaray, Hidenobu Sumioka, "Physiological Analysis of Attention and Stress States in Teleoperation of Social Tasks", In 2023 IEEE International Conference on Robotics and Automation, Workshop on 'Avatar-Symbiotic Society'(ICRA2023 Workshop MW25), London, UK (online), pp. 1-2, May, 2023.
Abstract: Some studies have addressed monitoring mental states through analysis of physiological responses in robot teleoperation for traditional applications such as inspection and exploration; however, to the best of our knowledge, no study has analyzed the physiological response during teleoperated social tasks. We explored the physiological response of mental states during the simulated teleoperation of social tasks to determine its influence, by analyzing statistical differences and correlations between multimodal biomarkers, performance metrics, an emotional scale, workload, presence, and VR sickness symptoms among tasks designed to induce the normal, attention, and stress mental states. Thus, this work revealed significant correlations that support some biomarkers as predictors of workload, presence, and VR sickness in simulated teleoperated social tasks.
BibTeX:
@InProceedings{Achanccaray2023,
  author    = {David Achanccaray and Hidenobu Sumioka},
  booktitle = {2023 IEEE International Conference on Robotics and Automation, Workshop on 'Avatar-Symbiotic Society'(ICRA2023 Workshop MW25)},
  title     = {Physiological Analysis of Attention and Stress States in Teleoperation of Social Tasks},
  year      = {2023},
  address   = {London, UK (online)},
  day       = {29-2},
  month     = may,
  pages     = {1-2},
  url       = {https://www.icra2023.org/welcome},
  abstract  = {Some studies have addressed monitoring mental states through analysis of physiological responses in robot teleoperation for traditional applications such as inspection and exploration; however, to the best of our knowledge, no study has analyzed the physiological response during teleoperated social tasks. We explored the physiological response of mental states during the simulated teleoperation of social tasks to determine its influence, by analyzing statistical differences and correlations between multimodal biomarkers, performance metrics, an emotional scale, workload, presence, and VR sickness symptoms among tasks designed to induce the normal, attention, and stress mental states. Thus, this work revealed significant correlations that support some biomarkers as predictors of workload, presence, and VR sickness in simulated teleoperated social tasks.},
  keywords  = {Teleoperation, Social tasks, Workload, Emotions, Presence, Virtual reality sickness, Virtual reality},
}
Takuto Akiyoshi, Hidenobu Sumioka, Hirokazu Kumazaki, Junya Nakanishi, Masahiro Shiomi, Hirokazu Kato, "Practical Development of a Robot to Assist Cognitive Reconstruction in Psychiatric Day Care", In the 18th Annual ACM/IEEE International Conference on Human Robot Interaction (HRI 2023), Stockholm, Sweden (online), pp. 572-575, March, 2023.
Abstract: One of the important roles of social robots is to support mental health through conversations with people. In this study, we focused on the column method to support cognitive restructuring, which is also used as one of the programs in psychiatric day care, and to help patients think flexibly and understand their own characteristics. To develop a robot that assists psychiatric day care patients in organizing their thoughts about their worries and goals through conversation, we designed the robot’s conversation content based on the column method and implemented its autonomous conversation function. This paper reports on the preliminary experiments conducted to evaluate and improve the effectiveness of this prototype system in an actual psychiatric day care setting, and on the comments from participants in the experiments and day care staff.
BibTeX:
@InProceedings{Akiyoshi2023,
  author    = {Takuto Akiyoshi and Hidenobu Sumioka and Hirokazu Kumazaki and Junya Nakanishi and Masahiro Shiomi and Hirokazu Kato},
  booktitle = {the 18th Annual ACM/IEEE International Conference on Human Robot Interaction (HRI 2023)},
  title     = {Practical Development of a Robot to Assist Cognitive Reconstruction in Psychiatric Day Care},
  year      = {2023},
  address   = {Stockholm, Sweden (online)},
  day       = {13-16},
  month     = mar,
  pages     = {572-575},
  url       = {https://humanrobotinteraction.org/2023/},
  abstract  = {One of the important roles of social robots is to support mental health through conversations with people. In this study, we focused on the column method to support cognitive restructuring, which is also used as one of the programs in psychiatric day care, and to help patients think flexibly and understand their own characteristics. To develop a robot that assists psychiatric day care patients in organizing their thoughts about their worries and goals through conversation, we designed the robot’s conversation content based on the column method and implemented its autonomous conversation function. This paper reports on the preliminary experiments conducted to evaluate and improve the effectiveness of this prototype system in an actual psychiatric day care setting, and on the comments from participants in the experiments and day care staff.},
  keywords  = {human-robot interaction, cognitive reconstruction, stress-coping, psychiatric day care},
}
Chaoran Liu, Carlos Toshinori Ishi, "A Smartphone Pose Auto-calibration Method using Hash-based DOA Estimation", In The 2023 IEEE/SICE International Symposium on System Integrations (SII 2023), Atlanta, USA, pp. 1-6, January, 2023.
Abstract: This paper presents a method to utilize multiple off-the-shelf smartphones to localize speakers. For DOA (direction of arrival) estimation on every single smartphone, we proposed an O(1) complexity hash table-based modified phase transform (PHAT) estimation method without scanning all possible directions to achieve lower CPU usage and longer battery life. Additionally, to increase DOA estimation accuracy, we measured two types of smartphone impulse responses and made them publicly available. In the auto-calibration process, each smartphone detects a pure tone emitted from another smartphone’s speaker. Assuming that all smartphones are on the same desktop surface, each smartphone’s 2D position and rotation are estimated using these detected DOAs and the speaker position relative to their central point. A bundle adjustment-like optimization method is employed to reduce the re-projection error in this process. After auto-calibration, we can easily integrate the DOAs found by each smartphone and estimate the speaker’s position using simple triangulation. The experimental results show that the proposed hash table-based DOA estimation method and 2D version bundle adjustment can perform auto-calibration precisely.
BibTeX:
@InProceedings{Liu2023,
  author    = {Chaoran Liu and Carlos Toshinori Ishi},
  booktitle = {The 2023 IEEE/SICE International Symposium on System Integrations (SII 2023)},
  title     = {A Smartphone Pose Auto-calibration Method using Hash-based DOA Estimation},
  year      = {2023},
  address   = {Atlanta, USA},
  day       = {17-20},
  doi       = {10.1109/SII55687.2023.10039085},
  month     = jan,
  pages     = {1-6},
  url       = {https://www.sice-si.org/conf/SII2023/approved_special_session.html},
  abstract  = {This paper presents a method to utilize multiple off-the-shelf smartphones to localize speakers. For DOA (direction of arrival) estimation on every single smartphone, we proposed an O(1) complexity hash table-based modified phase transform (PHAT) estimation method without scanning all possible directions to achieve lower CPU usage and longer battery life. Additionally, to increase DOA estimation accuracy, we measured two types of smartphone impulse responses and made them publicly available. In the auto-calibration process, each smartphone detects a pure tone emitted from another smartphone’s speaker. Assuming that all smartphones are on the same desktop surface, each smartphone’s 2D position and rotation are estimated using these detected DOAs and the speaker position relative to their central point. A bundle adjustment-like optimization method is employed to reduce the re-projection error in this process. After auto-calibration, we can easily integrate the DOAs found by each smartphone and estimate the speaker’s position using simple triangulation. The experimental results show that the proposed hash table-based DOA estimation method and 2D version bundle adjustment can perform auto-calibration precisely.},
}
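The core trick in this paper's DOA estimator is replacing the usual scan over candidate directions with a precomputed table lookup. A minimal sketch of that idea only (invented microphone geometry, far-field assumption, delays quantized to samples; not the paper's implementation):

import numpy as np

FS = 16000                 # sample rate (Hz)
C = 343.0                  # speed of sound (m/s)
mics = np.array([[0.00, 0.0], [0.07, 0.0], [0.035, 0.06]])  # hypothetical mic xy (m)

table = {}                 # offline: quantized delay pattern -> direction (deg)
for deg in range(360):
    u = np.array([np.cos(np.radians(deg)), np.sin(np.radians(deg))])
    delays = mics @ u / C                                   # far-field delay per mic (s)
    key = tuple(np.round((delays - delays[0]) * FS).astype(int))
    table.setdefault(key, deg)

def doa_lookup(sample_delays):
    """O(1) direction lookup from per-mic delays measured in samples."""
    return table.get(tuple(sample_delays))

# online: the delays would come from GCC-PHAT peak picking; here we fake 90 deg
delays = mics @ np.array([0.0, 1.0]) / C
key = tuple(np.round((delays - delays[0]) * FS).astype(int))
print(doa_lookup(key))     # prints an angle near 90 (up to quantization)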
Carlos Toshinori Ishi, Chaoran Liu, Takashi Minato, "An attention-based sound selective hearing support system: evaluation by subjects with age-related hearing loss", In 2023 IEEE/SICE International Symposium on System Integration (SII2023), Atlanta, USA, pp. 1-6, January, 2023.
Abstract: In order to overcome the problems of current hearing aid devices, we proposed an attention-based sound selective hearing support system, where individual target and anti-target sound sources in the environment can be selected, and the target sources in the facing direction are emphasized. New functions were implemented by accounting for the system’s practicability and usability. The performance of the proposed system was evaluated under different noise conditions, by elderly subjects with different levels of hearing loss. Intelligibility tests and subjective impressions in three-party dialogue interactions indicated clear improvements by using the proposed hearing support system under noisy conditions.
BibTeX:
@InProceedings{Ishi2023,
  author    = {Carlos Toshinori Ishi and Chaoran Liu and Takashi Minato},
  booktitle = {2023 IEEE/SICE International Symposium on System Integration (SII2023)},
  title     = {An attention-based sound selective hearing support system: evaluation by subjects with age-related hearing loss},
  year      = {2023},
  address   = {Atlanta, USA},
  day       = {17-20},
  doi       = {10.1109/SII55687.2023.10039165},
  month     = jan,
  pages     = {1-6},
  url       = {https://www.sice-si.org/conf/SII2023/index.html},
  abstract  = {In order to overcome the problems of current hearing aid devices, we proposed an attention-based sound selective hearing support system, where individual target and anti-target sound sources in the environment can be selected, and the target sources in the facing direction are emphasized. New functions were implemented by accounting for the system’s practicability and usability. The performance of the proposed system was evaluated under different noise conditions, by elderly subjects with different levels of hearing loss. Intelligibility tests and subjective impressions in three-party dialogue interactions indicated clear improvements by using the proposed hearing support system under noisy conditions.},
}
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "C-CycleTransGAN: A Non-parallel Controllable Cross-gender Voice Conversion Model with CycleGAN and Transformer", In Asia Pacific Signal and Information Processing Association Annual Summit and Conference 2022 (APSIPA ASC 2022), no. 978-616-590-477-3, Chiang Mai, Thailand, pp. 1-7, November, 2022.
Abstract: In this study, we propose a conversion-intensity controllable model for cross-gender voice conversion (VC). In particular, we combine the CycleGAN and transformer modules, and build a condition embedding network as an intensity controller. The model is first pre-trained with self-supervised learning on the single-gender voice reconstruction task, with the condition set to male-to-male or female-to-female. Then, we fine-tune the model on the cross-gender voice conversion task after the pretraining is completed, with the condition set to male-to-female or female-to-male. In the testing procedure, the condition is expected to be employed as a controllable parameter (scale) to adjust the conversion intensity. The proposed method was evaluated on the Voice Conversion Challenge dataset and compared to two baselines (CycleGAN, CycleTransGAN) with objective and subjective evaluations. The results show that our proposed model is able to equip the model with an additional function of cross-gender controllability without hurting the voice conversion performance.
BibTeX:
@InProceedings{Fu2022c,
  author    = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {Asia Pacific Signal and Information Processing Association Annual Summit and Conference 2022 (APSIPA ASC 2022)},
  title     = {C-CycleTransGAN: A Non-parallel Controllable Cross-gender Voice Conversion Model with CycleGAN and Transformer},
  year      = {2022},
  address   = {Chiang Mai, Thailand},
  day       = {7-10},
  doi       = {10.23919/APSIPAASC55919.2022.9979821},
  month     = nov,
  number    = {978-616-590-477-3},
  pages     = {1-7},
  url       = {https://www.apsipa2022.org/},
  abstract  = {In this study, we propose a conversion-intensity controllable model for cross-gender voice conversion (VC). In particular, we combine the CycleGAN and transformer modules, and build a condition embedding network as an intensity controller. The model is first pre-trained with self-supervised learning on the single-gender voice reconstruction task, with the condition set to male-to-male or female-to-female. Then, we fine-tune the model on the cross-gender voice conversion task after the pretraining is completed, with the condition set to male-to-female or female-to-male. In the testing procedure, the condition is expected to be employed as a controllable parameter (scale) to adjust the conversion intensity. The proposed method was evaluated on the Voice Conversion Challenge dataset and compared to two baselines (CycleGAN, CycleTransGAN) with objective and subjective evaluations. The results show that our proposed model is able to equip the model with an additional function of cross-gender controllability without hurting the voice conversion performance.},
  keywords  = {controllable cross-gender voice conversion, cycle-consistent adversarial networks, transformer},
}
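The controllability mechanism described here, a condition embedding used as an intensity scale at test time, can be shown schematically. The toy generator below illustrates that one idea under invented shapes; it is not the C-CycleTransGAN model itself:

import torch
import torch.nn as nn

class ConditionedGenerator(nn.Module):
    def __init__(self, n_mels=80, hid=128):
        super().__init__()
        self.cond = nn.Embedding(2, hid)       # 0: same-gender, 1: cross-gender
        self.enc = nn.GRU(n_mels, hid, batch_first=True)
        self.dec = nn.Linear(hid, n_mels)

    def forward(self, mel, scale):
        h, _ = self.enc(mel)                   # (B, T, hid)
        # interpolate between the two condition embeddings with scale in [0, 1]
        c = (1 - scale) * self.cond.weight[0] + scale * self.cond.weight[1]
        return self.dec(h + c)                 # condition injected additively

gen = ConditionedGenerator()
mel = torch.randn(1, 100, 80)                  # 100 frames of mel features
print(gen(mel, scale=0.5).shape)               # half-intensity conversion, (1, 100, 80)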
Ryuichiro Higashinaka, Takashi Minato, Kurima Sakai, Tomo Funayama, Hiromitsu Nishizaki, Takuya Nagai, "Dialogue Robot Competition for Developing Android Robot with Hospitality", In 2022 IEEE 11th Global Conference on Consumer Electronics (GCCE 2022), Senri Life Science Center, Osaka, October, 2022.
Abstract: To promote the research and development of an android robot with hospitality, we organized the Dialogue Robot Competition where the task is to serve a customer in a travel destination recommendation task. The robot acts as a salesperson at a travel agency and needs to help customers choose their desired destinations. This paper describes the task setting, software distributed for the competition, evaluation procedure, and results of the preliminary and final rounds of the competition.
BibTeX:
@InProceedings{Higashinaka2022,
  author    = {Ryuichiro Higashinaka and Takashi Minato and Kurima Sakai and Tomo Funayama and Hiromitsu Nishizaki and Takuya Nagai},
  booktitle = {2022 IEEE 11th Global Conference on Consumer Electronics (GCCE 2022)},
  title     = {Dialogue Robot Competition for Developing Android Robot with Hospitality},
  year      = {2022},
  address   = {Senri Life Science Center, Osaka},
  day       = {18-21},
  doi       = {10.1109/GCCE56475.2022.10014410},
  month     = oct,
  url       = {https://www.ieee-gcce.org/2022/index.html},
  abstract  = {To promote the research and development of an android robot with hospitality, we organized the Dialogue Robot Competition where the task is to serve a customer in a travel destination recommendation task. The robot acts as a salesperson at a travel agency and needs to help customers choose their desired destinations. This paper describes the task setting, software distributed for the competition, evaluation procedure, and results of the preliminary and final rounds of the competition.},
  keywords  = {Human-robot interaction, spoken-language processing, competition},
}
Bowen Wu, Jiaqi Shi, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "Controlling the Impression of Robots via GAN-based Gesture Generation", In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022), Kyoto International Conference Center, Kyoto, pp. 9288-9295, October, 2022.
Abstract: As a type of body language, gestures can largely affect the impressions of human-like robots perceived by users. Recent data-driven approaches to the generation of co-speech gestures have successfully promoted the naturalness of produced gestures. These approaches also possess greater generalizability to work under various contexts than rule-based methods. However, most have no direct control over the human impressions of robots. The main obstacle is that creating a dataset that covers various impression labels is not trivial. In this study, based on previous findings in cognitive science on robot impressions, we present a heuristic method to control them without manual labeling, and demonstrate its effectiveness on a virtual agent and partially on a humanoid robot through subjective experiments with 50 participants.
BibTeX:
@InProceedings{Wu2022,
  author    = {Bowen Wu and Jiaqi Shi and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022)},
  title     = {Controlling the Impression of Robots via GAN-based Gesture Generation},
  year      = {2022},
  address   = {Kyoto International Conference Center, Kyoto},
  day       = {23-27},
  month     = oct,
  pages     = {9288-9295},
  url       = {https://iros2022.org/},
  abstract  = {As a type of body language, gestures can largely affect the impressions of human-like robots perceived by users. Recent data-driven approaches to the generation of co-speech gestures have successfully promoted the naturalness of produced gestures. These approaches also possess greater generalizability to work under various contexts than rule-based methods. However, most have no direct control over the human impressions of robots. The main obstacle is that creating a dataset that covers various impression labels is not trivial. In this study, based on previous findings in cognitive science on robot impressions, we present a heuristic method to control them without manual labeling, and demonstrate its effectiveness on a virtual agent and partially on a humanoid robot through subjective experiments with 50 participants.},
}
Qi An, Akito Tanaka, Kazuto Nakashima, Hidenobu Sumioka, Masahiro Shiomi, Ryo Kurazume, "Understanding Humanitude Care for Sit-to-stand Motion by Wearable Sensors", In 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC2022), Prague, Czech Republic, pp. 1866-1871, October, 2022.
Abstract: Assisting patients with dementia is an important social issue, and a multi-modal care technique called Humanitude is currently attracting attention. In Humanitude, it is important to have the patient stand up by utilizing their own motor functions as much as possible. The Humanitude care technique encourages caregivers to increase the area of contact with patients during the sit-to-stand motion, but this care technique is not well understood by novice caregivers. Here, we developed smock-type wearable sensors to measure proximity between caregivers and care recipients while assisting the sit-to-stand motion. A measurement experiment was conducted to evaluate how proximity differs when caregivers perform Humanitude care or simulate novice care. In addition, the effects of the different care techniques on the center-of-mass (CoM) trajectory and muscle activity of the care recipient were investigated. As a result, it was found that caregivers tend to bring their top and middle trunk closer in Humanitude care than in simulated novice care. Furthermore, the CoM trajectory and muscle activity under Humanitude care were more similar to those observed when the care recipient stood up independently than under novice care. These results validate the effectiveness of Humanitude care and provide important insights for learning Humanitude techniques.
BibTeX:
@InProceedings{An2022,
  author    = {Qi An and Akito Tanaka and Kazuto Nakashima and Hidenobu Sumioka and Masahiro Shiomi and Ryo Kurazume},
  booktitle = {2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC2022)},
  title     = {Understanding Humanitude Care for Sit-to-stand Motion by Wearable Sensors},
  year      = {2022},
  address   = {Prague, Czech Republic},
  day       = {9-12},
  month     = oct,
  pages     = {1866-1871},
  url       = {https://ieeesmc2022.org/},
  abstract  = {Assisting patients with dementia is an important social issue, and a multi-modal care technique called Humanitude is currently attracting attention. In Humanitude, it is important to have the patient stand up by utilizing their own motor functions as much as possible. The Humanitude care technique encourages caregivers to increase the area of contact with patients during the sit-to-stand motion, but this care technique is not well understood by novice caregivers. Here, we developed smock-type wearable sensors to measure proximity between caregivers and care recipients while assisting the sit-to-stand motion. A measurement experiment was conducted to evaluate how proximity differs when caregivers perform Humanitude care or simulate novice care. In addition, the effects of the different care techniques on the center-of-mass (CoM) trajectory and muscle activity of the care recipient were investigated. As a result, it was found that caregivers tend to bring their top and middle trunk closer in Humanitude care than in simulated novice care. Furthermore, the CoM trajectory and muscle activity under Humanitude care were more similar to those observed when the care recipient stood up independently than under novice care. These results validate the effectiveness of Humanitude care and provide important insights for learning Humanitude techniques.},
  keywords  = {Wearable tactile sensor, Humanitude care, Sit-to-stand},
}
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "A CONTROLLABLE CROSS-GENDER VOICE CONVERSION FOR SOCIAL ROBOT", In ACII2022 WORKSHOP ON AFFECTIVE HUMAN-ROBOT INTERACTION (AHRI), online, October, 2022.
Abstract: In this study, we propose a conversion-intensity controllable model for voice conversion (VC). In particular, we combine the CycleGAN and transformer modules, and build a condition embedding network as a control parameter. The model is first pre-trained with self-supervised learning on the voice reconstruction task, with the condition set to male-to-male or female-to-female. Then, we retrain the model on the cross-gender voice conversion task after the pretraining is completed, with the condition set to male-to-female or female-to-male. In the testing procedure, the condition is expected to be employed as a controllable parameter (scale). The proposed method was evaluated on the Voice Conversion Challenge dataset and compared to two baselines (CycleGAN, CycleTransGAN) with objective and subjective evaluations. The results show that our proposed model is able to convert voice with competitive performance, with the additional function of cross-gender controllability.
BibTeX:
@InProceedings{Fu2022b,
  author    = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {ACII2022 WORKSHOP ON AFFECTIVE HUMAN-ROBOT INTERACTION (AHRI)},
  title     = {A CONTROLLABLE CROSS-GENDER VOICE CONVERSION FOR SOCIAL ROBOT},
  year      = {2022},
  address   = {online},
  day       = {17},
  month     = oct,
  url       = {https://www.a-hri.me/},
  abstract  = {In this study, we propose a conversion-intensity controllable model for voice conversion (VC). In particular, we combine the CycleGAN and transformer modules, and build a condition embedding network as a control parameter. The model is first pre-trained with self-supervised learning on the voice reconstruction task, with the condition set to male-to-male or female-to-female. Then, we retrain the model on the cross-gender voice conversion task after the pretraining is completed, with the condition set to male-to-female or female-to-male. In the testing procedure, the condition is expected to be employed as a controllable parameter (scale). The proposed method was evaluated on the Voice Conversion Challenge dataset and compared to two baselines (CycleGAN, CycleTransGAN) with objective and subjective evaluations. The results show that our proposed model is able to convert voice with competitive performance, with the additional function of cross-gender controllability.},
  keywords  = {speech conversion, cycle-consistent adversarial networks},
}
Aya Nakae, Ehsan Alizadeh Kashtiban, Tetsuro Honda, Chie Kishimoto, Kunihiro Nakai, "Objective evaluation of pain from experimental pressure stimulation by EEG", In IASP 2022 World Congress on Pain, Toronto, Canada, September, 2022.
Abstract: As pain is a subjective symptom and communicating the amount of pain is sometimes difficult, prescribing appropriate amounts of analgesics is often challenging for doctors. To avoid the misuse of analgesics, a system for the objective evaluation of pain will contribute to standardizing pain treatment. Using the pooled EEG data from healthy volunteers with experimental heat pain stimulation, the absolute amplitudes, frequency power, and frequency coherence were amplified; the features of the EEG were then extracted, and an EEG-based pain score algorithm using a regression model was developed. The aim of this study is to objectively evaluate experimental ischemic pain of two different grades with our EEG-based pain score algorithm. The qualities of pain evoked by the KAATSU MASTER, which can control the amount of blood flow and imitate ischemic pain, were numbness, throbbing pain, shooting pain, aching pain, and electric-shock pain. Different levels of experimental pressure pain were successfully discriminated from the electroencephalogram data using machine learning techniques.
BibTeX:
@InProceedings{Nakae2022a,
  author    = {Aya Nakae and Ehsan Alizadeh Kashtiban and Tetsuro Honda and Chie Kishimoto and Kunihiro Nakai},
  booktitle = {IASP 2022 World Congress on Pain},
  title     = {Objective evaluation of pain from experimental pressure stimulation by EEG},
  year      = {2022},
  address   = {Toronto, Canada},
  day       = {19-23},
  month     = sep,
  url       = {https://iaspworldcongress2022.org/},
  abstract  = {As pain is a subjective symptom and communicating the amount of pain is sometimes difficult, prescribing appropriate amounts of analgesics is often challenging for doctors. To avoid the misuse of analgesics, a system for the objective evaluation of pain will contribute to standardizing pain treatment. Using the pooled EEG data from healthy volunteers with experimental heat pain stimulation, the absolute amplitudes, frequency power, and frequency coherence were amplified; the features of the EEG were then extracted, and an EEG-based pain score algorithm using a regression model was developed. The aim of this study is to objectively evaluate experimental ischemic pain of two different grades with our EEG-based pain score algorithm. The qualities of pain evoked by the KAATSU MASTER, which can control the amount of blood flow and imitate ischemic pain, were numbness, throbbing pain, shooting pain, aching pain, and electric-shock pain. Different levels of experimental pressure pain were successfully discriminated from the electroencephalogram data using machine learning techniques.},
}
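Structurally, the algorithm described is a regression from EEG features to a pain score. A minimal sketch of that pipeline on synthetic data (the feature set, model class, and numbers here are placeholders, not the authors'):

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_epochs, n_features = 200, 12            # e.g. band powers x channels (hypothetical)
X = rng.normal(size=(n_epochs, n_features))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=n_epochs)   # synthetic "pain score"

model = Ridge(alpha=1.0)                  # regression model -> EEG-based pain score
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")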
Taiken Shintani, Carlos Toshinori Ishi, Hiroshi Ishiguro, "Expression of Personality by Gaze Movements of an Android Robot in Multi-Party Dialogues", In 31st IEEE International Conference on Robot & Human Interactive Communication (RO-MAN 2022), Naples, Italy, pp. 1534-1541, August, 2022.
Abstract: In this study, we describe an improved version of our proposed model to generate gaze movements (eye and head movements) of a dialogue robot in multi-party dialogue situations, and investigated how the impressions change for models created with data of speakers with different personalities. For that purpose, we used multimodal three-party dialogue data, and first analyzed the distributions of (1) the gaze target (towards dialogue partners or gaze aversion), (2) the gaze duration, and (3) the eyeball direction during gaze aversion. We then generated gaze behaviors in an android robot (Nikola) with the data of two people who were found to have distinctive personalities, and conducted subjective evaluation experiments. Results showed that a significant difference was found in the perceived personalities between the motions generated by the two models.
BibTeX:
@InProceedings{Shintani2022,
  author    = {Taiken Shintani and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {31st IEEE International Conference on Robot \& Human Interactive Communication (RO-MAN 2022)},
  title     = {Expression of Personality by Gaze Movements of an Android Robot in Multi-Party Dialogues},
  year      = {2022},
  address   = {Naples, Italy},
  day       = {29-2},
  month     = aug,
  pages     = {1534-1541},
  url       = {http://www.smile.unina.it/ro-man2022/},
  abstract  = {In this study, we describe an improved version of our proposed model to generate gaze movements (eye and head movements) of a dialogue robot in multi-party dialogue situations, and investigated how the impressions change for models created with data of speakers with different personalities. For that purpose, we used multimodal three-party dialogue data, and first analyzed the distributions of (1) the gaze target (towards dialogue partners or gaze aversion), (2) the gaze duration, and (3) the eyeball direction during gaze aversion. We then generated gaze behaviors in an android robot (Nikola) with the data of two people who were found to have distinctive personalities, and conducted subjective evaluation experiments. Results showed that a significant difference was found in the perceived personalities between the motions generated by the two models.},
}
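The generation model rests on three empirical distributions per speaker: gaze target, gaze duration, and eyeball direction during aversion. A toy sampler over the first two (with invented parameters, purely to show the mechanism):

import random

PROFILES = {                                   # hypothetical per-speaker statistics
    "speaker_A": {"p_partner": 0.7, "mean_dur": 1.8},   # more direct gaze
    "speaker_B": {"p_partner": 0.4, "mean_dur": 0.9},   # more gaze aversion
}

def next_gaze(speaker):
    p = PROFILES[speaker]
    target = "partner" if random.random() < p["p_partner"] else "aversion"
    duration = random.expovariate(1.0 / p["mean_dur"])  # seconds
    return target, round(duration, 2)

random.seed(0)
print([next_gaze("speaker_A") for _ in range(3)])
print([next_gaze("speaker_B") for _ in range(3)])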
Xinyue Li, Carlos Toshinori Ishi, Changzeng Fu, Ryoko Hayashi, "Prosodic and Voice Quality Analyses of Filled Pauses in Japanese Spontaneous Conversation by Chinese learners and Japanese Native Speakers", In Speech Prosody 2022, Lisbon, Portugal, pp. 550-554, May, 2022.
Abstract: The present study documents (1) how Japanese native speakers and L1-Chinese learners of L2 Japanese differ in the production of filled pauses during spontaneous conversations, and (2) how the vowels of filled pauses and ordinary lexical items differ in spontaneous conversation. Prosodic and voice quality measurements were extracted from vowels in filled pauses and ordinary lexical items produced by Japanese native speakers and Chinese learners of L2 Japanese. Statistical results revealed that there are significant differences in prosodic and voice quality measurements, including duration, F0mean, intensity, spectral tilt-related indices, jitter, and shimmer, (1) between Japanese native speakers and Chinese learners of L2 Japanese, as well as (2) between filled pauses and ordinary lexical items. In addition, a random forest analysis was conducted to examine how much the measurements contribute to the classification of filled pauses and ordinary lexical items. Results indicate that duration and intensity play the most significant role, while voice quality related features make a secondary contribution to the classification. Results also suggest that the filled pause production patterns of Chinese learners of L2 Japanese are influenced by their L1 background.
BibTeX:
@InProceedings{Li2022a,
  author    = {Xinyue Li and Carlos Toshinori Ishi and Changzeng Fu and Ryoko Hayashi},
  booktitle = {Speech Prosody 2022},
  title     = {Prosodic and Voice Quality Analyses of Filled Pauses in Japanese Spontaneous Conversation by Chinese learners and Japanese Native Speakers},
  year      = {2022},
  address   = {Lisbon, Portugal},
  day       = {23-26},
  doi       = {10.21437/SpeechProsody.2022-112},
  month     = may,
  pages     = {550-554},
  url       = {http://labfon.letras.ulisboa.pt/sp2022/about.html},
  abstract  = {The present study documents (1) how Japanese native speakers and L1-Chinese learners of L2 Japanese differ in the production of filled pauses during spontaneous conversations, and (2) how the vowels of filled pauses and ordinary lexical items differ in spontaneous conversation. Prosodic and voice quality measurements were extracted from vowels in filled pauses and ordinary lexical items produced by Japanese native speakers and Chinese learners of L2 Japanese. Statistical results revealed that there are significant differences in prosodic and voice quality measurements, including duration, F0mean, intensity, spectral tilt-related indices, jitter, and shimmer, (1) between Japanese native speakers and Chinese learners of L2 Japanese, as well as (2) between filled pauses and ordinary lexical items. In addition, a random forest analysis was conducted to examine how much the measurements contribute to the classification of filled pauses and ordinary lexical items. Results indicate that duration and intensity play the most significant role, while voice quality related features make a secondary contribution to the classification. Results also suggest that the filled pause production patterns of Chinese learners of L2 Japanese are influenced by their L1 background.},
  keywords  = {filled pauses, second language acquisition, spontaneous conversation, prosody, voice quality},
}
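The random forest analysis mentioned at the end of the abstract ranks features by their contribution to the filled-pause classification. A compact stand-in with synthetic data (the feature list comes from the abstract; everything else is invented):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
features = ["duration", "F0mean", "intensity", "spectral_tilt", "jitter", "shimmer"]
X = rng.normal(size=(300, len(features)))
# synthetic labels driven mostly by duration and intensity, echoing the finding
y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(scale=0.5, size=300)) > 0

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in sorted(zip(features, clf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:14s} {imp:.3f}")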
Ehsan Alizadeh Kashtiban, Tetsuro Honda, Chie Kishimoto, Yuya Onishi, Hidenobu Sumioka, Masahiro Shiomi, Aya Nakae, "THE EFFECT OF BEING HUGGED BY A ROBOT ON PAIN", In 12th Congress of the European Pain Federation(EFIC2022), online, April, 2022.
Abstract: As human-to-human contact is limited during COVID-19, the role of robots is gaining attention. It has been reported that hugging can reduce people's mental stress and alleviate pain. Pain is a subjective symptom; however, it is sometimes difficult to prescribe analgesics based on subjective complaints. The development of an objective evaluation method is desired. We have developed an algorithm based on EEG data with experimental pain stimuli. The purpose of this study was to objectively evaluate the effect of being hugged by a robot on pain, using a pain score (PS). The PS could allow us to objectively evaluate the effect of hugging by the robot on pain.
BibTeX:
@InProceedings{Alizadeh2022,
  author    = {Ehsan Alizadeh Kashtiban and Tetsuro Honda and Chie Kishimoto and Yuya Onishi and Hidenobu Sumioka and Masahiro Shiomi and Aya Nakae},
  booktitle = {12th Congress of the European Pain Federation(EFIC2022)},
  title     = {THE EFFECT OF BEING HUGGED BY A ROBOT ON PAIN},
  year      = {2022},
  address   = {online},
  day       = {27-30},
  month     = apr,
  url       = {https://efic-congress.org/},
  abstract  = {As human-to-human contact is limited during COVID-19, the role of robots is gaining attention. It has been reported that hugging can reduce people's mental stress and alleviate pain. Pain is a subjective symptom; however, it is sometimes difficult to prescribe analgesics based on subjective complaints. The development of an objective evaluation method is desired. We have developed an algorithm based on EEG data with experimental pain stimuli. The purpose of this study was to objectively evaluate the effect of being hugged by a robot on pain, using a pain score (PS). The PS could allow us to objectively evaluate the effect of hugging by the robot on pain.},
}
Aya Nakae, Ikan Chou, Tetsuro Honda, Chie Kishimoto, Hidenobu Sumioka, Yuya Onishi, Masahiro Shiomi, "CAN ROBOT’S HUG ALLEVIATE HUMAN PAIN?", In 12th Congress of the European Pain Federation(EFIC2022), Dublin (online), April, 2022.
Abstract: As human-to-human contact is limited during COVID-19, the role of robots is gaining attention. It has been reported that hugging can reduce people's mental stress and alleviate pain. It has also been reported that growth hormone secretion is decreased in fibromyalgia patients and may be involved in the pain mechanism. We investigated the possibility that a robot's hug could alleviate pain, along with changes in the secretion of growth hormone (GH). The results show that a robot's hug has the potential to alleviate human pain. Its effects may be regulated via GH secretion.
BibTeX:
@InProceedings{Nakae2022,
  author    = {Aya Nakae and Ikan Chou and Tetsuro Honda and Chie Kishimoto and Hidenobu Sumioka and Yuya Onishi and Masahiro Shiomi},
  booktitle = {12th Congress of the European Pain Federation(EFIC2022)},
  title     = {CAN ROBOT’S HUG ALLEVIATE HUMAN PAIN?},
  year      = {2022},
  address   = {Dublin (online)},
  day       = {27-30},
  month     = apr,
  url       = {https://efic-congress.org/},
  abstract  = {As human-to-human contact is limited during COVID-19, the role of robots is gaining attention. It has been reported that hugging can reduce people's mental stress and alleviate pain. It has also been reported that growth hormone secretion is decreased in fibromyalgia patients and may be involved in the pain mechanism. We investigated the possibility that a robot's hug could alleviate pain, along with changes in the secretion of growth hormone (GH). The results show that a robot's hug has the potential to alleviate human pain. Its effects may be regulated via GH secretion.},
}
Takashi Takuma, Koki Haruno, Kosuke Yamada, Hidenobu Sumioka, Takashi Minato, Masahiro Shiomi, "Stretchable Multi-modal Sensor using Capacitive Cloth for Soft Mobile Robot Passing through Gap", In 2021 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 2021), Sanya, China (online), pp. 1960-1967, December, 2021.
Abstract: A challenging issue for soft robots is developing soft sensors that measure such non-contact information as the distance between a robot and obstacles as well as contact information such as the stretch length caused by external force. Another issue is adopting the sensor in a mobile robot to measure the topography of a pathway. We adopt capacitive cloth, which contains conductive and insulation layers, and measure not only such contact information as the robot’s deformation but also such non-contact information as the distance between the cloth and objects. Because the cloth cannot stretch though it deforms, it is processed into a Kirigami structure and embedded into a silicone plate. This paper shows the cloth’s basic specifications by measuring the relationship between the capacitance and the stretch length, which corresponds to the contact information, and between the capacitance and distance, which corresponds to the non-contact information. The cloth is also embedded in a soft mobile robot that passes through a narrow gap while making contact with it. The pathway’s shape is estimated by observing the profile of the cloth’s capacitance using contact information. From the results of the first experiment, which measured the stretch length, we observed a strong correlation between the stretch length and the capacitance. In the second experiment, on non-contact information and distance, the capacitance greatly changed when a conductive material was close to the cloth, although a less conductive material did not greatly affect the capacitance. In the last experiment, in which we embedded the cloth into the soft robot, the gap’s height and the length of the pathway were detected by observing the profile of the cloth’s capacitance. These results suggest that capacitive cloth has multi-modal sensing ability, including both conventional contact and novel non-contact information.
BibTeX:
@InProceedings{Takuma2021,
  author    = {Takashi Takuma and Koki Haruno and Kosuke Yamada and Hidenobu Sumioka and Takashi Minato and Masahiro Shiomi},
  booktitle = {2021 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 2021)},
  title     = {Stretchable Multi-modal Sensor using Capacitive Cloth for Soft Mobile Robot Passing through Gap},
  year      = {2021},
  address   = {Sanya, China (online)},
  day       = {27-31},
  month     = dec,
  pages     = {1960-1967},
  url       = {https://ieee-robio.org/2021/},
  abstract  = {A challenging issue for soft robots is developing soft sensors that measure such non-contact information as the distance between a robot and obstacles as well as contact information such as the stretch length caused by external force. Another issue is adopting the sensor in a mobile robot to measure the topography of a pathway. We adopt capacitive cloth, which contains conductive and insulation layers, and measure not only such contact information as the robot’s deformation but also such non-contact information as the distance between the cloth and objects. Because the cloth cannot stretch though it deforms, it is processed into a Kirigami structure and embedded into a silicone plate. This paper shows the cloth’s basic specifications by measuring the relationship between the capacitance and the stretch length, which corresponds to the contact information, and between the capacitance and distance, which corresponds to the non-contact information. The cloth is also embedded in a soft mobile robot that passes through a narrow gap while making contact with it. The pathway’s shape is estimated by observing the profile of the cloth’s capacitance using contact information. From the results of the first experiment, which measured the stretch length, we observed a strong correlation between the stretch length and the capacitance. In the second experiment, on non-contact information and distance, the capacitance greatly changed when a conductive material was close to the cloth, although a less conductive material did not greatly affect the capacitance. In the last experiment, in which we embedded the cloth into the soft robot, the gap’s height and the length of the pathway were detected by observing the profile of the cloth’s capacitance. These results suggest that capacitive cloth has multi-modal sensing ability, including both conventional contact and novel non-contact information.},
}
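The first experiment boils down to a calibration curve: capacitance as a near-linear function of stretch length, inverted at run time to estimate stretch. A sketch with made-up readings (not the measured data):

import numpy as np

stretch_mm = np.array([0, 5, 10, 15, 20, 25])
capacitance_pf = np.array([10.1, 11.9, 14.2, 16.0, 18.1, 19.8])  # illustrative values

slope, intercept = np.polyfit(stretch_mm, capacitance_pf, 1)     # linear calibration
r = np.corrcoef(stretch_mm, capacitance_pf)[0, 1]
print(f"slope={slope:.2f} pF/mm, correlation r={r:.3f}")

def estimate_stretch(c_pf):
    """Invert the calibration: capacitance reading -> stretch estimate (mm)."""
    return (c_pf - intercept) / slope

print(f"{estimate_stretch(15.0):.1f} mm")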
Bowen Wu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "Probabilistic Human-like Gesture Synthesis from Speech using GRU-based WGAN", In The GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Workshop 2021 (an official workshop of ACM ICMI’21), Virtual, pp. 194-201, October, 2021.
Abstract: Gestures are crucial for increasing the human-likeness of agents and robots to achieve smoother interactions with humans. The realization of an effective system to model human gestures, which are matched with the speech utterances, is necessary to be embedded in these agents. In this work, we propose a GRU-based autoregressive generation model for gesture generation, which is trained with a CNN-based discriminator in an adversarial manner using a WGAN-based learning algorithm. The model is trained to output the rotation angles of the joints in the upper body, and implemented to animate a CG avatar. The motions synthesized by the proposed system are evaluated via an objective measure and a subjective experiment, showing that the proposed model outperforms a baseline model which is trained by a state-of-the-art GAN-based algorithm, using the same dataset. This result reveals that it is essential to develop a stable and robust learning algorithm for training gesture generation models. Our code can be found in https://github.com/wubowen416/gesture-generation.
BibTeX:
@Inproceedings{Wu2021,
  author    = {Bowen Wu and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  title     = {Probabilistic Human-like Gesture Synthesis from Speech using GRU-based WGAN},
  booktitle = {The GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Workshop 2021 (an official workshop of ACM ICMI’21)},
  year      = {2021},
  pages     = {194-201},
  address   = {Virtual},
  month     = oct,
  day       = {22},
  doi       = {10.1145/3461615.3485407},
  url       = {https://dl.acm.org/doi/10.1145/3461615.3485407},
  abstract  = {Gestures are crucial for increasing the human-likeness of agents and robots to achieve smoother interactions with humans. The realization of an effective system to model human gestures, which are matched with the speech utterances, is necessary to be embedded in these agents. In this work, we propose a GRU-based autoregressive generation model for gesture generation, which is trained with a CNN-based discriminator in an adversarial manner using a WGAN-based learning algorithm. The model is trained to output the rotation angles of the joints in the upper body, and implemented to animate a CG avatar. The motions synthesized by the proposed system are evaluated via an objective measure and a subjective experiment, showing that the proposed model outperforms a baseline model which is trained by a state-of-the-art GAN-based algorithm, using the same dataset. This result reveals that it is essential to develop a stable and robust learning algorithm for training gesture generation models. Our code can be found in https://github.com/wubowen416/gesture-generation.},
}
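The training setup pairs an autoregressive GRU generator with a CNN critic under a WGAN objective. The fragment below compresses that into a shape-checking sketch with placeholder dimensions; the authors' actual code is at the repository linked in the abstract:

import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, speech_dim=26, hid=128, n_joints=15):
        super().__init__()
        self.gru = nn.GRU(speech_dim, hid, batch_first=True)
        self.out = nn.Linear(hid, n_joints * 3)   # joint rotation angles per frame

    def forward(self, speech):                    # (B, T, speech_dim)
        h, _ = self.gru(speech)
        return self.out(h)                        # (B, T, n_joints * 3)

# CNN critic scoring motion sequences, as in WGAN training
critic = nn.Sequential(nn.Conv1d(45, 64, 5, padding=2), nn.ReLU(),
                       nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, 1))

gen = Generator()
speech = torch.randn(4, 120, 26)                  # batch of speech feature sequences
fake = gen(speech).transpose(1, 2)                # (B, channels, T) for the critic
real = torch.randn(4, 45, 120)
critic_loss = critic(fake).mean() - critic(real).mean()   # WGAN critic objective
print(fake.shape, float(critic_loss))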
Chinenye Augustine Ajibo, Carlos Toshinori Ishi, Hiroshi Ishiguro, "Advocating Attitudinal Change Through Android Robot's Intention-Based Expressive Behaviors: Toward WHO COVID-19 Guidelines Adherence", In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021), Prague, Czech Republic, September, 2021.
Abstract: Motivated by the fact that some human emotional expressions promote affiliating functions such as signaling, social change and support, which have social benefits, we investigate how these behaviors can be extended to the Human-Robot Interaction (HRI) scenario. Specifically, we explored how an android robot could be furnished with socially motivated expressions geared towards eliciting adherence to COVID-19 guidelines. To this effect, we analyzed how different behaviors associated with the social expressions in this kind of situation occur in Human-Human Interaction (HHI), and designed a scenario with context-inspired behaviors (polite, gentle, displeased and angry) to enforce social compliance in a violator. We then implemented these behaviors in an android robot, and subjectively evaluated how effectively these behaviors could be expressed by the robot, and how these behaviors are perceived in terms of their appropriateness, effectiveness and tendency to enforce social compliance with WHO COVID-19 guidelines.
BibTeX:
@InProceedings{Ajibo2021a,
  author    = {Chinenye Augustine Ajibo and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021)},
  title     = {Advocating Attitudinal Change Through Android Robot's Intention-Based Expressive Behaviors: Toward WHO COVID-19 Guidelines Adherence},
  year      = {2021},
  address   = {Prague, Czech Republic},
  day       = {27-01},
  month     = sep,
  url       = {https://www.iros2021.org/},
  abstract  = {Motivated by the fact that some human emotional expressions promote affiliating functions such as signaling, social change and support, which have social benefits, we investigate how these behaviors can be extended to the Human-Robot Interaction (HRI) scenario. Specifically, we explored how an android robot could be furnished with socially motivated expressions geared towards eliciting adherence to COVID-19 guidelines. To this effect, we analyzed how different behaviors associated with the social expressions in this kind of situation occur in Human-Human Interaction (HHI), and designed a scenario with context-inspired behaviors (polite, gentle, displeased and angry) to enforce social compliance in a violator. We then implemented these behaviors in an android robot, and subjectively evaluated how effectively these behaviors could be expressed by the robot, and how these behaviors are perceived in terms of their appropriateness, effectiveness and tendency to enforce social compliance with WHO COVID-19 guidelines.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
}
Hidenobu Sumioka, Kohei Nakajima, Kurima Sakai, Takashi Minato, Masahiro Shiomi, "Wearable Tactile Sensor Suit for Natural Body Dynamics Extraction: Case Study on Posture Prediction Based on Physical Reservoir Computing", In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021), Prague, Czech Republic, pp. 9481-9488, September, 2021.
Abstract: We propose a wearable tactile sensor suit, which can be regarded as tactile sensor networks, for monitoring natural body dynamics to be exploited as a computational resource for estimating the posture of a human or robot that wears it. We emulated the periodic motions of a wearer (a human and an android robot) using a novel sensor suit with a 9-channel fabric tactile sensor on the left arm. The emulation was conducted by using a linear regression (LR) model of sensor states as readout modules that predict the next wearer’s movement using the current sensor data. Our result shows that the LR performance is comparable with other recurrent neural network approaches, suggesting that a fabric tactile sensor network is capable of monitoring the natural body motions, and further, this natural body dynamics itself can be used as an effective computational resource.
BibTeX:
@InProceedings{Sumioka2021c,
  author    = {Hidenobu Sumioka and Kohei Nakajima and Kurima Sakai and Takashi Minato and Masahiro Shiomi},
  booktitle = {2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021)},
  title     = {Wearable Tactile Sensor Suit for Natural Body Dynamics Extraction: Case Study on Posture Prediction Based on Physical Reservoir Computing},
  year      = {2021},
  address   = {Prague, Czech Republic},
  day       = {27-01},
  month     = sep,
  pages     = {9481-9488},
  url       = {https://www.iros2021.org/},
  abstract  = {We propose a wearable tactile sensor suit, which can be regarded as tactile sensor networks, for monitoring natural body dynamics to be exploited as a computational resource for estimating the posture of a human or robot that wears it. We emulated the periodic motions of a wearer (a human and an android robot) using a novel sensor suit with a 9-channel fabric tactile sensor on the left arm. The emulation was conducted by using a linear regression (LR) model of sensor states as readout modules that predict the next wearer’s movement using the current sensor data. Our result shows that the LR performance is comparable with other recurrent neural network approaches, suggesting that a fabric tactile sensor network is capable of monitoring the natural body motions, and further, this natural body dynamics itself can be used as an effective computational resource.},
}
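In physical reservoir computing terms, the sensor suit itself does the nonlinear temporal processing, and only a linear readout is trained. A self-contained sketch with a synthetic periodic motion standing in for the wearer (not the paper's data or sensor model):

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
T = 500
posture = np.sin(np.linspace(0, 20 * np.pi, T))           # periodic arm motion
# 9 "tactile channels": delayed, noisy echoes of the body state (stand-in reservoir)
sensors = np.stack([np.roll(posture, k) for k in range(1, 10)], axis=1)
sensors += 0.05 * rng.normal(size=sensors.shape)

readout = LinearRegression().fit(sensors[:-1], posture[1:])  # LR readout: next posture
print(f"one-step prediction R^2: {readout.score(sensors[:-1], posture[1:]):.3f}")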
Takuto Akiyoshi, Junya Nakanishi, Hiroshi Ishiguro, Hidenobu Sumioka, Masahiro Shiomi, "A Robot that Encourages Self-Disclosure to Reduce Anger Mood", In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021), Prague, Czech Republic, September, 2021.
Abstract: One essential role of social robots is supporting human mental health through interaction with people. In this study, we focused on making people’s moods more positive through conversations about their problems, as our first step toward achieving a robot that cares about mental health. We employed the column method, which is a typical stress-coping technique in Japan, and designed conversational contents for a robot. We implemented conversational functions based on the column method for a social robot, as well as a self-schema estimation function using conversational data. In addition, we proposed conversational strategies to support users in noticing their self-schemas and automatic thoughts, which are related to mental health support. We experimentally evaluated our system’s effectiveness and found that participants who used our system with the proposed conversational strategies made more self-disclosures and experienced less anger compared to those who did not use the proposed conversational strategies. On the other hand, the strategies did not significantly increase the performance of the self-schema estimation function.
BibTeX:
@InProceedings{Akiyoshi2021a,
  author    = {Takuto Akiyoshi and Junya Nakanishi and Hiroshi Ishiguro and Hidenobu Sumioka and Masahiro Shiomi},
  booktitle = {2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021)},
  title     = {A Robot that Encourages Self-Disclosure to Reduce Anger Mood},
  year      = {2021},
  address   = {Prague, Czech Republic},
  day       = {27-01},
  month     = sep,
  url       = {https://www.iros2021.org/},
  abstract  = {One essential role of social robots is supporting human mental health through interaction with people. In this study, we focused on making people’s moods more positive through conversations about their problems, as our first step toward achieving a robot that cares about mental health. We employed the column method, which is a typical stress-coping technique in Japan, and designed conversational contents for a robot. We implemented conversational functions based on the column method for a social robot, as well as a self-schema estimation function using conversational data. In addition, we proposed conversational strategies to support users in noticing their self-schemas and automatic thoughts, which are related to mental health support. We experimentally evaluated our system’s effectiveness and found that participants who used our system with the proposed conversational strategies made more self-disclosures and experienced less anger compared to those who did not use the proposed conversational strategies. On the other hand, the strategies did not significantly increase the performance of the self-schema estimation function.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
}
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Yuichiro Yoshikawa, Takamasa Iio, Hiroshi Ishiguro, "Using an Android Robot to Improve Social Connectedness by Sharing Recent Experiences of Group Members in Human-Robot Conversations", In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021), Prague, Czech Republic, September, 2021.
Abstract: Social connectedness is vital for developing group cohesion and strengthening belongingness. However, with the accelerating pace of modern life, people have fewer opportunities to participate in group-building activities. Furthermore, owing to the teleworking and quarantine requirements necessitated by the Covid-19 pandemic, the social connectedness of group members may become weak. To address this issue, in this study, we used an android robot to conduct daily conversations, and as an intermediary to increase intra-group connectedness. Specifically, we constructed an android robot system for collecting and sharing recent member-related experiences. The system has a chatbot function based on BERT and a memory function with a neural-network-based dialog action analysis model. We conducted a 3-day human-robot conversation experiment to verify the effectiveness of the proposed system. The results of a questionnaire-based evaluation and empirical analysis demonstrate that the proposed system can increase the familiarity and closeness of group members. This suggests that the proposed method is useful for enhancing social connectedness. Moreover, it can improve the closeness of the user-robot relation, as well as the performance of robots in conducting conversations with people.
BibTeX:
@InProceedings{Fu2021c,
  author    = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Yuichiro Yoshikawa and Takamasa Iio and Hiroshi Ishiguro},
  booktitle = {2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021)},
  title     = {Using an Android Robot to Improve Social Connectedness by Sharing Recent Experiences of Group Members in Human-Robot Conversations},
  year      = {2021},
  address   = {Prague, Czech Republic},
  day       = {27-01},
  month     = sep,
  url       = {https://www.iros2021.org/},
  abstract  = {Social connectedness is vital for developing group cohesion and strengthening belongingness. However, with the accelerating pace of modern life, people have fewer opportunities to participate in group-building activities. Furthermore, owing to the teleworking and quarantine requirements necessitated by the Covid-19 pandemic, the social connectedness of group members may become weak. To address this issue, in this study, we used an android robot to conduct daily conversations, and as an intermediary to increase intra-group connectedness. Specifically, we constructed an android robot system for collecting and sharing recent member-related experiences. The system has a chatbot function based on BERT and a memory function with a neural-network-based dialog action analysis model. We conducted a 3-day human-robot conversation experiment to verify the effectiveness of the proposed system. The results of a questionnaire-based evaluation and empirical analysis demonstrate that the proposed system can increase the familiarity and closeness of group members. This suggests that the proposed method is useful for enhancing social connectedness. Moreover, it can improve the closeness of the user-robot relation, as well as the performance of robots in conducting conversations with people.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
}
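As a rough illustration of the memory-sharing idea above, the following hypothetical sketch retrieves the stored group-member experience closest to the current utterance, using mean-pooled BERT embeddings from the Hugging Face transformers library. The cosine-similarity retrieval scheme, the model name, and the example memories are all assumptions, not the paper's implementation.

# Hypothetical sketch: share the stored member experience most similar to
# the current utterance, scored with mean-pooled BERT sentence embeddings.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc).last_hidden_state          # (batch, seq, hidden)
    mask = enc["attention_mask"].unsqueeze(-1)
    return (out * mask).sum(1) / mask.sum(1)         # mean-pooled vectors

memories = ["Alice went hiking last weekend",        # assumed stored memories
            "Bob finished his first marathon"]

def share_related_memory(utterance):
    sims = F.cosine_similarity(embed([utterance]), embed(memories))
    return memories[int(sims.argmax())]

print(share_related_memory("I want to start running"))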
Nobuo Yamato, Hidenobu Sumioka, Masahiro Shiomi, Hiroshi Ishiguro, Youji Kohda, "Robotic Baby Doll with Minimal Design for Interactive Doll Therapy in Elderly Dementia Care", In 12th International Conference on Applied Human Factors and Ergonomics (AHFE 2021), Virtual Conference, pp. 417-422, July, 2021.
Abstract: We designed HIRO, a robotic baby doll, to be used in an interactive, non-pharmacological intervention that combines doll therapy with robot technology for elderly people with dementia. We took a minimal design approach; only the most basic human-like features are represented on the robotic system to encourage users to use their imagination to fill in the missing details. The robot emits baby voice recordings as the user interacts with it, giving the robot more realistic mannerisms and enhancing the interaction between user and robot. In addition, the minimal design simplifies the system configuration of the robot, making it inexpensive and intuitive for users to handle. In this paper, we discuss the benefits of the developed robot for elderly dementia patients and their caregivers.
BibTeX:
@InProceedings{Yamato2021,
  author    = {Nobuo Yamato and Hidenobu Sumioka and Masahiro Shiomi and Hiroshi Ishiguro and Youji Kohda},
  booktitle = {12th International Conference on Applied Human Factors and Ergonomics (AHFE 2021)},
  title     = {Robotic Baby Doll with Minimal Design for Interactive Doll Therapy in Elderly Dementia Care},
  year      = {2021},
  address   = {Virtual Conference},
  day       = {25-29},
  doi       = {10.1007/978-3-030-80840-2_48},
  month     = jul,
  pages     = {417-422},
  url       = {https://link.springer.com/chapter/10.1007%2F978-3-030-80840-2_48},
  abstract  = {We designed HIRO, a robotic baby doll, to be used in an interactive, non-pharmacological intervention that combines doll therapy with robot technology for elderly people with dementia. We took a minimal design approach; only the most basic human-like features are represented on the robotic system to encourage users to use their imagination to fill in the missing details. The robot emits baby voice recordings as the user interacts with it, giving the robot more realistic mannerisms and enhancing the interaction between user and robot. In addition, the minimal design simplifies the system configuration of the robot, making it inexpensive and intuitive for users to handle. In this paper, we discuss the benefits of the developed robot for elderly dementia patients and their caregivers.},
  keywords  = {Elderly care, Therapy robot, Human-robot interaction, Welfare care, Dementia},
}
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "MAEC: Multi-instance learning with an Adversarial Auto-encoder-based Classifier for Speech Emotion Recognition", In 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021), vol. SPE-24, no. 3, Toronto, Ontario, Canada, pp. 6299-6303, June, 2021.
Abstract: In this paper, we propose an adversarial auto-encoder based classifier, which can regularize the distribution of latent representation to smooth the boundaries among categories. Moreover, we adopt multi-instance learning by dividing speech into a bag of segments to capture the most salient moments for presenting an emotion. The proposed model was trained on the IEMOCAP dataset and evaluated on the in-corpus validation set (IEMOCAP) and the cross-corpus validation set (MELD). The experiment results show that our model outperforms the baseline on in-corpus validation and increases the scores on cross-corpus validation with regularization.
BibTeX:
@InProceedings{Fu2021a,
  author    = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021)},
  title     = {MAEC: Multi-instance learning with an Adversarial Auto-encoder-based Classifier for Speech Emotion Recognition},
  year      = {2021},
  address   = {Toronto, Ontario, Canada},
  day       = {6-11},
  doi       = {10.1109/ICASSP39728.2021.9413640},
  month     = jun,
  number    = {3},
  pages     = {6299-6303},
  url       = {https://2021.ieeeicassp.org/},
  volume    = {SPE-24},
  abstract  = {In this paper, we propose an adversarial auto-encoder based classifier, which can regularize the distribution of latent representation to smooth the boundaries among categories. Moreover, we adopt multi-instance learning by dividing speech into a bag of segments to capture the most salient moments for presenting an emotion. The proposed model was trained on the IEMOCAP dataset and evaluated on the in-corpus validation set (IEMOCAP) and the cross-corpus validation set (MELD). The experiment results show that our model outperforms the baseline on in-corpus validation and increases the scores on cross-corpus validation with regularization.},
  keywords  = {speech emotion recognition, multi-instance, adversarial auto-encoder},
}
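The core of an adversarial auto-encoder-based classifier can be sketched in a few lines of PyTorch: a discriminator pushes latent codes toward a Gaussian prior (the regularization that smooths category boundaries), while a classifier head operates on the same latent space. All layer sizes, the combined single "training step", and the omission of the paper's multi-instance bagging of speech segments are simplifying assumptions.

# Minimal adversarial auto-encoder classifier sketch (sizes are assumptions).
import torch
import torch.nn as nn

LATENT = 32
encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, LATENT))
decoder = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, 128))
discrim = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(),
                        nn.Linear(32, 1), nn.Sigmoid())
classifier = nn.Linear(LATENT, 4)  # e.g. four emotion classes

bce, ce, mse = nn.BCELoss(), nn.CrossEntropyLoss(), nn.MSELoss()

def training_step(x, y):
    z = encoder(x)
    recon_loss = mse(decoder(z), x)                  # reconstruct the input
    cls_loss = ce(classifier(z), y)                  # classify from the latent
    prior = torch.randn_like(z)                      # samples from the prior
    d_loss = bce(discrim(prior), torch.ones(len(x), 1)) + \
             bce(discrim(z.detach()), torch.zeros(len(x), 1))
    g_loss = bce(discrim(z), torch.ones(len(x), 1))  # encoder fools the critic
    # In real training the discriminator and the rest use separate optimizers.
    return recon_loss + cls_loss + g_loss, d_loss

x, y = torch.randn(8, 128), torch.randint(0, 4, (8,))
gen_loss, disc_loss = training_step(x, y)
print(float(gen_loss), float(disc_loss))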
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "An End-to-End Multitask Learning Model to Improve Speech Emotion Recognition", In EUSIPCO 2020 28th European Signal Processing Conference, Amsterdam, The Netherlands (Virtual), pp. 351-355, January, 2021.
Abstract: Speech Emotion Recognition (SER) has been shown to benefit from many of the recent advances in deep learning but still has room to grow. In this paper, we propose an attention-based CNN-BLSTM model trained in an end-to-end (E2E) manner. We first extract Mel-spectrograms from the wav files instead of using hand-crafted features. Then we adopt two types of attention mechanisms to let the model focus on salient periods of speech emotions over the temporal dimension. Considering that there are many individual differences among people in expressing emotions, we incorporate speaker recognition as an auxiliary task. Moreover, since the training data set has a small sample size, we include data from another language as data augmentation. We evaluated the proposed method on the SAVEE dataset by training it in single-task, multitask, and cross-language settings. The evaluation shows that our proposed model achieves 73.62% weighted accuracy and 71.11% unweighted accuracy in speech emotion recognition, outperforming the baseline by 11.13 points.
BibTeX:
@InProceedings{Fu2021,
  author    = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {EUSIPCO 2020 28th European Signal Processing Conference},
  title     = {An End-to-End Multitask Learning Model to Improve Speech Emotion Recognition},
  year      = {2021},
  address   = {Amsterdam, The Netherlands (Virtual)},
  day       = {18-22},
  doi       = {10.23919/Eusipco47968.2020.9287484},
  month     = jan,
  pages     = {351-355},
  url       = {https://eusipco2020.org/},
  abstract  = {Speech Emotion Recognition (SER) has been shown to benefit from many of the recent advances in deep learning but still has room to grow. In this paper, we propose an attention-based CNN-BLSTM model trained in an end-to-end (E2E) manner. We first extract Mel-spectrograms from the wav files instead of using hand-crafted features. Then we adopt two types of attention mechanisms to let the model focus on salient periods of speech emotions over the temporal dimension. Considering that there are many individual differences among people in expressing emotions, we incorporate speaker recognition as an auxiliary task. Moreover, since the training data set has a small sample size, we include data from another language as data augmentation. We evaluated the proposed method on the SAVEE dataset by training it in single-task, multitask, and cross-language settings. The evaluation shows that our proposed model achieves 73.62% weighted accuracy and 71.11% unweighted accuracy in speech emotion recognition, outperforming the baseline by 11.13 points.},
}
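A minimal PyTorch sketch of the multitask idea above: a shared encoder over a Mel-spectrogram feeds both an emotion head and an auxiliary speaker-recognition head, with simple temporal attention pooling. The exact CNN-BLSTM configuration and all dimensions here are assumptions, not the paper's architecture.

# Shared encoder + two task heads; sizes and layers are illustrative.
import torch
import torch.nn as nn

class MultitaskSER(nn.Module):
    def __init__(self, n_mels=40, n_emotions=7, n_speakers=4):
        super().__init__()
        self.cnn = nn.Conv1d(n_mels, 64, kernel_size=5, padding=2)
        self.blstm = nn.LSTM(64, 32, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(64, 1)            # scores each time step
        self.emotion_head = nn.Linear(64, n_emotions)
        self.speaker_head = nn.Linear(64, n_speakers)

    def forward(self, mel):                     # mel: (batch, n_mels, frames)
        h = torch.relu(self.cnn(mel)).transpose(1, 2)
        h, _ = self.blstm(h)                    # (batch, frames, 64)
        w = torch.softmax(self.attn(h), dim=1)  # temporal attention weights
        pooled = (w * h).sum(dim=1)
        return self.emotion_head(pooled), self.speaker_head(pooled)

model = MultitaskSER()
emo_logits, spk_logits = model(torch.randn(2, 40, 100))
# The total loss would weight the two cross-entropies, e.g. L = L_emo + 0.3 * L_spk.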
Jiaqi Shi, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "3D Skeletal Movement enhanced Emotion Recognition Network", In Asia-Pacific Signal and Information Processing Association Annual Summit and Conference 2020 (APSIPA ASC 2020), Virtual Conference, pp. 1060-1066, December, 2020.
Abstract: Automatic emotion recognition has become an important trend in the field of natural human-computer interaction and artificial intelligence. Although gesture is one of the most important components of nonverbal communication and has a considerable impact on emotion recognition, motion modalities are rarely considered in the study of affective computing. An important reason is the lack of large open emotion databases containing skeletal movement data. In this paper, we extract 3D skeleton information from video and apply the method to the IEMOCAP database to add a new modality. We propose an attention-based convolutional neural network that takes the extracted data as input to predict the speaker's emotional state. We also combine our model with models using other modalities to provide complementary information in the emotion classification task. The combined model utilizes audio signals, text information, and skeletal data simultaneously, and it significantly outperforms the bimodal model, demonstrating the effectiveness of the method.
BibTeX:
@InProceedings{Shi2020d,
  author    = {Jiaqi Shi and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {Asia-Pacific Signal and Information Processing Association Annual Summit and Conference 2020 (APSIPA ASC 2020)},
  title     = {3D Skeletal Movement enhanced Emotion Recognition Network},
  year      = {2020},
  address   = {Virtual Conference},
  day       = {7-10},
  month     = dec,
  pages     = {1060-1066},
  url       = {http://www.apsipa2020.org/},
  abstract  = {Automatic emotion recognition has become an important trend in the field of natural human-computer interaction and artificial intelligence. Although gesture is one of the most important components of nonverbal communication and has a considerable impact on emotion recognition, motion modalities are rarely considered in the study of affective computing. An important reason is the lack of large open emotion databases containing skeletal movement data. In this paper, we extract 3D skeleton information from video and apply the method to the IEMOCAP database to add a new modality. We propose an attention-based convolutional neural network that takes the extracted data as input to predict the speaker's emotional state. We also combine our model with models using other modalities to provide complementary information in the emotion classification task. The combined model utilizes audio signals, text information, and skeletal data simultaneously, and it significantly outperforms the bimodal model, demonstrating the effectiveness of the method.},
}
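One simple way to realize the tri-modal combination described above is late fusion of per-modality emotion logits. The weighted-average scheme below is an illustrative assumption; the paper's combined model is more elaborate.

# Illustrative late fusion of audio, text, and skeleton emotion logits.
import torch

def late_fusion(audio_logits, text_logits, skeleton_logits, weights=(1.0, 1.0, 1.0)):
    stacked = torch.stack([w * l for w, l in
                           zip(weights, (audio_logits, text_logits, skeleton_logits))])
    return stacked.sum(dim=0) / sum(weights)   # weighted average over modalities

logits = [torch.randn(2, 4) for _ in range(3)]  # batch of 2, 4 emotion classes
print(late_fusion(*logits).argmax(dim=1))       # fused class predictions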
Changzeng Fu, Jiaqi Shi, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "AAEC: An Adversarial Autoencoder-based Classifier for Audio Emotion Recognition", In MuSe 2020 - The Multimodal Sentiment in Real-life Media Challenge (Conference: ACM Multimedia Conference 2020), Seattle, United States, pp. 45-51, October, 2020.
Abstract: In recent years, automatic emotion recognition has attracted the attention of researchers because of its great effects and wide implementations in supporting humans’ activities. Given that the data about emotions is difficult to collect and organize into a large database like the dataset of text or images, the true distribution would be difficult to be completely covered by the training set, which affects the model’s robustness and generalization in subsequent applications. In this paper, we proposed a model, Adversarial Autoencoder-based Classifier (AAEC), that can not only augment the data within real data distribution but also reasonably extend the boundary of the current data distribution to a possible space. Such an extended space would be better to fit the distribution of training and testing sets. In addition to comparing with baseline models, we modified our proposed model into different configurations and conducted a comprehensive self-comparison with audio modality. The results of our experiment show that our proposed model outperforms the baselines.
BibTeX:
@Inproceedings{Fu2020a,
  author    = {Changzeng Fu and Jiaqi Shi and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  title     = {AAEC: An Adversarial Autoencoder-based Classifier for Audio Emotion Recognition},
  booktitle = {MuSe 2020 - The Multimodal Sentiment in Real-life Media Challenge (Conference: ACM Multimedia Conference 2020)},
  year      = {2020},
  pages     = {45-51},
  address   = {Seattle, United States},
  month     = oct,
  day       = {12-16},
  doi       = {10.1145/3423327.3423669},
  url       = {https://dl.acm.org/doi/10.1145/3423327.3423669},
  abstract  = {In recent years, automatic emotion recognition has attracted the attention of researchers because of its great effects and wide implementations in supporting humans’ activities. Given that the data about emotions is difficult to collect and organize into a large database like the dataset of text or images, the true distribution would be difficult to be completely covered by the training set, which affects the model’s robustness and generalization in subsequent applications. In this paper, we proposed a model, Adversarial Autoencoder-based Classifier (AAEC), that can not only augment the data within real data distribution but also reasonably extend the boundary of the current data distribution to a possible space. Such an extended space would be better to fit the distribution of training and testing sets. In addition to comparing with baseline models, we modified our proposed model into different configurations and conducted a comprehensive self-comparison with audio modality. The results of our experiment show that our proposed model outperforms the baselines.},
  keywords  = {audio modality, neural networks, adversarial auto-encoder, emotion recognition},
}
Carlos T. Ishi, Ryusuke Mikata, Hiroshi Ishiguro, "Person-directed pointing gestures and inter-personal relationship: Expression of politeness to friendliness by android robots", In International Conference on Intelligent Robots and Systems (IROS) 2020, Las Vegas, USA (Virtual), October, 2020.
Abstract: Pointing gestures directed at a person are usually considered impolite. However, such person-directed pointing gestures commonly appear in casual dialogue interactions in several different forms. In this study, we first analyzed pointing gestures appearing in human-human dialogue interactions and observed different trends in the use of different gesture types, according to the inter-personal relationship between the dialogue partners. Then, we conducted multiple subjective experiments by systematically creating behaviors in an android robot, in order to investigate the effects of different types of pointing gestures on the impression of the robot’s attitudes. Several factors were taken into account: sentence type (formal or colloquial), pointing gesture motion type (hand shape, such as open palm or index finger, hand orientation and motion direction), gesture speed and gesture hold duration. Evaluation results indicated that the impression of careful/polite or careless/casual is affected by all analyzed factors, and the appropriateness of a behavior depends on the inter-personal relationship with the dialogue partner.
BibTeX:
@InProceedings{Ishi2020c,
  author    = {Carlos T. Ishi and Ryusuke Mikata and Hiroshi Ishiguro},
  booktitle = {International Conference on Intelligent Robots and Systems (IROS) 2020},
  title     = {Person-directed pointing gestures and inter-personal relationship: Expression of politeness to friendliness by android robots},
  year      = {2020},
  address   = {Las Vegas, USA (Virtual)},
  day       = {25-29},
  month     = oct,
  url       = {http://www.iros2020.org/},
  abstract  = {Pointing gestures directed at a person are usually considered impolite. However, such person-directed pointing gestures commonly appear in casual dialogue interactions in several different forms. In this study, we first analyzed pointing gestures appearing in human-human dialogue interactions and observed different trends in the use of different gesture types, according to the inter-personal relationship between the dialogue partners. Then, we conducted multiple subjective experiments by systematically creating behaviors in an android robot, in order to investigate the effects of different types of pointing gestures on the impression of the robot’s attitudes. Several factors were taken into account: sentence type (formal or colloquial), pointing gesture motion type (hand shape, such as open palm or index finger, hand orientation and motion direction), gesture speed and gesture hold duration. Evaluation results indicated that the impression of careful/polite or careless/casual is affected by all analyzed factors, and the appropriateness of a behavior depends on the inter-personal relationship with the dialogue partner.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
}
Hidenobu Sumioka, Masahiro Shiomi, Nobuo Yamato, Hiroshi Ishiguro, "Acceptance of a minimal design of a human infant for facilitating affective interaction with older adults: A case study toward interactive doll therapy", In The 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN2020), no. WeP1P.19, Virtual Conference, pp. 775-780, August, 2020.
Abstract: We introduce a minimal design approach for a robot for interactive doll therapy. Our approach aims for positive interactions with older adults with dementia by expressing only the most basic elements of human-like features and relying on the user’s imagination to supplement the missing information. Based on this approach, we developed HIRO, a baby-sized robot with an abstract body representation and no facial expressions. The recorded voice of a real human infant emitted by the robot enhances its human-like features and facilitates emotional interaction between older people and the robot. A field study showed that HIRO was accepted by older adults with dementia and facilitated positive interaction by stimulating their imagination.
BibTeX:
@InProceedings{Sumioka2020,
  author    = {Hidenobu Sumioka and Masahiro Shiomi and Nobuo Yamato and Hiroshi Ishiguro},
  booktitle = {The 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN2020)},
  title     = {Acceptance of a minimal design of a human infant for facilitating affective interaction with older adults: A case study toward interactive doll therapy},
  year      = {2020},
  address   = {Virtual Conference},
  day       = {31-4},
  month     = aug,
  number    = {WeP1P.19},
  pages     = {775-780},
  url       = {https://ras.papercept.net/conferences/conferences/ROMAN20/program/ROMAN20_ContentListWeb_3.html},
  abstract  = {We introduce a minimal design approach for a robot for interactive doll therapy. Our approach aims for positive interactions with older adults with dementia by expressing only the most basic elements of human-like features and relying on the user’s imagination to supplement the missing information. Based on this approach, we developed HIRO, a baby-sized robot with an abstract body representation and no facial expressions. The recorded voice of a real human infant emitted by the robot enhances its human-like features and facilitates emotional interaction between older people and the robot. A field study showed that HIRO was accepted by older adults with dementia and facilitated positive interaction by stimulating their imagination.},
}
Carlos T. Ishi, Ryusuke Mikata, Hiroshi Ishiguro, "Analysis of the factors involved in person-directed pointing gestures in dialogue speech", In Speech Prosody 2020, Tokyo, Japan, pp. 309-313, May, 2020.
Abstract: Pointing gestures directed at a person are usually considered impolite. However, such person-directed pointing gestures commonly appear in casual dialogue interactions. In this study, we extracted pointing gestures appearing in a three-party spontaneous dialogue database and analyzed several factors, including gesture type (hand shape, orientation, motion direction), dialogue acts, inter-personal relationship, and attitudes. Analysis results indicate that more than half of the observed pointing gestures use the index finger towards the interlocutor but are not particularly perceived as impolite. Pointing with the index finger moving in the forward direction was found to be predominant towards interlocutors in a close relationship, while pointing with the open palm was more frequent towards a first-met or older person. The majority of the pointing gestures were used along with utterances whose contents are related or directed to the pointed person, while some were accompanied by attitudinal expressions such as yielding the turn, drawing attention, sympathizing, and joking/bantering.
BibTeX:
@InProceedings{Ishi2020a,
  author    = {Carlos T. Ishi and Ryusuke Mikata and Hiroshi Ishiguro},
  booktitle = {Speech Prosody 2020},
  title     = {Analysis of the factors involved in person-directed pointing gestures in dialogue speech},
  year      = {2020},
  address   = {Tokyo, Japan},
  day       = {25-28},
  doi       = {10.21437/SpeechProsody.2020-63},
  month     = may,
  pages     = {309-313},
  url       = {https://sp2020.jpn.org/},
  abstract  = {Pointing gestures directed at a person are usually considered impolite. However, such person-directed pointing gestures commonly appear in casual dialogue interactions. In this study, we extracted pointing gestures appearing in a three-party spontaneous dialogue database and analyzed several factors, including gesture type (hand shape, orientation, motion direction), dialogue acts, inter-personal relationship, and attitudes. Analysis results indicate that more than half of the observed pointing gestures use the index finger towards the interlocutor but are not particularly perceived as impolite. Pointing with the index finger moving in the forward direction was found to be predominant towards interlocutors in a close relationship, while pointing with the open palm was more frequent towards a first-met or older person. The majority of the pointing gestures were used along with utterances whose contents are related or directed to the pointed person, while some were accompanied by attitudinal expressions such as yielding the turn, drawing attention, sympathizing, and joking/bantering.},
}
Xinyue Li, Carlos Toshinori Ishi, Ryoko Hayashi, "Prosodic and Voice Quality Feature of Japanese Speech Conveying Attitudes: Mandarin Chinese Learners and Japanese Native Speakers", In Speech Prosody 2020, The University of Tokyo, Tokyo, pp. 41-45, May, 2020.
Abstract: To clarify cross-linguistic differences in attitudinal speech and how L2 learners express attitudes, in the present study Japanese speech representing four classes of attitudes was recorded: friendly/hostile, polite/rude, serious/joking and praising/blaming, elicited from Japanese native speakers and Mandarin Chinese learners of L2 Japanese. To account for language transfer, Mandarin Chinese speech was also recorded. Acoustic analyses including F0, duration and voice quality features revealed different patterns of utterances by Japanese native speakers and Mandarin Chinese learners. Analysis of sentence-final tones also differentiates native speakers from L2 learners in the production of attitudinal speech. Furthermore, for the word carrying sentential stress, open-quotient-valued voice range profiles based on Electroglottography signals suggest that the attitudinal expressions of Mandarin Chinese learners are affected by their mother tongue.
BibTeX:
@InProceedings{Li2020a,
  author    = {Xinyue Li and Carlos Toshinori Ishi and Ryoko Hayashi},
  booktitle = {Speech Prosody 2020},
  title     = {Prosodic and Voice Quality Feature of Japanese Speech Conveying Attitudes: Mandarin Chinese Learners and Japanese Native Speakers},
  year      = {2020},
  address   = {The University of Tokyo, Tokyo},
  day       = {24-28},
  doi       = {10.21437/speechProsody.2020-9},
  month     = may,
  pages     = {41-45},
  url       = {https://sp2020.jpn.org/},
  abstract  = {To clarify cross-linguistic differences in attitudinal speech and how L2 learners express attitudes, in the present study Japanese speech representing four classes of attitudes was recorded: friendly/hostile, polite/rude, serious/joking and praising/blaming, elicited from Japanese native speakers and Mandarin Chinese learners of L2 Japanese. To account for language transfer, Mandarin Chinese speech was also recorded. Acoustic analyses including F0, duration and voice quality features revealed different patterns of utterances by Japanese native speakers and Mandarin Chinese learners. Analysis of sentence-final tones also differentiates native speakers from L2 learners in the production of attitudinal speech. Furthermore, for the word carrying sentential stress, open-quotient-valued voice range profiles based on Electroglottography signals suggest that the attitudinal expressions of Mandarin Chinese learners are affected by their mother tongue.},
}
Chinenye Augustine Ajibo, Ryusuke Mikata, Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, "Generation and Evaluation of Audio-Visual Anger Emotional Expression for Android Robot", In The 15th Annual ACM/IEEE International Conference on Human Robot Interaction (HRI2020), Cambridge, UK, pp. 96-98, March, 2020.
Abstract: Recent studies in human-human interaction (HHI) have revealed the propensity of negative emotional expression to initiate affiliating functions that benefit the expresser and help foster cordiality and closeness among interlocutors. However, efforts in human-robot interaction (HRI) have not investigated the consequences of robots expressing negative emotion. Thus, as a first step, this study aims to furnish humanoid robots with natural audio-visual anger expression for HRI. Based on the analysis results from a multimodal HHI corpus, we implemented different types of gestures related to anger expressions for humanoid robots and carried out a subjective evaluation of the generated anger expressions. Findings from this study revealed that the semantic context and functional content of anger-based utterances play a significant role in the choice of gesture to accompany such utterances. Our current results show that the "pointing" gesture is judged more appropriate for utterances containing "you" and for anger-based "questioning" utterances, while the "both arms spread" and "both arm swing" gestures were evaluated as more appropriate for "declarative" and "disagreement" utterances, respectively.
BibTeX:
@InProceedings{Ajibo2020,
  author    = {Chinenye Augustine Ajibo and Ryusuke Mikata and Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro},
  booktitle = {The 15th Annual ACM/IEEE International Conference on Human Robot Interaction (HRI2020)},
  title     = {Generation and Evaluation of Audio-Visual Anger Emotional Expression for Android Robot},
  year      = {2020},
  address   = {Cambridge, UK},
  day       = {23-26},
  doi       = {10.1145/3371382.3378282},
  month     = mar,
  pages     = {96-98},
  url       = {https://humanrobotinteraction.org/2020/},
  abstract  = {Recent studies in human-human interaction (HHI) have revealed the propensity of negative emotional expression to initiate affiliating functions that benefit the expresser and help foster cordiality and closeness among interlocutors. However, efforts in human-robot interaction (HRI) have not investigated the consequences of robots expressing negative emotion. Thus, as a first step, this study aims to furnish humanoid robots with natural audio-visual anger expression for HRI. Based on the analysis results from a multimodal HHI corpus, we implemented different types of gestures related to anger expressions for humanoid robots and carried out a subjective evaluation of the generated anger expressions. Findings from this study revealed that the semantic context and functional content of anger-based utterances play a significant role in the choice of gesture to accompany such utterances. Our current results show that the "pointing" gesture is judged more appropriate for utterances containing "you" and for anger-based "questioning" utterances, while the "both arms spread" and "both arm swing" gestures were evaluated as more appropriate for "declarative" and "disagreement" utterances, respectively.},
}
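The reported findings map naturally onto a small decision rule: "pointing" for utterances containing "you" or questioning utterances, "both arms spread" for declarative ones, and "both arm swing" for disagreement. The sketch below encodes exactly that mapping; how an utterance is classified into these types is left as an assumption.

# Tiny rule sketch reflecting the reported findings; the gesture names come
# from the abstract, while the utterance-type labels are assumed inputs.
def choose_anger_gesture(utterance_type, mentions_you=False):
    if mentions_you or utterance_type == "question":
        return "pointing"
    if utterance_type == "declarative":
        return "both_arms_spread"
    if utterance_type == "disagreement":
        return "both_arm_swing"
    return "no_gesture"

print(choose_anger_gesture("question"))        # -> pointing
print(choose_anger_gesture("disagreement"))    # -> both_arm_swing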
Masahiro Shiomi, Hidenobu Sumioka, Kurima Sakai, Tomo Funayama, Takashi Minato, "SOTO: An Android Platform with a Masculine Appearance for Social Touch Interaction", In The 15th Annual ACM/IEEE International Conference on Human Robot Interaction (HRI2020), Cambridge, UK, pp. 447-449, March, 2020.
Abstract: In this paper, we report an android platform with a masculine appearance. In the human-human interaction research field, several studies have reported the effects of gender in the social touch context. However, in the human-robot interaction research field, studies of gender effects have mainly focused on human genders; a robot’s perceived gender has received less attention. The purpose of developing this android is to investigate gender effects on social touch in the context of human-robot interaction, compared to existing android platforms with feminine appearances. For this purpose, we prepared a nonexistent face design, in order to avoid appearance effects, and fabric-based capacitance-type upper-body touch sensors.
BibTeX:
@InProceedings{Shiomi2020,
  author    = {Masahiro Shiomi and Hidenobu Sumioka and Kurima Sakai and Tomo Funayama and Takashi Minato},
  booktitle = {The 15th Annual ACM/IEEE International Conference on Human Robot Interaction (HRI2020)},
  title     = {SOTO: An Android Platform with a Masculine Appearance for Social Touch Interaction},
  year      = {2020},
  address   = {Cambridge, UK},
  day       = {23-26},
  doi       = {10.1145/3371382.3378283},
  month     = mar,
  pages     = {447-449},
  url       = {https://humanrobotinteraction.org/2020/},
  abstract  = {In this paper, we report an android platform with a masculine appearance. In the human-human interaction research field, several studies have reported the effects of gender in the social touch context. However, in the human-robot interaction research field, studies of gender effects have mainly focused on human genders; a robot’s perceived gender has received less attention. The purpose of developing this android is to investigate gender effects on social touch in the context of human-robot interaction, compared to existing android platforms with feminine appearances. For this purpose, we prepared a nonexistent face design, in order to avoid appearance effects, and fabric-based capacitance-type upper-body touch sensors.},
}
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Yuichiro Yoshikawa, Hiroshi Ishiguro, "SeMemNN: A Semantic Matrix-Based Memory Neural Network for Text Classification", In 14th IEEE International Conference on Semantic Computing (ICSC 2020), San Diego, California, USA, pp. 123-127, February, 2020.
Abstract: Text categorization is the task of assigning labels to documents written in a natural language, and it has numerous real-world applications including sentiment analysis as well as traditional topic assignment tasks. In this paper, we propose five different configurations for a semantic matrix-based memory neural network trained in an end-to-end manner and evaluate the proposed method on the AG News and Sogou News datasets. The best configuration of our proposed method outperforms VDCNN on the text classification task and learns semantics faster. Moreover, we also evaluate our model on small-scale data. The results show that our proposed method still achieves better results than VDCNN.
BibTeX:
@InProceedings{Fu2019_1,
  author    = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Yuichiro Yoshikawa and Hiroshi Ishiguro},
  booktitle = {14th IEEE International Conference on Semantic Computing (ICSC 2020)},
  title     = {SeMemNN: A Semantic Matrix-Based Memory Neural Network for Text Classification},
  year      = {2020},
  address   = {San Diego, California, USA},
  day       = {3-5},
  doi       = {10.1109/ICSC.2020.00024},
  month     = feb,
  pages     = {123-127},
  url       = {https://www.ieee-icsc.org/},
  abstract  = {Text categorization is the task of assigning labels to documents written in a natural language, and it has numerous real-world applications including sentiment analysis as well as traditional topic assignment tasks. In this paper, we propose five different configurations for a semantic matrix-based memory neural network trained in an end-to-end manner and evaluate the proposed method on the AG News and Sogou News datasets. The best configuration of our proposed method outperforms VDCNN on the text classification task and learns semantics faster. Moreover, we also evaluate our model on small-scale data. The results show that our proposed method still achieves better results than VDCNN.},
}
Carlos T. Ishi, Akira Utsumi, Isamu Nagasawa, "Analysis of sound activities and voice activity detection using in-car microphone arrays", In 2020 IEEE/SICE International Symposium on System Integration (SII2020), Honolulu, Hawaii, USA, pp. 640-645, January, 2020.
Abstract: In this study, we evaluate the collaboration of multiple microphone arrays installed in the interior of a car, with the aim of robustly identifying the driver’s voice activities embedded in car environment noises. We first conducted a preliminary analysis of the sound activities identified from the sound direction estimations of different microphone arrays arranged under the physical constraints of the car interior. Driving audio data was collected under several car environment conditions, including engine noise, road noise, air conditioner, winker sounds, radio sounds, the driver’s voice, passenger voices, and external noises from other cars. The driver’s voice activity intervals could be identified with a 97% detection rate by combining two microphone arrays, one around the “eyesight” camera system cover and the other around the driver’s sun visor.
BibTeX:
@InProceedings{Ishi2020,
  author    = {Carlos T. Ishi and Akira Utsumi and Isamu Nagasawa},
  booktitle = {2020 IEEE/SICE International Symposium on System Integration (SII2020)},
  title     = {Analysis of sound activities and voice activity detection using in-car microphone arrays},
  year      = {2020},
  address   = {Honolulu, Hawaii, USA},
  day       = {12-15},
  month     = jan,
  pages     = {640-645},
  url       = {https://sice-si.org/conf/SII2020/index.html},
  abstract  = {In this study, we evaluate the collaboration of multiple microphone arrays installed in the interior of a car, with the aim of robustly identifying the driver’s voice activities embedded in car environment noises. We first conducted a preliminary analysis of the sound activities identified from the sound direction estimations of different microphone arrays arranged under the physical constraints of the car interior. Driving audio data was collected under several car environment conditions, including engine noise, road noise, air conditioner, winker sounds, radio sounds, the driver’s voice, passenger voices, and external noises from other cars. The driver’s voice activity intervals could be identified with a 97% detection rate by combining two microphone arrays, one around the “eyesight” camera system cover and the other around the driver’s sun visor.},
}
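A hedged sketch of the two-array combination idea: flag driver speech only when both microphone arrays localize the sound near the driver's seat and the frame carries enough energy. The angles, tolerance, and energy gate below are illustrative assumptions, not the measured geometry of the paper's setup.

# Combine direction-of-arrival estimates from two arrays to gate driver VAD.
DRIVER_DOA = {"camera_array": 30.0, "visor_array": -40.0}  # assumed degrees
TOLERANCE = 15.0                                           # assumed, degrees

def is_driver_speech(doa_camera, doa_visor, frame_energy, energy_floor=0.01):
    near_driver = (abs(doa_camera - DRIVER_DOA["camera_array"]) < TOLERANCE and
                   abs(doa_visor - DRIVER_DOA["visor_array"]) < TOLERANCE)
    return bool(near_driver and frame_energy > energy_floor)

print(is_driver_speech(28.0, -42.0, 0.2))   # True: both arrays agree
print(is_driver_speech(28.0, 10.0, 0.2))    # False: visor array disagrees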
Xiqian Zheng, Masahiro Shiomi, Takashi Minato, Hiroshi Ishiguro, "Preliminary Investigation about Relationship between Perceived Intimacy and Touch Characteristics", In The 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), Macau, China, pp. 3409, November, 2019.
Abstract: This study investigated the effects of touch characteristics that change perceived intimacy in human-robot touch interaction with an android robot that has a feminine, human-like appearance. We examined two kinds of touch characteristics (length and touch-part); the results showed that the touch-part is useful for changing perceived intimacy, although the length did not show significant effects.
BibTeX:
@InProceedings{Zheng2019b,
  author    = {Xiqian Zheng and Masahiro Shiomi and Takashi Minato and Hiroshi Ishiguro},
  booktitle = {The 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019)},
  title     = {Preliminary Investigation about Relationship between Perceived Intimacy and Touch Characteristics},
  year      = {2019},
  address   = {Macau, China},
  day       = {4-8},
  month     = nov,
  pages     = {3409},
  url       = {https://www.iros2019.org/},
  abstract  = {This study investigated the effects of touch characteristics that change perceived intimacy in human-robot touch interaction with an android robot that has a feminine, human-like appearance. We examined two kinds of touch characteristics (length and touch-part); the results showed that the touch-part is useful for changing perceived intimacy, although the length did not show significant effects.},
}
Soheil Keshmiri, Hidenobu Sumioka, Takashi Minato, Masahiro Shiomi, Hiroshi Ishiguro, "Exploring the Causal Modeling of Human-Robot Touch Interaction", In The Eleventh International Conference on Social Robotics (ICSR2019), Madrid, Spain, pp. 235-244, November, 2019.
Abstract: Interpersonal touch plays a pivotal role in individuals’ emotional and physical well-being, yet despite its psychological and therapeutic effects it has been mostly neglected in such fields of research as socially-assistive robotics. On the other hand, the growing presence of interactive social robots in our daily lives inevitably entails such interactions as touch and hug between robots and humans. Therefore, deriving robust models of such physical interactions, to enable robots to perform them in a naturalistic fashion, is highly desirable. In this study, we investigated whether it is possible to identify distinct patterns of different touch interactions that are general representations of their respective types. For this purpose, we adapted three touch interaction paradigms and asked human subjects to perform them on a mannequin equipped with a touch sensor on its torso. We then applied Wiener-Granger causality to the time series of activated channels of this touch sensor that were common (per touch paradigm) among all participants. The analyses of these touch time series suggested that different types of touch can be quantified in terms of causal association between the sequential steps that form the variation information among their patterns. These results hint at the potential utility of such generalized touch patterns for devising social robots with robust causal models of naturalistic touch behaviour for their human-robot touch interactions.
BibTeX:
@InProceedings{keshmiri2019f,
  author    = {Soheil Keshmiri and Hidenobu Sumioka and Takashi Minato and Masahiro Shiomi and Hiroshi Ishiguro},
  booktitle = {The Eleventh International Conference on Social Robotics (ICSR2019)},
  title     = {Exploring the Causal Modeling of Human-Robot Touch Interaction},
  year      = {2019},
  address   = {Madrid, Spain},
  day       = {26-29},
  doi       = {10.1007/978-3-030-35888-4_22},
  month     = nov,
  pages     = {235-244},
  url       = {https://link.springer.com/chapter/10.1007%2F978-3-030-35888-4_22},
  abstract  = {Interpersonal touch plays a pivotal role in individuals’ emotional and physical well-being, yet despite its psychological and therapeutic effects it has been mostly neglected in such fields of research as socially-assistive robotics. On the other hand, the growing presence of interactive social robots in our daily lives inevitably entails such interactions as touch and hug between robots and humans. Therefore, deriving robust models of such physical interactions, to enable robots to perform them in a naturalistic fashion, is highly desirable. In this study, we investigated whether it is possible to identify distinct patterns of different touch interactions that are general representations of their respective types. For this purpose, we adapted three touch interaction paradigms and asked human subjects to perform them on a mannequin equipped with a touch sensor on its torso. We then applied Wiener-Granger causality to the time series of activated channels of this touch sensor that were common (per touch paradigm) among all participants. The analyses of these touch time series suggested that different types of touch can be quantified in terms of causal association between the sequential steps that form the variation information among their patterns. These results hint at the potential utility of such generalized touch patterns for devising social robots with robust causal models of naturalistic touch behaviour for their human-robot touch interactions.},
}
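The causal analysis above can be reproduced in miniature with the Wiener-Granger test in statsmodels: given two touch-sensor channel time series, test whether one Granger-causes the other. The synthetic lagged data below stands in for real sensor channels.

# Minimal Granger-causality sketch over synthetic touch-channel time series.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
cause = rng.normal(size=300)
effect = np.roll(cause, 2) + 0.3 * rng.normal(size=300)  # lagged copy + noise

# statsmodels convention: test whether the 2nd column Granger-causes the 1st.
data = np.column_stack([effect, cause])
results = grangercausalitytests(data, maxlag=3)
print(results[2][0]["ssr_ftest"])  # (F statistic, p-value, df_denom, df_num) at lag 2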
Hidenobu Sumioka, Takashi Minato, Masahiro Shiomi, "Development of a sensor suit for touch and pre-touch perception toward close human-robot touch interaction", In RoboTac 2019: New Advances in Tactile Sensation, Perception, and Learning in Robotics: Emerging Materials and Technologies for Manipulation, a workshop at the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), Macau, China, November, 2019.
Abstract: In this paper, we propose that recognition of social touch from a human should be considered as both pre-touch interaction and post-touch interaction. To build a social robot that facilitates both, we aim to develop a touch sensor system that enables a robot to detect situations before being touched by a human as well as after being touched. In the rest of the paper, we first summarize a design concept of a sensor system for social touch. Next, as a first step, we develop a sensor suit that detects situations before being touched by a human, using fabric-based proximity sensors. Then, we report a preliminary experiment to evaluate the developed sensor as a proximity sensor for touch interaction. Finally, we discuss future studies.
BibTeX:
@InProceedings{Sumioka2019e,
  author    = {Hidenobu Sumioka and Takashi Minato and Masahiro Shiomi},
  booktitle = {RoboTac 2019: New Advances in Tactile Sensation, Perception, and Learning in Robotics: Emerging Materials and Technologies for Manipulation, a workshop at the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019)},
  title     = {Development of a sensor suit for touch and pre-touch perception toward close human-robot touch interaction},
  year      = {2019},
  address   = {Macau, China},
  day       = {4-8},
  month     = nov,
  url       = {https://www.iros2019.org/about https://www.iros2019.org/workshops-and-tutorials https://robotac19.aau.at/},
  abstract  = {In this paper, we propose that recognition of social touch from a human should be considered as both pre-touch interaction and post-touch interaction. To build a social robot that facilitates both, we aim to develop a touch sensor system that enables a robot to detect situations before being touched by a human as well as after being touched. In the rest of the paper, we first summarize a design concept of a sensor system for social touch. Next, as a first step, we develop a sensor suit that detects situations before being touched by a human, using fabric-based proximity sensors. Then, we report a preliminary experiment to evaluate the developed sensor as a proximity sensor for touch interaction. Finally, we discuss future studies.},
}
Jan Magyar, Masahiko Kobayashi, Shuichi Nishio, Peter Sincak, Hiroshi Ishiguro, "Autonomous Robotic Dialogue System with Reinforcement Learning for Elderlies with Dementia", In 2019 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Bari, Italy, pp. 1-6, October, 2019.
Abstract: To learn each elderly person’s response patterns, we used reinforcement learning to adapt the robot to each individual. Moreover, the robot does not depend on speech recognition; instead, it estimates the elderly person’s state from nonverbal information. We experimented with three elderly people with dementia in a care home.
BibTeX:
@Inproceedings{Magya2019,
  author    = {Jan Magyar and Masahiko Kobayashi and Shuichi Nishio and Peter Sincak and Hiroshi Ishiguro},
  title     = {Autonomous Robotic Dialogue System with Reinforcement Learning for Elderlies with Dementia},
  booktitle = {2019 IEEE International Conference on Systems, Man, and Cybernetics (SMC)},
  year      = {2019},
  pages     = {1-6},
  address   = {Bari, Italy},
  month     = oct,
  day       = {6-9},
  url       = {http://smc2019.org/index.html},
  abstract  = {To learn each elderly person’s response patterns, we used reinforcement learning to adapt the robot to each individual. Moreover, the robot does not depend on speech recognition; instead, it estimates the elderly person’s state from nonverbal information. We experimented with three elderly people with dementia in a care home.},
}
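The abstract does not specify the learning algorithm, so the following is a deliberately simple epsilon-greedy bandit sketch of the per-user adaptation idea: the robot learns which dialogue action a given person responds well to, using a nonverbal engagement score as reward. The actions, rewards, and the bandit formulation itself are assumptions.

# Epsilon-greedy sketch of per-user dialogue-action adaptation (assumed setup).
import random

ACTIONS = ["greet", "ask_about_family", "play_music", "show_photo"]

class BanditPolicy:
    def __init__(self, epsilon=0.2):
        self.epsilon = epsilon
        self.value = {a: 0.0 for a in ACTIONS}   # running mean reward
        self.count = {a: 0 for a in ACTIONS}

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)        # explore
        return max(ACTIONS, key=self.value.get)  # exploit

    def update(self, action, reward):
        self.count[action] += 1
        self.value[action] += (reward - self.value[action]) / self.count[action]

policy = BanditPolicy()
for _ in range(100):
    a = policy.select()
    reward = 1.0 if a == "play_music" else 0.2   # stand-in engagement score
    policy.update(a, reward)
print(max(policy.value, key=policy.value.get))   # likely "play_music"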
Ryusuke Mikata, Carlos T. Ishi, Takashi Minato, Hiroshi Ishiguro, "Analysis of factors influencing the impression of speaker individuality in android robots", In The 28th IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN2019), Le Meridien, Windsor Place, New Delhi, India, pp. 1224-1229, October, 2019.
Abstract: Humans use not only verbal information but also non-verbal information in daily communication. Among the non-verbal information, we have proposed methods for automatically generating hand gestures in android robots, with the purpose of generating natural human-like motion. In this study, we investigate the effects of hand gesture models trained or designed for different speakers on the impression of individuality conveyed through android robots. We consider that it is possible to express individuality in a robot by creating hand motions that are unique to that individual. Three factors were taken into account: the appearance of the robot, the voice, and the hand motion. Subjective evaluation experiments were conducted comparing motions generated in two android robots, two speaker voices, and two motion types, to evaluate how each modality affects the impression of speaker individuality. Evaluation results indicated that all three factors affect the impression of speaker individuality, while different trends were found depending on whether or not the android is a copy of an existing person.
BibTeX:
@InProceedings{Mikata2019,
  author    = {Ryusuke Mikata and Carlos T. Ishi and Takashi Minato and Hiroshi Ishiguro},
  booktitle = {The 28th IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN2019)},
  title     = {Analysis of factors influencing the impression of speaker individuality in android robots},
  year      = {2019},
  address   = {Le Meridien, Windsor Place, New Delhi, India},
  day       = {14-18},
  month     = oct,
  pages     = {1224-1229},
  url       = {https://ro-man2019.org/},
  abstract  = {Humans use not only verbal information but also non-verbal information in daily communication. Among the non-verbal information, we have proposed methods for automatically generating hand gestures in android robots, with the purpose of generating natural human-like motion. In this study, we investigate the effects of hand gesture models trained or designed for different speakers on the impression of individuality conveyed through android robots. We consider that it is possible to express individuality in a robot by creating hand motions that are unique to that individual. Three factors were taken into account: the appearance of the robot, the voice, and the hand motion. Subjective evaluation experiments were conducted comparing motions generated in two android robots, two speaker voices, and two motion types, to evaluate how each modality affects the impression of speaker individuality. Evaluation results indicated that all three factors affect the impression of speaker individuality, while different trends were found depending on whether or not the android is a copy of an existing person.},
}
Carlos Ishi, Ryusuke Mikata, Takashi Minato, Hiroshi Ishiguro, "Online processing for speech-driven gesture motion generation in android robots", In The 2019 IEEE-RAS International Conference on Humanoid Robots, Toronto, Canada, pp. 508-514, October, 2019.
Abstract: Hand gestures commonly occur in daily dialogue interactions and have important functions in communication. In this study, we proposed and implemented online processing for speech-driven gesture motion generation in an android robot dialogue system. Issues of motion overlap and speech interruptions by the dialogue partner were taken into account. We then conducted two experiments to evaluate the effects of occasional dis-synchrony between the generated motions and speech, and the effects of hold duration control after speech interruptions. Evaluation results indicated that beat gestures are more critical in terms of speech-motion synchrony and should not be delayed by more than 400 ms relative to the speech utterances. Evaluation of the second experiment indicated that gesture hold durations of around 0.5 to 2 seconds after an interruption look natural, while longer durations may give the impression that the robot is displeased.
BibTeX:
@InProceedings{Ishi2019c,
  author    = {Carlos Ishi and Ryusuke Mikata and Takashi Minato and Hiroshi Ishiguro},
  booktitle = {The 2019 IEEE-RAS International Conference on Humanoid Robots},
  title     = {Online processing for speech-driven gesture motion generation in android robots},
  year      = {2019},
  address   = {Toronto, Canada},
  day       = {15-17},
  month     = oct,
  pages     = {508-514},
  url       = {http://humanoids2019.loria.fr/},
  abstract  = {Hand gestures commonly occur in daily dialogue interactions and have important functions in communication. In this study, we proposed and implemented online processing for speech-driven gesture motion generation in an android robot dialogue system. Issues of motion overlap and speech interruptions by the dialogue partner were taken into account. We then conducted two experiments to evaluate the effects of occasional dis-synchrony between the generated motions and speech, and the effects of hold duration control after speech interruptions. Evaluation results indicated that beat gestures are more critical in terms of speech-motion synchrony and should not be delayed by more than 400 ms relative to the speech utterances. Evaluation of the second experiment indicated that gesture hold durations of around 0.5 to 2 seconds after an interruption look natural, while longer durations may give the impression that the robot is displeased.},
}
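The two timing results above (beat gestures should lag speech by at most 400 ms, and post-interruption holds of roughly 0.5-2 seconds look natural) can be encoded as a tiny scheduling rule; the scheduler structure itself is an assumption, not the paper's implementation.

# Encode the reported timing constraints as a small gesture-scheduling rule.
MAX_BEAT_DELAY = 0.4      # seconds, from the evaluation results above
NATURAL_HOLD = 0.5        # seconds, lower end of the natural 0.5-2 s range

def schedule_gesture(speech_onset, gesture_ready, interrupted=False):
    if gesture_ready - speech_onset > MAX_BEAT_DELAY:
        return None                       # too late: drop the beat gesture
    hold = NATURAL_HOLD if interrupted else 0.0
    return {"start": gesture_ready, "hold": hold}

print(schedule_gesture(0.0, 0.3))         # fires, no hold needed
print(schedule_gesture(0.0, 0.6))         # None: more than 400 ms late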
Chaoran Liu, Carlos Ishi, Hiroshi Ishiguro, "A Neural Turn-taking Model without RNN", In the 20th Annual Conference of the International Speech Communication Association (INTERSPEECH 2019), Graz, Austria, pp. 4150-4154, September, 2019.
Abstract: Sequential data such as speech and dialog are usually modeled by Recurrent Neural Networks (RNNs) and their derivatives, since information can travel through time in this kind of architecture. However, RNNs have disadvantages, such as the limited depth of the neural networks and a GPU-unfriendly training process. Estimating the timing of turn-taking is an important feature of a dialog system. Such a task requires knowledge of the past dialog context and has been modeled using RNNs in several studies. In this paper, we propose a non-RNN model for estimating the timing of turn-taking in dialogs. The proposed model takes lexical and acoustic features as its input to predict the end of a turn. We conducted experiments on four types of Japanese conversation datasets. The experimental results show that with proper neural network design, long-term information in a dialog can propagate without a recurrent structure, and the proposed model can outperform canonical RNN-based architectures on the task of turn-taking estimation.
BibTeX:
@InProceedings{Liu2019b,
  author    = {Chaoran Liu and Carlos Ishi and Hiroshi Ishiguro},
  booktitle = {the 20th Annual Conference of the International Speech Communication Association (INTERSPEECH 2019)},
  title     = {A Neural Turn-taking Model without RNN},
  year      = {2019},
  address   = {Graz, Austria},
  day       = {15-19},
  doi       = {10.21437/Interspeech.2019-2270},
  month     = sep,
  pages     = {4150-4154},
  url       = {https://www.interspeech2019.org/},
  abstract  = {Sequential data such as speech and dialog are usually modeled by Recurrent Neural Networks (RNNs) and their derivatives, since information can travel through time in this kind of architecture. However, RNNs have disadvantages, such as the limited depth of the neural networks and a GPU-unfriendly training process. Estimating the timing of turn-taking is an important feature of a dialog system. Such a task requires knowledge of the past dialog context and has been modeled using RNNs in several studies. In this paper, we propose a non-RNN model for estimating the timing of turn-taking in dialogs. The proposed model takes lexical and acoustic features as its input to predict the end of a turn. We conducted experiments on four types of Japanese conversation datasets. The experimental results show that with proper neural network design, long-term information in a dialog can propagate without a recurrent structure, and the proposed model can outperform canonical RNN-based architectures on the task of turn-taking estimation.},
  keywords  = {turn-taking, deep learning, capsule network, CNN, Dilated ConvNet},
}
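Reflecting the "without RNN" design and the Dilated ConvNet keyword above, here is a minimal PyTorch sketch: stacked dilated 1-D convolutions give an exponentially growing receptive field over dialogue features, and a linear head predicts end-of-turn at the latest frame. The feature dimensions and the two-class head are assumptions, not the paper's exact model.

# Dilated-convolution turn-taking sketch: long context without recurrence.
import torch
import torch.nn as nn

class DilatedTurnTaking(nn.Module):
    def __init__(self, in_dim=40, channels=64, dilations=(1, 2, 4, 8)):
        super().__init__()
        layers, prev = [], in_dim
        for d in dilations:  # receptive field grows exponentially with depth
            layers += [nn.Conv1d(prev, channels, kernel_size=3,
                                 dilation=d, padding=d), nn.ReLU()]
            prev = channels
        self.tcn = nn.Sequential(*layers)
        self.head = nn.Linear(channels, 2)    # end-of-turn vs. hold

    def forward(self, feats):                 # feats: (batch, in_dim, frames)
        h = self.tcn(feats)
        return self.head(h[:, :, -1])         # predict at the latest frame

model = DilatedTurnTaking()
print(model(torch.randn(2, 40, 200)).shape)   # torch.Size([2, 2])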
Carlos Ishi, Takayuki Kanda, "Prosodic and voice quality analyses of loud speech: differences of hot anger and far-directed speech", In Speech, Music and Mind 2019 (SMM 2019): Detecting and Influencing Mental States with Audio, a satellite workshop of Interspeech 2019, Vienna, Austria, pp. 1-5, September, 2019.
Abstract: Loud speech may appear in different attitudinal situations, so in human-robot speech interaction the robot should be able to understand such situations. In this study, we analyzed the differences in the acoustic-prosodic and voice quality features of loud speech in two situations: hot anger (aggressive/frenzied speech) and far-directed speech (i.e., speech addressed to a person at a far distance). Analysis results indicated that both speaking styles are accompanied by greater power and higher pitch, while differences were observed in the intonation: far-directed voices tend to have high power and high pitch over the whole utterance, while angry speech has more pitch movements over a larger pitch range. Regarding voice quality, both styles tend to be tenser (higher vocal effort), but angry speech tends to be more pressed, with the local appearance of harsh voices (with irregularities in the vocal fold vibrations).
BibTeX:
@InProceedings{Ishi2019b,
  author    = {Carlos Ishi and Takayuki Kanda},
  booktitle = {Speech, Music and Mind 2019 (SMM 2019): Detecting and Influencing Mental States with Audio, Satellite Workshop of Interspeech 2019},
  title     = {Prosodic and voice quality analyses of loud speech: differences of hot anger and far-directed speech},
  year      = {2019},
  address   = {Vienna, Austria},
  day       = {14},
  doi       = {10.21437/SMM.2019-1},
  month     = sep,
  pages     = {1-5},
  url       = {http://smm19.ifs.tuwien.ac.at/},
  abstract  = {Loud speech may appear in different attitudinal situations, so in human-robot speech interaction the robot should be able to understand such situations. In this study, we analyzed the differences in the acoustic-prosodic and voice quality features of loud speech in two situations: hot anger (aggressive/frenzied speech) and far-directed speech (i.e., speech addressed to a person at a far distance). Analysis results indicated that both speaking styles are accompanied by greater power and higher pitch, while differences were observed in the intonation: far-directed voices tend to have high power and high pitch over the whole utterance, while angry speech has more pitch movements over a larger pitch range. Regarding voice quality, both styles tend to be tenser (higher vocal effort), but angry speech tends to be more pressed, with the local appearance of harsh voices (with irregularities in the vocal fold vibrations).},
  keywords  = {loud speech, hot anger, prosody, voice quality, paralinguistics},
}
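The power and pitch statistics this kind of prosodic analysis compares can be extracted with off-the-shelf tools. A minimal sketch using librosa; the F0 search band, frame settings, and summary statistics are my choices, not the paper's exact procedure:

```python
import numpy as np
import librosa

def prosodic_summary(wav_path, sr=16000):
    """Frame-level power and F0 statistics of the kind compared between
    hot-anger and far-directed speech (a sketch, not the authors' tooling)."""
    y, sr = librosa.load(wav_path, sr=sr)
    rms = librosa.feature.rms(y=y)[0]                    # per-frame power
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=500, sr=sr)
    f0 = f0[~np.isnan(f0)]                               # keep voiced frames
    return {
        "mean_power_db": float(20 * np.log10(rms.mean() + 1e-10)),
        "mean_f0_hz": float(f0.mean()) if f0.size else None,
        # span of pitch movement, relevant to the anger/far-directed contrast
        "f0_range_semitones": float(12 * np.log2(f0.max() / f0.min()))
                              if f0.size else None,
    }
```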
Carlos Ishi, Takayuki Kanda, "Prosodic and voice quality analyses of offensive speech", In International Congress of Phonetic Sciences (ICPhS 2019), Melbourne, Australia, pp. 2174-2178, August, 2019.
Abstract: In this study, differences in acoustic-prosodic features are analyzed in low-moral or offensive speech. The same contents were spoken by multiple speakers in different speaking styles, including read-aloud, aggressive, extremely aggressive (frenzy), and joking styles. Acoustic-prosodic analyses indicated that different speakers use different speaking styles for expressing offensive speech. Clear changes in voice quality, such as tense and harsh voices, were observed at high levels of expressed aggressiveness and threat.
BibTeX:
@Inproceedings{Ishi2019a,
  author    = {Carlos Ishi and Takayuki Kanda},
  title     = {Prosodic and voice quality analyses of offensive speech},
  booktitle = {International Congress of Phonetic Sciences (ICPhS 2019)},
  year      = {2019},
  pages     = {2174-2178},
  address   = {Melbourne, Australia},
  month     = Aug,
  day       = {5-9},
  url       = {https://www.icphs2019.org/},
  abstract  = {In this study, differences in acoustic-prosodic features are analyzed in low-moral or offensive speech. The same contents were spoken by multiple speakers in different speaking styles, including read-aloud, aggressive, extremely aggressive (frenzy), and joking styles. Acoustic-prosodic analyses indicated that different speakers use different speaking styles for expressing offensive speech. Clear changes in voice quality, such as tense and harsh voices, were observed at high levels of expressed aggressiveness and threat.},
  keywords  = {offensive speech, prosody, voice quality, acoustic features, speaking style},
}
Xinyue Li, Aaron Lee Albin, Carlos Toshinori Ishi, Ryoko Hayashi, "Japanese Emotional Speech Produced by Chinese Learners and Japanese Native Speakers: Differences in Perception and Voice Quality", In International Congress of Phonetic Sciences (ICPhS 2019), Melbourne, Australia, pp. 2183-2187, August, 2019.
Abstract: The present study leverages L2 learner data to contribute to the debate over whether the perception and production of emotions is universal or language-specific. Japanese native speakers and Chinese learners of L2 Japanese were recorded producing single-word Japanese utterances with seven emotions. A different set of listeners representing the same two groups was then asked to identify the emotion produced in each token. Results suggest that identification accuracy was highest within groups (i.e., for learner+learner and for native+native). Furthermore, more confusions were observed in Japanese native speech, e.g., with 'angry' vs. 'disgusted' confused for Japanese native, but not Chinese learner, productions. Analyses of the electroglottography signal suggest these perceptual results stem from crosslinguistic differences in the productions themselves (e.g., Chinese learners using a tenser glottal configuration to distinguish 'angry' from 'disgusted'). Taken together, these results support the hypothesis that the encoding and recognition of emotions does indeed depend on L1 background.
BibTeX:
@InProceedings{Li2019,
  author    = {Xinyue Li and Aaron Lee Albin and Carlos Toshinori Ishi and Ryoko Hayashi},
  booktitle = {International Congress of Phonetic Sciences (ICPhS 2019)},
  title     = {Japanese Emotional Speech Produced by Chinese Learners and Japanese Native Speakers: Differences in Perception and Voice Quality},
  year      = {2019},
  address   = {Melbourne, Australia},
  day       = {5-9},
  month     = aug,
  pages     = {2183-2187},
  url       = {https://www.icphs2019.org/},
  abstract  = {The present study leverages L2 learner data to contribute to the debate over whether the perception and production of emotions is universal or language-specific. Japanese native speakers and Chinese learners of L2 Japanese were recorded producing single-word Japanese utterances with seven emotions. A different set of listeners representing the same two groups was then asked to identify the emotion produced in each token. Results suggest that identification accuracy was highest within groups (i.e., for learner+learner and for native+native). Furthermore, more confusions were observed in Japanese native speech, e.g., with 'angry' vs. 'disgusted' confused for Japanese native, but not Chinese learner, productions. Analyses of the electroglottography signal suggest these perceptual results stem from crosslinguistic differences in the productions themselves (e.g., Chinese learners using a tenser glottal configuration to distinguish 'angry' from 'disgusted'). Taken together, these results support the hypothesis that the encoding and recognition of emotions does indeed depend on L1 background.},
}
Christian Penaloza, David Hernandez-Carmona, "Decoding Visual Representations of Objects from Brain Data during Object-Grasping Task with a BMI-controlled Robotic Arm", In 4th International Brain Technology Conference, BrainTech 2019, Tel Aviv, Israel, March, 2019.
Abstract: Brain-machine interface (BMI) systems have allowed the control of prosthetics and robotic arms using brainwaves alone to do simple tasks such as grasping an object, but the low information throughput of brain-data decoding does not allow the robotic arm to achieve multiple grasp configurations. On the other hand, computer vision researchers have mostly solved the problem of robot arm configuration for object grasping given visual object recognition. It is then natural to think that if we could decode from brain data the image of the object that the user intends to grasp, the robotic arm could automatically decide the type of grasp to execute. For this reason, in this paper we propose a method to decode visual representations of objects from brain data, towards improving robot arm grasp configurations. More specifically, we recorded EEG data during an object-grasping experiment in which the participant had to control a robotic arm using a BMI to grasp an object. We also recorded images of the object and developed a multimodal representation of the encoded brain data and the object image. Given this representation, the objective was to reconstruct the image when only half of the representation (the brain data encoding) was provided. To achieve this goal, we developed a deep stacked convolutional autoencoder that learned a noise-free joint manifold of brain-data encodings and object images. After training, the autoencoder was able to reconstruct the missing part of the object image when only the brain data encoding was provided. Performance analysis was conducted using a convolutional neural network (CNN) trained with the original object images. The recognition performance using the reconstructed images was 76.55%.
BibTeX:
@Inproceedings{Penaloza2019,
  author    = {Christian Penaloza and David Hernandez-Carmona},
  title     = {Decoding Visual Representations of Objects from Brain Data during Object-Grasping Task with a BMI-controlled Robotic Arm},
  booktitle = {4th International Brain Technology Conference, BrainTech 2019},
  year      = {2019},
  address   = {Tel Aviv, Israel},
  month     = Mar,
  day       = {4-5},
  url       = {https://braintech.kenes.com/registration/},
  abstract  = {Brain-machine interface (BMI) systems have allowed the control of prosthetics and robotic arms using brainwaves alone to do simple tasks such as grasping an object, but the low information throughput of brain-data decoding does not allow the robotic arm to achieve multiple grasp configurations. On the other hand, computer vision researchers have mostly solved the problem of robot arm configuration for object grasping given visual object recognition. It is then natural to think that if we could decode from brain data the image of the object that the user intends to grasp, the robotic arm could automatically decide the type of grasp to execute. For this reason, in this paper we propose a method to decode visual representations of objects from brain data, towards improving robot arm grasp configurations. More specifically, we recorded EEG data during an object-grasping experiment in which the participant had to control a robotic arm using a BMI to grasp an object. We also recorded images of the object and developed a multimodal representation of the encoded brain data and the object image. Given this representation, the objective was to reconstruct the image when only half of the representation (the brain data encoding) was provided. To achieve this goal, we developed a deep stacked convolutional autoencoder that learned a noise-free joint manifold of brain-data encodings and object images. After training, the autoencoder was able to reconstruct the missing part of the object image when only the brain data encoding was provided. Performance analysis was conducted using a convolutional neural network (CNN) trained with the original object images. The recognition performance using the reconstructed images was 76.55%.},
}
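The joint-manifold idea above (train on the full [brain encoding, image] vector, then present only the brain half at test time) can be sketched compactly. The version below simplifies to dense layers for brevity, whereas the paper used stacked convolutions; all dimensions are placeholders:

```python
import torch
import torch.nn as nn

class JointAutoencoder(nn.Module):
    """Sketch: reconstruct the full [brain encoding, object image] vector;
    at inference the image half is zeroed and read off the output."""

    def __init__(self, brain_dim=128, img_dim=784, hidden=256):
        super().__init__()
        d = brain_dim + img_dim
        self.enc = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 64))
        self.dec = nn.Sequential(nn.Linear(64, hidden), nn.ReLU(),
                                 nn.Linear(hidden, d))
        self.brain_dim, self.img_dim = brain_dim, img_dim

    def forward(self, brain, image=None):
        if image is None:                      # inference: image half missing
            image = torch.zeros(brain.size(0), self.img_dim)
        x = torch.cat([brain, image], dim=1)
        recon = self.dec(self.enc(x))
        return recon[:, self.brain_dim:]       # reconstructed image half

model = JointAutoencoder()
fake_brain = torch.randn(4, 128)               # four trials of brain encodings
print(model(fake_brain).shape)                 # torch.Size([4, 784])
```

In training, the reconstruction loss would be taken against the true image half, so the network learns to fill it in from the brain half alone.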
Xiqian Zheng, Dylan Glass, Takashi Minato, Hiroshi Ishiguro, "Four memory categories to support socially-appropriate conversations in long-term HRI", In Workshop of Personalization in long-term human-robot interaction at the international conference on Human-Robot Interaction 2019 (HRI2019 Workshop), Daegu, South Korea, March, 2019.
Abstract: In long-term human-robot interaction (HRI), memory is necessary for robots to use information collected from past encounters to generate personalized interaction. Although memory has been widely employed as a core component in cognitive systems, such systems do not provide direct solutions for utilizing memorized information to generate socially-appropriate conversations. From a design perspective, many studies have employed the use of memory in social interactions. However, only a few works so far have addressed the issue of how to utilize memorized information in designing long-term HRI. This work proposes a categorization of four types of memory information that allows a robot to directly use memorized information to modify conversation content in long-term HRI. An adaptive memory system that facilitates the use of this memory information was developed and is briefly introduced. In addition, concepts for using these four types of memory in long-term interactions are provided. As a demonstration, a personal assistant robot application and a user study using it are also included. The user study shows that a robot using the proposed memory information can help users perceive a positive relationship with the robot.
BibTeX:
@InProceedings{Zheng2019_1,
  author    = {Xiqian Zheng and Dylan Glass and Takashi Minato and Hiroshi Ishiguro},
  booktitle = {Workshop of Personalization in long-term human-robot interaction at the international conference on Human-Robot Interaction 2019 (HRI2019 Workshop)},
  title     = {Four memory categories to support socially-appropriate conversations in long-term HRI},
  year      = {2019},
  address   = {Daegu, South Korea},
  day       = {11-14},
  month     = mar,
  url       = {http://humanrobotinteraction.org/2019/ https://longtermpersonalizationhri.github.io},
  abstract  = {In long-term human-robot interaction (HRI), memory is necessary for robots to use information collected from past encounters to generate personalized interaction. Although memory has been widely employed as a core component in cognitive systems, such systems do not provide direct solutions for utilizing memorized information to generate socially-appropriate conversations. From a design perspective, many studies have employed the use of memory in social interactions. However, only a few works so far have addressed the issue of how to utilize memorized information in designing long-term HRI. This work proposes a categorization of four types of memory information that allows a robot to directly use memorized information to modify conversation content in long-term HRI. An adaptive memory system that facilitates the use of this memory information was developed and is briefly introduced. In addition, concepts for using these four types of memory in long-term interactions are provided. As a demonstration, a personal assistant robot application and a user study using it are also included. The user study shows that a robot using the proposed memory information can help users perceive a positive relationship with the robot.},
}
Junya Nakanishi, Hidenobu Sumioka, Hiroshi Ishiguro, "Virtual Hug Induces Modulated Impression on Hearsay Information", In 6th International Conference on Human-Agent Interaction, Southampton, UK, pp. 199-204, December, 2018.
Abstract: In this article, we report the alleviating effect of virtual interpersonal touch on social judgment. In particular, we show that a virtual hug with a remote person modulates the impression of hearsay information about an absentee. In our experiment, participants rate their impressions as well as note down their recall of information about a third person. We communicate this information through either a speaker or a huggable medium. Our results show that the virtual hug reduces the negative inferences in the recalls of information about a target person. Furthermore, they suggest the potential that mediated communication offers in moderating the spread of negative information in human communities via virtual hugs.
BibTeX:
@Inproceedings{Nakanishi2018,
  author    = {Junya Nakanishi and Hidenobu Sumioka and Hiroshi Ishiguro},
  title     = {Virtual Hug Induces Modulated Impression on Hearsay Information},
  booktitle = {6th International Conference on Human-Agent Interaction},
  year      = {2018},
  pages     = {199-204},
  address   = {Southampton, UK},
  month     = Dec,
  day       = {15-18},
  abstract  = {In this article, we report the alleviating effect of virtual interpersonal touch on social judgment. In particular, we show that a virtual hug with a remote person modulates the impression of hearsay information about an absentee. In our experiment, participants rate their impressions as well as note down their recall of information about a third person. We communicate this information through either a speaker or a huggable medium. Our results show that the virtual hug reduces the negative inferences in the recalls of information about a target person. Furthermore, they suggest the potential that mediated communication offers in moderating the spread of negative information in human communities via virtual hugs.},
}
Hidenobu Sumioka, Soheil Keshmiri, Junya Nakanishi, "Potential Impact of Listening Support for Individuals with Developmental Disorders through a Huggable Communication Medium", In the 6th annual International Conference on Human-Agent Interaction (HAI2018), Southampton, UK, December, 2018.
Abstract: The 6th annual International Conference on Human-Agent Interaction aims to be the premier interdisciplinary venue for discussing and disseminating state-of-the-art research and results that reach across conventional interaction boundaries from people to a wide range of intelligent systems, including physical robots, software agents and digitally-mediated human-human communication. HAI focusses on technical as well as social aspects.
BibTeX:
@Inproceedings{Sumioka2018b,
  author    = {Hidenobu Sumioka and Soheil Keshmiri and Junya Nakanishi},
  title     = {Potential Impact of Listening Support for Individuals with Developmental Disorders through a Huggable Communication Medium},
  booktitle = {the 6th annual International Conference on Human-Agent Interaction (HAI2018)},
  year      = {2018},
  address   = {Southampton, UK},
  month     = Dec,
  day       = {15-18},
  url       = {http://hai-conference.net/hai2018/},
  abstract  = {The 6th annual International Conference on Human-Agent Interaction aims to be the premier interdisciplinary venue for discussing and disseminating state-of-the-art research and results that reach across conventional interaction boundaries from people to a wide range of intelligent systems, including physical robots, software agents and digitally-mediated human-human communication. HAI focusses on technical as well as social aspects.},
}
Christian Penaloza, David Hernandez-Carmona, Shuichi Nishio, "Towards Intelligent Brain-Controlled Body Augmentation Robotic Limbs", In The 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC2018), Seagaia Convention Center, Miyazaki, October, 2018.
Abstract: Supernumerary Robotic Limbs (SRLs) are body augmentation robotic devices that will extend the physical capabilities of humans in an unprecedented way. Researchers have explored the possibility of controlling SRLs in diverse ways - from manual operation through a joystick to myoelectric signals from muscle impulses - but the ultimate goal is to be able to control them with the brain. Brain-machine interface (BMI) systems have allowed the control of prosthetics and robotic devices using brainwaves alone, but the low number of brain-based commands that can be decoded does not allow an SRL to achieve a high number of actions. For this reason, in this paper, we present an intelligent brain-controlled SRL that has context-aware capabilities in order to complement BMI-based commands and increase the number of actions that it can perform with the same BMI-based command. The proposed system consists of a human-like robotic limb that can be activated (i.e., a grasp action) with a non-invasive EEG-based BMI when the human operator imagines the action. Since there are different ways the SRL can perform the action (i.e., different grasping configurations) depending on the context (i.e., the type of object), we provided vision capabilities to the SRL so it can recognize the context and optimize its behavior to match the user's intention. The proposed hybrid BMI-SRL system opens up the possibility of exploring more complex and realistic human augmentation applications.
BibTeX:
@Inproceedings{Penaloza2018b,
  author    = {Christian Penaloza and David Hernandez-Carmona and Shuichi Nishio},
  title     = {Towards Intelligent Brain-Controlled Body Augmentation Robotic Limbs},
  booktitle = {The 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC2018)},
  year      = {2018},
  address   = {Seagaia Convention Center, Miyazaki},
  month     = Oct,
  day       = {7-10},
  doi       = {10.1109/SMC.2018.00180},
  url       = {http://www.smc2018.org/},
  abstract  = {Supernumerary Robotic Limbs (SRLs) are body augmentation robotic devices that will extend the physical capabilities of humans in an unprecedented way. Researchers have explored the possibility of controlling SRLs in diverse ways - from manual operation through a joystick to myoelectric signals from muscle impulses - but the ultimate goal is to be able to control them with the brain. Brain-machine interface (BMI) systems have allowed the control of prosthetics and robotic devices using brainwaves alone, but the low number of brain-based commands that can be decoded does not allow an SRL to achieve a high number of actions. For this reason, in this paper, we present an intelligent brain-controlled SRL that has context-aware capabilities in order to complement BMI-based commands and increase the number of actions that it can perform with the same BMI-based command. The proposed system consists of a human-like robotic limb that can be activated (i.e., a grasp action) with a non-invasive EEG-based BMI when the human operator imagines the action. Since there are different ways the SRL can perform the action (i.e., different grasping configurations) depending on the context (i.e., the type of object), we provided vision capabilities to the SRL so it can recognize the context and optimize its behavior to match the user's intention. The proposed hybrid BMI-SRL system opens up the possibility of exploring more complex and realistic human augmentation applications.},
}
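The fan-out described above (one decoded "grasp" command, many grasp configurations chosen by visual context) reduces to a small dispatch table. A minimal sketch; the object classes and grasp parameters are hypothetical, not from the paper:

```python
# Hypothetical grasp-configuration table keyed by the vision system's
# object class; one BMI command selects among them by context.
GRASP_CONFIG = {
    "bottle": {"aperture_m": 0.07, "wrist_deg": 0,  "type": "cylindrical"},
    "card":   {"aperture_m": 0.01, "wrist_deg": 90, "type": "pinch"},
    "ball":   {"aperture_m": 0.09, "wrist_deg": 0,  "type": "spherical"},
}

def execute_bmi_command(bmi_event, detected_object):
    """A single decoded brain command fans out into many actions because
    the visual context picks the grasp parameters."""
    if bmi_event != "grasp":
        return None                                     # ignore other events
    return GRASP_CONFIG.get(detected_object,            # fall back to a
                            {"aperture_m": 0.05,        # generic power grasp
                             "wrist_deg": 0, "type": "power"})

print(execute_bmi_command("grasp", "card"))             # -> pinch grasp
```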
Soheil Keshmiri, Hidenobu Sumioka, Masataka Okubo, Ryuji Yamazaki, Aya Nakae, Hiroshi Ishiguro, "Potential Health Benefit of Physical Embodiment in Elderly Counselling: a Longitudinal Case Study", In The 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC2018), Seagaia Convention Center, Miyazaki, pp. 1022-1028, October, 2018.
Abstract: We present results on the effect of humanoid versus voice-only communication on the frontal brain activity of elderly adults. Our results indicate that use of a humanoid induces an increase in frontal brain activity. Additionally, these results imply an increase in participants' Immunoglobulin A antibody (sIgA), thereby suggesting physical embodiment as a potential health factor in communication with elderly individuals. Such increases in hormonal as well as frontal brain activity, as observed in healthy conditions, suggest the potential that physical embodiment can offer in countering the cognitive decline associated with aging and its consequential diseases such as Alzheimer's.
BibTeX:
@Inproceedings{Keshmiri2018c,
  author    = {Soheil Keshmiri and Hidenobu Sumioka and Masataka Okubo and Ryuji Yamazaki and Aya Nakae and Hiroshi Ishiguro},
  title     = {Potential Health Benefit of Physical Embodiment in Elderly Counselling: a Longitudinal Case Study},
  booktitle = {The 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC2018)},
  year      = {2018},
  pages     = {1022-1028},
  address   = {Seagaia Convention Center, Miyazaki},
  month     = Oct,
  day       = {7-10},
  doi       = {10.1109/SMC.2018.00183},
  url       = {http://www.smc2018.org/},
  abstract  = {We present results on the effect of humanoid versus voice-only communication on the frontal brain activity of elderly adults. Our results indicate that use of a humanoid induces an increase in frontal brain activity. Additionally, these results imply an increase in participants' Immunoglobulin A antibody (sIgA), thereby suggesting physical embodiment as a potential health factor in communication with elderly individuals. Such increases in hormonal as well as frontal brain activity, as observed in healthy conditions, suggest the potential that physical embodiment can offer in countering the cognitive decline associated with aging and its consequential diseases such as Alzheimer's.},
}
Maryam Alimardani, Soheil Keshmiri, Hidenobu Sumioka, Kazuo Hiraki, "Classification of EEG signals for a hypnotrack BCI system", In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018), Madrid, Spain, October, 2018.
Abstract: In this paper, we extracted the differential entropy (DE) of recorded EEGs from two groups of subjects with high and low hypnotic susceptibility and built a support vector machine on these DE features for the classification of the susceptibility trait. Moreover, we proposed a clustering-based feature refinement strategy to improve the estimation of this trait. Results showed high classification performance in detecting subjects' level of susceptibility before and during hypnosis. Our results suggest the usefulness of this classifier in the development of future BCI systems applied in the domains of therapy and healthcare.
BibTeX:
@Inproceedings{Alimardani2018a,
  author    = {Maryam Alimardani and Soheil Keshmiri and Hidenobu Sumioka and Kazuo Hiraki},
  title     = {Classification of EEG signals for a hypnotrack BCI system},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018)},
  year      = {2018},
  address   = {Madrid, Spain},
  month     = Oct,
  day       = {1-5},
  url       = {https://www.iros2018.org/},
  abstract  = {In this paper, we extracted the differential entropy (DE) of recorded EEGs from two groups of subjects with high and low hypnotic susceptibility and built a support vector machine on these DE features for the classification of the susceptibility trait. Moreover, we proposed a clustering-based feature refinement strategy to improve the estimation of this trait. Results showed high classification performance in detecting subjects' level of susceptibility before and during hypnosis. Our results suggest the usefulness of this classifier in the development of future BCI systems applied in the domains of therapy and healthcare.},
}
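For a Gaussian-distributed band-filtered EEG signal, the differential entropy used as a feature above has the closed form DE = 0.5 ln(2*pi*e*sigma^2). A minimal sketch of the DE-plus-SVM pipeline; the band edges, sampling rate, and toy data are my assumptions, not the paper's settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

def differential_entropy(x):
    """DE of an (assumed Gaussian) signal: 0.5 * ln(2*pi*e*var)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def band_de_features(eeg, fs=250.0):
    """Per-channel DE in standard EEG bands (band edges are my choice)."""
    bands = [(4, 8), (8, 13), (13, 30), (30, 45)]    # theta..gamma
    feats = []
    for lo, hi in bands:
        b, a = butter(4, [lo, hi], btype="band", fs=fs)
        for ch in eeg:                               # eeg: (channels, samples)
            feats.append(differential_entropy(filtfilt(b, a, ch)))
    return np.array(feats)

# toy example: 20 subjects, 8 channels, 4 s of EEG each, binary susceptibility
rng = np.random.default_rng(0)
X = np.stack([band_de_features(rng.standard_normal((8, 1000)))
              for _ in range(20)])
y = rng.integers(0, 2, size=20)
clf = SVC(kernel="rbf").fit(X, y)                    # susceptibility classifier
print(clf.score(X, y))
```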
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Masataka Okubo, Hiroshi Ishiguro, "Similarity of Impact of Humanoid and In-Person Communication on Frontal Brain Activity of Elderly Adults", In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018), Madrid, Spain, pp. 2286-2291, October, 2018.
Abstract: We report results of analyses of the effect of communication through a humanoid robot, in comparison with in-person, video-chat, and speaker communication, on the frontal brain activity of elderly adults during a storytelling experiment. Our results suggest that whereas communicating through a physically embodied medium potentially induces a significantly higher pattern of brain activity with respect to video-chat and speaker, the difference is non-significant in comparison with in-person communication. These results imply that communicating through a humanoid robot induces effects on the brain activity of elderly adults that are potentially similar in their patterns to in-person communication. Our findings benefit researchers and practitioners in rehabilitation and elderly care facilities in search of effective means of communication with their patients to increase their involvement in the incremental steps of their treatments. Moreover, they imply the utility of brain information as a promising sensory gateway in the characterization of behavioural responses in human-robot interaction.
BibTeX:
@Inproceedings{Keshmiri2018,
  author    = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Masataka Okubo and Hiroshi Ishiguro},
  title     = {Similarity of Impact of Humanoid and In-Person Communication on Frontal Brain Activity of Elderly Adults},
  booktitle = {2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018)},
  year      = {2018},
  pages     = {2286-2291},
  address   = {Madrid, Spain},
  month     = Oct,
  day       = {1-5},
  url       = {https://www.iros2018.org/},
  abstract  = {We report results of analyses of the effect of communication through a humanoid robot, in comparison with in-person, video-chat, and speaker communication, on the frontal brain activity of elderly adults during a storytelling experiment. Our results suggest that whereas communicating through a physically embodied medium potentially induces a significantly higher pattern of brain activity with respect to video-chat and speaker, the difference is non-significant in comparison with in-person communication. These results imply that communicating through a humanoid robot induces effects on the brain activity of elderly adults that are potentially similar in their patterns to in-person communication. Our findings benefit researchers and practitioners in rehabilitation and elderly care facilities in search of effective means of communication with their patients to increase their involvement in the incremental steps of their treatments. Moreover, they imply the utility of brain information as a promising sensory gateway in the characterization of behavioural responses in human-robot interaction.},
}
Abdelkader Nasreddine Belkacem, Shuichi Nishio, Takafumi Suzuki, Hiroshi Ishiguro, Masayuki Hirata, "Neuromagnetic Geminoid Control by BCI based on Four Bilateral Hand Movements", In 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC2018), Seagaia Convention Center, Miyazaki, pp. 524-527, October, 2018.
Abstract: The present study describes a neuromagnetic Geminoid control system that uses single-trial decoding of bilateral hand movements as a new approach to enhancing a user's ability to interact with a complex environment through a multidimensional brain-computer interface (BCI).
BibTeX:
@Inproceedings{Belkacem2018b,
  author    = {Abdelkader Nasreddine Belkacem and Shuichi Nishio and Takafumi Suzuki and Hiroshi Ishiguro and Masayuki Hirata},
  title     = {Neuromagnetic Geminoid Control by BCI based on Four Bilateral Hand Movements},
  booktitle = {2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC2018)},
  year      = {2018},
  pages     = {524-527},
  address   = {Seagaia Convention Center, Miyazaki},
  month     = Oct,
  day       = {7-10},
  doi       = {10.1109/SMC.2018.00183},
  url       = {http://www.smc2018.org/},
  abstract  = {The present study describes a neuromagnetic Geminoid control system that uses single-trial decoding of bilateral hand movements as a new approach to enhancing a user's ability to interact with a complex environment through a multidimensional brain-computer interface (BCI).},
}
Masataka Okubo, Hidenobu Sumioka, Soheil Keshmiri, Hiroshi Ishiguro, "Intimate touch conversation through teleoperated android increases interpersonal closeness in elderly people", In The 27th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2018), Nanjing and Tai'an, China, August, 2018.
Abstract: We propose Intimate Touch Conversation (ITC) as a new remote communication paradigm in which an individual who is holding a telepresence humanoid engages in a conversation-over-distance with a remote partner who is teleoperating the humanoid. We compare the effect of this new communication paradigm on interpersonal closeness with that of in-person and video-chat conversation. Our results suggest that ITC significantly enhances the feeling of interpersonal closeness compared with video-chat and in-person conversation. In addition, they show that intimate touch conversation allows elderly people to find their conversation more interesting. These results imply that the feeling of intimate touch evoked by the presence of the teleoperated android enables elderly users to establish a closer relationship with their conversational partners over distance, thereby reducing their feeling of loneliness. Our findings benefit researchers and engineers in elderly care facilities in search of effective means of establishing social relations with their elderly users to reduce their feelings of social isolation and loneliness.
BibTeX:
@Inproceedings{Okubo2018,
  author    = {Masataka Okubo and Hidenobu Sumioka and Soheil Keshmiri and Hiroshi Ishiguro},
  title     = {Intimate touch conversation through teleoperated android increases interpersonal closeness in elderly people},
  booktitle = {The 27th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2018)},
  year      = {2018},
  address   = {Nanjing and Tai'an, China},
  month     = Aug,
  day       = {27-31},
  url       = {http://ro-man2018.org/},
  abstract  = {We propose Intimate Touch Conversation (ITC) as a new remote communication paradigm in which an individual who is holding a telepresence humanoid engages in a conversation-over-distance with a remote partner who is teleoperating the humanoid. We compare the effect of this new communication paradigm on interpersonal closeness with that of in-person and video-chat conversation. Our results suggest that ITC significantly enhances the feeling of interpersonal closeness compared with video-chat and in-person conversation. In addition, they show that intimate touch conversation allows elderly people to find their conversation more interesting. These results imply that the feeling of intimate touch evoked by the presence of the teleoperated android enables elderly users to establish a closer relationship with their conversational partners over distance, thereby reducing their feeling of loneliness. Our findings benefit researchers and engineers in elderly care facilities in search of effective means of establishing social relations with their elderly users to reduce their feelings of social isolation and loneliness.},
}
Masahiro Shiomi, Kodai Shatani, Takashi Minato, Hiroshi Ishiguro, "Does a Robot's Subtle Pause in Reaction Time to People's Touch Contribute to Positive Influences?", In the 27th IEEE International Conference on Robot and Human Interactive Communication, (RO-MAN 2018), Nanjing and Tai'an, China, August, 2018.
Abstract: This paper addresses the effects of a subtle pause in reactions during human-robot touch interactions. Based on the human scientific literature, people's reaction times to touch stimuli range from 150 to 400 msec. Therefore, we decided to use a subtle pause of similar length in reactions for more natural human-robot touch interactions. On the other hand, in the human-robot interaction research field, a past study reports that people prefer reactions from a robot in touch interaction that are as quick as possible, i.e., a 0-second reaction time is preferred to 1- or 2-second reaction times. We note that since the resolution of that study's time slices was one second, it remains unknown whether a robot should take a pause of hundreds of milliseconds for a more natural reaction time. To investigate the effects of subtle pauses in touch interaction, we experimentally investigated the effects of reaction times to people's touch with a 200-msec resolution between 0 and 1 second: 0, 200, 400, 600, and 800 msec. The number of people who preferred the reactions with subtle pauses exceeded the number who preferred the 0-second reactions. However, the questionnaire scores did not show any significant differences because of individual differences, even though the 400-msec pause was slightly preferred to the others.
BibTeX:
@Inproceedings{Shiomi2018,
  author    = {Masahiro Shiomi and Kodai Shatani and Takashi Minato and Hiroshi Ishiguro},
  title     = {Does a Robot's Subtle Pause in Reaction Time to People's Touch Contribute to Positive Influences?},
  booktitle = {the 27th IEEE International Conference on Robot and Human Interactive Communication, (RO-MAN 2018)},
  year      = {2018},
  address   = {Nanjing and Tai'an, China},
  month     = Aug,
  day       = {27-31},
  url       = {http://ro-man2018.org/},
  abstract  = {This paper addresses the effects of a subtle pause in reactions during human-robot touch interactions. Based on the human scientific literature, people's reaction times to touch stimuli range from 150 to 400 msec. Therefore, we decided to use a subtle pause of similar length in reactions for more natural human-robot touch interactions. On the other hand, in the human-robot interaction research field, a past study reports that people prefer reactions from a robot in touch interaction that are as quick as possible, i.e., a 0-second reaction time is preferred to 1- or 2-second reaction times. We note that since the resolution of that study's time slices was one second, it remains unknown whether a robot should take a pause of hundreds of milliseconds for a more natural reaction time. To investigate the effects of subtle pauses in touch interaction, we experimentally investigated the effects of reaction times to people's touch with a 200-msec resolution between 0 and 1 second: 0, 200, 400, 600, and 800 msec. The number of people who preferred the reactions with subtle pauses exceeded the number who preferred the 0-second reactions. However, the questionnaire scores did not show any significant differences because of individual differences, even though the 400-msec pause was slightly preferred to the others.},
}
Carlos T. Ishi, Ryusuke Mikata, Hiroshi Ishiguro, "Analysis of relations between hand gestures and dialogue act categories", In Speech Prosody 2018, Poznan, Poland, pp. 473-477, June, 2018.
Abstract: Hand gestures commonly occur in daily dialogue interactions, and have important functions in communication. In this study, we analyzed a multimodal database of three-party conversations, and investigated the relations between the occurrence of hand gestures and speech, with special focus on dialogue act categories. Analysis results revealed that hand gestures occur with highest frequency in turn-keeping phrases, and seldom occur in backchannel-type utterances. On the other hand, self-touch hand motions (adapters) occur more often in backchannel utterances and in laughter intervals, in comparison to other dialogue act categories.
BibTeX:
@Inproceedings{Ishi2018a,
  author    = {Carlos T. Ishi and Ryusuke Mikata and Hiroshi Ishiguro},
  title     = {Analysis of relations between hand gestures and dialogue act categories},
  booktitle = {Speech Prosody 2018},
  year      = {2018},
  pages     = {473-477},
  address   = {Poznan, Poland},
  month     = Jun,
  day       = {13-16},
  url       = {https://www.isca-speech.org/archive/SpeechProsody_2018/},
  abstract  = {Hand gestures commonly occur in daily dialogue interactions, and have important functions in communication. In this study, we analyzed a multimodal database of three-party conversations, and investigated the relations between the occurrence of hand gestures and speech, with special focus on dialogue act categories. Analysis results revealed that hand gestures occur with highest frequency in turn-keeping phrases, and seldom occur in backchannel-type utterances. On the other hand, self-touch hand motions (adapters) occur more often in backchannel utterances and in laughter intervals, in comparison to other dialogue act categories.},
}
Jakub Zlotowski, Hidenobu Sumioka, Christoph Bartneck, Shuichi Nishio, Hiroshi Ishiguro, "Understanding Anthropomorphism: Anthropomorphism is not a Reverse Process of Dehumanization", In The Ninth International Conference on Social Robotics (ICSR 2017), Tsukuba, Japan, pp. 618-627, November, 2017.
Abstract: Anthropomorphism plays an important role in affecting human interaction with a robot. However, our understanding of this process is still limited. We argue that it is not possible to understand anthropomorphism without understanding what humanness is. In previous research, we proposed looking at the work on dehumanization in order to understand what factors can affect a robot's anthropomorphism. Moreover, considering that there are two distinct dimensions of humanness, a two-dimensional model of anthropomorphism was proposed. We conducted a study in which we manipulated the perceived intentionality of a robot and its appearance, and measured how they affected the anthropomorphization of the robot on two dimensions of humanness and its perceived moral agency. The results do not support a two-dimensional model of anthropomorphism and indicate that the distinction between positive and negative traits may be more relevant in HRI studies in Japan.
BibTeX:
@Inproceedings{Zlotowski2017a,
  author    = {Jakub Zlotowski and Hidenobu Sumioka and Christoph Bartneck and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Understanding Anthropomorphism: Anthropomorphism is not a Reverse Process of Dehumanization},
  booktitle = {The Ninth International Conference on Social Robotics (ICSR 2017)},
  year      = {2017},
  series    = {LNAI 10652},
  pages     = {618-627},
  address   = {Tsukuba, Japan},
  month     = Nov,
  day       = {22-24},
  doi       = {10.1007/978-3-319-70022-9_61},
  url       = {http://www.icsr2017.org/index.html},
  abstract  = {Anthropomorphism plays an important role in affecting human interaction with a robot. However, our understanding of this process is still limited. We argue that it is not possible to understand anthropomorphism without understanding what humanness is. In previous research, we proposed looking at the work on dehumanization in order to understand what factors can affect a robot's anthropomorphism. Moreover, considering that there are two distinct dimensions of humanness, a two-dimensional model of anthropomorphism was proposed. We conducted a study in which we manipulated the perceived intentionality of a robot and its appearance, and measured how they affected the anthropomorphization of the robot on two dimensions of humanness and its perceived moral agency. The results do not support a two-dimensional model of anthropomorphism and indicate that the distinction between positive and negative traits may be more relevant in HRI studies in Japan.},
  file      = {Zlotowski2017a.pdf:pdf/Zlotowski2017a.pdf:PDF},
}
Masa Jazbec, Shuichi Nishio, Hiroshi Ishiguro, Hideaki Kuzuoka, Masataka Okubo, Christian Penaloza, "Body-swapping experiment with an android robot: Investigation of the relationship between agency and a sense of ownership toward a different body", In 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC2017), Banff, Canada, pp. 1471-1476, October, 2017.
Abstract: This study extends existing Rubber Hand Illusion (RHI) experiments by employing a life-size, full-body, humanlike android robot to investigate the body ownership illusion and the sense of agency.
BibTeX:
@Inproceedings{Jazbec2017a,
  author    = {Masa Jazbec and Shuichi Nishio and Hiroshi Ishiguro and Hideaki Kuzuoka and Masataka Okubo and Christian Penaloza},
  title     = {Body-swapping experiment with an android robot: Investigation of the relationship between agency and a sense of ownership toward a different body},
  booktitle = {2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC2017)},
  year      = {2017},
  pages     = {1471-1476},
  address   = {Banff, Canada},
  month     = Oct,
  day       = {5-8},
  url       = {http://www.smc2017.org/},
  abstract  = {This study extends existing Rubber Hand Illusion (RHI) experiments by employing a life-size, full-body, humanlike android robot to investigate the body ownership illusion and the sense of agency.},
  file      = {Jazbec2017a.pdf:pdf/Jazbec2017a.pdf:PDF},
}
Takashi Suegami, Hidenobu Sumioka, Fumio Obayashi, Kyonosuke Ichii, Yoshinori Harada, Hiroshi Daimoto, Aya Nakae, Hiroshi Ishiguro, "Endocrinological Responses to a New Interactive HMI for a Straddle-type Vehicle - A Pilot Study", In 5th annual International Conference on Human-Agent Interaction (HAI 2017), Bielefeld, Germany, pp. 463-467, October, 2017.
Abstract: This paper hypothesized that a straddle-type vehicle (e.g., a motorcycle) would be a suitable platform for haptic human-machine interactions that elicit affective responses or positive modulations of human emotion. Based on this idea, a new human-machine interface (HMI) for a straddle-type vehicle was proposed for haptically interacting with a rider, together with other visual (design), tactile (texture and heat), and auditory (sound) features. We investigated endocrine changes after playing a riding simulator with either the new interactive HMI or a typical HMI. The results showed that, in comparison with the typical HMI, a significant decrease in salivary cortisol level was found after riding with the interactive HMI. Salivary testosterone also tended to be reduced after riding with the interactive HMI, along with a significant reduction in salivary DHEA. The results demonstrated that haptic interaction from a vehicle, as we hypothesized, can endocrinologically influence a rider and thus may mitigate a rider's stress and aggression.
BibTeX:
@Inproceedings{Suegami2017,
  author    = {Takashi Suegami and Hidenobu Sumioka and Fumio Obayashi and Kyonosuke Ichii and Yoshinori Harada and Hiroshi Daimoto and Aya Nakae and Hiroshi Ishiguro},
  title     = {Endocrinological Responses to a New Interactive HMI for a Straddle-type Vehicle - A Pilot Study},
  booktitle = {5th annual International Conference on Human-Agent Interaction (HAI 2017)},
  year      = {2017},
  pages     = {463-467},
  address   = {Bielefeld, Germany},
  month     = Oct,
  day       = {17-20},
  doi       = {10.1145/3125739.3132588},
  url       = {http://hai-conference.net/hai2017/},
  abstract  = {This paper hypothesized that a straddle-type vehicle (e.g., a motorcycle) would be a suitable platform for haptic human-machine interactions that elicit affective responses or positive modulations of human emotion. Based on this idea, a new human-machine interface (HMI) for a straddle-type vehicle was proposed for haptically interacting with a rider, together with other visual (design), tactile (texture and heat), and auditory (sound) features. We investigated endocrine changes after playing a riding simulator with either the new interactive HMI or a typical HMI. The results showed that, in comparison with the typical HMI, a significant decrease in salivary cortisol level was found after riding with the interactive HMI. Salivary testosterone also tended to be reduced after riding with the interactive HMI, along with a significant reduction in salivary DHEA. The results demonstrated that haptic interaction from a vehicle, as we hypothesized, can endocrinologically influence a rider and thus may mitigate a rider's stress and aggression.},
  file      = {Suegami2017.pdf:pdf/Suegami2017.pdf:PDF},
}
Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, "Probabilistic nod generation model based on estimated utterance categories", In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, BC, Canada, pp. 5333-5339, September, 2017.
Abstract: We propose a probabilistic model that generates nod motions based on utterance categories estimated from the speech input. The model comprises two main blocks. In the first block, dialogue act-related categories are estimated from the input speech. Considering the correlations between dialogue acts and head motions, the utterances are classified into three categories having distinct nod distributions. Linguistic information extracted from the input speech is fed to a cluster of classifiers which are combined to estimate the utterance categories. In the second block, nod motion parameters are generated based on the categories estimated by the classifiers. The nod motion parameters are represented as probability distribution functions (PDFs) inferred from human motion data. By using speech energy features, the parameters are sampled from the PDFs belonging to the estimated categories. Subjective experiment results indicate that the motions generated by our proposed approach are considered more natural than those of a previous model using fixed nod shapes and hand-labeled utterance categories.
BibTeX:
@Inproceedings{Liu2017b,
  author    = {Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {Probabilistic nod generation model based on estimated utterance categories},
  booktitle = {2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017)},
  year      = {2017},
  pages     = {5333-5339},
  address   = {Vancouver, BC, Canada},
  month     = Sep,
  day       = {24-28},
  url       = {http://www.iros2017.org/},
  abstract  = {We propose a probabilistic model that generates nod motions based on utterance categories estimated from the speech input. The model comprises two main blocks. In the first block, dialogue act-related categories are estimated from the input speech. Considering the correlations between dialogue acts and head motions, the utterances are classified into three categories having distinct nod distributions. Linguistic information extracted from the input speech is fed to a cluster of classifiers which are combined to estimate the utterance categories. In the second block, nod motion parameters are generated based on the categories estimated by the classifiers. The nod motion parameters are represented as probability distribution functions (PDFs) inferred from human motion data. By using speech energy features, the parameters are sampled from the PDFs belonging to the estimated categories. Subjective experiment results indicate that the motions generated by our proposed approach are considered more natural than those of a previous model using fixed nod shapes and hand-labeled utterance categories.},
  file      = {Liu2017b.pdf:pdf/Liu2017b.pdf:PDF},
}
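The second block described above (sample nod parameters from per-category PDFs, modulated by speech energy) admits a very small sketch. The category names, Gaussian means and variances, and the energy-scaling rule below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-category distributions over (amplitude_deg, duration_s); the numbers
# and category names are placeholders, not the paper's fitted PDFs.
NOD_PDFS = {
    "statement":   {"mean": (12.0, 0.5), "std": (3.0, 0.10)},
    "question":    {"mean": (8.0, 0.4),  "std": (2.0, 0.08)},
    "backchannel": {"mean": (15.0, 0.3), "std": (4.0, 0.05)},
}

def generate_nod(category, speech_energy):
    """Sample nod parameters from the estimated category's PDF and scale
    the amplitude by the utterance's normalized (0..1) speech energy."""
    p = NOD_PDFS[category]
    amp, dur = rng.normal(p["mean"], p["std"])     # draw (amplitude, duration)
    return {"amplitude_deg": max(0.0, amp * (0.5 + 0.5 * speech_energy)),
            "duration_s": max(0.1, dur)}

print(generate_nod("backchannel", speech_energy=0.8))
```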
Carlos T. Ishi, Takashi Minato, Hiroshi Ishiguro, "Motion analysis in vocalized surprise expressions", In Interspeech 2017, Stockholm, Sweden, pp. 874-878, August, 2017.
Abstract: The background of our research is the generation of natural human-like motions during speech in android robots that have a highly human-like appearance. Mismatches in speech and motion are sources of unnaturalness, especially when emotion expressions are involved. Surprise expressions often occur in dialogue interactions, and they are often accompanied by verbal interjectional utterances. In this study, we analyze facial, head and body motions during several types of vocalized surprise expressions appearing in human-human dialogue interactions. Analysis results indicate inter-dependence between motion types and different types of surprise expression (such as emotional, social or quoted) as well as different degrees of surprise expression. The synchronization between motion and surprise utterances is also analyzed.
BibTeX:
@Inproceedings{Ishi2017b,
  author    = {Carlos T. Ishi and Takashi Minato and Hiroshi Ishiguro},
  title     = {Motion analysis in vocalized surprise expressions},
  booktitle = {Interspeech 2017},
  year      = {2017},
  pages     = {874-878},
  address   = {Stockholm, Sweden},
  month     = Aug,
  day       = {20-24},
  doi       = {10.21437/Interspeech.2017-631},
  url       = {http://www.interspeech2017.org/},
  abstract  = {The background of our research is the generation of natural human-like motions during speech in android robots that have a highly human-like appearance. Mismatches in speech and motion are sources of unnaturalness, especially when emotion expressions are involved. Surprise expressions often occur in dialogue interactions, and they are often accompanied by verbal interjectional utterances. In this study, we analyze facial, head and body motions during several types of vocalized surprise expressions appearing in human-human dialogue interactions. Analysis results indicate inter-dependence between motion types and different types of surprise expression (such as emotional, social or quoted) as well as different degrees of surprise expression. The synchronization between motion and surprise utterances is also analyzed.},
  file      = {Ishi2017b.pdf:pdf/Ishi2017b.pdf:PDF},
}
Carlos T. Ishi, Jun Arai, Norihiro Hagita, "Prosodic analysis of attention-drawing speech", In Interspeech 2017, Stockholm, Sweden, pp. 909-913, August, 2017.
Abstract: The term “attention drawing” refers to the action of sellers who call out to get the attention of people passing by in front of their stores or shops to invite them inside to buy or sample products. Since the speaking styles exhibited in such attention-drawing speech are clearly different from conversational speech, in this study we focused on prosodic analyses of attention-drawing speech and collected speech data from multiple people with previous attention-drawing experience by simulating several situations. We then investigated the effects of several factors, including background noise, interaction phases, and shop categories, on the prosodic features of attention-drawing utterances. Analysis results indicate that, compared to dialogue interaction utterances, attention-drawing utterances usually have higher power, higher mean F0s, and smaller F0 ranges, and their F0 does not drop at the end of sentences, regardless of the presence or absence of background noise. Analysis of sentence-final syllable intonation indicates the presence of lengthened flat or rising tones in attention-drawing utterances.
BibTeX:
@Inproceedings{Ishi2017c,
  author    = {Carlos T. Ishi and Jun Arai and Norihiro Hagita},
  title     = {Prosodic analysis of attention-drawing speech},
  booktitle = {Interspeech 2017},
  year      = {2017},
  pages     = {909-913},
  address   = {Stockholm, Sweden},
  month     = Aug,
  day       = {20-24},
  doi       = {10.21437/Interspeech.2017-623},
  url       = {http://www.interspeech2017.org/},
  abstract  = {The term “attention drawing” refers to the action of sellers who call out to get the attention of people passing by in front of their stores or shops to invite them inside to buy or sample products. Since the speaking styles exhibited in such attention-drawing speech are clearly different from conversational speech, in this study we focused on prosodic analyses of attention-drawing speech and collected speech data from multiple people with previous attention-drawing experience by simulating several situations. We then investigated the effects of several factors, including background noise, interaction phases, and shop categories, on the prosodic features of attention-drawing utterances. Analysis results indicate that, compared to dialogue interaction utterances, attention-drawing utterances usually have higher power, higher mean F0s, and smaller F0 ranges, and their F0 does not drop at the end of sentences, regardless of the presence or absence of background noise. Analysis of sentence-final syllable intonation indicates the presence of lengthened flat or rising tones in attention-drawing utterances.},
}
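The sentence-final tone distinction above (lengthened flat or rising vs. the usual falling drop) can be approximated by the slope of a line fit to the final F0 contour in semitones. A minimal sketch; the frame rate and slope thresholds are my choices:

```python
import numpy as np

def final_tone(f0_hz, frame_s=0.01, thresh_st_per_s=2.0):
    """Classify a sentence-final F0 contour as rising / flat / falling by
    the slope of a line fit in semitones (thresholds are illustrative)."""
    f0 = f0_hz[f0_hz > 0]                       # keep voiced frames only
    if f0.size < 3:
        return "undefined"
    st = 12 * np.log2(f0 / f0[0])               # semitones re: first frame
    t = np.arange(st.size) * frame_s
    slope = np.polyfit(t, st, 1)[0]             # semitones per second
    if slope > thresh_st_per_s:
        return "rising"                         # e.g., attention-drawing tone
    if slope < -thresh_st_per_s:
        return "falling"                        # typical sentence-final drop
    return "flat"

print(final_tone(np.linspace(120, 180, 30)))    # -> "rising"
```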
Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, "Turn-taking Estimation Model Based on Joint Embedding of Lexical and Prosodic Contents", In Interspeech 2017, Stockholm, Sweden, August, 2017.
Abstract: A natural conversation involves rapid exchanges of turns while talking. Taking turns at appropriate timing or intervals is a requisite feature for a dialog system as a conversation partner. This paper proposes a model that estimates the timing of turn-taking during verbal interactions. Unlike previous studies, our proposed model does not rely on a silence region between sentences, since a dialog system must respond without large gaps or overlaps. We propose a Recurrent Neural Network (RNN) based model that takes the joint embedding of lexical and prosodic contents as its input to classify utterances into turn-taking related classes and estimates the turn-taking timing. To this end, we trained a neural network to embed the lexical contents, the fundamental frequencies, and the speech power into a joint embedding space. To learn meaningful embedding spaces, the prosodic features from each single utterance are pre-trained using an RNN and combined with the utterance lexical embedding as the input of our proposed model. We tested this model on a spontaneous conversation dataset and confirmed that it outperformed the use of word embedding-based features.
BibTeX:
@Inproceedings{Liu2017c,
  author    = {Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {Turn-taking Estimation Model Based on Joint Embedding of Lexical and Prosodic Contents},
  booktitle = {Interspeech 2017},
  year      = {2017},
  address   = {Stockholm, Sweden},
  month     = Aug,
  day       = {20-24},
  doi       = {10.21437/Interspeech.2017-965},
  url       = {http://www.interspeech2017.org/},
  abstract  = {A natural conversation involves rapid exchanges of turns while talking. Taking turns at appropriate timing or intervals is a requisite feature for a dialog system as a conversation partner. This paper proposes a model that estimates the timing of turn-taking during verbal interactions. Unlike previous studies, our proposed model does not rely on a silence region between sentences since a dialog system must respond without large gaps or overlaps. We propose a Recurrent Neural Network (RNN) based model that takes the joint embedding of lexical and prosodic contents as its input to classify utterances into turn-taking related classes and estimates the turn-taking timing. To this end, we trained a neural network to embed the lexical contents, the fundamental frequencies, and the speech power into a joint embedding space. To learn meaningful embedding spaces, the prosodic features from each single utterance are pre-trained using RNN and combined with utterance lexical embedding as the input of our proposed model. We tested this model on a spontaneous conversation dataset and confirmed that it outperformed the use of word embedding-based features.},
  file      = {Liu2017c.pdf:pdf/Liu2017c.pdf:PDF},
}
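The joint lexical-prosodic embedding described in the entry above can be sketched in a few lines of PyTorch. The layer types, dimensions, and names below are illustrative assumptions, not the architecture reported in the paper; in particular, the paper pre-trains the prosodic encoder on single utterances before joint training, a step this sketch omits.

import torch
import torch.nn as nn

class TurnTakingClassifier(nn.Module):
    """Sketch: classify utterances into turn-taking related classes
    from a joint embedding of lexical and prosodic content."""

    def __init__(self, vocab_size, n_classes=3, d_lex=128, d_pro=32, d_hid=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_lex)
        # Prosody encoder: an RNN over per-frame (F0, power) pairs.
        self.pro_rnn = nn.GRU(input_size=2, hidden_size=d_pro, batch_first=True)
        # Joint encoder over word vectors concatenated with the
        # utterance-level prosodic code.
        self.joint_rnn = nn.GRU(d_lex + d_pro, d_hid, batch_first=True)
        self.out = nn.Linear(d_hid, n_classes)

    def forward(self, word_ids, prosody_frames):
        # word_ids: (B, n_words); prosody_frames: (B, n_frames, 2)
        lex = self.word_emb(word_ids)                  # (B, W, d_lex)
        _, pro = self.pro_rnn(prosody_frames)          # (1, B, d_pro)
        pro = pro[-1].unsqueeze(1).expand(-1, lex.size(1), -1)
        _, h = self.joint_rnn(torch.cat([lex, pro], dim=-1))
        return self.out(h[-1])                         # (B, n_classes)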
Rosario Sorbello, Salvatore Tramonte, Carmelo Cali, Marcello Giardina, Shuichi Nishio, Hiroshi Ishiguro, Antonio Chella, "An Android Architecture for Bio-inspired Honest Signalling in Human-Humanoid Interaction", In Biologically Inspired Cognitive Architectures 2017 (BICA 2017), Moscow, Russia, August, 2017.
Abstract: This paper outlines an augmented robotic architecture to study the conditions of successful Human-Humanoid Interaction (HHI). The architecture is designed as a testable model generator for interaction centred on the ability to emit, display and detect honest signals. First, we overview the biological theory in which the concept of honest signals has been put forward in order to assess its explanatory power. We reconstruct the application of the concept of honest signalling in accounting for interaction in strategic contexts and in laying bare the foundation for automated social metrics. We describe the modules of the architecture, which is intended to implement the concept of honest signalling in connection with a refinement provided by delivering the sense of co-presence in a shared environment. Finally, an analysis of Honest Signals, in terms of body postures, exhibited by participants during the preliminary experiment with the Geminoid HI-1 is provided.
BibTeX:
@Inproceedings{Sorbello2017,
  author    = {Rosario Sorbello and Salvatore Tramonte and Carmelo Cali and Marcello Giardina and Shuichi Nishio and Hiroshi Ishiguro and Antonio Chella},
  title     = {An Android Architecture for Bio-inspired Honest Signalling in Human-Humanoid Interaction},
  booktitle = {Biologically Inspired Cognitive Architectures 2017 (BICA 2017)},
  year      = {2017},
  address   = {Moscow, Russia},
  month     = Aug,
  day       = {1-6},
  url       = {http://bica2017.bicasociety.org/},
  abstract  = {This paper outlines an augmented robotic architecture to study the conditions of successful Human-Humanoid Interaction (HHI). The architecture is designed as a testable model generator for interaction centred on the ability to emit, display and detect honest signals. First, we overview the biological theory in which the concept of honest signals has been put forward in order to assess its explanatory power. We reconstruct the application of the concept of honest signalling in accounting for interaction in strategic contexts and in laying bare the foundation for automated social metrics. We describe the modules of the architecture, which is intended to implement the concept of honest signalling in connection with a refinement provided by delivering the sense of co-presence in a shared environment. Finally, an analysis of Honest Signals, in terms of body postures, exhibited by participants during the preliminary experiment with the Geminoid HI-1 is provided.},
  file      = {Sorbello2017.pdf:pdf/Sorbello2017.pdf:PDF},
}
Rosario Sorbello, Salvatore Tramonte, Marcello Giardina, Carmelo Cali, Shuichi Nishio, Hiroshi Ishiguro, Antonio Chella, "Augmented Embodied Emotions by Geminoid Robot induced by Human Bio-feedback Brain Features in a Musical Experience", In Biologically Inspired Cognitive Architectures 2017 (BICA 2017), Moscow, Russia, August, 2017.
Abstract: This paper presents the conceptual framework for a study of musical experience and the associated architecture centred on Human-Humanoid Interaction (HHI). We discuss the state of the art of the theoretical and the experimental research into the cognitive capacity of music. We overview the results that point to the correspondence between the perceptual structures, the cognitive organization of sounds in music, and the motor and affective behaviour. On such grounds we bring in the concepts of musical tensions and functional connections as the constructs that account for such correspondence in music experience. Finally we describe the architecture as a model generator system whose modules can be employed to test this correspondence, from which the perceptual, cognitive, affective and motor constituents of musical capacity may emerge.
BibTeX:
@Inproceedings{Sorbello2017a,
  author    = {Rosario Sorbello and Salvatore Tramonte and Marcello Giardina and Carmelo Cali and Shuichi Nishio and Hiroshi Ishiguro and Antonio Chella},
  title     = {Augmented Embodied Emotions by Geminoid Robot induced by Human Bio-feedback Brain Features in a Musical Experience},
  booktitle = {Biologically Inspired Cognitive Architectures 2017 (BICA 2017)},
  year      = {2017},
  address   = {Moscow, Russia},
  month     = Aug,
  day       = {1-6},
  url       = {http://bica2017.bicasociety.org/},
  abstract  = {This paper presents the conceptual framework for a study of musical experience and the associated architecture centred on Human-Humanoid Interaction (HHI). We discuss the state of the art of the theoretical and the experimental research into the cognitive capacity of music. We overview the results that point to the correspondence between the perceptual structures, the cognitive organization of sounds in music, and the motor and affective behaviour. On such grounds we bring in the concepts of musical tensions and functional connections as the constructs that account for such correspondence in music experience. Finally we describe the architecture as a model generator system whose modules can be employed to test this correspondence, from which the perceptual, cognitive, affective and motor constituents of musical capacity may emerge.},
  file      = {Sorbello2017a.pdf:pdf/Sorbello2017a.pdf:PDF},
}
Soheil Keshmiri, Hidenobu Sumioka, Junya Nakanishi, Hiroshi Ishiguro, "Emotional State Estimation Using a Modified Gradient-Based Neural Architecture with Weighted Estimates", In The International Joint Conference on Neural Networks (IJCNN 2017), Anchorage, Alaska, USA, May, 2017.
Abstract: We present a minimalist two-hidden-layer neural architecture for emotional state estimation using electroencephalogram (EEG) data. Our model introduces a new meta-parameter, referred to as the reinforced gradient coefficient, to overcome the peculiar vanishing-gradient behaviour exhibited by deep neural architectures. This allows our model to further reduce its deviation from the expected prediction and thereby significantly minimize its estimation error. Furthermore, it adopts a weighting step that captures the discrepancy between two consecutive predictions during training. The value of this weighting factor is learned throughout the training phase, given its positive effect on the overall prediction accuracy of the model. We validate our approach through comparative analysis of its performance against state-of-the-art techniques in the literature, using two well-known EEG databases. Our model shows significant improvement in prediction accuracy of emotional states of human subjects, while maintaining a highly simple, minimalist architecture.
BibTeX:
@Inproceedings{Keshmiri2017a,
  author    = {Soheil Keshmiri and Hidenobu Sumioka and Junya Nakanishi and Hiroshi Ishiguro},
  title     = {Emotional State Estimation Using a Modified Gradient-Based Neural Architecture with Weighted Estimates},
  booktitle = {The International Joint Conference on Neural Networks (IJCNN 2017)},
  year      = {2017},
  address   = {Anchorage, Alaska, USA},
  month     = May,
  day       = {18},
  url       = {http://www.ijcnn.org/},
  abstract  = {We present a minimalist two-hidden-layer neural architecture for emotional state estimation using electroencephalogram (EEG) data. Our model introduces a new meta-parameter, referred to as the reinforced gradient coefficient, to overcome the peculiar vanishing-gradient behaviour exhibited by deep neural architectures. This allows our model to further reduce its deviation from the expected prediction and thereby significantly minimize its estimation error. Furthermore, it adopts a weighting step that captures the discrepancy between two consecutive predictions during training. The value of this weighting factor is learned throughout the training phase, given its positive effect on the overall prediction accuracy of the model. We validate our approach through comparative analysis of its performance against state-of-the-art techniques in the literature, using two well-known EEG databases. Our model shows significant improvement in prediction accuracy of emotional states of human subjects, while maintaining a highly simple, minimalist architecture.},
  file      = {Keshmiri2017a.pdf:pdf/Keshmiri2017a.pdf:PDF},
}
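The "reinforced gradient coefficient" above is a meta-parameter specific to this paper. The sketch below shows one plausible reading of the two ideas, amplifying an otherwise vanishing gradient and weighting consecutive predictions; every name and value here is an assumption rather than the paper's formulation.

import numpy as np

def reinforced_sgd_step(w, grad, lr=0.01, gamma=2.0):
    # gamma > 1 amplifies a vanishing gradient before the update;
    # gamma = 1 recovers plain gradient descent. (Assumed reading.)
    return w - lr * gamma * grad

def weighted_estimate(pred_prev, pred_curr, alpha=0.7):
    # Blend two consecutive predictions to damp jumps between them.
    # In the paper the weight is learned; here it is a fixed assumption.
    return alpha * np.asarray(pred_curr) + (1.0 - alpha) * np.asarray(pred_prev)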
Masa Jazbec, Shuichi Nishio, Hiroshi Ishiguro, Masataka Okubo, Christian Penaloza, "Body-swapping Experiment with an Android - Investigation of The Relationship Between Agency and a Sense of Ownership Toward a Different Body", In The 2017 Conference on Human-Robot Interaction (HRI2017), Vienna, Austria, pp. 143-144, March, 2017.
Abstract: The experiment described in this paper is performed within a system that provides a human with the possibility and capability to be physically immersed in the body of an android robot, Geminoid HI-2.
BibTeX:
@Inproceedings{Jazbec2017,
  author    = {Masa Jazbec and Shuichi Nishio and Hiroshi Ishiguro and Masataka Okubo and Christian Penaloza},
  title     = {Body-swapping Experiment with an Android - Investigation of The Relationship Between Agency and a Sense of Ownership Toward a Different Body},
  booktitle = {The 2017 Conference on Human-Robot Interaction (HRI2017)},
  year      = {2017},
  pages     = {143-144},
  address   = {Vienna, Austria},
  month     = Mar,
  url       = {http://humanrobotinteraction.org/2017/},
  abstract  = {The experiment described in this paper is performed within a system that provides a human with the possibility and capability to be physically immersed in the body of an android robot, Geminoid HI-2.},
}
Dylan F. Glas, Malcolm Doering, Phoebe Liu, Takayuki Kanda, Hiroshi Ishiguro, "Robot's Delight - A Lyrical Exposition on Learning by Imitation from Human-human Interaction", In 2017 Conference on Human-Robot Interaction (HRI2017) Video Presentation, Vienna, Austria, March, 2017.
Abstract: Now that social robots are beginning to appear in the real world, the question of how to program social behavior is becoming more pertinent than ever. Yet, manual design of interaction scripts and rules can be time-consuming and strongly dependent on the aptitude of a human designer in anticipating the social situations a robot will face. To overcome these challenges, we have proposed the approach of learning interaction logic directly from data captured from natural human-human interactions. While similar in some ways to crowdsourcing approaches like [1], our approach has the benefit of capturing the naturalness and immersion of real interactions, but it faces the added challenges of dealing with sensor noise and an unconstrained action space. In the form of a musical tribute to The Sugarhill Gang's 1979 hit “Rapper's Delight", this video presents a summary of our technique for capturing and reproducing multimodal interactive social behaviors, originally presented in [2], as well as preliminary progress from a new study in which we apply this technique to a stationary android for interactive spoken dialogue.
BibTeX:
@Inproceedings{Glas2017,
  author    = {Dylan F. Glas and Malcolm Doering and Phoebe Liu and Takayuki Kanda and Hiroshi Ishiguro},
  title     = {Robot's Delight - A Lyrical Exposition on Learning by Imitation from Human-human Interaction},
  booktitle = {2017 Conference on Human-Robot Interaction (HRI2017) Video Presentation},
  year      = {2017},
  address   = {Vienna, Austria},
  month     = Mar,
  doi       = {10.1145/3029798.3036646},
  url       = {https://youtu.be/CY1WIfPJHqI},
  abstract  = {Now that social robots are beginning to appear in the real world, the question of how to program social behavior is becoming more pertinent than ever. Yet, manual design of interaction scripts and rules can be time-consuming and strongly dependent on the aptitude of a human designer in anticipating the social situations a robot will face. To overcome these challenges, we have proposed the approach of learning interaction logic directly from data captured from natural human-human interactions. While similar in some ways to crowdsourcing approaches like [1], our approach has the benefit of capturing the naturalness and immersion of real interactions, but it faces the added challenges of dealing with sensor noise and an unconstrained action space. In the form of a musical tribute to The Sugarhill Gang's 1979 hit “Rapper's Delight", this video presents a summary of our technique for capturing and reproducing multimodal interactive social behaviors, originally presented in [2], as well as preliminary progress from a new study in which we apply this technique to a stationary android for interactive spoken dialogue.},
  file      = {Glas2017.pdf:pdf/Glas2017.pdf:PDF},
}
Carlos T. Ishi, Tomo Funayama, Takashi Minato, Hiroshi Ishiguro, "Motion generation in android robots during laughing speech", In The 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, Daejeon, Korea, pp. 3327-3332, October, 2016.
Abstract: We are dealing with the problem of generating natural human-like motions during speech in android robots, which have human-like appearances. So far, automatic generation methods have been proposed for lip and head motions of tele-presence robots, based on the speech signal of the tele-operator. In the present study, we aim at extending the speech-driven motion generation methods to laughing speech, since laughter often occurs in natural dialogue interactions and may cause miscommunication if there is a mismatch between audio and visual modalities. Based on analysis results of human behaviors during laughing speech, we proposed a motion generation method given the speech signal and the laughing speech intervals. Subjective experiments were conducted using our android robot by generating five different motion types, considering several modalities. Evaluation results show the effectiveness of controlling different parts of the face, head and upper body (eyelid narrowing, lip corner/cheek raising, eye blinking, head motion and upper body motion control).
BibTeX:
@Inproceedings{Ishi2016b,
  author    = {Carlos T. Ishi and Tomo Funayama and Takashi Minato and Hiroshi Ishiguro},
  title     = {Motion generation in android robots during laughing speech},
  booktitle = {The 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year      = {2016},
  pages     = {3327-3332},
  address   = {Daejeon, Korea},
  month     = Oct,
  url       = {http://www.iros2016.org/},
  abstract  = {We are dealing with the problem of generating natural human-like motions during speech in android robots, which have human-like appearances. So far, automatic generation methods have been proposed for lip and head motions of tele-presence robots, based on the speech signal of the tele-operator. In the present study, we aim at extending the speech-driven motion generation methods to laughing speech, since laughter often occurs in natural dialogue interactions and may cause miscommunication if there is a mismatch between audio and visual modalities. Based on analysis results of human behaviors during laughing speech, we proposed a motion generation method given the speech signal and the laughing speech intervals. Subjective experiments were conducted using our android robot by generating five different motion types, considering several modalities. Evaluation results show the effectiveness of controlling different parts of the face, head and upper body (eyelid narrowing, lip corner/cheek raising, eye blinking, head motion and upper body motion control).},
  file      = {Ishi2016b.pdf:pdf/Ishi2016b.pdf:PDF},
}
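A rough sketch of the speech-driven idea in the entry above: given detected laughter intervals and the power envelope of the speech signal, emit control commands for the face, head, and upper body. The actuator names, thresholds, and gains are hypothetical, not the android's real control interface.

def laughter_motion_commands(times, power_db, laugh_intervals):
    """Yield (time, actuator, value) commands for one utterance."""
    for start, end in laugh_intervals:
        yield (start, "eyelid_narrowing", 0.6)   # engage at laughter onset
        yield (start, "lip_corner_raise", 0.8)
        for t, p in zip(times, power_db):
            if start <= t <= end:
                # Head pitch loosely follows the power envelope.
                yield (t, "head_pitch", 0.02 * max(p - 50.0, 0.0))
        yield (end, "eyelid_narrowing", 0.0)     # relax at offset
        yield (end, "lip_corner_raise", 0.0)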
Takahisa Uchida, Takashi Minato, Hiroshi Ishiguro, "Does a Conversational Robot Need to Have its own Values? A Study of Dialogue Strategy to Enhance People's Motivation to Use Autonomous Conversational Robots", In the 4th annual International Conference on Human-Agent Interaction (HAI 2016), Singapore, pp. 187-192, October, 2016.
Abstract: This work studies a dialogue strategy aimed at building people's motivation to talk with autonomous conversational robots. Spoken dialogue systems have recently been developed rapidly, but the existing systems are insufficient for continuous use because they fail to inspire the user's motivation to talk with them. One of the reasons is that users fail to interpret the intention of the system's utterances based on its values. People know each other's values and change their own values in human-human conversations; we therefore hypothesize that a dialogue strategy making the user saliently feel the difference between his and the system's values promotes the motivation for dialogue. An experiment evaluating human-human dialogue supported our hypothesis. However, the experiment with human-android dialogue did not produce the same result, suggesting that people did not attribute values to the android. For a conversational robot, further techniques are needed to make people believe the robot speaks based on its own values.
BibTeX:
@Inproceedings{Uchida2016a,
  author    = {Takahisa Uchida and Takashi Minato and Hiroshi Ishiguro},
  title     = {Does a Conversational Robot Need to Have its own Values? A Study of Dialogue Strategy to Enhance People's Motivation to Use Autonomous Conversational Robots},
  booktitle = {the 4th annual International Conference on Human-Agent Interaction (HAI 2016)},
  year      = {2016},
  pages     = {187-192},
  address   = {Singapore},
  month     = Oct,
  url       = {http://hai-conference.net/hai2016/},
  abstract  = {This work studies a dialogue strategy aimed at building people's motivation to talk with autonomous conversational robots. Spoken dialogue systems have recently been developed rapidly, but the existing systems are insufficient for continuous use because they fail to inspire the user's motivation to talk with them. One of the reasons is that users fail to interpret the intention of the system's utterances based on its values. People know each other's values and change their own values in human-human conversations; we therefore hypothesize that a dialogue strategy making the user saliently feel the difference between his and the system's values promotes the motivation for dialogue. An experiment evaluating human-human dialogue supported our hypothesis. However, the experiment with human-android dialogue did not produce the same result, suggesting that people did not attribute values to the android. For a conversational robot, further techniques are needed to make people believe the robot speaks based on its own values.},
  file      = {Uchida2016a.pdf:pdf/Uchida2016a.pdf:PDF},
}
Junya Nakanishi, Hidenobu Sumioka, Hiroshi Ishiguro, "Can children anthropomorphize human-shaped communication media?: a pilot study on co-sleeping with a huggable communication medium", In The 4th annual International Conference on Human-Agent Interaction (HAI 2016), Biopolis, Singapore, pp. 103-106, October, 2016.
Abstract: This pilot study reports an experiment in which we introduced huggable communication media into daytime sleep in a co-sleeping situation. The purpose of the experiment was to investigate whether it would help soothe child users to sleep and how the hugging experience with an anthropomorphic communication medium affects a child's anthropomorphic impression of the medium in co-sleeping. In the experiment, nursery teachers read two-year-old or five-year-old children to sleep through a huggable communication medium called Hugvie and asked the children to draw Hugvie before and after the reading to evaluate changes in their impressions of it. The results show differences in sleeping behavior with, and impressions of, Hugvie between the two classes. Moreover, they also suggest the possibility that co-sleeping with a humanlike communication medium induces children to sleep deeply.
BibTeX:
@Inproceedings{Nakanishi2016a,
  author    = {Junya Nakanishi and Hidenobu Sumioka and Hiroshi Ishiguro},
  title     = {Can children anthropomorphize human-shaped communication media?: a pilot study on co-sleeping with a huggable communication medium},
  booktitle = {The 4th annual International Conference on Human-Agent Interaction (HAI 2016)},
  year      = {2016},
  pages     = {103-106},
  address   = {Biopolis, Singapore},
  month     = Oct,
  url       = {http://hai-conference.net/hai2016/},
  abstract  = {This pilot study reports an experiment in which we introduced huggable communication media into daytime sleep in a co-sleeping situation. The purpose of the experiment was to investigate whether it would help soothe child users to sleep and how the hugging experience with an anthropomorphic communication medium affects a child's anthropomorphic impression of the medium in co-sleeping. In the experiment, nursery teachers read two-year-old or five-year-old children to sleep through a huggable communication medium called Hugvie and asked the children to draw Hugvie before and after the reading to evaluate changes in their impressions of it. The results show differences in sleeping behavior with, and impressions of, Hugvie between the two classes. Moreover, they also suggest the possibility that co-sleeping with a humanlike communication medium induces children to sleep deeply.},
  file      = {Nakanishi2016a.pdf:pdf/Nakanishi2016a.pdf:PDF},
}
Carlos T. Ishi, Chaoran Liu, Jani Even, Norihiro Hagita, "Hearing support system using environment sensor network", In The 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, Daejeon, Korea, pp. 1275-1280, October, 2016.
Abstract: In order to solve the problems of current hearing aid devices, we make use of sound environment intelligence technologies, and propose a hearing support system, where individual target and anti-target sound sources in the environment can be selected, and spatial information of the target sound sources is reconstructed. The performance of the sound separation module was evaluated for different noise conditions. Results showed that signal-to-noise ratios of around 15dB could be achieved by the proposed system for a 65dB babble noise plus directional music noise condition. In the same noise condition, subjective intelligibility tests were conducted, and an improvement of 65 to 90% word intelligibility rates could be achieved by using the proposed hearing support system.
BibTeX:
@Inproceedings{Ishi2016c,
  author    = {Carlos T. Ishi and Chaoran Liu and Jani Even and Norihiro Hagita},
  title     = {Hearing support system using environment sensor network},
  booktitle = {The 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year      = {2016},
  pages     = {1275-1280},
  address   = {Daejeon, Korea},
  month     = Oct,
  url       = {http://www.iros2016.org/},
  abstract  = {In order to solve the problems of current hearing aid devices, we make use of sound environment intelligence technologies, and propose a hearing support system, where individual target and anti-target sound sources in the environment can be selected, and spatial information of the target sound sources is reconstructed. The performance of the sound separation module was evaluated for different noise conditions. Results showed that signal-to-noise ratios of around 15dB could be achieved by the proposed system for a 65dB babble noise plus directional music noise condition. In the same noise condition, subjective intelligibility tests were conducted, and an improvement of 65 to 90% word intelligibility rates could be achieved by using the proposed hearing support system.},
  file      = {Ishi2016c.pdf:pdf/Ishi2016c.pdf:PDF},
}
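The signal-to-noise ratios quoted in the entry above follow the standard definition; a small helper (illustrative only) makes the arithmetic concrete. A 15 dB SNR corresponds to the separated signal carrying roughly 32 times the power of the residual noise.

import numpy as np

def snr_db(separated, residual_noise):
    """SNR in dB between a separated target signal and the residual
    noise over the same interval (both as sample arrays)."""
    ps = np.mean(np.asarray(separated) ** 2)
    pn = np.mean(np.asarray(residual_noise) ** 2)
    return 10.0 * np.log10(ps / pn)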
Phoebe Liu, Dylan F. Glas, Takayuki Kanda, Hiroshi Ishiguro, "Learning Interactive Behavior for Service Robots - The Challenge of Mixed-Initiative Interaction", In The 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016) Workshop on Behavior Adaptation, Interaction and Learning for Assistive Robotics (BAILAR), New York, NY, USA, August, 2016.
Abstract: Learning-by-imitation approaches for developing human-robot interaction logic are relatively new, but they have been gaining popularity in the research community in recent years. Learning interaction logic from human-human interaction data provides several benefits over explicit programming, including a reduced level of effort for interaction design and the ability to capture unconscious, implicit social rules that are difficult to articulate or program. In previous work, we have shown a technique capable of learning behavior logic for a service robot in a shopping scenario, based on non-annotated speech and motion data from human-human example interactions. That approach was effective in reproducing reactive behavior, such as question-answer interactions. In our current work (still in progress), we are focusing on reproducing mixed-initiative interactions which include proactive behavior on the part of the robot. We have collected a much more challenging data set featuring high variability of behavior and proactive behavior in response to backchannel utterances. We are currently investigating techniques for reproducing this mixed-initiative behavior and for adapting the robot's behavior to customers with different needs.
BibTeX:
@Inproceedings{Liu2016a,
  author    = {Phoebe Liu and Dylan F. Glas and Takayuki Kanda and Hiroshi Ishiguro},
  title     = {Learning Interactive Behavior for Service Robots - The Challenge of Mixed-Initiative Interaction},
  booktitle = {The 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016) Workshop on Behavior Adaptation, Interaction and Learning for Assistive Robotics (BAILAR)},
  year      = {2016},
  address   = {New York, NY, USA},
  month     = Aug,
  abstract  = {Learning-by-imitation approaches for developing human-robot interaction logic are relatively new, but they have been gaining popularity in the research community in recent years. Learning interaction logic from human-human interaction data provides several benefits over explicit programming, including a reduced level of effort for interaction design and the ability to capture unconscious, implicit social rules that are difficult to articulate or program. In previous work, we have shown a technique capable of learning behavior logic for a service robot in a shopping scenario, based on non-annotated speech and motion data from human-human example interactions. That approach was effective in reproducing reactive behavior, such as question-answer interactions. In our current work (still in progress), we are focusing on reproducing mixed-initiative interactions which include proactive behavior on the part of the robot. We have collected a much more challenging data set featuring high variability of behavior and proactive behavior in response to backchannel utterances. We are currently investigating techniques for reproducing this mixed-initiative behavior and for adapting the robot's behavior to customers with different needs.},
  file      = {Liu2016a.pdf:pdf/Liu2016a.pdf:PDF},
}
Takahisa Uchida, Takashi Minato, Hiroshi Ishiguro, "A Values-based Dialogue Strategy to Build Motivation for Conversation with Autonomous Conversational Robots", In The IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016), Teachers College, Columbia University, USA, pp. 206-211, August, 2016.
Abstract: The goal of this study is to develop a humanoid robot that can continuously have a conversation with people. Recent spoken dialogue systems have developed quickly; however, the existing systems are not continuously used since they are not sufficient to promote users' motivation to talk with them. This is because a user cannot feel that a robot has its own intention; therefore, a robot needs its own values so that users feel intentionality in what it says. This paper focuses on a dialogue strategy to promote people's motivation when the robot is assumed to have a values-based dialogue system. People's motivation can be influenced by the intentionality and also by the affinity of the robot. We hypothesized that there is a disagreement/agreement ratio in the conversation that nicely balances people's feelings of intentionality and affinity. The result of a psychological experiment using an android robot partially supported our hypothesis.
BibTeX:
@Inproceedings{Uchida2016,
  author    = {Takahisa Uchida and Takashi Minato and Hiroshi Ishiguro},
  title     = {A Values-based Dialogue Strategy to Build Motivation for Conversation with Autonomous Conversational Robots},
  booktitle = {The IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016)},
  year      = {2016},
  pages     = {206-211},
  address   = {Teachers College, Columbia University, USA},
  month     = Aug,
  url       = {http://ro-man2016.org/},
  abstract  = {The goal of this study is to develop a humanoid robot that can continuously have a conversation with people. Recent spoken dialogue systems have developed quickly; however, the existing systems are not continuously used since they are not sufficient to promote users' motivation to talk with them. This is because a user cannot feel that a robot has its own intention; therefore, a robot needs its own values so that users feel intentionality in what it says. This paper focuses on a dialogue strategy to promote people's motivation when the robot is assumed to have a values-based dialogue system. People's motivation can be influenced by the intentionality and also by the affinity of the robot. We hypothesized that there is a disagreement/agreement ratio in the conversation that nicely balances people's feelings of intentionality and affinity. The result of a psychological experiment using an android robot partially supported our hypothesis.},
  file      = {Uchida2016.pdf:pdf/Uchida2016.pdf:PDF},
}
Kurima Sakai, Takashi Minato, Carlos T. Ishi, Hiroshi Ishiguro, "Speech Driven Trunk Motion Generating System Based on Physical Constraint", In The IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016), Teachers College, Columbia University, USA, pp. 232-239, August, 2016.
Abstract: We developed a method to automatically generate humanlike trunk motions (neck and waist motions) for a conversational android from its speech in real time. It is based on a spring-damper dynamical model that simulates the human trunk movements that accompany speech. Differing from existing methods based on machine learning, our system can easily modulate the generated motions according to speech patterns, since the parameters in the model correspond to muscular hardness. The experimental result showed that the android motions generated by our model can be more natural and enhance the participants' motivation to talk more, compared with a copy of human motions.
BibTeX:
@Inproceedings{Sakai2016,
  author    = {Kurima Sakai and Takashi Minato and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {Speech Driven Trunk Motion Generating System Based on Physical Constraint},
  booktitle = {The IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016)},
  year      = {2016},
  pages     = {232-239},
  address   = {Teachers College, Columbia University, USA},
  month     = Aug,
  url       = {http://ro-man2016.org/},
  abstract  = {We developed a method to automatically generate humanlike trunk motions (neck and waist motions) for a conversational android from its speech in real time. It is based on a spring-damper dynamical model that simulates the human trunk movements that accompany speech. Differing from existing methods based on machine learning, our system can easily modulate the generated motions according to speech patterns, since the parameters in the model correspond to muscular hardness. The experimental result showed that the android motions generated by our model can be more natural and enhance the participants' motivation to talk more, compared with a copy of human motions.},
  file      = {Sakai2016.pdf:pdf/Sakai2016.pdf:PDF},
}
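A minimal sketch of the physical model described in the entry above, assuming a one-degree-of-freedom spring-damper m*x'' + c*x' + k*x = F(t) driven by a speech-derived force and integrated with semi-implicit Euler; the actual coupling and parameter values in the paper may differ. Raising k and c plays the role of stiffer "muscular hardness" and damps the generated motion.

import numpy as np

def trunk_angle(force, dt=0.01, m=1.0, c=2.0, k=20.0):
    """Integrate m*x'' + c*x' + k*x = F(t) with semi-implicit Euler.

    force -- speech-derived drive, one sample every dt seconds
    c, k  -- damping and stiffness (the 'muscular hardness' knobs)
    """
    x, v = 0.0, 0.0
    out = np.empty(len(force))
    for i, f in enumerate(force):
        a = (f - c * v - k * x) / m
        v += a * dt          # update velocity first (semi-implicit)
        x += v * dt
        out[i] = x
    return out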
Dylan F. Glas, Takashi Minato, Carlos T. Ishi, Tatsuya Kawahara, Hiroshi Ishiguro, "ERICA: The ERATO Intelligent Conversational Android", In The IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016), New York, NY, USA, pp. 22-29, August, 2016.
Abstract: The development of an android with convincingly lifelike appearance and behavior has been a long-standing goal in robotics, and recent years have seen great progress in many of the technologies needed to create such androids. However, it is necessary to actually integrate these technologies into a robot system in order to assess the progress that has been made towards this goal and to identify important areas for future work. To this end, we are developing ERICA, an autonomous android system capable of conversational interaction, comprised of state-of-the-art component technologies, and arguably the most humanlike android built to date. Although the project is ongoing, initial development of the basic android platform has been completed. In this paper we present an overview of the requirements and design of the platform, describe the development process of an interactive application, report on ERICA's first autonomous public demonstration, and discuss the main technical challenges that remain to be addressed in order to create humanlike, autonomous androids.
BibTeX:
@Inproceedings{Glas2016b,
  author    = {Dylan F. Glas and Takashi Minato and Carlos T. Ishi and Tatsuya Kawahara and Hiroshi Ishiguro},
  title     = {ERICA: The ERATO Intelligent Conversational Android},
  booktitle = {The IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016)},
  year      = {2016},
  pages     = {22-29},
  address   = {New York, NY, USA},
  month     = Aug,
  url       = {http://www.ro-man2016.org/},
  abstract  = {The development of an android with convincingly lifelike appearance and behavior has been a long-standing goal in robotics, and recent years have seen great progress in many of the technologies needed to create such androids. However, it is necessary to actually integrate these technologies into a robot system in order to assess the progress that has been made towards this goal and to identify important areas for future work. To this end, we are developing ERICA, an autonomous android system capable of conversational interaction, comprised of state-of-the-art component technologies, and arguably the most humanlike android built to date. Although the project is ongoing, initial development of the basic android platform has been completed. In this paper we present an overview of the requirements and design of the platform, describe the development process of an interactive application, report on ERICA's first autonomous public demonstration, and discuss the main technical challenges that remain to be addressed in order to create humanlike, autonomous androids.},
  file      = {Glas2016b.pdf:pdf/Glas2016b.pdf:PDF},
}
Hiroaki Hatano, Carlos T. Ishi, Tsuyoshi Komatsubara, Masahiro Shiomi, Takayuki Kanda, "Analysis of laughter events and social status of children in classrooms", In Speech Prosody 2016 Boston (Speech Prosody 8), Boston, USA, pp. 1004-1008, May, 2016.
Abstract: Aiming at analyzing the social interactions of children, we have collected data in a science classroom of an elementary school, using our developed system, which is able to determine who is talking, when, and where in an environment, based on the integration of multiple microphone arrays and human tracking technologies. In the present work, among the sound activities in the classroom, we focused on laughter events, since laughter conveys important social functions in communication and is a possible cue for identifying social status. Social status is often studied in educational and developmental research, as it is closely related to children's social and academic life. Laughter events were extracted by making use of visual displays of the spatial-temporal information provided by the developed system, while social status was quantified based on a sociometry questionnaire. Analysis results revealed that the number of laughter events in the children with high social status was significantly higher than in the ones with low social status. The relationship between laughter type and social status was also investigated.
BibTeX:
@Inproceedings{Hatano2016,
  author          = {Hiroaki Hatano and Carlos T. Ishi and Tsuyoshi Komatsubara and Masahiro Shiomi and Takayuki Kanda},
  title           = {Analysis of laughter events and social status of children in classrooms},
  booktitle       = {Speech Prosody 2016 Boston (Speech Prosody 8)},
  year            = {2016},
  pages           = {1004-1008},
  address         = {Boston, USA},
  month           = May,
  url             = {http://sites.bu.edu/speechprosody2016/},
  abstract        = {Aiming at analyzing the social interactions of children, we have collected data in a science classroom of an elementary school, using our developed system, which is able to determine who is talking, when, and where in an environment, based on the integration of multiple microphone arrays and human tracking technologies. In the present work, among the sound activities in the classroom, we focused on laughter events, since laughter conveys important social functions in communication and is a possible cue for identifying social status. Social status is often studied in educational and developmental research, as it is closely related to children's social and academic life. Laughter events were extracted by making use of visual displays of the spatial-temporal information provided by the developed system, while social status was quantified based on a sociometry questionnaire. Analysis results revealed that the number of laughter events in the children with high social status was significantly higher than in the ones with low social status. The relationship between laughter type and social status was also investigated.},
  file            = {Hatano2016.pdf:pdf/Hatano2016.pdf:PDF},
  keywords        = {laughter, social status, children, natural conversation, real environment},
}
Carlos T. Ishi, Hiroaki Hatano, Hiroshi Ishiguro, "Audiovisual analysis of relations between laughter types and laughter motions", In Speech Prosody 2016 Boston (Speech Prosody 8), Boston, USA, pp. 806-810, May, 2016.
Abstract: Laughter commonly occurs in daily interactions; it is not simply related to funny situations but also expresses some type of attitude, and it has important social functions in communication. The background of the present work is the generation of natural motions in a humanoid robot, where miscommunication might be caused if there is a mismatch between audio and visual modalities, especially in laughter intervals. In the present work, we analyze a multimodal dialogue database and investigate the relations between different types of laughter (such as production type, laughing style, and laughter functions) and the facial expressions, head and body motions during laughter.
BibTeX:
@Inproceedings{Ishi2016,
  author    = {Carlos T. Ishi and Hiroaki Hatano and Hiroshi Ishiguro},
  title     = {Audiovisual analysis of relations between laughter types and laughter motions},
  booktitle = {Speech Prosody 2016 Boston (Speech Prosody 8)},
  year      = {2016},
  pages     = {806-810},
  address   = {Boston, USA},
  month     = May,
  url       = {http://sites.bu.edu/speechprosody2016/},
  abstract  = {Laughter commonly occurs in daily interactions; it is not simply related to funny situations but also expresses some type of attitude, and it has important social functions in communication. The background of the present work is the generation of natural motions in a humanoid robot, where miscommunication might be caused if there is a mismatch between audio and visual modalities, especially in laughter intervals. In the present work, we analyze a multimodal dialogue database and investigate the relations between different types of laughter (such as production type, laughing style, and laughter functions) and the facial expressions, head and body motions during laughter.},
  file      = {Ishi2016.pdf:pdf/Ishi2016.pdf:PDF},
}
Dylan F. Glas, Takayuki Kanda, Hiroshi Ishiguro, "Human-Robot Interaction Design using Interaction Composer - Eight Years of Lessons Learned", In 11th ACM/IEEE International Conference on Human-Robot Interaction, Christchurch, New Zealand, pp. 303-310, March, 2016.
Abstract: Interaction Composer, a visual programming environment designed to enable programmers and non-programmers to collaboratively design human-robot interactions in the form of state-based flows, has been in use at our laboratory for eight years. The system architecture and the design principles behind the framework have been presented in other work. In this paper, we take a case-study approach, examining several actual examples of the use of this toolkit over an eight-year period. We examine the structure and content of interaction flows, identify recurring design patterns, and observe which elements of the framework have proven valuable, as well as documenting its failures: features which did not solve their intended purposes, and workarounds which might be better addressed by different approaches. It is hoped that the insights gained from this study will contribute to the development of more effective and more usable tools and frameworks for interaction design.
BibTeX:
@Inproceedings{Glas2016a,
  author    = {Dylan F. Glas and Takayuki Kanda and Hiroshi Ishiguro},
  title     = {Human-Robot Interaction Design using Interaction Composer - Eight Years of Lessons Learned},
  booktitle = {11th ACM/IEEE International Conference on Human-Robot Interaction},
  year      = {2016},
  pages     = {303-310},
  address   = {Christchurch, New Zealand},
  month     = Mar,
  url       = {http://humanrobotinteraction.org/2016/},
  abstract  = {Interaction Composer, a visual programming environment designed to enable programmers and non-programmers to collaboratively design human-robot interactions in the form of state-based flows, has been in use at our laboratory for eight years. The system architecture and the design principles behind the framework have been presented in other work. In this paper, we take a case-study approach, examining several actual examples of the use of this toolkit over an eight-year period. We examine the structure and content of interaction flows, identify recurring design patterns, and observe which elements of the framework have proven valuable, as well as documenting its failures: features which did not solve their intended purposes, and workarounds which might be better addressed by different approaches. It is hoped that the insights gained from this study will contribute to the development of more effective and more usable tools and frameworks for interaction design.},
  file      = {Glas2016a.pdf:pdf/Glas2016a.pdf:PDF},
}
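The state-based interaction flows discussed in the entry above can be pictured with a tiny data structure; this is a generic sketch, not Interaction Composer's actual file format or API.

# Hypothetical minimal state flow: each state names a robot behavior
# and maps recognized human inputs to successor states.
flow = {
    "greet":    {"say": "Hello, may I help you?",
                 "next": {"question": "answer", "silence": "wait"}},
    "answer":   {"say": "It is on aisle three.",
                 "next": {"thanks": "farewell"}},
    "wait":     {"say": None, "next": {"question": "answer"}},
    "farewell": {"say": "Goodbye!", "next": {}},
}

def step(state, observed_input):
    """Return the next state, staying put on unrecognized input."""
    return flow[state]["next"].get(observed_input, state)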
Hidenobu Sumioka, Yuichiro Yoshikawa, Yasuo Wada, Hiroshi Ishiguro, "Teachers' impressions on robots for therapeutic applications", In International Workshop on Intervention of Children with Autism Spectrum Disorders using a Humanoid Robot, Kanagawa, Japan, pp. (ASD-HR2), November, 2015.
Abstract: Autism spectrum disorders (ASD) can cause lifelong challenges. However, there are a variety of therapeutic and educational approaches, any of which may have educational benefits in some but not all individuals with ASD. Given recent rapid technological advances, it has been argued that specific robotic applications could be effectively harnessed to provide innovative clinical treatments for children with ASD. There have, however, been few exchanges between psychiatrists and robotics researchers, though such exchanges are now beginning to occur. In this symposium, to promote a world-wide interdisciplinary discussion about potential robotic applications for ASD fields, pioneering research activities using robots for children with ASD are introduced by psychiatrists and robotics researchers.
BibTeX:
@Inproceedings{Sumioka2015c,
  author    = {Hidenobu Sumioka and Yuichiro Yoshikawa and Yasuo Wada and Hiroshi Ishiguro},
  title     = {Teachers' impressions on robots for therapeutic applications},
  booktitle = {International Workshop on Intervention of Children with Autism Spectrum Disorders using a Humanoid Robot},
  year      = {2015},
  pages     = {(ASD-HR2)},
  address   = {Kanagawa, Japan},
  month     = Nov,
  url       = {https://sites.google.com/site/asdhr2015/home},
  abstract  = {Autism spectrum disorders (ASD) can cause lifelong challenges. However, there are a variety of therapeutic and educational approaches, any of which may have educational benefits in some but not all individuals with ASD. Given recent rapid technological advances, it has been argued that specific robotic applications could be effectively harnessed to provide innovative clinical treatments for children with ASD. There have, however, been few exchanges between psychiatrists and robotics researchers, though such exchanges are now beginning to occur. In this symposium, to promote a world-wide interdisciplinary discussion about potential robotic applications for ASD fields, pioneering research activities using robots for children with ASD are introduced by psychiatrists and robotics researchers.},
  file      = {Sumioka2015c.pdf:pdf/Sumioka2015c.pdf:PDF},
}
Hiroaki Hatano, Carlos T. Ishi, Makiko Matsuda, "Automatic evaluation for accentuation of Japanese read speech", In International Workshop Construction of Digital Resources for Learning Japanese, Italy, pp. 4-5 (Abstracts), October, 2015.
Abstract: The purpose of our research is to consider a method for automatically evaluating Japanese accentuation based on acoustic features. For this purpose, we use “Julius”, a large-vocabulary continuous-speech-recognition decoder, to divide speech into phonemes. We employed an open-source database for the analysis: read speech by 10 native speakers each of Japanese and Chinese, selected from "The Contrastive Linguistic Database for Japanese Language Learners' Spoken Language in Japanese and their First Languages". The accent unit is the "bunsetsu", which consists of a word and its particles. The total number of units is about 2,500 (10 speakers * 2 native languages * about 125 "bunsetsu"). The accent type of each unit was judged by a native speaker of Japanese (a Japanese-language teacher) and a native speaker of Chinese (a Japanese-language student with N1 certification). We use these results as correct data for verifying our method. We extracted fundamental frequencies (F0) from each vowel portion in the read speech and compared adjacent vowels to test whether the F0 difference exceeds a threshold. We employed not only the average of each vowel section's F0 values but also the median and extrapolation. As a result of the investigation, our method showed 70-80% agreement with the human assessments. It seems reasonable to conclude that our proposed method for evaluating accentuation has native-like accuracy.
BibTeX:
@Inproceedings{Hatano2015a,
  author    = {Hiroaki Hatano and Carlos T. Ishi and Makiko Matsuda},
  title     = {Automatic evaluation for accentuation of Japanese read speech},
  booktitle = {International Workshop Construction of Digital Resources for Learning Japanese},
  year      = {2015},
  pages     = {4-5 (Abstracts)},
  address   = {Italy},
  month     = Oct,
  url       = {https://events.unibo.it/dit-workshop-japanese-digital-resources},
  abstract  = {The purpose of our research is to consider a method for automatically evaluating Japanese accentuation based on acoustic features. For this purpose, we use “Julius”, a large-vocabulary continuous-speech-recognition decoder, to divide speech into phonemes. We employed an open-source database for the analysis: read speech by 10 native speakers each of Japanese and Chinese, selected from "The Contrastive Linguistic Database for Japanese Language Learners' Spoken Language in Japanese and their First Languages". The accent unit is the "bunsetsu", which consists of a word and its particles. The total number of units is about 2,500 (10 speakers * 2 native languages * about 125 "bunsetsu"). The accent type of each unit was judged by a native speaker of Japanese (a Japanese-language teacher) and a native speaker of Chinese (a Japanese-language student with N1 certification). We use these results as correct data for verifying our method. We extracted fundamental frequencies (F0) from each vowel portion in the read speech and compared adjacent vowels to test whether the F0 difference exceeds a threshold. We employed not only the average of each vowel section's F0 values but also the median and extrapolation. As a result of the investigation, our method showed 70-80% agreement with the human assessments. It seems reasonable to conclude that our proposed method for evaluating accentuation has native-like accuracy.},
  file      = {Hatano2015a.pdf:pdf/Hatano2015a.pdf:PDF},
}
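The threshold comparison of adjacent vowel F0 values described in the entry above can be sketched as follows. The representative value per vowel (median F0 in semitones) and the threshold are assumptions, not the paper's exact settings.

def accent_pattern(vowel_f0, threshold=0.8):
    """Label each adjacent vowel pair as a rise, fall, or flat step.

    vowel_f0  -- one representative F0 value per vowel (e.g. the
                 median of the vowel section, in semitones)
    threshold -- minimum F0 difference counted as a pitch movement
    """
    pattern = []
    for prev, curr in zip(vowel_f0, vowel_f0[1:]):
        if curr - prev > threshold:
            pattern.append("rise")
        elif prev - curr > threshold:
            pattern.append("fall")
        else:
            pattern.append("flat")
    return pattern

# e.g. accent_pattern([3.0, 7.2, 7.0]) -> ['rise', 'flat']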
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "BCI-teleoperated androids; A study of embodiment and its effect on motor imagery learning", In Workshop "Quo Vadis Robotics & Intelligent Systems" in IEEE 19th International Conference on Intelligent Engineering Systems 2015, Bratislava, Slovakia, September, 2015.
Abstract: This paper presents a brain-computer interface (BCI) system developed for the tele-operation of a very humanlike android. Employing this system, we review two studies that give insights into the cognitive mechanism of agency and body ownership during BCI control, as well as feedback designs for the optimization of users' BCI skills. In the first experiment, operators experienced an illusion of embodiment (in terms of body ownership and agency) for the robot's body only by imagining a movement (motor imagery) and watching the robot perform it. Also using the same setup, we could further discover that during BCI operation of the android, biasing the timing and accuracy of the performance feedback could improve operators' modulation of brain activities during the motor imagery task. Our experiments showed that the motor imagery skills acquired through this technique were not limited to the android robot, and had long-lasting effects for other BCI usage as well. Therefore, by focusing on the human side of BCIs and demonstrating a relationship between the body ownership sensation and motor imagery learning, our BCI teleoperation system offers a new and efficient platform for general BCI application.
BibTeX:
@Inproceedings{Alimardani2015,
  author    = {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {BCI-teleoperated androids; A study of embodiment and its effect on motor imagery learning},
  booktitle = {Workshop "Quo Vadis Robotics \& Intelligent Systems" in IEEE 19th International Conference on Intelligent Engineering Systems 2015},
  year      = {2015},
  address   = {Bratislava, Slovakia},
  month     = Sep,
  abstract  = {This paper presents a brain-computer interface (BCI) system developed for the tele-operation of a very humanlike android. Employing this system, we review two studies that give insights into the cognitive mechanism of agency and body ownership during BCI control, as well as feedback designs for the optimization of users' BCI skills. In the first experiment, operators experienced an illusion of embodiment (in terms of body ownership and agency) for the robot's body only by imagining a movement (motor imagery) and watching the robot perform it. Also using the same setup, we could further discover that during BCI operation of the android, biasing the timing and accuracy of the performance feedback could improve operators' modulation of brain activities during the motor imagery task. Our experiments showed that the motor imagery skills acquired through this technique were not limited to the android robot, and had long-lasting effects for other BCI usage as well. Therefore, by focusing on the human side of BCIs and demonstrating a relationship between the body ownership sensation and motor imagery learning, our BCI teleoperation system offers a new and efficient platform for general BCI application.},
  file      = {Alimardani2015.pdf:pdf/Alimardani2015.pdf:PDF},
}
Jani Even, Florent B.B. Ferreri, Atsushi Watanabe, Luis Y. S. Morales, Carlos T. Ishi, Norihiro Hagita, "Audio Augmented Point Clouds for Applications in Robotics", In The 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Germany, pp. 4846-4851, September, 2015.
Abstract: This paper presents a method for representing acoustic information with point clouds by tying it to geometrical features. The motivation is to create a representation of this information that is well suited for mobile robotic applications. In particular, the proposed approach is designed to take advantage of the use of multiple coordinate frames. As an illustrative example, we present a way to create an audio augmented point cloud by adding estimated audio power to the point cloud created by an RGB-D camera. A few applications of this method are presented.
BibTeX:
@Inproceedings{Jani2015a,
  author    = {Jani Even and Florent B.B. Ferreri and Atsushi Watanabe and Luis Y. S. Morales and Carlos T. Ishi and Norihiro Hagita},
  title     = {Audio Augmented Point Clouds for Applications in Robotics},
  booktitle = {The 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year      = {2015},
  pages     = {4846-4851},
  address   = {Hamburg, Germany},
  month     = Sep,
  abstract  = {This paper presents a method for representing acoustic information with point clouds by tying it to geometrical features. The motivation is to create a representation of this information that is well suited for mobile robotic applications. In particular, the proposed approach is designed to take advantage of the use of multiple coordinate frames. As an illustrative example, we present a way to create an audio augmented point cloud by adding estimated audio power to the point cloud created by an RGB-D camera. A few applications of this method are presented.},
  file      = {Jani2015a.pdf:pdf/Jani2015a.pdf:PDF},
}
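A minimal sketch of the augmentation step described in the entry above: each RGB-D point gains a fourth channel holding the power of nearby estimated sound sources. The Gaussian falloff and its scale are assumed smoothing choices, not details taken from the paper.

import numpy as np

def audio_augmented_cloud(points_xyz, source_pos, source_power_db, sigma=0.3):
    """Attach an audio-power channel to a point cloud.

    points_xyz      -- (N, 3) cloud from the RGB-D camera
    source_pos      -- (M, 3) estimated sound source positions, same frame
    source_power_db -- (M,) estimated power of each source
    """
    aug = np.zeros(len(points_xyz))
    for pos, p in zip(source_pos, source_power_db):
        d2 = np.sum((points_xyz - pos) ** 2, axis=1)
        aug += p * np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian falloff
    return np.hstack([points_xyz, aug[:, None]])      # (N, 4)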
Carlos T. Ishi, Jani Even, Norihiro Hagita, "Speech activity detection and face orientation estimation using multiple microphone arrays and human position information", In The 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Germany, pp. 5574-5579, September, 2015.
Abstract: We developed a system for detecting the speech intervals of multiple speakers by combining multiple microphone arrays and human tracking technologies. We also proposed a method for estimating the face orientation of the detected speakers. The developed system was evaluated in two steps: individual utterances in different positions and orientations, and simultaneous dialogues by multiple speakers. Evaluation results revealed that the proposed system could detect speech intervals with more than 94% accuracy, and face orientations with standard deviations within 30 degrees, in situations excluding the cases where all arrays are in the opposite direction to the speaker's face orientation.
BibTeX:
@Inproceedings{Ishi2015b,
  author    = {Carlos T. Ishi and Jani Even and Norihiro Hagita},
  title     = {Speech activity detection and face orientation estimation using multiple microphone arrays and human position information},
  booktitle = {The 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year      = {2015},
  pages     = {5574-5579},
  address   = {Hamburg, Germany},
  month     = SEP,
  abstract  = {We developed a system for detecting the speech intervals of multiple speakers by combining multiple microphone arrays and human tracking technologies. We also proposed a method for estimating the face orientation of the detected speakers. The developed system was evaluated in two steps: individual utterances in different positions and orientations, and simultaneous dialogues by multiple speakers. Evaluation results revealed that the proposed system could detect speech intervals with more than 94% accuracy and estimate face orientations with standard deviations within 30 degrees, in situations excluding the cases where all arrays are in the opposite direction to the speaker's face orientation.},
  file      = {Ishi2015b.pdf:pdf/Ishi2015b.pdf:PDF},
}
Kurima Sakai, Carlos T. Ishi, Takashi Minato, Hiroshi Ishiguro, "Online speech-driven head motion generating system and evaluation on a tele-operated robot", In IEEE International Symposium on Robot and Human Interactive Communication, Kobe, Japan, pp. 529-534, August, 2015.
Abstract: We developed a tele-operated robot system where the head motions of the robot are controlled by combining those of the operator with ones automatically generated from the operator's voice. The head motion generation is based on dialogue act functions which are estimated from linguistic and prosodic information extracted from the speech signal. The proposed system was evaluated through an experiment where participants interacted with a tele-operated robot. Subjective scores indicated the effectiveness of the proposed head motion generation system, even under limitations in the dialogue act estimation.
BibTeX:
@Inproceedings{Sakai2015,
  author    = {Kurima Sakai and Carlos T. Ishi and Takashi Minato and Hiroshi Ishiguro},
  title     = {Online speech-driven head motion generating system and evaluation on a tele-operated robot},
  booktitle = {IEEE International Symposium on Robot and Human Interactive Communication},
  year      = {2015},
  pages     = {529-534},
  address   = {Kobe, Japan},
  month     = AUG,
  abstract  = {We developed a tele-operated robot system where the head motions of the robot are controlled by combining those of the operator with ones automatically generated from the operator's voice. The head motion generation is based on dialogue act functions which are estimated from linguistic and prosodic information extracted from the speech signal. The proposed system was evaluated through an experiment where participants interacted with a tele-operated robot. Subjective scores indicated the effectiveness of the proposed head motion generation system, even under limitations in the dialogue act estimation.},
  file      = {Sakai2015.pdf:pdf/Sakai2015.pdf:PDF},
}
Dylan F. Glas, Phoebe Liu, Takayuki Kanda, Hiroshi Ishiguro, "Can a social robot train itself just by observing human interactions?", In IEEE International Conference on Robotics and Automation, Seattle, WA, USA, May, 2015.
Abstract: In HRI research, game simulations and teleoperation interfaces have been used as tools for collecting example behaviors which can be used for creating robot interaction logic. We believe that by using sensor networks and wearable devices it will be possible to use observations of live human-human interactions to create even more humanlike robot behavior in a scalable way. We present here a fully-automated method for reproducing speech and locomotion behaviors observed from natural human-human social interactions in a robot through machine learning. The proposed method includes techniques for representing the speech and locomotion observed in training interactions, using clustering to identify typical behavior elements and identifying spatial formations using established HRI proxemics models. Behavior logic is learned based on discretized actions captured from the sensor data stream, using a naïve Bayesian classifier, and we propose ways to generate stable robot behaviors from noisy tracking and speech recognition inputs. We show an example of how our technique can train a robot to play the role of a shop clerk in a simple camera shop scenario.
BibTeX:
@Inproceedings{Glas2015a,
  author    = {Dylan F. Glas and Phoebe Liu and Takayuki Kanda and Hiroshi Ishiguro},
  title     = {Can a social robot train itself just by observing human interactions?},
  booktitle = {IEEE International Conference on Robotics and Automation},
  year      = {2015},
  address   = {Seattle, WA, USA},
  month     = May,
  abstract  = {In HRI research, game simulations and teleoperation interfaces have been used as tools for collecting example behaviors which can be used for creating robot interaction logic. We believe that by using sensor networks and wearable devices it will be possible to use observations of live human-human interactions to create even more humanlike robot behavior in a scalable way. We present here a fully-automated method for reproducing speech and locomotion behaviors observed from natural human-human social interactions in a robot through machine learning. The proposed method includes techniques for representing the speech and locomotion observed in training interactions, using clustering to identify typical behavior elements and identifying spatial formations using established HRI proxemics models. Behavior logic is learned based on discretized actions captured from the sensor data stream, using a naïve Bayesian classifier, and we propose ways to generate stable robot behaviors from noisy tracking and speech recognition inputs. We show an example of how our technique can train a robot to play the role of a shop clerk in a simple camera shop scenario.},
  file      = {Glas2015a.pdf:pdf/Glas2015a.pdf:PDF},
}
Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, "Bringing the Scene Back to the Tele-operator: Auditory Scene Manipulation for Tele-presence Systems", In 10th ACM/IEEE International Conference on Human-Robot Interaction 2015, Portland, Oregon, USA, pp. 279-286, March, 2015.
Abstract: In a tele-operated robot system, the reproduction of auditory scenes, conveying 3D spatial information of sound sources in the remote robot environment, is important for the transmission of remote presence to the tele-operator. We proposed a tele-presence system which is able to reproduce and manipulate the auditory scenes of a remote robot environment, based on the spatial information of human voices around the robot, matched with the operator's head orientation. On the robot side, voice sources are localized and separated by using multiple microphone arrays and human tracking technologies, while on the operator side, the operator's head movement is tracked and used to relocate the spatial positions of the separated sources. Interaction experiments with humans in the robot environment indicated that the proposed system had significantly higher accuracy rates for perceived direction of sounds, and higher subjective scores for sense of presence and listenability, compared to a baseline system using stereo binaural sounds obtained by two microphones located at the humanoid robot's ears. We also proposed three different user interfaces for augmented auditory scene control. Evaluation results indicated higher subjective scores for sense of presence and usability in two of the interfaces (control of voice amplitudes based on virtual robot positioning, and amplification of voices in the frontal direction).
BibTeX:
@Inproceedings{Liu2015,
  author    = {Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {Bringing the Scene Back to the Tele-operator: Auditory Scene Manipulation for Tele-presence Systems},
  booktitle = {10th ACM/IEEE International Conference on Human-Robot Interaction 2015},
  year      = {2015},
  pages     = {279-286},
  address   = {Portland, Oregon, USA},
  month     = MAR,
  abstract  = {In a tele-operated robot system, the reproduction of auditory scenes, conveying 3D spatial information of sound sources in the remote robot environment, is important for the transmission of remote presence to the tele-operator. We proposed a tele-presence system which is able to reproduce and manipulate the auditory scenes of a remote robot environment, based on the spatial information of human voices around the robot, matched with the operator's head orientation. On the robot side, voice sources are localized and separated by using multiple microphone arrays and human tracking technologies, while on the operator side, the operator's head movement is tracked and used to relocate the spatial positions of the separated sources. Interaction experiments with humans in the robot environment indicated that the proposed system had significantly higher accuracy rates for perceived direction of sounds, and higher subjective scores for sense of presence and listenability, compared to a baseline system using stereo binaural sounds obtained by two microphones located at the humanoid robot's ears. We also proposed three different user interfaces for augmented auditory scene control. Evaluation results indicated higher subjective scores for sense of presence and usability in two of the interfaces (control of voice amplitudes based on virtual robot positioning, and amplification of voices in the frontal direction).},
  file      = {Liu2015.pdf:pdf/Liu2015.pdf:PDF},
}
Junya Nakanishi, Hidenobu Sumioka, Kurima Sakai, Daisuke Nakamichi, Masahiro Shiomi, Hiroshi Ishiguro, "Huggable Communication Medium Encourages Listening to Others", In 2nd International Conference on Human-Agent Interaction, Tsukuba, Japan, pp. 249-252, October, 2014.
Abstract: We propose that a huggable communication device helps children concentrate on listening to others by reducing their stress and making them feel a storyteller's presence close to them. Our observation of storytelling to preschool children suggests that Hugvie, one such device, facilitates children's attention to the story. This indicates the usefulness of Hugvie in relieving the educational problem of children behaving selfishly during class. We discuss Hugvie's effect on learning and memory and its potential application to children requiring special support.
BibTeX:
@Inproceedings{Nakanishi2014,
  author    = {Junya Nakanishi and Hidenobu Sumioka and Kurima Sakai and Daisuke Nakamichi and Masahiro Shiomi and Hiroshi Ishiguro},
  title     = {Huggable Communication Medium Encourages Listening to Others},
  booktitle = {2nd International Conference on Human-Agent Interaction},
  year      = {2014},
  pages     = {249-252},
  address   = {Tsukuba, Japan},
  month     = OCT,
  url       = {http://hai-conference.net/hai2014/},
  abstract  = {We propose that a huggable communication device helps children concentrate on listening to others by reducing their stress and making them feel a storyteller's presence close to them. Our observation of storytelling to preschool children suggests that Hugvie, one such device, facilitates children's attention to the story. This indicates the usefulness of Hugvie in relieving the educational problem of children behaving selfishly during class. We discuss Hugvie's effect on learning and memory and its potential application to children requiring special support.},
  file      = {Nakanishi2014.pdf:pdf/Nakanishi2014.pdf:PDF},
}
Marco Nørskov, "Human-Robot Interaction and Human Self-Realization: Reflections on the Epistemology of Discrimination", In Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014, IOS Press, vol. 273, Aarhus, Denmark, pp. 319-327, August, 2014.
BibTeX:
@Inproceedings{Noerskov2014,
  author    = {Marco N{\o}rskov},
  title     = {Human-Robot Interaction and Human Self-Realization: Reflections on the Epistemology of Discrimination},
  booktitle = {Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014},
  year      = {2014},
  editor    = {Johanna Seibt and Raul Hakli and Marco N{\o}rskov},
  volume    = {273},
  pages     = {319-327},
  address   = {Aarhus, Denmark},
  month     = Aug,
  publisher = {IOS Press},
  doi       = {10.3233/978-1-61499-480-0-319},
  url       = {http://ebooks.iospress.nl/publication/38578},
}
Daisuke Nakamichi, Shuichi Nishio, Hiroshi Ishiguro, "Training of telecommunication through teleoperated android "Telenoid" and its effect", In The 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, Scotland, UK, pp. 1083-1088, August, 2014.
Abstract: Telenoid, a teleoperated android, is a medium through which its teleoperators can transmit both verbal and nonverbal information to interlocutors. Telenoid promotes conversation with its interlocutors, especially elderly people. But since teleoperators admit that they have difficulty feeling that they are actually teleoperating their robots, they cannot use them effectively to transmit nonverbal information, even though such nonverbal information is one of Telenoid's biggest merits. In this paper, we propose a training program for teleoperators so that they can understand Telenoid's teleoperation and how to transmit nonverbal information through it. We investigated its effect on teleoperation and communication and identified three results. First, our training improved Telenoid's head motions for clearer transmission of nonverbal information. Second, our training had different effects between genders: females communicated with their interlocutors more smoothly than males, while males communicated more smoothly simply through more talking practice. Third, correlations exist among freely controlling the robot, regarding the robot as oneself, and tele-presence in the interlocutor's room, as well as correlations between the interactions and themselves. However, there were no correlations between feelings about Telenoid's teleoperation and the head movements.
BibTeX:
@Inproceedings{Nakamichi2014,
  author          = {Daisuke Nakamichi and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Training of telecommunication through teleoperated android "Telenoid" and its effect},
  booktitle       = {The 23rd IEEE International Symposium on Robot and Human Interactive Communication},
  year            = {2014},
  pages           = {1083-1088},
  address         = {Edinburgh, Scotland, UK},
  month           = Aug,
  day             = {25-29},
  url             = {http://rehabilitationrobotics.net/ro-man14/},
  abstract        = {Telenoid, a teleoperated android, is a medium through which its teleoperators can transmit both verbal and nonverbal information to interlocutors. Telenoid promotes conversation with its interlocutors, especially elderly people. But since teleoperators admit that they have difficulty feeling that they are actually teleoperating their robots, they cannot use them effectively to transmit nonverbal information, even though such nonverbal information is one of Telenoid's biggest merits. In this paper, we propose a training program for teleoperators so that they can understand Telenoid's teleoperation and how to transmit nonverbal information through it. We investigated its effect on teleoperation and communication and identified three results. First, our training improved Telenoid's head motions for clearer transmission of nonverbal information. Second, our training had different effects between genders: females communicated with their interlocutors more smoothly than males, while males communicated more smoothly simply through more talking practice. Third, correlations exist among freely controlling the robot, regarding the robot as oneself, and tele-presence in the interlocutor's room, as well as correlations between the interactions and themselves. However, there were no correlations between feelings about Telenoid's teleoperation and the head movements.},
  file            = {Nakamichi2014.pdf:pdf/Nakamichi2014.pdf:PDF},
}
Ryuji Yamazaki, "Conditions of Empathy in Human-Robot Interaction", In Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014, IOS Press, vol. 273, Aarhus, Denmark, pp. 179-186, August, 2014.
BibTeX:
@Inproceedings{Yamazaki2014c,
  author    = {Ryuji Yamazaki},
  title     = {Conditions of Empathy in Human-Robot Interaction},
  booktitle = {Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014},
  year      = {2014},
  editor    = {Johanna Seibt and Raul Hakli and Marco N{\o}rskov},
  volume    = {273},
  pages     = {179-186},
  address   = {Aarhus, Denmark},
  month     = Aug,
  publisher = {IOS Press},
  doi       = {10.3233/978-1-61499-480-0-179},
  url       = {http://ebooks.iospress.nl/publication/38560},
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "The effect of feedback presentation on motor imagery performance during BCI-teleoperation of a humanlike robot", In IEEE International Conference on Biomedical Robotics and Biomechatronics, Sao Paulo, Brazil, pp. 403-408, August, 2014.
Abstract: Users of a brain-computer interface (BCI) learn to co-adapt with the system through the feedback they receive. Particularly in the case of motor imagery BCIs, feedback design can play an important role in the course of motor imagery training. In this paper we investigated the effect of biased visual feedback on the performance and motor imagery skills of users during BCI control of a pair of humanlike robotic hands. Although the subject-specific classifier, which was set up at the beginning of the experiment, detected no significant change in the subjects' online performance, evaluation of brain activity patterns revealed that subjects' self-regulation of motor imagery features improved due to a positive bias of feedback. We discuss how this effect could possibly be due to the humanlike design of the feedback and the occurrence of the body ownership illusion. Our findings suggest that in general training protocols for BCIs, realistic feedback design and the subject's self-evaluation of performance can play an important role in the optimization of motor imagery skills.
BibTeX:
@Inproceedings{Alimardani2014,
  author          = {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {The effect of feedback presentation on motor imagery performance during BCI-teleoperation of a humanlike robot},
  booktitle       = {IEEE International Conference on Biomedical Robotics and Biomechatronics},
  year            = {2014},
  pages           = {403-408},
  address         = {Sao Paulo, Brazil},
  month           = Aug,
  day             = {12-15},
  doi             = {10.1109/BIOROB.2014.6913810},
  abstract        = {Users of a brain-computer interface (BCI) learn to co-adapt with the system through the feedback they receive. Particularly in the case of motor imagery BCIs, feedback design can play an important role in the course of motor imagery training. In this paper we investigated the effect of biased visual feedback on the performance and motor imagery skills of users during BCI control of a pair of humanlike robotic hands. Although the subject-specific classifier, which was set up at the beginning of the experiment, detected no significant change in the subjects' online performance, evaluation of brain activity patterns revealed that subjects' self-regulation of motor imagery features improved due to a positive bias of feedback. We discuss how this effect could possibly be due to the humanlike design of the feedback and the occurrence of the body ownership illusion. Our findings suggest that in general training protocols for BCIs, realistic feedback design and the subject's self-evaluation of performance can play an important role in the optimization of motor imagery skills.},
  file            = {Alimardani2014b.pdf:pdf/Alimardani2014b.pdf:PDF},
}
Kaiko Kuwamura, Shuichi Nishio, Hiroshi Ishiguro, "Designing Robots for Positive Communication with Senior Citizens", In The 13th Intelligent Autonomous Systems conference, Padova, Italy, July, 2014.
Abstract: Several previous studies have indicated that the elderly, especially those with cognitive disorders, have positive impressions of Telenoid, a teleoperated android covered with soft vinyl. Senior citizens with cognitive disorders have low cognitive ability and duller senses due to their age. To communicate, we believe that they have to imagine, in their mind, the information that is missing because they failed to receive it completely. We hypothesize that Telenoid triggers and enhances such an ability to imagine and positively complete the information, and so they become attracted to Telenoid. Based on this hypothesis, we discuss the factors that trigger imagination and complete positive impressions toward a robot for elderly care.
BibTeX:
@Inproceedings{Kuwamura2014c,
  author          = {Kaiko Kuwamura and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Designing Robots for Positive Communication with Senior Citizens},
  booktitle       = {The 13th Intelligent Autonomous Systems conference},
  year            = {2014},
  address         = {Padova, Italy},
  month           = Jul,
  day             = {15-19},
  url             = {http://www.ias-13.org/},
  abstract        = {Several previous studies have indicated that the elderly, especially those with cognitive disorders, have positive impressions of Telenoid, a teleoperated android covered with soft vinyl. Senior citizens with cognitive disorders have low cognitive ability and duller senses due to their age. To communicate, we believe that they have to imagine, in their mind, the information that is missing because they failed to receive it completely. We hypothesize that Telenoid triggers and enhances such an ability to imagine and positively complete the information, and so they become attracted to Telenoid. Based on this hypothesis, we discuss the factors that trigger imagination and complete positive impressions toward a robot for elderly care.},
  file            = {Kuwamura2014c.pdf:pdf/Kuwamura2014c.pdf:PDF},
}
Rosario Sorbello, Antonio Chella, Marcello Giardina, Shuichi Nishio, Hiroshi Ishiguro, "An Architecture for Telenoid Robot as Empathic Conversational Android Companion for Elderly People", In the 13th International Conference on Intelligent Autonomous Systems, Padova, Italy, July, 2014.
Abstract: In Human-Humanoid Interaction (HHI), empathy is a crucial key to overcoming the current limitations of social robots. In fact, a principal defining characteristic of human social behaviour is empathy. The present paper presents a robotic architecture for an android robot as a basis for natural empathic human-android interaction. We start from the hypothesis that robots, in order to become personal companions, need to know how to interact empathically with human beings. To validate our research, we have used the proposed system with the minimalistic humanoid robot Telenoid. We conducted human-robot interaction tests with elderly people with no prior interaction experience with robots. During the experiment, the elderly persons engaged in a stimulated conversation with the humanoid robot. Our goal is to overcome the state of loneliness of elderly people using this minimalistic humanoid robot, which is capable of exhibiting a dialogue similar to what usually happens in real life between human beings. The experimental results have shown a humanoid robotic system capable of exhibiting natural and empathic interaction and conversation with a human user.
BibTeX:
@Inproceedings{Sorbello2014,
  author    = {Rosario Sorbello and Antonio Chella and Marcello Giardina and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {An Architecture for Telenoid Robot as Empathic Conversational Android Companion for Elderly People},
  booktitle = {the 13th International Conference on Intelligent Autonomous Systems},
  year      = {2014},
  address   = {Padova, Italy},
  month     = Jul,
  day       = {15-19},
  abstract  = {In Human-Humanoid Interaction (HHI), empathy is a crucial key to overcoming the current limitations of social robots. In fact, a principal defining characteristic of human social behaviour is empathy. The present paper presents a robotic architecture for an android robot as a basis for natural empathic human-android interaction. We start from the hypothesis that robots, in order to become personal companions, need to know how to interact empathically with human beings. To validate our research, we have used the proposed system with the minimalistic humanoid robot Telenoid. We conducted human-robot interaction tests with elderly people with no prior interaction experience with robots. During the experiment, the elderly persons engaged in a stimulated conversation with the humanoid robot. Our goal is to overcome the state of loneliness of elderly people using this minimalistic humanoid robot, which is capable of exhibiting a dialogue similar to what usually happens in real life between human beings. The experimental results have shown a humanoid robotic system capable of exhibiting natural and empathic interaction and conversation with a human user.},
  file      = {Sorbello2014.pdf:pdf/Sorbello2014.pdf:PDF},
  keywords  = {Humanoid Robot; Humanoid Robot Interaction; Life Support Empathic Robot; Telenoid},
}
Ryuji Yamazaki, Kaiko Kuwamura, Shuichi Nishio, Takashi Minato, Hiroshi Ishiguro, "Activating Embodied Communication: A Case Study of People with Dementia Using a Teleoperated Android Robot", In The 9th World Conference of Gerontechnology, vol. 13, no. 2, Taipei, Taiwan, pp. 311, June, 2014.
BibTeX:
@Inproceedings{Yamazaki2014a,
  author    = {Ryuji Yamazaki and Kaiko Kuwamura and Shuichi Nishio and Takashi Minato and Hiroshi Ishiguro},
  title     = {Activating Embodied Communication: A Case Study of People with Dementia Using a Teleoperated Android Robot},
  booktitle = {The 9th World Conference of Gerontechnology},
  year      = {2014},
  volume    = {13},
  number    = {2},
  pages     = {311},
  address   = {Taipei, Taiwan},
  month     = Jun,
  day       = {18-21},
  doi       = {10.4017/gt.2014.13.02.166.00},
  url       = {http://gerontechnology.info/index.php/journal/article/view/gt.2014.13.02.166.00/0},
  file      = {Yamazaki2014a.pdf:pdf/Yamazaki2014a.pdf:PDF},
  keywords  = {Elderly care robot; social isolation; embodied communication; community design},
}
Kaiko Kuwamura, Ryuji Yamazaki, Shuichi Nishio, Hiroshi Ishiguro, "Elderly Care Using Teleoperated Android Telenoid", In The 9th World Conference of Gerontechnology, vol. 13, no. 2, Taipei, Taiwan, pp. 226, June, 2014.
BibTeX:
@Inproceedings{Kuwamura2014,
  author    = {Kaiko Kuwamura and Ryuji Yamazaki and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Elderly Care Using Teleoperated Android Telenoid},
  booktitle = {The 9th World Conference of Gerontechnology},
  year      = {2014},
  volume    = {13},
  number    = {2},
  pages     = {226},
  address   = {Taipei, Taiwan},
  month     = Jun,
  day       = {18-21},
  doi       = {10.4017/gt.2014.13.02.091.00},
  url       = {http://gerontechnology.info/index.php/journal/article/view/gt.2014.13.02.091.00},
  file      = {Kuwamura2014.pdf:pdf/Kuwamura2014.pdf:PDF},
  keywords  = {Elderly care robot; teleoperated android; cognitive disorder},
}
Carlos T. Ishi, Hiroaki Hatano, Miyako Kiso, "Acoustic-prosodic and paralinguistic analyses of “uun” and “unun”", In Speech Prosody 7, Dublin, Ireland, pp. 100-104, May, 2014.
Abstract: The speaking style of an interjection contains discriminative features on its expressed intention, attitude or emotion. In the present work, we analyzed the acoustic-prosodic features and the paralinguistic functions of two variations of the interjection “un”: a lengthened pattern “uun” and a repeated pattern “unun”, which are often found in Japanese conversational speech. Analysis results indicate that there are differences in the paralinguistic functions expressed by “uun” and “unun”, as well as different trends in F0 contour types according to the conveyed paralinguistic information.
BibTeX:
@Inproceedings{Ishi2014,
  author          = {Carlos T. Ishi and Hiroaki Hatano and Miyako Kiso},
  title           = {Acoustic-prosodic and paralinguistic analyses of “uun” and “unun”},
  booktitle       = {Speech Prosody 7},
  year            = {2014},
  pages           = {100-104},
  address         = {Dublin, Ireland},
  month           = May,
  day             = {20-23},
  abstract        = {The speaking style of an interjection contains discriminative features on its expressed intention, attitude or emotion. In the present work, we analyzed the acoustic-prosodic features and the paralinguistic functions of two variations of the interjection “un”: a lengthened pattern “uun” and a repeated pattern “unun”, which are often found in Japanese conversational speech. Analysis results indicate that there are differences in the paralinguistic functions expressed by “uun” and “unun”, as well as different trends in F0 contour types according to the conveyed paralinguistic information.},
  file            = {Ishi2014.pdf:pdf/Ishi2014.pdf:PDF},
  keywords        = {interjections; acoustic-prosodic features; paralinguistic information; spontaneous conversational speech},
}
Kaiko Kuwamura, Shuichi Nishio, "Modality reduction for enhancing human likeliness", In Selected papers of the 50th annual convention of the Artificial Intelligence and the Simulation of Behaviour, London, UK, pp. 83-89, April, 2014.
Abstract: We proposed a method to enhance one's affection by reducing the number of transferred modalities. When we dream of an artificial partner for “love”, its appearance is the first thing of concern: a very humanlike, beautiful robot. However, we did not design a medium with a beautiful appearance but a medium which ignores appearance and lets users imagine and complete the appearance. By reducing the number of transferred modalities, we can enhance one's affection toward a robot. Moreover, not just by transmitting, but by inducing active, unconscious behavior of users, we can increase this effect. In this paper, we introduce supporting results from our experiments and discuss the further applicability of our findings.
BibTeX:
@Inproceedings{Kuwamura2014b,
  author          = {Kaiko Kuwamura and Shuichi Nishio},
  title           = {Modality reduction for enhancing human likeliness},
  booktitle       = {Selected papers of the 50th annual convention of the Artificial Intelligence and the Simulation of Behaviour},
  year            = {2014},
  pages           = {83-89},
  address         = {London, UK},
  month           = Apr,
  day             = {1-4},
  url             = {http://doc.gold.ac.uk/aisb50/AISB50-S16/AISB50-S16-Kuwamura-paper.pdf},
  abstract        = {We proposed a method to enhance one's affection by reducing the number of transferred modalities. When we dream of an artificial partner for “love”, its appearance is the first thing of concern: a very humanlike, beautiful robot. However, we did not design a medium with a beautiful appearance but a medium which ignores appearance and lets users imagine and complete the appearance. By reducing the number of transferred modalities, we can enhance one's affection toward a robot. Moreover, not just by transmitting, but by inducing active, unconscious behavior of users, we can increase this effect. In this paper, we introduce supporting results from our experiments and discuss the further applicability of our findings.},
  file            = {Kuwamura2014b.pdf:pdf/Kuwamura2014b.pdf:PDF},
}
Hidenobu Sumioka, Kensuke Koda, Shuichi Nishio, Takashi Minato, Hiroshi Ishiguro, "Revisiting ancient design of human form for communication avatar: Design considerations from chronological development of Dogu", In IEEE International Symposium on Robot and Human Interactive Communication, Gyeongju, Korea, pp. 726-731, August, 2013.
Abstract: Robot avatar systems give the feeling that we share a space with people who are actually at a distant location. Since our cognitive system specializes in recognizing a human, avatars of distant people can make us strongly feel that we share space with them, provided that their appearance has been designed to sufficiently resemble humans. In this paper, we investigate the minimal requirements of robot avatars for distant people to feel their presence. Toward this aim, we give an overview of the chronological development of Dogu, which are human figurines made in ancient Japan. This survey of the Dogu shows that the torso, not the face, was considered the primary element for representing a human. It also suggests that some body parts can be represented in a simple form. Following the development of Dogu, we also use a conversation task to examine what kind of body representation is necessary to feel a distant person's presence. The experimental results show that forms for the torso and head are required to enhance this feeling, while other body parts have less impact. We discuss the connection between our findings and an avatar's facial expression and motion.
BibTeX:
@Inproceedings{Sumioka2013b,
  author          = {Hidenobu Sumioka and Kensuke Koda and Shuichi Nishio and Takashi Minato and Hiroshi Ishiguro},
  title           = {Revisiting ancient design of human form for communication avatar: Design considerations from chronological development of Dogu},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2013},
  pages           = {726-731},
  address         = {Gyeongju, Korea},
  month           = Aug,
  day             = {26-29},
  doi             = {10.1109/ROMAN.2013.6628399},
  abstract        = {Robot avatar systems give the feeling that we share a space with people who are actually at a distant location. Since our cognitive system specializes in recognizing a human, avatars of distant people can make us strongly feel that we share space with them, provided that their appearance has been designed to sufficiently resemble humans. In this paper, we investigate the minimal requirements of robot avatars for distant people to feel their presence. Toward this aim, we give an overview of the chronological development of Dogu, which are human figurines made in ancient Japan. This survey of the Dogu shows that the torso, not the face, was considered the primary element for representing a human. It also suggests that some body parts can be represented in a simple form. Following the development of Dogu, we also use a conversation task to examine what kind of body representation is necessary to feel a distant person's presence. The experimental results show that forms for the torso and head are required to enhance this feeling, while other body parts have less impact. We discuss the connection between our findings and an avatar's facial expression and motion.},
  file            = {Sumioka2013b.pdf:pdf/Sumioka2013b.pdf:PDF},
}
Shuichi Nishio, Koichi Taura, Hidenobu Sumioka, Hiroshi Ishiguro, "Effect of Social Interaction on Body Ownership Transfer to Teleoperated Android", In IEEE International Symposium on Robot and Human Interactive Communication, Gyeongju, Korea, pp. 565-570, August, 2013.
Abstract: Body Ownership Transfer (BOT) is an illusion, occurring when teleoperating android robots, in which we feel external objects as parts of our own body. In past studies, we have been investigating under what conditions this illusion occurs. However, past studies were only conducted with simple operation tasks, such as moving only the robot's hand. Does this illusion occur during more complex tasks such as having a conversation? What kind of conversation setting is required to invoke this illusion? In this paper, we examined how factors in social interaction affect the occurrence of BOT. Participants had conversations using the teleoperated robot under different situations and teleoperation settings. The results revealed that BOT does occur through the act of having a conversation, and that the conversation partner's presence and appropriate responses are necessary for the enhancement of BOT.
BibTeX:
@Inproceedings{Nishio2013,
  author          = {Shuichi Nishio and Koichi Taura and Hidenobu Sumioka and Hiroshi Ishiguro},
  title           = {Effect of Social Interaction on Body Ownership Transfer to Teleoperated Android},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2013},
  pages           = {565-570},
  address         = {Gyeongju, Korea},
  month           = Aug,
  day             = {26-29},
  doi             = {10.1109/ROMAN.2013.6628539},
  abstract        = {Body Ownership Transfer (BOT) is an illusion, occurring when teleoperating android robots, in which we feel external objects as parts of our own body. In past studies, we have been investigating under what conditions this illusion occurs. However, past studies were only conducted with simple operation tasks, such as moving only the robot's hand. Does this illusion occur during more complex tasks such as having a conversation? What kind of conversation setting is required to invoke this illusion? In this paper, we examined how factors in social interaction affect the occurrence of BOT. Participants had conversations using the teleoperated robot under different situations and teleoperation settings. The results revealed that BOT does occur through the act of having a conversation, and that the conversation partner's presence and appropriate responses are necessary for the enhancement of BOT.},
  file            = {Nishio2013.pdf:pdf/Nishio2013.pdf:PDF},
}
Kaiko Kuwamura, Kurima Sakai, Takashi Minato, Shuichi Nishio, Hiroshi Ishiguro, "Hugvie: A medium that fosters love", In IEEE International Symposium on Robot and Human Interactive Communication, Gyeongju, Korea, pp. 70-75, August, 2013.
Abstract: We introduce a communication medium that encourages users to fall in love with their counterparts. Hugvie, the huggable tele-presence medium, enables users to feel like they are hugging their counterparts while chatting. In this paper, we report that when a participant talks to his communication partner during their first encounter while hugging Hugvie, he mistakenly feels as if they are establishing a good relationship and that he is being loved rather than just being liked.
BibTeX:
@Inproceedings{Kuwamura2013,
  author          = {Kaiko Kuwamura and Kurima Sakai and Takashi Minato and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Hugvie: A medium that fosters love},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2013},
  pages           = {70-75},
  address         = {Gyeongju, Korea},
  month           = Aug,
  day             = {26-29},
  doi             = {10.1109/ROMAN.2013.6628533},
  abstract        = {We introduce a communication medium that encourages users to fall in love with their counterparts. Hugvie, the huggable tele-presence medium, enables users to feel like they are hugging their counterparts while chatting. In this paper, we report that when a participant talks to his communication partner during their first encounter while hugging Hugvie, he mistakenly feels as if they are establishing a good relationship and that he is being loved rather than just being liked.},
  file            = {Kuwamura2013.pdf:pdf/Kuwamura2013.pdf:PDF},
}
Junya Nakanishi, Kaiko Kuwamura, Takashi Minato, Shuichi Nishio, Hiroshi Ishiguro, "Evoking Affection for a Communication Partner by a Robotic Communication Medium", In the First International Conference on Human-Agent Interaction, Hokkaido University, Sapporo, Japan, pp. III-1-4, August, 2013.
Abstract: This paper reveals a new effect of robotic communication media that can function as avatars of communication partners. Users' interaction with a medium may alter their feelings toward their partners. The paper hypothesized that talking while hugging a robotic medium increases romantic feelings or attraction toward a partner in robot-mediated tele-communication. Our experiment used Hugvie, a human-shaped medium, for talking in a hugging state. We found that people subconsciously increased their romantic attraction toward opposite-sex partners by hugging Hugvie. This effect is novel because we revealed the effect of the user's own hugging on the user's feelings, rather than the effect of being hugged by a partner.
BibTeX:
@Inproceedings{Nakanishi2013,
  author          = {Junya Nakanishi and Kaiko Kuwamura and Takashi Minato and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Evoking Affection for a Communication Partner by a Robotic Communication Medium},
  booktitle       = {the First International Conference on Human-Agent Interaction},
  year            = {2013},
  pages           = {III-1-4},
  address         = {Hokkaido University, Sapporo, Japan},
  month           = Aug,
  day             = {7-9},
  url             = {http://hai-conference.net/ihai2013/proceedings/html/paper/paper-III-1-4.html},
  abstract        = {This paper reveals a new effect of robotic communication media that can function as avatars of communication partners. Users' interaction with a medium may alter their feelings toward their partners. The paper hypothesized that talking while hugging a robotic medium increases romantic feelings or attraction toward a partner in robot-mediated tele-communication. Our experiment used Hugvie, a human-shaped medium, for talking in a hugging state. We found that people subconsciously increased their romantic attraction toward opposite-sex partners by hugging Hugvie. This effect is novel because we revealed the effect of the user's own hugging on the user's feelings, rather than the effect of being hugged by a partner.},
  file            = {Nakanishi2013.pdf:pdf/Nakanishi2013.pdf:PDF},
}
Rosario Sorbello, Hiroshi Ishiguro, Antonio Chella, Shuichi Nishio, Giovan Battista Presti, Marcello Giardina, "Telenoid mediated ACT Protocol to Increase Acceptance of Disease among Siblings of Autistic Children", In HRI2013 Workshop on Design of Humanlikeness in HRI : from uncanny valley to minimal design, Tokyo, Japan, pp. 26, March, 2013.
Abstract: We introduce a novel research proposal project aimed at building a robotic setup in which the Telenoid[1] is used as a therapist for the siblings of children with autism. Many existing research studies have shown good results relating to the important impact of Acceptance and Commitment Therapy (ACT)[2] applied to siblings of children with autism. The overall behaviors of the siblings may potentially benefit from treatment with a humanoid robot therapist instead of a real one. In particular, in the present study the Telenoid humanoid robot[3] is used as a therapist to achieve a specific therapeutic objective: the acceptance of diversity by the siblings of children with autism. In the proposed architecture, the Telenoid acts[4] in teleoperated mode[5] during the learning phase, while it becomes more and more autonomous during the working phase with patients. A goal of the research is to improve the siblings' tolerance and acceptance towards their brothers. The use of ACT[6] will reinforce the acceptance of diversity and create psychological flexibility along the dimension of diversity. In the present article, we briefly introduce Acceptance and Commitment Therapy (ACT) as a clinical model and its theoretical foundations (Relational Frame Theory). We then explain the six core processes of the Hexaflex model of ACT adapted to Telenoid behaviors acting as a humanoid robotic therapist. Finally, we present an experimental example of how Telenoid could apply the six processes[7] of the Hexaflex model of ACT to the patient during human-humanoid interaction (HHI) in order to realize an applied clinical behavior analysis[8] that increases the siblings' acceptance of their brother's disease.
BibTeX:
@Inproceedings{Sorbello2013,
  author    = {Rosario Sorbello and Hiroshi Ishiguro and Antonio Chella and Shuichi Nishio and Giovan Battista Presti and Marcello Giardina},
  title     = {Telenoid mediated {ACT} Protocol to Increase Acceptance of Disease among Siblings of Autistic Children},
  booktitle = {{HRI}2013 Workshop on Design of Humanlikeness in {HRI} : from uncanny valley to minimal design},
  year      = {2013},
  pages     = {26},
  address   = {Tokyo, Japan},
  month     = Mar,
  day       = {3},
  abstract  = {We introduce a novel research proposal project aimed at building a robotic setup in which the Telenoid[1] is used as a therapist for the siblings of children with autism. Many existing research studies have shown good results relating to the important impact of Acceptance and Commitment Therapy (ACT)[2] applied to siblings of children with autism. The overall behaviors of the siblings may potentially benefit from treatment with a humanoid robot therapist instead of a real one. In particular, in the present study the Telenoid humanoid robot[3] is used as a therapist to achieve a specific therapeutic objective: the acceptance of diversity by the siblings of children with autism. In the proposed architecture, the Telenoid acts[4] in teleoperated mode[5] during the learning phase, while it becomes more and more autonomous during the working phase with patients. A goal of the research is to improve the siblings' tolerance and acceptance towards their brothers. The use of ACT[6] will reinforce the acceptance of diversity and create psychological flexibility along the dimension of diversity. In the present article, we briefly introduce Acceptance and Commitment Therapy (ACT) as a clinical model and its theoretical foundations (Relational Frame Theory). We then explain the six core processes of the Hexaflex model of ACT adapted to Telenoid behaviors acting as a humanoid robotic therapist. Finally, we present an experimental example of how Telenoid could apply the six processes[7] of the Hexaflex model of ACT to the patient during human-humanoid interaction (HHI) in order to realize an applied clinical behavior analysis[8] that increases the siblings' acceptance of their brother's disease.},
  file      = {Sorbello2013.pdf:pdf/Sorbello2013.pdf:PDF},
}
Christian Becker-Asano, Severin Gustorff, Kai Oliver Arras, Kohei Ogawa, Shuichi Nishio, Hiroshi Ishiguro, Bernhard Nebel, "Robot Embodiment, Operator Modality, and Social Interaction in Tele-Existence: A Project Outline", In 8th ACM/IEEE International Conference on Human-Robot Interaction, National Museum of Emerging Science and Innovation (Miraikan), Tokyo, pp. 79-80, March, 2013.
Abstract: This paper outlines our ongoing project, which aims to investigate the effects of robot embodiment and operator modality on an operator's task efficiency and concomitant level of copresence in remote social interaction. After a brief introduction to related work, five research questions are presented. We discuss how these relate to our choice of the two robotic embodiments "DARYL" and "Geminoid F" and the two operator modalities "console interface" and "head-mounted display". Finally, we postulate that the usefulness of one operator modality over the other will depend on the type of situation an operator has to deal with. This hypothesis is currently being investigated empirically using DARYL at Freiburg University.
BibTeX:
@Inproceedings{Becker-Asano2013,
  author          = {Christian Becker-Asano and Severin Gustorff and Kai Oliver Arras and Kohei Ogawa and Shuichi Nishio and Hiroshi Ishiguro and Bernhard Nebel},
  title           = {Robot Embodiment, Operator Modality, and Social Interaction in Tele-Existence: A Project Outline},
  booktitle       = {8th ACM/IEEE International Conference on Human-Robot Interaction},
  year            = {2013},
  pages           = {79-80},
  address         = {National Museum of Emerging Science and Innovation (Miraikan), Tokyo},
  month           = Mar,
  day             = {3-6},
  doi             = {10.1109/HRI.2013.6483510},
  url             = {http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6483510},
  abstract        = {This paper outlines our ongoing project, which aims to investigate the effects of robot embodiment and operator modality on an operator's task efficiency and concomitant level of copresence in remote social interaction. After a brief introduction to related work, five research questions are presented. We discuss how these relate to our choice of the two robotic embodiments "DARYL" and "Geminoid F" and the two operator modalities "console interface" and "head-mounted display". Finally, we postulate that the usefulness of one operator modality over the other will depend on the type of situation an operator has to deal with. This hypothesis is currently being investigated empirically using DARYL at Freiburg University.},
  file            = {Becker-Asano2013.pdf:pdf/Becker-Asano2013.pdf:PDF},
  keywords        = {Tele-existence; Copresence; Tele-robotic; Social robotics},
}
Shuichi Nishio, Koichi Taura, Hiroshi Ishiguro, "Regulating Emotion by Facial Feedback from Teleoperated Android Robot", In International Conference on Social Robotics, Chengdu, China, pp. 388-397, October, 2012.
Abstract: In this paper, we experimentally examined whether facial expression changes in teleoperated androids can affect and regulate operators' emotions, based on the facial feedback theory of emotion and the body ownership transfer phenomenon toward teleoperated android robots. We created a conversational situation where participants felt anger and, during the conversation, the android's facial expressions were automatically changed. We examined whether such changes affected the operators' emotions. As a result, we found that when operators can control the robot well, their emotional states are affected by the android's facial expression changes.
BibTeX:
@Inproceedings{Nishio2012b,
  author    = {Shuichi Nishio and Koichi Taura and Hiroshi Ishiguro},
  title     = {Regulating Emotion by Facial Feedback from Teleoperated Android Robot},
  booktitle = {International Conference on Social Robotics},
  year      = {2012},
  pages     = {388-397},
  address   = {Chengdu, China},
  month     = Oct,
  day       = {29-31},
  doi       = {10.1007/978-3-642-34103-8_39},
  url       = {http://link.springer.com/chapter/10.1007/978-3-642-34103-8_39},
  abstract  = {In this paper, we experimentally examined whether facial expression changes in teleoperated androids can affect and regulate operators' emotions, based on the facial feedback theory of emotion and the body ownership transfer phenomenon toward teleoperated android robots. We created a conversational situation where participants felt anger and, during the conversation, the android's facial expressions were automatically changed. We examined whether such changes affected the operators' emotions. As a result, we found that when operators can control the robot well, their emotional states are affected by the android's facial expression changes.},
  file      = {Nishio2012b.pdf:pdf/Nishio2012b.pdf:PDF},
}
Ryuji Yamazaki, Shuichi Nishio, Hiroshi Ishiguro, Takashi Minato, Marco Nørskov, Nobu Ishiguro, Masaru Nishikawa, Tsutomu Fujinami, "Social Inclusion of Senior Citizens by a Teleoperated Android : Toward Inter-generational TeleCommunity Creation", In 2012 IEEE International Workshop on Assistance and Service Robotics in a Human Environment, International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, pp. 53-58, October, 2012.
Abstract: As populations continue to age, there is a growing need for assistive technologies that help senior citizens maintain their autonomy and enjoy their lives. We explore the potential of teleoperated androids, which are embodied telecommunication media with humanlike appearances. Our exploratory study focused on the social aspects of Telenoid, a teleoperated android designed as a minimalistic human, which might facilitate communication between senior citizens and its operators. We conducted cross-cultural field trials in Japan and Denmark by introducing Telenoid into care facilities and the private homes of seniors to observe how they responded to it. In Japan, we set up a teleoperation system in an elementary school and investigated how it shaped communication through the internet between the elderly in a care facility and the children who acted as teleoperators. In both countries, the elderly commonly assumed positive attitudes toward Telenoid and imaginatively developed various dialogue strategies. Telenoid lowered the barriers for the children as operators for communicating with demented seniors so that they became more relaxed to participate in and positively continue conversations using Telenoid. Our results suggest that its minimalistic human design is inclusive for seniors with or without dementia and facilitates inter-generational communication, which may be expanded to a social network of trans-national supportive relationships among all generations.
BibTeX:
@Inproceedings{Yamazaki2012d,
  author    = {Ryuji Yamazaki and Shuichi Nishio and Hiroshi Ishiguro and Takashi Minato and Marco N{\o}rskov and Nobu Ishiguro and Masaru Nishikawa and Tsutomu Fujinami},
  title     = {Social Inclusion of Senior Citizens by a Teleoperated Android : Toward Inter-generational TeleCommunity Creation},
  booktitle = {2012 {IEEE} International Workshop on Assistance and Service Robotics in a Human Environment, International Conference on Intelligent Robots and Systems},
  year      = {2012},
  pages     = {53-58},
  address   = {Vilamoura, Algarve, Portugal},
  month     = Oct,
  day       = {7-12},
  abstract  = {As populations continue to age, there is a growing need for assistive technologies that help senior citizens maintain their autonomy and enjoy their lives. We explore the potential of teleoperated androids, which are embodied telecommunication media with humanlike appearances. Our exploratory study focused on the social aspects of Telenoid, a teleoperated android designed as a minimalistic human, which might facilitate communication between senior citizens and its operators. We conducted cross-cultural field trials in Japan and Denmark by introducing Telenoid into care facilities and the private homes of seniors to observe how they responded to it. In Japan, we set up a teleoperation system in an elementary school and investigated how it shaped communication through the internet between the elderly in a care facility and the children who acted as teleoperators. In both countries, the elderly commonly assumed positive attitudes toward Telenoid and imaginatively developed various dialogue strategies. Telenoid lowered the barriers for the children as operators for communicating with demented seniors so that they became more relaxed to participate in and positively continue conversations using Telenoid. Our results suggest that its minimalistic human design is inclusive for seniors with or without dementia and facilitates inter-generational communication, which may be expanded to a social network of trans-national supportive relationships among all generations.},
  file      = {Yamazaki2012d.pdf:Yamazaki2012d.pdf:PDF},
}
Ryuji Yamazaki, Shuichi Nishio, Hiroshi Ishiguro, Marco Nørskov, Nobu Ishiguro, Giuseppe Balistreri, "Social Acceptance of a Teleoperated Android: Field Study on Elderly's Engagement with an Embodied Communication Medium in Denmark", In International Conference on Social Robotics, Chengdu, China, pp. 428-437, October, 2012.
Abstract: We explored the potential of teleoperated android robots, which are embodied telecommunication media with humanlike appearances, and how they affect people in the real world when they are employed to express a telepresence and a sense of 'being there'. In Denmark, our exploratory study focused on the social aspects of Telenoid, a teleoperated android, which might facilitate communication between senior citizens and Telenoid's operator. After applying it to the elderly in their homes, we found that the elderly assumed positive attitudes toward Telenoid, and their positivity and strong attachment to its huggable minimalistic human design were cross-culturally shared in Denmark and Japan. Contrary to the negative reactions by non-users in media reports, our result suggests that teleoperated androids can be accepted by the elderly as a kind of universal design medium for social inclusion.
BibTeX:
@Inproceedings{Yamazaki2012c,
  author          = {Ryuji Yamazaki and Shuichi Nishio and Hiroshi Ishiguro and Marco N{\o}rskov and Nobu Ishiguro and Giuseppe Balistreri},
  title           = {Social Acceptance of a Teleoperated Android: Field Study on Elderly's Engagement with an Embodied Communication Medium in Denmark},
  booktitle       = {International Conference on Social Robotics},
  year            = {2012},
  pages           = {428-437},
  address         = {Chengdu, China},
  month           = Oct,
  day             = {29-31},
  doi             = {10.1007/978-3-642-34103-8_43},
  url             = {http://link.springer.com/chapter/10.1007/978-3-642-34103-8_43},
  abstract        = {We explored the potential of teleoperated android robots, which are embodied telecommunication media with humanlike appearances, and how they affect people in the real world when they are employed to express telepresence and a sense of ‘being there’. In Denmark, our exploratory study focused on the social aspects of Telenoid, a teleoperated android, which might facilitate communication between senior citizens and Telenoid's operator. After introducing it to elderly people in their homes, we found that they assumed positive attitudes toward Telenoid, and their positivity and strong attachment to its huggable, minimalistic human design were cross-culturally shared in Denmark and Japan. Contrary to the negative reactions of non-users in media reports, our results suggest that teleoperated androids can be accepted by the elderly as a kind of universal design medium for social inclusion.},
  file            = {Yamazaki2012c.pdf:pdf/Yamazaki2012c.pdf:PDF},
  keywords        = {android;teleoperation;minimal design;communication;embodiment;inclusion;acceptability;elderly care},
}
Hiroshi Ishiguro, Shuichi Nishio, Antonio Chella, Rosario Sorbello, Giuseppe Balistreri, Marcello Giardina, Carmelo Cali, "Investigating Perceptual Features for a Natural Human-Humanoid Robot Interaction inside a Spontaneous Setting", In Biologically Inspired Cognitive Architectures 2012, Palermo, Italy, October, 2012.
Abstract: The present paper aims to validate our research on human-humanoid interaction (HHI) using the minimalistic humanoid robot Telenoid. We conducted human-robot interaction tests with 100 young people who had no prior interaction experience with this robot. The main goal is the analysis of two social dimensions (perception and believability) that are useful for increasing the naturalness of behavior between users and Telenoid. We administered our custom questionnaire to these subjects after a well-defined experimental setting (ordinary and goal-guided tasks). The analysis of the questionnaires showed that perceptual and believability conditions are necessary social dimensions for successful and efficient HHI in everyday activities.
BibTeX:
@Inproceedings{Ishiguro2012a,
  author    = {Hiroshi Ishiguro and Shuichi Nishio and Antonio Chella and Rosario Sorbello and Giuseppe Balistreri and Marcello Giardina and Carmelo Cali},
  title     = {Investigating Perceptual Features for a Natural Human-Humanoid Robot Interaction inside a Spontaneous Setting},
  booktitle = {Biologically Inspired Cognitive Architectures 2012},
  year      = {2012},
  address   = {Palermo, Italy},
  month     = Oct,
  abstract  = {The present paper aims to validate our research on human-humanoid interaction (HHI) using the minimalistic humanoid robot Telenoid. We conducted human-robot interaction tests with 100 young people who had no prior interaction experience with this robot. The main goal is the analysis of two social dimensions (perception and believability) that are useful for increasing the naturalness of behavior between users and Telenoid. We administered our custom questionnaire to these subjects after a well-defined experimental setting (ordinary and goal-guided tasks). The analysis of the questionnaires showed that perceptual and believability conditions are necessary social dimensions for successful and efficient HHI in everyday activities.},
}
Martin Cooney, Shuichi Nishio, Hiroshi Ishiguro, "Recognizing Affection for a Touch-based Interaction with a Humanoid Robot", In IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, pp. 1420-1427, October, 2012.
Abstract: In order to facilitate integration into domestic and public environments, companion robots can seek to communicate in a familiar, socially intelligent manner, recognizing typical behaviors which people direct toward them. One important type of behavior to recognize is the displaying and seeking of affection, which is fundamentally associated with the modality of touch. This paper identifies how people communicate affection through touching a humanoid robot and reports on the development of a recognition system exploring the modalities of touch and vision. Evaluation results indicate that the proposed system can recognize people's affectionate behavior in the designated context.
BibTeX:
@Inproceedings{Cooney2012a,
  author          = {Martin Cooney and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Recognizing Affection for a Touch-based Interaction with a Humanoid Robot},
  booktitle       = {{IEEE/RSJ} International Conference on Intelligent Robots and Systems},
  year            = {2012},
  pages           = {1420--1427},
  address         = {Vilamoura, Algarve, Portugal},
  month           = Oct,
  day             = {7-12},
  abstract        = {In order to facilitate integration into domestic and public environments, companion robots can seek to communicate in a familiar, socially intelligent manner, recognizing typical behaviors which people direct toward them. One important type of behavior to recognize is the displaying and seeking of affection, which is fundamentally associated with the modality of touch. This paper identifies how people communicate affection through touching a humanoid robot and reports on the development of a recognition system exploring the modalities of touch and vision. Evaluation results indicate that the proposed system can recognize people's affectionate behavior in the designated context.},
  file            = {Cooney2012a.pdf:Cooney2012a.pdf:PDF},
}
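The abstract above does not spell out the recognition method, but the general idea of labeling a touch episode from simple contact features can be sketched as follows. This is a minimal illustration only, not the authors' system: the sensor fields, feature choices, and thresholds are all hypothetical.

# Illustrative sketch (not the authors' system): labeling a touch episode
# from simple contact features. Gentle, sustained, large-area contact
# (e.g. stroking or hugging) is taken as affectionate.
from dataclasses import dataclass
from statistics import mean

@dataclass
class TouchSample:
    pressure: float    # normalized 0..1 (hypothetical sensor reading)
    duration_s: float  # contact duration of this sample
    area: float        # normalized contact area 0..1

def classify_touch(samples):
    p = mean(s.pressure for s in samples)
    d = sum(s.duration_s for s in samples)
    a = mean(s.area for s in samples)
    if d > 1.0 and p < 0.5 and a > 0.3:
        return "affectionate"
    if p >= 0.8:
        return "rough"
    return "neutral"

print(classify_touch([TouchSample(0.3, 0.8, 0.5), TouchSample(0.35, 0.7, 0.6)]))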
Shuichi Nishio, Tetsuya Watanabe, Kohei Ogawa, Hiroshi Ishiguro, "Body Ownership Transfer to Teleoperated Android Robot", In International Conference on Social Robotics, Chengdu, China, pp. 398-407, October, 2012.
Abstract: Teleoperators of android robots occasionally feel as if the robotic bodies are extensions of their own. When others touch the teleoperated android, even without tactile feedback, some operators feel as if they themselves have been touched. In the past, a similar phenomenon, named the “Rubber Hand Illusion”, has been studied for its reflection of a three-way interaction among vision, touch and proprioception. In this study, we examined whether a similar interaction occurs when a tactile sensation is replaced with android robot teleoperation; that is, whether an interaction among vision, motion and proprioception occurs. The results showed that when the operator's and the android's motions are synchronized, operators feel as if their sense of body ownership is transferred to the android robot.
BibTeX:
@Inproceedings{Nishio2012a,
  author    = {Shuichi Nishio and Tetsuya Watanabe and Kohei Ogawa and Hiroshi Ishiguro},
  title     = {Body Ownership Transfer to Teleoperated Android Robot},
  booktitle = {International Conference on Social Robotics},
  year      = {2012},
  pages     = {398-407},
  address   = {Chengdu, China},
  month     = Oct,
  day       = {29-31},
  doi       = {10.1007/978-3-642-34103-8_40},
  url       = {http://link.springer.com/chapter/10.1007/978-3-642-34103-8_40},
  abstract  = {Teleoperators of android robots occasionally feel as if the robotic bodies are extensions of their own. When others touch the teleoperated android, even without tactile feedback, some operators feel as if they themselves have been touched. In the past, a similar phenomenon, named the “Rubber Hand Illusion”, has been studied for its reflection of a three-way interaction among vision, touch and proprioception. In this study, we examined whether a similar interaction occurs when a tactile sensation is replaced with android robot teleoperation; that is, whether an interaction among vision, motion and proprioception occurs. The results showed that when the operator's and the android's motions are synchronized, operators feel as if their sense of body ownership is transferred to the android robot.},
  file      = {Nishio2012a.pdf:pdf/Nishio2012a.pdf:PDF},
}
Carlos T. Ishi, Chaoran Liu, Hiroshi Ishiguro, Norihiro Hagita, "Evaluation of formant-based lip motion generation in tele-operated humanoid robots", In IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, pp. 2377-2382, October, 2012.
Abstract: Generating natural motion in robots is important for improving human-robot interaction. We developed a tele-operation system where the lip motion of a remote humanoid robot is automatically controlled from the operator's voice. In the present work, we introduce an improved version of our proposed speech-driven lip motion generation method, where lip height and width degrees are estimated based on vowel formant information. The method requires the calibration of only one parameter for speaker normalization. Lip height control is evaluated in two types of humanoid robots (Telenoid-R2 and Geminoid-F). Subjective evaluation indicated that the proposed audio-based method can generate lip motion with naturalness superior to vision-based and motion capture-based approaches. Partial lip width control was shown to improve lip motion naturalness in Geminoid-F, which also has an actuator for stretching the lip corners. Issues regarding online real-time processing are also discussed.
BibTeX:
@Inproceedings{Ishi2012,
  author    = {Carlos T. Ishi and Chaoran Liu and Hiroshi Ishiguro and Norihiro Hagita},
  title     = {Evaluation of formant-based lip motion generation in tele-operated humanoid robots},
  booktitle = {{IEEE/RSJ} International Conference on Intelligent Robots and Systems},
  year      = {2012},
  pages     = {2377--2382},
  address   = {Vilamoura, Algarve, Portugal},
  month     = Oct,
  day       = {7-12},
  abstract  = {Generating natural motion in robots is important for improving human-robot interaction. We developed a tele-operation system where the lip motion of a remote humanoid robot is automatically controlled from the operator's voice. In the present work, we introduce an improved version of our proposed speech-driven lip motion generation method, where lip height and width degrees are estimated based on vowel formant information. The method requires the calibration of only one parameter for speaker normalization. Lip height control is evaluated in two types of humanoid robots (Telenoid-R2 and Geminoid-F). Subjective evaluation indicated that the proposed audio-based method can generate lip motion with naturalness superior to vision-based and motion capture-based approaches. Partial lip width control was shown to improve lip motion naturalness in Geminoid-F, which also has an actuator for stretching the lip corners. Issues regarding online real-time processing are also discussed.},
  file      = {Ishi2012.pdf:pdf/Ishi2012.pdf:PDF},
}
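As a rough illustration of the formant-based idea described above, the following sketch maps vowel formants to lip opening and spreading. Only the existence of a single per-speaker calibration parameter is taken from the abstract; the specific mapping and constants here are assumptions for illustration, not the authors' exact method.

# Illustrative sketch of formant-driven lip shape control. Assumes F1
# correlates with mouth openness (lip height) and F2 with lip spreading
# (width); f1_ref is the single per-speaker calibration parameter.
def lip_shape(f1_hz: float, f2_hz: float, f1_ref: float = 700.0):
    # Normalize F1 by the speaker's reference to get openness in [0, 1].
    height = max(0.0, min(1.0, f1_hz / (2.0 * f1_ref)))
    # Map a typical F2 range (~800-2500 Hz) to lip width in [0, 1].
    width = max(0.0, min(1.0, (f2_hz - 800.0) / (2500.0 - 800.0)))
    return height, width

# /a/ has high F1 (open mouth); /i/ has low F1 and high F2 (spread lips).
print(lip_shape(850.0, 1200.0))  # wide open, mid width
print(lip_shape(300.0, 2300.0))  # nearly closed, spread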
Kohei Ogawa, Koichi Taura, Shuichi Nishio, Hiroshi Ishiguro, "Effect of perspective change in body ownership transfer to teleoperated android robot", In IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 1072-1077, September, 2012.
Abstract: We previously investigated body ownership transfer to a teleoperated android body caused by motion synchronization between the robot and its operator. Although visual feedback is the only information provided from the robot, due to body ownership transfer, some operators feel as if they were touched when the robot's body was touched. This illusion can help operators transfer their presence to the robotic body during teleoperation. By enhancing this phenomenon, we can improve our communication interface and the quality of the interaction between operator and interlocutor. In this paper, we examined how a change in the operator's perspective affects body ownership transfer during teleoperation. Based on past studies of the rubber hand illusion, we hypothesized that the perspective change would suppress body ownership transfer. Our results, however, showed that the participants felt body ownership transfer in every perspective condition. This suggests that the generation process of the illusion differs between teleoperated androids and the rubber hand illusion.
BibTeX:
@Inproceedings{Ogawa2012c,
  author          = {Kohei Ogawa and Koichi Taura and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Effect of perspective change in body ownership transfer to teleoperated android robot},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2012},
  pages           = {1072--1077},
  address         = {Paris, France},
  month           = Sep,
  day             = {9-13},
  doi             = {10.1109/ROMAN.2012.6343891},
  abstract        = {We previously investigated body ownership transfer to a teleoperated android body caused by motion synchronization between the robot and its operator. Although visual feedback is the only information provided from the robot, due to body ownership transfer, some operators feel as if they were touched when the robot's body was touched. This illusion can help operators transfer their presence to the robotic body during teleoperation. By enhancing this phenomenon, we can improve our communication interface and the quality of the interaction between operator and interlocutor. In this paper, we examined how a change in the operator's perspective affects body ownership transfer during teleoperation. Based on past studies of the rubber hand illusion, we hypothesized that the perspective change would suppress body ownership transfer. Our results, however, showed that the participants felt body ownership transfer in every perspective condition. This suggests that the generation process of the illusion differs between teleoperated androids and the rubber hand illusion.},
  file            = {Ogawa2012c.pdf:Ogawa2012c.pdf:PDF},
}
Carlos T. Ishi, Chaoran Liu, Hiroshi Ishiguro, Norihiro Hagita, "Evaluation of a formant-based speech-driven lip motion generation", In 13th Annual Conference of the International Speech Communication Association, Portland, Oregon, pp. P1a.04, September, 2012.
Abstract: The background of the present work is the development of a tele-presence robot system where the lip motion of a remote humanoid robot is automatically controlled from the operator's voice. In the present paper, we introduce an improved version of our proposed speech-driven lip motion generation method, where lip height and width degrees are estimated based on vowel formant information. The method requires the calibration of only one parameter for speaker normalization, so that no training of dedicated models is necessary. Lip height control is evaluated in a female android robot and in animated lips. Subjective evaluation indicated that naturalness of lip motion generated in the robot is improved by the inclusion of a partial lip width control (with stretching of the lip corners). Highest naturalness scores were achieved for the animated lips, showing the effectiveness of the proposed method.
BibTeX:
@Inproceedings{Ishi2012b,
  author          = {Carlos T. Ishi and Chaoran Liu and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {Evaluation of a formant-based speech-driven lip motion generation},
  booktitle       = {13th Annual Conference of the International Speech Communication Association},
  year            = {2012},
  pages           = {P1a.04},
  address         = {Portland, Oregon},
  month           = Sep,
  day             = {9-13},
  abstract        = {The background of the present work is the development of a tele-presence robot system where the lip motion of a remote humanoid robot is automatically controlled from the operator's voice. In the present paper, we introduce an improved version of our proposed speech-driven lip motion generation method, where lip height and width degrees are estimated based on vowel formant information. The method requires the calibration of only one parameter for speaker normalization, so that no training of dedicated models is necessary. Lip height control is evaluated in a female android robot and in animated lips. Subjective evaluation indicated that naturalness of lip motion generated in the robot is improved by the inclusion of a partial lip width control (with stretching of the lip corners). Highest naturalness scores were achieved for the animated lips, showing the effectiveness of the proposed method.},
  file            = {Ishi2012b.pdf:pdf/Ishi2012b.pdf:PDF},
  keywords        = {lip motion, formant, tele-operation, humanoid robot},
}
Ilona Straub, Shuichi Nishio, Hiroshi Ishiguro, "From an Object to a Subject -- Transitions of an Android Robot into a Social Being", In IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 821-826, September, 2012.
Abstract: What are the characteristics that make something appear as a social entity? Is sociality limited to human beings? The following article deals with the borders of sociality and the characteristics of animating a physical object (here: an android robot) into a living being. The transition is attributed during interactive encounters. We introduce implications of an ethnomethodological analysis that shows characteristics of transitions in social attribution toward an android robot, which is treated and perceived as gradually shifting from an object to a social entity. These characteristics should a) fill the gap in current anthropological and sociological research dealing with the limits and characteristics of social entities, and b) contribute to the discussion of the specifics of human-android interaction compared to human-human interaction.
BibTeX:
@Inproceedings{Straub2012,
  author          = {Ilona Straub and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {From an Object to a Subject -- Transitions of an Android Robot into a Social Being},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2012},
  pages           = {821--826},
  address         = {Paris, France},
  month           = Sep,
  day             = {9-13},
  doi             = {10.1109/ROMAN.2012.6343853},
  abstract        = {What are the characteristics that make something appear as a social entity? Is sociality limited to human beings? The following article deals with the borders of sociality and the characteristics of animating a physical object (here: an android robot) into a living being. The transition is attributed during interactive encounters. We introduce implications of an ethnomethodological analysis that shows characteristics of transitions in social attribution toward an android robot, which is treated and perceived as gradually shifting from an object to a social entity. These characteristics should a) fill the gap in current anthropological and sociological research dealing with the limits and characteristics of social entities, and b) contribute to the discussion of the specifics of human-android interaction compared to human-human interaction.},
  file            = {Straub2012.pdf:Strabu2012.pdf:PDF},
}
Kohei Ogawa, Koichi Taura, Hiroshi Ishiguro, "Possibilities of Androids as Poetry-reciting Agent", Poster presentation at IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 565-570, September, 2012.
Abstract: In recent years, research has increased on very human-like androids, generally investigating the following: (1) how people treat such very human-like androids and (2) whether it is possible to replace such existing communication media as telephones or TV conference systems with androids as a communication medium. We found that androids have advantages over humans in specific contexts. For example, in a collaborative theatrical project between artists and androids, audiences were impressed by the androids that read poetry. We therefore experimentally compared androids and humans in a poetry-reciting context by conducting an experiment to illustrate the influence of an android that recited poetry. Participants listened to poetry that was read by three poetry-reciting agents: the android, the human model on which the android was based, and a box. The experimental results showed that the enjoyment of the poetry gained the highest score under the android condition, indicating that the android has an advantage for communicating the meaning of poetry.
BibTeX:
@Inproceedings{Ogawa2012d,
  author          = {Kohei Ogawa and Koichi Taura and Hiroshi Ishiguro},
  title           = {Possibilities of Androids as Poetry-reciting Agent},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2012},
  pages           = {565--570},
  address         = {Paris, France},
  month           = Sep,
  day             = {9-13},
  doi             = {10.1109/ROMAN.2012.6343811},
  abstract        = {In recent years, research has increased on very human-like androids, generally investigating the following: (1) how people treat such very human-like androids and (2) whether it is possible to replace such existing communication media as telephones or TV conference systems with androids as a communication medium. We found that androids have advantages over humans in specific contexts. For example, in a collaborative theatrical project between artists and androids, audiences were impressed by the androids that read poetry. We therefore experimentally compared androids and humans in a poetry-reciting context by conducting an experiment to illustrate the influence of an android that recited poetry. Participants listened to poetry that was read by three poetry-reciting agents: the android, the human model on which the android was based, and a box. The experimental results showed that the enjoyment of the poetry gained the highest score under the android condition, indicating that the android has an advantage for communicating the meaning of poetry.},
  file            = {Ogawa2012d.pdf:Ogawa2012d.pdf:PDF},
  keywords        = {Robot; Android; Art; Geminoid; Poetry},
}
Takashi Minato, Hidenobu Sumioka, Shuichi Nishio, Hiroshi Ishiguro, "Studying the Influence of Handheld Robotic Media on Social Communications", In the RO-MAN 2012 workshop on social robotic telepresence, Paris, France, pp. 15-16, September, 2012.
Abstract: This paper describes research issues in social robotic telepresence using “Elfoid”. Elfoid is a portable teleoperated humanoid designed to transfer individuals' presence to remote places at any time, anywhere, providing a new communication style in which individuals talk with persons in remote locations in such a way that they feel each other's presence. However, it is not known how people will adapt to this new communication style or how social communication will change with Elfoid. Investigating the influence of Elfoid on social communication is very interesting from the viewpoint of social robotic telepresence. This paper introduces Elfoid and positions our studies within social robotic telepresence.
BibTeX:
@Inproceedings{Minato2012c,
  author    = {Takashi Minato and Hidenobu Sumioka and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Studying the Influence of Handheld Robotic Media on Social Communications},
  booktitle = {the {RO-MAN} 2012 workshop on social robotic telepresence},
  year      = {2012},
  pages     = {15--16},
  address   = {Paris, France},
  month     = Sep,
  day       = {9-13},
  abstract  = {This paper describes research issues in social robotic telepresence using “Elfoid”. Elfoid is a portable teleoperated humanoid designed to transfer individuals' presence to remote places at any time, anywhere, providing a new communication style in which individuals talk with persons in remote locations in such a way that they feel each other's presence. However, it is not known how people will adapt to this new communication style or how social communication will change with Elfoid. Investigating the influence of Elfoid on social communication is very interesting from the viewpoint of social robotic telepresence. This paper introduces Elfoid and positions our studies within social robotic telepresence.},
  file      = {Minato2012c.pdf:Minato2012c.pdf:PDF},
}
Ryuji Yamazaki, Shuichi Nishio, Kohei Ogawa, Hiroshi Ishiguro, "Teleoperated Android as an Embodied Communication Medium: A Case Study with Demented Elderlies in a Care Facility", In IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 1066-1071, September, 2012.
Abstract: Teleoperated androids, which are robots with humanlike appearances, are being produced as new media of human relationships. We explored the potential of humanoid robots and how they affect people in the real world when they are employed to express a telecommunication presence and a sense of ‘being there’. We introduced Telenoid, a teleoperated android, to a residential care facility to see how the elderly with dementia respond to it. Our exploratory study focused on the social aspects that might facilitate communication between the elderly and Telenoid's operator. Telenoid elicited positive images and interactive reactions from the elderly with mild dementia, and even from those with severe cognitive impairment. They showed strong attachment to its child-like, huggable design and became willing to converse with it. Our results suggest that an affectionate bond may be formed between the elderly and the android, providing the operator with easy communication to elicit responses from senior citizens.
BibTeX:
@Inproceedings{Yamazaki2012b,
  author    = {Ryuji Yamazaki and Shuichi Nishio and Kohei Ogawa and Hiroshi Ishiguro},
  title     = {Teleoperated Android as an Embodied Communication Medium: A Case Study with Demented Elderlies in a Care Facility},
  booktitle = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year      = {2012},
  pages     = {1066--1071},
  address   = {Paris, France},
  month     = Sep,
  day       = {9-13},
  abstract  = {Teleoperated androids, which are robots with humanlike appearances, are being produced as new media of human relationships. We explored the potential of humanoid robots and how they affect people in the real world when they are employed to express a telecommunication presence and a sense of ‘being there’. We introduced Telenoid, a teleoperated android, to a residential care facility to see how the elderly with dementia respond to it. Our exploratory study focused on the social aspects that might facilitate communication between the elderly and Telenoid's operator. Telenoid elicited positive images and interactive reactions from the elderly with mild dementia, and even from those with severe cognitive impairment. They showed strong attachment to its child-like, huggable design and became willing to converse with it. Our results suggest that an affectionate bond may be formed between the elderly and the android, providing the operator with easy communication to elicit responses from senior citizens.},
  file      = {Yamazaki2012b.pdf:Yamazaki2012b.pdf:PDF},
}
Hidenobu Sumioka, Shuichi Nishio, Hiroshi Ishiguro, "Teleoperated android for mediated communication : body ownership, personality distortion, and minimal human design", In the RO-MAN 2012 workshop on social robotic telepresence, Paris, France, pp. 32-39, September, 2012.
Abstract: In this paper we discuss the impact of humanlike appearance on telecommunication, giving an overview of studies with teleoperated androids. We show that, due to humanlike appearance, teleoperated androids do not only affect interlocutors communicating with them but also teleoperators controlling them in another location. They enhance teleoperator's feeling of telepresence by inducing a sense of ownership over their body parts. It is also pointed out that a mismatch between an android and a teleoperator in appearance distorts the teleoperator's personality to be conveyed to an interlocutor. To overcome this problem, the concept of minimal human likeness design is introduced. We demonstrate that a new teleoperated android developed with the concept reduces the distortion in telecommunication. Finally, some research issues are discussed on a sense of ownership over telerobot's body, minimal human likeness design, and interface design.
BibTeX:
@Inproceedings{Sumioka2012c,
  author          = {Hidenobu Sumioka and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Teleoperated android for mediated communication: body ownership, personality distortion, and minimal human design},
  booktitle       = {the {RO-MAN} 2012 workshop on social robotic telepresence},
  year            = {2012},
  pages           = {32--39},
  address         = {Paris, France},
  month           = Sep,
  day             = {9-13},
  abstract        = {In this paper we discuss the impact of humanlike appearance on telecommunication, giving an overview of studies with teleoperated androids. We show that, due to their humanlike appearance, teleoperated androids affect not only the interlocutors communicating with them but also the teleoperators controlling them from another location. They enhance the teleoperator's feeling of telepresence by inducing a sense of ownership over their body parts. It is also pointed out that a mismatch in appearance between an android and a teleoperator distorts the teleoperator's personality as conveyed to an interlocutor. To overcome this problem, the concept of minimal human likeness design is introduced. We demonstrate that a new teleoperated android developed with this concept reduces the distortion in telecommunication. Finally, some research issues are discussed concerning the sense of ownership over a telerobot's body, minimal human likeness design, and interface design.},
  file            = {Sumioka2012c.pdf:Sumioka2012c.pdf:PDF},
}
Shuichi Nishio, Kohei Ogawa, Yasuhiro Kanakogi, Shoji Itakura, Hiroshi Ishiguro, "Do robot appearance and speech affect people's attitude? Evaluation through the Ultimatum Game", In IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 809-814, September, 2012.
Abstract: In this study, we examine the factors by which robots are recognized as social beings. Participants joined sessions of the Ultimatum Game, a procedure commonly used for examining attitudes toward others in the fields of economics and social psychology. Several agents differing in appearance were tested with speech stimuli that were expected to induce a mentalizing effect toward the agents. As a result, we found that while appearance itself did not produce a significant difference in attitudes, the mentalizing stimuli affected attitudes in different ways depending on the robots' appearances. These results showed that such elements as simple conversation with the agents and their appearance are important factors for robots to be treated as more humanlike and as social beings.
BibTeX:
@Inproceedings{Nishio2012,
  author          = {Shuichi Nishio and Kohei Ogawa and Yasuhiro Kanakogi and Shoji Itakura and Hiroshi Ishiguro},
  title           = {Do robot appearance and speech affect people's attitude? Evaluation through the {U}ltimatum {G}ame},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2012},
  pages           = {809--814},
  address         = {Paris, France},
  month           = Sep,
  day             = {9-13},
  doi             = {10.1109/ROMAN.2012.6343851},
  abstract        = {In this study, we examine the factors by which robots are recognized as social beings. Participants joined sessions of the Ultimatum Game, a procedure commonly used for examining attitudes toward others in the fields of economics and social psychology. Several agents differing in appearance were tested with speech stimuli that were expected to induce a mentalizing effect toward the agents. As a result, we found that while appearance itself did not produce a significant difference in attitudes, the mentalizing stimuli affected attitudes in different ways depending on the robots' appearances. These results showed that such elements as simple conversation with the agents and their appearance are important factors for robots to be treated as more humanlike and as social beings.},
  file            = {Nishio2012.pdf:Nishio2012.pdf:PDF},
}
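The Ultimatum Game procedure used in this study follows the standard protocol from experimental economics: a proposer offers a split of an endowment, and a responder either accepts (both keep the proposed shares) or rejects (both get nothing). A minimal sketch of one trial, with a hypothetical endowment and responder policy:

# Minimal Ultimatum Game trial. The endowment, offers, and the responder
# policy below are hypothetical, for illustration only.
def ultimatum_trial(endowment: int, offer: int, accept) -> tuple:
    """Proposer offers `offer` out of `endowment`; `accept` is the
    responder's decision function. Rejection leaves both with nothing."""
    if accept(offer, endowment):
        return endowment - offer, offer  # (proposer payoff, responder payoff)
    return 0, 0

# A responder who rejects offers below 30% of the endowment.
fair_minded = lambda offer, endowment: offer >= 0.3 * endowment

print(ultimatum_trial(100, 50, fair_minded))  # (50, 50): accepted
print(ultimatum_trial(100, 10, fair_minded))  # (0, 0): rejected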
Martin Cooney, Francesco Zanlungo, Shuichi Nishio, Hiroshi Ishiguro, "Designing a Flying Humanoid Robot (FHR): Effects of Flight on Interactive Communication", In IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 364-371, September, 2012.
Abstract: This research constitutes an initial investigation into key issues which arise in designing a flying humanoid robot (FHR), with a focus on human-robot interaction (HRI). The humanoid form offers an interface for natural communication; flight offers excellent mobility. Combining both will yield companion robots capable of approaching, accompanying, and communicating naturally with humans in difficult environments. Problematic is how such a robot should best fly around humans, and what effect the robot's flight will have on a person in terms of paralinguistic (non-verbal) cues. To answer these questions, we propose an extension to existing proxemics theory (“z-proxemics”) and predict how typical humanoid flight motions will be perceived. Data obtained from participants watching animated sequences are analyzed to check our predictions. The paper also reports on the building of a flying humanoid robot, which we will use in interactions.
BibTeX:
@Inproceedings{Cooney2012b,
  author          = {Martin Cooney and Francesco Zanlungo and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Designing a Flying Humanoid Robot ({FHR}): Effects of Flight on Interactive Communication},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2012},
  pages           = {364--371},
  address         = {Paris, France},
  month           = Sep,
  day             = {9-13},
  doi             = {10.1109/ROMAN.2012.6343780},
  abstract        = {This research constitutes an initial investigation into key issues which arise in designing a flying humanoid robot ({FHR}), with a focus on human-robot interaction ({HRI}). The humanoid form offers an interface for natural communication; flight offers excellent mobility. Combining both will yield companion robots capable of approaching, accompanying, and communicating naturally with humans in difficult environments. Problematic is how such a robot should best fly around humans, and what effect the robot's flight will have on a person in terms of paralinguistic (non-verbal) cues. To answer these questions, we propose an extension to existing proxemics theory (“z-proxemics”) and predict how typical humanoid flight motions will be perceived. Data obtained from participants watching animated sequences are analyzed to check our predictions. The paper also reports on the building of a flying humanoid robot, which we will use in interactions.},
  file            = {Cooney2012b.pdf:Cooney2012b.pdf:PDF},
}
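The "z-proxemics" extension is only named in the abstract, so the following is a hedged reading of the idea: planar proxemic zones are extended with a weighted vertical term, so that hovering above a person registers as more intrusive than the same lateral distance at eye level. The zone radii follow Hall's classic values; the vertical weighting is purely an assumption.

# Hedged sketch of a vertical extension to proxemic zones for a flying
# robot. Zone radii are Hall's approximate values (meters); z_weight is
# a hypothetical penalty for vertical offset.
import math

ZONES = [("intimate", 0.45), ("personal", 1.2), ("social", 3.6), ("public", 7.6)]

def zone(dx: float, dy: float, dz: float, z_weight: float = 1.5) -> str:
    """Classify the robot's position relative to a person's head; vertical
    offset is weighted more strongly, assuming hovering overhead feels closer."""
    d = math.sqrt(dx**2 + dy**2 + (z_weight * dz)**2)
    for name, radius in ZONES:
        if d <= radius:
            return name
    return "beyond public"

print(zone(1.0, 0.0, 0.0))  # personal distance at eye level -> "personal"
print(zone(1.0, 0.0, 1.0))  # same lateral distance, hovering above -> "social"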
Kaiko Kuwamura, Takashi Minato, Shuichi Nishio, Hiroshi Ishiguro, "Personality Distortion in Communication through Teleoperated Robots", In IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 49-54, September, 2012.
Abstract: Recent research has focused on such physical communication media as teleoperated robots, which provide a feeling of being with people in remote places. Recently invented media resemble cute animals or imaginary creatures that quickly attract attention. However, such appearances could distort telecommunication because they differ from the human speaker. This paper studies the effect on the speaker's personality as transmitted through physical media by regarding appearance as a function that transmits the speaker's information. Although communication media's capability to transmit information reportedly influences conversations in many aspects, the effect of appearance remains unclear. To reveal the effect of appearance, we compared three appearances of communication media: a stuffed-bear teleoperated robot, a human-like teleoperated robot, and video chat. Our results show that communication media whose appearance greatly differs from that of the speaker distort the personality perceived by interlocutors. This paper suggests that the appearance of physical communication media needs to be carefully designed.
BibTeX:
@Inproceedings{Kuwamura2012,
  author    = {Kaiko Kuwamura and Takashi Minato and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Personality Distortion in Communication through Teleoperated Robots},
  booktitle = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year      = {2012},
  pages     = {49--54},
  address   = {Paris, France},
  month     = Sep,
  day       = {9-13},
  abstract  = {Recent research has focused on such physical communication media as teleoperated robots, which provide a feeling of being with people in remote places. Recently invented media resemble cute animals or imaginary creatures that quickly attract attention. However, such appearances could distort telecommunication because they differ from the human speaker. This paper studies the effect on the speaker's personality as transmitted through physical media by regarding appearance as a function that transmits the speaker's information. Although communication media's capability to transmit information reportedly influences conversations in many aspects, the effect of appearance remains unclear. To reveal the effect of appearance, we compared three appearances of communication media: a stuffed-bear teleoperated robot, a human-like teleoperated robot, and video chat. Our results show that communication media whose appearance greatly differs from that of the speaker distort the personality perceived by interlocutors. This paper suggests that the appearance of physical communication media needs to be carefully designed.},
  file      = {Kuwamura2012.pdf:pdf/Kuwamura2012.pdf:PDF},
}
Hidenobu Sumioka, Shuichi Nishio, Erina Okamoto, Hiroshi Ishiguro, "Isolation of physical traits and conversational content for personality design", Poster presentation at IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 596-601, September, 2012.
Abstract: In this paper, we propose the “Doppel teleoperation system”, which isolates several physical traits from a speaker, to investigate how personal information is conveyed to others during conversation. An underlying problem in designing personality in social robots is that it remains unclear how humans judge the personalities of conversation partners. With the Doppel system, each of the communication channels to be transferred can be chosen either in its original form or in a form generated by the system. For example, voice and body motions can be replaced by the Doppel system while the speech content is preserved. This allows us to analyze the individual effects of the physical traits of the speaker and the content of the speaker's speech on the identification of personality. This selectivity of personal traits provides a useful approach to investigating which information conveys our personality through conversation. To show the potential of our system, we experimentally tested how much the conversation content conveys the personality of speakers to interlocutors without any of their physical traits. Preliminary results show that although interlocutors have difficulty identifying speakers only from conversational content, they can recognize their acquaintances when their acquaintances are the speakers. We point out some potential physical traits for conveying personality.
BibTeX:
@Inproceedings{Sumioka2012d,
  author          = {Hidenobu Sumioka and Shuichi Nishio and Erina Okamoto and Hiroshi Ishiguro},
  title           = {Isolation of physical traits and conversational content for personality design},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2012},
  pages           = {596--601},
  address         = {Paris, France},
  month           = Sep,
  day             = {9-13},
  doi             = {10.1109/ROMAN.2012.6343816},
  abstract        = {In this paper, we propose the “Doppel teleoperation system”, which isolates several physical traits from a speaker, to investigate how personal information is conveyed to others during conversation. An underlying problem in designing personality in social robots is that it remains unclear how humans judge the personalities of conversation partners. With the Doppel system, each of the communication channels to be transferred can be chosen either in its original form or in a form generated by the system. For example, voice and body motions can be replaced by the Doppel system while the speech content is preserved. This allows us to analyze the individual effects of the physical traits of the speaker and the content of the speaker's speech on the identification of personality. This selectivity of personal traits provides a useful approach to investigating which information conveys our personality through conversation. To show the potential of our system, we experimentally tested how much the conversation content conveys the personality of speakers to interlocutors without any of their physical traits. Preliminary results show that although interlocutors have difficulty identifying speakers only from conversational content, they can recognize their acquaintances when their acquaintances are the speakers. We point out some potential physical traits for conveying personality.},
  file            = {Sumioka2012d.pdf:Sumioka2012d.pdf:PDF},
}
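The channel-selection idea described in the abstract, passing each communication channel through either unchanged or in a system-generated form, can be sketched as below. The channel names and generator functions are illustrative only, not the actual Doppel implementation.

# Sketch of per-channel routing: each channel is either passed through in
# its original form or replaced by a system-generated form.
from typing import Callable, Dict

def route_channels(inputs: Dict[str, str],
                   use_original: Dict[str, bool],
                   generators: Dict[str, Callable[[str], str]]) -> Dict[str, str]:
    out = {}
    for channel, signal in inputs.items():
        if use_original.get(channel, True):
            out[channel] = signal                       # pass through unchanged
        else:
            out[channel] = generators[channel](signal)  # replace with generated form
    return out

inputs = {"speech_content": "hello", "voice": "speaker_voice", "motion": "speaker_motion"}
# Preserve the speech content but replace voice and motion, as in the
# example given in the abstract.
use_original = {"speech_content": True, "voice": False, "motion": False}
generators = {"voice": lambda s: "synthesized_voice", "motion": lambda s: "generated_motion"}
print(route_channels(inputs, use_original, generators))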
Antonio Chella, Haris Dindo, Rosario Sorbello, Shuichi Nishio, Hiroshi Ishiguro, "Sing with the Telenoid", In CogSci 2012 Workshop on Teleoperated Android as a Tool for Cognitive Studies, Communication and Art, Sapporo Convention Center, pp. 16-20, August, 2012.
BibTeX:
@Inproceedings{Chella2012,
  author    = {Antonio Chella and Haris Dindo and Rosario Sorbello and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Sing with the Telenoid},
  booktitle = {{C}og{S}ci 2012 Workshop on Teleoperated Android as a Tool for Cognitive Studies, Communication and Art},
  year      = {2012},
  pages     = {16--20},
  address   = {Sapporo Convention Center},
  month     = Aug,
  day       = {1-4},
  file      = {Chella2012.pdf:Chella2012.pdf:PDF},
  keywords  = {Computer Music; Embodiment; Emotions; Imitation learning; Creativity; Human-robot Interaction},
}
Shuichi Nishio, "Transmitting human presence with teleoperated androids: from proprioceptive transfer to elderly care", In CogSci2012 Workshop on Teleopearted Android as a Tool for Cognitive Studies, Communication and Art, Sapporo, Japan, August, 2012.
Abstract: Teleoperated androids, robots owning humanlike appearance equipped with semi-autonomous teleoperation facility, was first introduce in 2007 with the public release of Geminoid HI-1. Both its appearance that resembles the source person and its teleoperation functionality serves in making Geminoid as a research tool for seeking the nature of human presence and personality traits, tracing their origins and implementing into robots. Since the development of the first teleoperated android, we have been using them in a variety of domains, from studies on basic human natures to practical applications such as elderly care. In this talk, I will introduce some of our findings and ongoing projects.
BibTeX:
@Inproceedings{Nishio2012d,
  author    = {Shuichi Nishio},
  title     = {Transmitting human presence with teleoperated androids: from proprioceptive transfer to elderly care},
  booktitle = {CogSci 2012 Workshop on Teleoperated Android as a Tool for Cognitive Studies, Communication and Art},
  year      = {2012},
  address   = {Sapporo, Japan},
  month     = Aug,
  abstract  = {Teleoperated androids, robots with humanlike appearance equipped with semi-autonomous teleoperation facilities, were first introduced in 2007 with the public release of Geminoid HI-1. Both its appearance, which resembles the source person, and its teleoperation functionality make Geminoid a research tool for seeking the nature of human presence and personality traits, tracing their origins, and implementing them in robots. Since the development of the first teleoperated android, we have been using such androids in a variety of domains, from studies on basic human nature to practical applications such as elderly care. In this talk, I will introduce some of our findings and ongoing projects.},
}
Hidenobu Sumioka, Shuichi Nishio, Erina Okamoto, Hiroshi Ishiguro, "Doppel Teleoperation System: Isolation of physical traits and intelligence for personality study", In Annual meeting of the Cognitive Science Society (CogSci2012), Sapporo Convention Center, pp. 2375-2380, August, 2012.
Abstract: We introduce the “Doppel teleoperation system”, which isolates several physical traits from a speaker, to investigate how personal information is conveyed to other people during conversation. With the Doppel system, each of the communication channels to be transferred can be chosen either in its original form or in a form generated by the system. For example, the voice and body motion can be replaced by the Doppel system while the speech content is preserved. This allows us to analyze the individual effects of the speaker's physical traits and speech content on the identification of personality. This selectivity of personal traits provides a useful approach to investigating which information conveys our personality through conversation. To show the potential of the proposed system, we conducted an experiment to test how much the content of conversation conveys the personality of speakers to interlocutors, without any physical traits of the speakers. Preliminary results show that although interlocutors have difficulty identifying speakers only from conversational content, they can recognize their acquaintances when their acquaintances are the speakers. We point out some potential physical traits for conveying our personality.
BibTeX:
@Inproceedings{Sumioka2012,
  author          = {Hidenobu Sumioka and Shuichi Nishio and Erina Okamoto and Hiroshi Ishiguro},
  title           = {Doppel Teleoperation System: Isolation of physical traits and intelligence for personality study},
  booktitle       = {Annual meeting of the Cognitive Science Society ({C}og{S}ci2012)},
  year            = {2012},
  pages           = {2375-2380},
  address         = {Sapporo Convention Center},
  month           = Aug,
  day             = {1-4},
  url             = {http://mindmodeling.org/cogsci2012/papers/0413/paper0413.pdf},
  abstract        = {We introduce the “Doppel teleoperation system”, which isolates several physical traits from a speaker, to investigate how personal information is conveyed to other people during conversation. With the Doppel system, each of the communication channels to be transferred can be chosen either in its original form or in a form generated by the system. For example, the voice and body motion can be replaced by the Doppel system while the speech content is preserved. This allows us to analyze the individual effects of the speaker's physical traits and speech content on the identification of personality. This selectivity of personal traits provides a useful approach to investigating which information conveys our personality through conversation. To show the potential of the proposed system, we conducted an experiment to test how much the content of conversation conveys the personality of speakers to interlocutors, without any physical traits of the speakers. Preliminary results show that although interlocutors have difficulty identifying speakers only from conversational content, they can recognize their acquaintances when their acquaintances are the speakers. We point out some potential physical traits for conveying our personality.},
  file            = {Sumioka2012.pdf:Sumioka2012.pdf:PDF},
  keywords        = {social cognition; android science; human-robot interaction; personality psychology; personal presence},
}
Takashi Minato, Shuichi Nishio, Kohei Ogawa, Hiroshi Ishiguro, "Development of Cellphone-type Tele-operated Android", Poster presentation at The 10th Asia Pacific Conference on Computer Human Interaction, Matsue, Japan, pp. 665-666, August, 2012.
Abstract: This paper presents a newly developed portable human-like robotic avatar, “Elfoid”, which can serve as a novel communication medium in that a user can talk with another person in a remote location in such a way that they feel each other's presence. It is designed to convey an individual's presence using voice, human-like appearance, and touch. Thanks to its cellphone capability, it can be used at any time, anywhere. The paper describes the design concept of Elfoid and discusses research issues concerning this communication medium.
BibTeX:
@Inproceedings{Minato2012b,
  author    = {Takashi Minato and Shuichi Nishio and Kohei Ogawa and Hiroshi Ishiguro},
  title     = {Development of Cellphone-type Tele-operated Android},
  booktitle = {The 10th Asia Pacific Conference on Computer Human Interaction},
  year      = {2012},
  pages     = {665-666},
  address   = {Matsue, Japan},
  month     = Aug,
  day       = {28-31},
  abstract  = {This paper presents a newly developed portable human-like robotic avatar, “Elfoid”, which can serve as a novel communication medium in that a user can talk with another person in a remote location in such a way that they feel each other's presence. It is designed to convey an individual's presence using voice, human-like appearance, and touch. Thanks to its cellphone capability, it can be used at any time, anywhere. The paper describes the design concept of Elfoid and discusses research issues concerning this communication medium.},
  file      = {Minato2012b.pdf:Minato2012b.pdf:PDF},
  keywords  = {Communication media; minimal design; human's presence},
}
Hidenobu Sumioka, Takashi Minato, Kurima Sakai, Shuichi Nishio, Hiroshi Ishiguro, "Motion Design of an Interactive Small Humanoid Robot with Visual Illusion", In The 10th Asia Pacific Conference on Computer Human Interaction, Matsue, Japan, pp. 93-100, August, 2012.
Abstract: We propose a method that enables users to convey nonverbal information, especially their gestures, through a portable robot avatar based on illusory motion. The illusory motion of head nodding is realized with blinking lights for a human-like mobile phone called Elfoid. Two blinking patterns of LEDs are designed based on biological motion and illusory motion from shadows. The patterns are compared to select an appropriate pattern for the illusion of motion in terms of the naturalness of movements and quick perception. The results show that illusory motions perform better than biological motion. We also test whether the illusory motion of head nodding provides a positive effect compared with simply blinking lights. In the experiments, subjects engaged in a role-playing game are asked to complain to Elfoid about their unpleasant situation. The results show that the subjects' frustration is eased by Elfoid's illusory head nodding.
BibTeX:
@Inproceedings{Sumioka2012a,
  author    = {Hidenobu Sumioka and Takashi Minato and Kurima Sakai and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Motion Design of an Interactive Small Humanoid Robot with Visual Illusion},
  booktitle = {The 10th Asia Pacific Conference on Computer Human Interaction},
  year      = {2012},
  pages     = {93-100},
  address   = {Matsue, Japan},
  month     = Aug,
  day       = {28-31},
  url       = {http://dl.acm.org/authorize?6720741},
  abstract  = {We propose a method that enables users to convey nonverbal information, especially their gestures, through a portable robot avatar based on illusory motion. The illusory motion of head nodding is realized with blinking lights for a human-like mobile phone called Elfoid. Two blinking patterns of LEDs are designed based on biological motion and illusory motion from shadows. The patterns are compared to select an appropriate pattern for the illusion of motion in terms of the naturalness of movements and quick perception. The results show that illusory motions perform better than biological motion. We also test whether the illusory motion of head nodding provides a positive effect compared with simply blinking lights. In the experiments, subjects engaged in a role-playing game are asked to complain to Elfoid about their unpleasant situation. The results show that the subjects' frustration is eased by Elfoid's illusory head nodding.},
  file      = {Sumioka2012a.pdf:Sumioka2012a.pdf:PDF},
  keywords  = {telecommunication; nonverbal communication; portable robot avatar; visual illusion of motion},
}
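The abstract does not give the actual LED timing, but the illusory-nodding idea can be illustrated with a simple schedule in which lights arranged top to bottom are lit in sequence, so the light appears to sweep downward and back up like a nod. Everything here (LED count, step time) is a hypothetical sketch.

# Hypothetical blink schedule for an illusory "nod": LEDs 0..n-1 are
# arranged top to bottom and lit one at a time, down and back up.
def nod_blink_schedule(n_leds: int = 4, step_ms: int = 80):
    """Yield (time_ms, led_index) pairs for one down-and-up sweep."""
    t = 0
    down = list(range(n_leds))
    for i in down + down[-2::-1]:  # 0,1,2,3,2,1,0 for n_leds=4
        yield t, i
        t += step_ms

for time_ms, led in nod_blink_schedule():
    print(f"{time_ms:4d} ms -> LED {led}")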
Hiroshi Ishiguro, Shuichi Nishio, Antonio Chella, Rosario Sorbello, Giuseppe Balistreri, Marcello Giardina, Carmelo Cali, "Perceptual Social Dimensions of Human-Humanoid Robot Interaction", In The 12th International Conference on Intelligent Autonomous Systems, Springer Berlin Heidelberg, vol. 194, Jeju International Convention Center, Korea, pp. 409-421, June, 2012.
Abstract: The present paper aims at a descriptive analysis of the main perceptual and social features of the natural conditions of agent interaction, which can be specified by agents in human-humanoid robot interaction. A principled approach to human-robot interaction may be assumed to comply with the natural conditions of agents' overt perceptual and social behaviour. To validate our research, we used the minimalistic humanoid robot Telenoid. We conducted human-robot interaction tests with people who had no prior interaction experience with robots. By administering our questionnaire to subjects under well-defined experimental conditions, an analysis of variance and correlation among dimensions in ordinary and goal-guided contexts of interaction was performed to show that perception and believability are indicators of social interaction and increase the degree of interaction in human-humanoid interaction. The experimental results showed that users see Telenoid as an autonomous agent in its own right rather than a teleoperated artificial agent, and as a believable agent because it acts naturally in response to human actions.
BibTeX:
@Inproceedings{Ishiguro2012,
  author    = {Hiroshi Ishiguro and Shuichi Nishio and Antonio Chella and Rosario Sorbello and Giuseppe Balistreri and Marcello Giardina and Carmelo Cali},
  title     = {Perceptual Social Dimensions of Human-Humanoid Robot Interaction},
  booktitle = {The 12th International Conference on Intelligent Autonomous Systems},
  year      = {2012},
  volume    = {194},
  series    = {Advances in Intelligent Systems and Computing},
  pages     = {409-421},
  address   = {Jeju International Convention Center, Korea},
  month     = Jun,
  publisher = {Springer Berlin Heidelberg},
  day       = {26-29},
  doi       = {10.1007/978-3-642-33932-5_38},
  url       = {http://link.springer.com/chapter/10.1007/978-3-642-33932-5_38},
  abstract  = {The present paper aims at a descriptive analysis of the main perceptual and social features of the natural conditions of agent interaction, which can be specified by agents in human-humanoid robot interaction. A principled approach to human-robot interaction may be assumed to comply with the natural conditions of agents' overt perceptual and social behaviour. To validate our research, we used the minimalistic humanoid robot Telenoid. We conducted human-robot interaction tests with people who had no prior interaction experience with robots. By administering our questionnaire to subjects under well-defined experimental conditions, an analysis of variance and correlation among dimensions in ordinary and goal-guided contexts of interaction was performed to show that perception and believability are indicators of social interaction and increase the degree of interaction in human-humanoid interaction. The experimental results showed that users see Telenoid as an autonomous agent in its own right rather than a teleoperated artificial agent, and as a believable agent because it acts naturally in response to human actions.},
  file      = {Ishiguro2012.pdf:Ishiguro2012.pdf:PDF},
  keywords  = {Telenoid, Geminoid, Human Robot Interaction, Social Robot, Humanoid Robot},
}
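As a concrete illustration of the questionnaire analysis described above, the following Python sketch correlates subject ratings across perceptual and social dimensions. The dimension names, the sample data, and the choice of Pearson correlation are hypothetical stand-ins, not the authors' actual analysis.
Example (Python, illustrative):
import numpy as np
from scipy.stats import pearsonr

# rows = subjects, columns = rated dimensions (e.g. 7-point Likert scores)
dimensions = ["perception", "believability", "social_interaction"]
ratings = np.array([
    [5, 6, 5],
    [4, 4, 3],
    [6, 7, 6],
    [3, 4, 4],
    [5, 5, 6],
])

# pairwise correlations between dimensions, as one way to test whether
# perception and believability co-vary with social interaction
for i in range(len(dimensions)):
    for j in range(i + 1, len(dimensions)):
        r, p = pearsonr(ratings[:, i], ratings[:, j])
        print(f"{dimensions[i]} vs {dimensions[j]}: r={r:.2f}, p={p:.3f}")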
Ryuji Yamazaki, Shuichi Nishio, Kohei Ogawa, Hiroshi Ishiguro, Kohei Matsumura, Kensuke Koda, Tsutomu Fujinami, "How Does Telenoid Affect the Communication between Children in Classroom Setting?", In Extended Abstracts of the Conference on Human Factors in Computing Systems, Austin, Texas, USA, pp. 351-366, May, 2012.
Abstract: Recent advances in robotics have produced robots that are not only autonomous but can also be tele-operated and have humanlike appearances. However, how such tele-operated humanoid robots affect, and are accepted by, people in the real world has not been sufficiently investigated. In the present study, we investigated how elementary school children accepted Telenoid R1, a tele-operated humanoid robot. We conducted a school-based action research project to explore their responses to the robot. Our research theme was the social aspects that might facilitate communication, and our purpose was problem finding. Considerable research has addressed the disadvantages of remote communication; although face-to-face is usually supposed to be the best way to communicate, we ask whether remote communication can ever take primacy over face-to-face. As a result of the field experiment in a school, the structure of the children's group work changed and their attitude became more positive than usual. Their spontaneity was brought out and role differentiation occurred. Mainly due to the limitations imposed by Telenoid, the children changed their attitude and were able to work cooperatively. The results suggest that remote communication that limits our capabilities could help us learn, and be trained in, more cooperative ways of working than usual face-to-face settings allow. Comparing Telenoid with other media and exploring the conditions that promote cooperation remain as future work.
BibTeX:
@Inproceedings{Yamazaki2012,
  author          = {Ryuji Yamazaki and Shuichi Nishio and Kohei Ogawa and Hiroshi Ishiguro and Kohei Matsumura and Kensuke Koda and Tsutomu Fujinami},
  title           = {How Does Telenoid Affect the Communication between Children in Classroom Setting?},
  booktitle       = {Extended Abstracts of the Conference on Human Factors in Computing Systems},
  year            = {2012},
  pages           = {351-366},
  address         = {Austin, Texas, {USA}},
  month           = May,
  day             = {5-10},
  doi             = {10.1145/2212776.2212814},
  url             = {http://dl.acm.org/authorize?6764060},
  abstract        = {Recent advances in robotics have produced robots that are not only autonomous but can also be tele-operated and have humanlike appearances. However, how such tele-operated humanoid robots affect, and are accepted by, people in the real world has not been sufficiently investigated. In the present study, we investigated how elementary school children accepted Telenoid R1, a tele-operated humanoid robot. We conducted a school-based action research project to explore their responses to the robot. Our research theme was the social aspects that might facilitate communication, and our purpose was problem finding. Considerable research has addressed the disadvantages of remote communication; although face-to-face is usually supposed to be the best way to communicate, we ask whether remote communication can ever take primacy over face-to-face. As a result of the field experiment in a school, the structure of the children's group work changed and their attitude became more positive than usual. Their spontaneity was brought out and role differentiation occurred. Mainly due to the limitations imposed by Telenoid, the children changed their attitude and were able to work cooperatively. The results suggest that remote communication that limits our capabilities could help us learn, and be trained in, more cooperative ways of working than usual face-to-face settings allow. Comparing Telenoid with other media and exploring the conditions that promote cooperation remain as future work.},
  file            = {Yamazaki2012.pdf:Yamazaki2012.pdf:PDF},
  keywords        = {Tele-operation; android; minimal design; human interaction; role differentiation; cooperation},
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "BMI-teleoperation of androids can transfer the sense of body ownership", Poster presentation at Cognitive Neuroscience Society's Annual Meeting, Chicago, Illinois, USA, April, 2012.
Abstract: This work examines whether body ownership transfer can be induced by mind-controlling android robots. Body ownership transfer is an illusion that occurs for some people while tele-operating an android: they occasionally feel that the robot's body has become a part of their own body, and may feel a touch or a poke on the robot's body or face even in the absence of tactile feedback. Previous studies have demonstrated that this feeling of ownership over an agent's hand can be induced when the robot's hand motions are in perfect synchronization with the operator's motions. However, it was not known whether this occurs due to the agency of the motion or due to proprioceptive feedback from the real limb. In this work, by contrast, subjects imagined their own right- or left-hand movement while watching the android's corresponding hand move according to an analysis of their brain activity. Through this research, we investigated whether eliminating proprioceptive feedback from the operator's real limb can still result in the illusion of ownership over an external agent's body. Evaluation was made with two measurement methods, a questionnaire and skin conductance response, and the results of both showed a significant difference in the intensity of the bodily feeling transfer when the robot's hands moved according to the participant's imagination.
BibTeX:
@Inproceedings{Alimardani2012,
  author    = {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {{BMI}-teleoperation of androids can transfer the sense of body ownership},
  booktitle = {Cognitive Neuroscience Society's Annual Meeting},
  year      = {2012},
  address   = {Chicago, Illinois, {USA}},
  month     = Apr,
  day       = {1},
  abstract  = {This work examines whether body ownership transfer can be induced by mind-controlling android robots. Body ownership transfer is an illusion that occurs for some people while tele-operating an android: they occasionally feel that the robot's body has become a part of their own body, and may feel a touch or a poke on the robot's body or face even in the absence of tactile feedback. Previous studies have demonstrated that this feeling of ownership over an agent's hand can be induced when the robot's hand motions are in perfect synchronization with the operator's motions. However, it was not known whether this occurs due to the agency of the motion or due to proprioceptive feedback from the real limb. In this work, by contrast, subjects imagined their own right- or left-hand movement while watching the android's corresponding hand move according to an analysis of their brain activity. Through this research, we investigated whether eliminating proprioceptive feedback from the operator's real limb can still result in the illusion of ownership over an external agent's body. Evaluation was made with two measurement methods, a questionnaire and skin conductance response, and the results of both showed a significant difference in the intensity of the bodily feeling transfer when the robot's hands moved according to the participant's imagination.},
  file      = {Alimardani2012.pdf:Alimardani2012.pdf:PDF},
}
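For readers unfamiliar with how such BMI teleoperation can be driven, the sketch below shows one textbook approach to motor-imagery decoding: comparing mu-band (8-12 Hz) power over the sensorimotor channels C3 and C4, since imagining a hand movement desynchronizes the contralateral mu rhythm. The sampling rate, channels, and decision rule are illustrative assumptions, not the pipeline used in this study.
Example (Python, illustrative):
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz (assumed)

def mu_power(x):
    # average spectral power in the 8-12 Hz (mu) band
    f, pxx = welch(x, fs=FS, nperseg=FS)
    band = (f >= 8) & (f <= 12)
    return pxx[band].mean()

def classify_trial(c3, c4):
    # imagined right-hand movement suppresses mu power over C3 (left cortex)
    return "right" if mu_power(c3) < mu_power(c4) else "left"

# hypothetical one-second EEG trial
rng = np.random.default_rng(0)
c3, c4 = rng.standard_normal(FS), rng.standard_normal(FS)
print(classify_trial(c3, c4))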
Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, Norihiro Hagita, "Generation of nodding, head tilting and eye gazing for human-robot dialogue interaction", In ACM/IEEE International Conference on Human Robot Interaction, Boston, USA, pp. 285-292, March, 2012.
Abstract: Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, ``Geminoid F'', a typical humanoid robot with fewer facial degrees of freedom, ``Robovie R2'', and a robot with a 3-axis rotatable neck and movable lips, ``Telenoid R2''). Analysis of subjective scores shows that the proposed model, including head tilting and nodding, can generate head motion with increased naturalness compared to nodding only and to directly mapping people's original motions without gaze information. We also find that an upwards motion of a robot's face can be used by robots without a mouth to give the appearance that an utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a result, we verify that our generation model performs equally to directly mapping people's original motions with gaze information in terms of perceived naturalness.
BibTeX:
@Inproceedings{Liu2012,
  author          = {Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {Generation of nodding, head tilting and eye gazing for human-robot dialogue interaction},
  booktitle       = {{ACM/IEEE} International Conference on Human Robot Interaction},
  year            = {2012},
  pages           = {285--292},
  address         = {Boston, USA},
  month           = Mar,
  day             = {5-8},
  doi             = {10.1145/2157689.2157797},
  abstract        = {Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, ``Geminoid F'', a typical humanoid robot with fewer facial degrees of freedom, ``Robovie R2'', and a robot with a 3-axis rotatable neck and movable lips, ``Telenoid R2''). Analysis of subjective scores shows that the proposed model, including head tilting and nodding, can generate head motion with increased naturalness compared to nodding only and to directly mapping people's original motions without gaze information. We also find that an upwards motion of a robot's face can be used by robots without a mouth to give the appearance that an utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a result, we verify that our generation model performs equally to directly mapping people's original motions with gaze information in terms of perceived naturalness.},
  file            = {Liu2012.pdf:Liu2012.pdf:PDF},
  keywords        = {Head motion; dialogue acts; eye gazing; motion generation.},
}
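The generation model above is rule-based; the toy sketch below illustrates the general style of mapping dialogue acts to head-motion commands. The act labels and angle values are invented for illustration, whereas the paper infers its rules from analyses of recorded human dialogues.
Example (Python, illustrative):
from dataclasses import dataclass

@dataclass
class HeadCommand:
    nod_deg: float   # downward nod amplitude in degrees
    tilt_deg: float  # lateral head-tilt amplitude in degrees
    gaze: str        # gaze target while the motion is executed

# hand-written stand-ins for rules inferred from corpus analysis
RULES = {
    "affirmation":  HeadCommand(nod_deg=15, tilt_deg=0,  gaze="partner"),
    "question":     HeadCommand(nod_deg=0,  tilt_deg=10, gaze="partner"),
    "turn_keeping": HeadCommand(nod_deg=5,  tilt_deg=0,  gaze="away"),
}

def generate(dialogue_act: str) -> HeadCommand:
    # fall back to a small nod for unlisted dialogue acts
    return RULES.get(dialogue_act, HeadCommand(5, 0, "partner"))

print(generate("question"))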
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "Body ownership transfer to tele-operated android through mind controlling", In HAI-2011, Kyoto Institute of Technology, pp. I-2A-1, December, 2011.
Abstract: This work examines whether body ownership transfer can be induced by mind-controlling android robots. Body ownership transfer is an illusion that occurs for some people while tele-operating an android: they occasionally feel that the robot's body has become a part of their own body, and may feel a touch or a poke on the robot's body or face even in the absence of tactile feedback. Previous studies have demonstrated that this feeling of ownership over an agent's hand can be induced when the robot's hand motions are in synchronization with the operator's motions. However, it was not known whether this occurs due to the agency of the motion or due to proprioceptive feedback from the real hand. In this work, subjects imagined their own right- or left-hand movement while watching the android's corresponding hand move according to an analysis of their brain activity. Through this research, we investigated whether eliminating proprioceptive feedback from the operator's real limb can still result in the illusion of ownership over an external agent's body. Evaluation was made with two measurement methods, a questionnaire and skin conductance response, and the results of both showed a significant difference in the intensity of the bodily feeling transfer when the robot's hands moved according to the participant's imagination.
BibTeX:
@Inproceedings{Alimardani2011,
  author          = {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Body ownership transfer to tele-operated android through mind controlling},
  booktitle       = {{HAI}-2011},
  year            = {2011},
  pages           = {I-2{A}-1},
  address         = {Kyoto Institute of Technology},
  month           = Dec,
  day             = {3-5},
  url             = {http://www.ii.is.kit.ac.jp/hai2011/proceedings/html/paper/paper-1-2a-1.html},
  abstract        = {This work examines whether body ownership transfer can be induced by mind-controlling android robots. Body ownership transfer is an illusion that occurs for some people while tele-operating an android: they occasionally feel that the robot's body has become a part of their own body, and may feel a touch or a poke on the robot's body or face even in the absence of tactile feedback. Previous studies have demonstrated that this feeling of ownership over an agent's hand can be induced when the robot's hand motions are in synchronization with the operator's motions. However, it was not known whether this occurs due to the agency of the motion or due to proprioceptive feedback from the real hand. In this work, subjects imagined their own right- or left-hand movement while watching the android's corresponding hand move according to an analysis of their brain activity. Through this research, we investigated whether eliminating proprioceptive feedback from the operator's real limb can still result in the illusion of ownership over an external agent's body. Evaluation was made with two measurement methods, a questionnaire and skin conductance response, and the results of both showed a significant difference in the intensity of the bodily feeling transfer when the robot's hands moved according to the participant's imagination.},
  file            = {Alimardani2011.pdf:Alimardani2011.pdf:PDF;I-2A-1.pdf:http\://www.ii.is.kit.ac.jp/hai2011/proceedings/pdf/I-2A-1.pdf:PDF},
}
Giuseppe Balistreri, Shuichi Nishio, Rosario Sorbello, Antonio Chella, Hiroshi Ishiguro, "A Natural Human Robot Meta-communication through the Integration of Android's Sensors with Environment Embedded Sensors", In Biologically Inspired Cognitive Architectures 2011 - Proceedings of the Second Annual Meeting of the BICA Society, IOS Press, vol. 233, Arlington, Virginia, USA, pp. 26-38, November, 2011.
Abstract: Building robots that closely resemble humans allows us to study phenomena in our daily human-to-human natural interactions that cannot be studied using mechanical-looking robots. This is supported by the fact that human-like devices can more easily elicit the same kinds of responses that people use in their natural interactions. However, several studies have shown that there is a strict and complex relationship between the robot's outer appearance and its behavior and, as Masahiro Mori observed, a human-like appearance is not enough to give a positive impression. The robot should behave similarly to humans, and should have a sense of perception that enables it to communicate with humans. Our past experience with the android "Geminoid HI-1" demonstrated that the sensors equipping the robot are not enough to achieve human-like communication, mainly because of their limited sensing range. To overcome this problem, we endowed the environment around the robot with perceptive capabilities by embedding sensors such as cameras into it. This paper reports a preliminary study on improving the controlling system by integrating cameras in the surrounding environment, so that human-like perception can be provided to the android. The integration of the development of androids with the investigation of human behaviors constitutes a new research area fusing engineering and cognitive sciences.
BibTeX:
@Inproceedings{Balistreri2011a,
  author    = {Giuseppe Balistreri and Shuichi Nishio and Rosario Sorbello and Antonio Chella and Hiroshi Ishiguro},
  title     = {A Natural Human Robot Meta-communication through the Integration of Android's Sensors with Environment Embedded Sensors},
  booktitle = {Biologically Inspired Cognitive Architectures 2011 - Proceedings of the Second Annual Meeting of the {BICA} Society},
  year      = {2011},
  volume    = {233},
  pages     = {26-38},
  address   = {Arlington, Virginia, {USA}},
  month     = Nov,
  publisher = {{IOS} Press},
  day       = {5-6},
  abstract  = {Building robots that closely resemble humans allows us to study phenomena in our daily human-to-human natural interactions that cannot be studied using mechanical-looking robots. This is supported by the fact that human-like devices can more easily elicit the same kinds of responses that people use in their natural interactions. However, several studies have shown that there is a strict and complex relationship between the robot's outer appearance and its behavior and, as Masahiro Mori observed, a human-like appearance is not enough to give a positive impression. The robot should behave similarly to humans, and should have a sense of perception that enables it to communicate with humans. Our past experience with the android ``Geminoid HI-1'' demonstrated that the sensors equipping the robot are not enough to achieve human-like communication, mainly because of their limited sensing range. To overcome this problem, we endowed the environment around the robot with perceptive capabilities by embedding sensors such as cameras into it. This paper reports a preliminary study on improving the controlling system by integrating cameras in the surrounding environment, so that human-like perception can be provided to the android. The integration of the development of androids with the investigation of human behaviors constitutes a new research area fusing engineering and cognitive sciences.},
  file      = {Balistreri2011a.pdf:Balistreri2011a.pdf:PDF},
  keywords  = {Android; gaze; sensor network},
}
Martin Cooney, Takayuki Kanda, Aris Alissandrakis, Hiroshi Ishiguro, "Interaction Design for an Enjoyable Play Interaction with a Small Humanoid Robot", In IEEE-RAS International Conference on Humanoid Robots (Humanoids), Bled, Slovenia, pp. 112-119, October, 2011.
Abstract: Robots designed to act as companions are expected to be able to interact with people in an enjoyable fashion. In particular, our aim is to enable small companion robots to respond in a pleasant way when people pick them up and play with them. To this end, we developed a gesture recognition system capable of recognizing play gestures which involve a person moving a small humanoid robot's full body ("full-body gestures"). However, such recognition by itself is not enough to provide a nice interaction; in fact, interactions with an initial, naive version of our system frequently failed. The question then becomes: what more is required? That is, what sort of interaction design is needed to create successful interactions? To answer this question, we analyze typical failures and compile a list of design guidelines. We then implement this design in our robot, proposing strategies for how a robot can provide ``reward'' and suggest goals for the interaction. Finally, we conduct a validation experiment. We find that our interaction design with ``persisting intentions'' can be used to establish an enjoyable play interaction.
BibTeX:
@Inproceedings{Cooney2011,
  author          = {Martin Cooney and Takayuki Kanda and Aris Alissandrakis and Hiroshi Ishiguro},
  title           = {Interaction Design for an Enjoyable Play Interaction with a Small Humanoid Robot},
  booktitle       = {{IEEE-RAS} International Conference on Humanoid Robots (Humanoids)},
  year            = {2011},
  pages           = {112--119},
  address         = {Bled, Slovenia},
  month           = Oct,
  day             = {26-28},
  abstract        = {Robots designed to act as companions are expected to be able to interact with people in an enjoyable fashion. In particular, our aim is to enable small companion robots to respond in a pleasant way when people pick them up and play with them. To this end, we developed a gesture recognition system capable of recognizing play gestures which involve a person moving a small humanoid robot's full body ("full-body gestures"). However, such recognition by itself is not enough to provide a nice interaction; in fact, interactions with an initial, naive version of our system frequently failed. The question then becomes: what more is required? That is, what sort of interaction design is needed to create successful interactions? To answer this question, we analyze typical failures and compile a list of design guidelines. We then implement this design in our robot, proposing strategies for how a robot can provide ``reward'' and suggest goals for the interaction. Finally, we conduct a validation experiment. We find that our interaction design with ``persisting intentions'' can be used to establish an enjoyable play interaction.},
  file            = {Cooney2011.pdf:Cooney2011.pdf:PDF},
  keywords        = {interaction design; enjoyment; playful human-robot interaction; small humanoid robot},
}
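As a rough illustration of recognizing the "full-body gestures" mentioned above, the sketch below classifies windows of the robot's accelerometer data with a nearest-centroid rule. The gesture classes, features, and training data are hypothetical placeholders; the recognizer in the paper is considerably more elaborate.
Example (Python, illustrative):
import numpy as np

def features(window):
    # window: (n_samples, 3) accelerometer readings; use mean and std per axis
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

# hypothetical training data: one centroid per gesture class
rng = np.random.default_rng(1)
centroids = {
    "hug":   features(rng.normal(0.0, 0.2, (50, 3))),
    "shake": features(rng.normal(0.0, 2.0, (50, 3))),
    "lift":  features(rng.normal(1.0, 0.3, (50, 3))),
}

def recognize(window):
    f = features(window)
    return min(centroids, key=lambda g: np.linalg.norm(f - centroids[g]))

print(recognize(rng.normal(0.0, 2.0, (50, 3))))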
Giuseppe Balistreri, Shuichi Nishio, Rosario Sorbello, Hiroshi Ishiguro, "Integrating Built-in Sensors of an Android with Sensors Embedded in the Environment for Studying a More Natural Human-Robot Interaction", In Lecture Notes in Computer Science (12th International Conference of the Italian Association for Artificial Intelligence), Springer, vol. 6934, Palermo, Italy, pp. 432-437, September, 2011.
Abstract: Several studies have shown that there is a strict and complex relationship between a robot's outer appearance and its behavior, and that a human-like appearance is not enough to give a positive impression. The robot should behave similarly to humans, and should have a sense of perception that enables it to communicate with humans. Our past experience with the android ``Geminoid HI-1'' demonstrated that the sensors equipping the robot are not enough to achieve human-like communication, mainly because of their limited sensing range. To overcome this problem, we endowed the environment around the robot with perceptive capabilities by embedding sensors such as cameras into it. This paper reports a preliminary study on improving the controlling system by integrating cameras in the surrounding environment, so that human-like perception can be provided to the android. The integration of the development of androids with the investigation of human behaviors constitutes a new research area fusing engineering and cognitive sciences.
BibTeX:
@Inproceedings{Balistreri2011,
  author    = {Giuseppe Balistreri and Shuichi Nishio and Rosario Sorbello and Hiroshi Ishiguro},
  title     = {Integrating Built-in Sensors of an Android with Sensors Embedded in the Environment for Studying a More Natural Human-Robot Interaction},
  booktitle = {Lecture Notes in Computer Science (12th International Conference of the Italian Association for Artificial Intelligence)},
  year      = {2011},
  volume    = {6934},
  pages     = {432--437},
  address   = {Palermo, Italy},
  month     = Sep,
  publisher = {Springer},
  doi       = {10.1007/978-3-642-23954-0_43},
  url       = {http://www.springerlink.com/content/c015680178436107/},
  abstract  = {Several studies have shown that there is a strict and complex relationship between a robot's outer appearance and its behavior, and that a human-like appearance is not enough to give a positive impression. The robot should behave similarly to humans, and should have a sense of perception that enables it to communicate with humans. Our past experience with the android ``Geminoid HI-1'' demonstrated that the sensors equipping the robot are not enough to achieve human-like communication, mainly because of their limited sensing range. To overcome this problem, we endowed the environment around the robot with perceptive capabilities by embedding sensors such as cameras into it. This paper reports a preliminary study on improving the controlling system by integrating cameras in the surrounding environment, so that human-like perception can be provided to the android. The integration of the development of androids with the investigation of human behaviors constitutes a new research area fusing engineering and cognitive sciences.},
  bibsource = {DBLP, http://dblp.uni-trier.de},
  file      = {Balistreri2011.pdf:Balistreri2011.pdf:PDF},
  keywords  = {Android; gaze; sensor network},
}
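Both Balistreri et al. entries above describe driving an android's perception from cameras embedded in the environment. The sketch below shows one minimal version of the idea: detect the largest face in a room-camera frame with OpenCV and turn its image position into pan/tilt gaze targets. The field-of-view values and the direct angle mapping are assumptions; the actual Geminoid system integrates multiple calibrated sensors.
Example (Python, illustrative):
import cv2

HFOV_DEG, VFOV_DEG = 60.0, 45.0  # assumed horizontal/vertical field of view

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def gaze_target(frame):
    # returns (pan, tilt) in degrees relative to the camera axis, or None
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
    cx, cy = x + w / 2, y + h / 2
    H, W = gray.shape
    pan = (cx / W - 0.5) * HFOV_DEG
    tilt = (0.5 - cy / H) * VFOV_DEG
    return pan, tilt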
Kohei Ogawa, Shuichi Nishio, Kensuke Koda, Koichi Taura, Takashi Minato, Carlos T. Ishi, Hiroshi Ishiguro, "Telenoid: Tele-presence android for communication", In SIGGRAPH Emerging Technology, Vancouver, Canada, pp. 15, August, 2011.
Abstract: In this research, a new telecommunication system called "Telenoid" is presented, which focuses on the idea of transferring a human's "presence". Telenoid was developed to appear and behave as a minimal design of human features. A minimal human conveys the impression of human existence at first glance, but does not suggest anything about personal features such as being male or female, old or young. Previously, an android with more realistic features, called Geminoid, was proposed. However, because its unique appearance is the copy of a particular model, it is difficult to imagine another person's presence through Geminoid while that person is operating it. Telenoid, on the other hand, is designed to hold an anonymous identity, which allows people to communicate with acquaintances far away regardless of gender and age. We expect that Telenoid can be used as a medium that transfers a human's presence through its minimal feature design.
BibTeX:
@Inproceedings{Ogawa2011a,
  author          = {Kohei Ogawa and Shuichi Nishio and Kensuke Koda and Koichi Taura and Takashi Minato and Carlos T. Ishi and Hiroshi Ishiguro},
  title           = {Telenoid: Tele-presence android for communication},
  booktitle       = {{SIGGRAPH} Emerging Technology},
  year            = {2011},
  pages           = {15},
  address         = {Vancouver, Canada},
  month           = Aug,
  day             = {7-11},
  doi             = {10.1145/2048259.2048274},
  url             = {http://dl.acm.org/authorize?6594082},
  abstract        = {In this research, a new telecommunication system called "Telenoid" is presented, which focuses on the idea of transferring a human's "presence". Telenoid was developed to appear and behave as a minimal design of human features. A minimal human conveys the impression of human existence at first glance, but does not suggest anything about personal features such as being male or female, old or young. Previously, an android with more realistic features, called Geminoid, was proposed. However, because its unique appearance is the copy of a particular model, it is difficult to imagine another person's presence through Geminoid while that person is operating it. Telenoid, on the other hand, is designed to hold an anonymous identity, which allows people to communicate with acquaintances far away regardless of gender and age. We expect that Telenoid can be used as a medium that transfers a human's presence through its minimal feature design.},
  file            = {Ogawa2011a.pdf:Ogawa2011a.pdf:PDF},
}
Panikos Heracleous, Miki Sato, Carlos T. Ishi, Hiroshi Ishiguro, Norihiro Hagita, "Speech Production in Noisy Environments and the Effect on Automatic Speech Recognition", In International Congress of Phonetic Sciences, Hong Kong, China, pp. 855-858, August, 2011.
Abstract: Speech is bimodal in nature and includes the audio and visual modalities. In addition to acoustic speech perception, speech can also be perceived using visual information provided by the mouth/face (i.e., automatic lipreading). In this study, visual speech production in noisy environments is investigated. The authors show that the Lombard effect plays an important role not only in audio speech but also in visual speech production. Experimental results show that when visual speech is produced in noisy environments, the visual parameters of the mouth/face change. As a result, the performance of a visual speech recognizer decreases.
BibTeX:
@Inproceedings{Heracleous2011e,
  author          = {Panikos Heracleous and Miki Sato and Carlos T. Ishi and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {Speech Production in Noisy Environments and the Effect on Automatic Speech Recognition},
  booktitle       = {International Congress of Phonetic Sciences},
  year            = {2011},
  pages           = {855--858},
  address         = {Hong Kong, China},
  month           = Aug,
  day             = {18-21},
  abstract        = {Speech is bimodal in nature and includes the audio and visual modalities. In addition to acoustic speech perception, speech can also be perceived using visual information provided by the mouth/face (i.e., automatic lipreading). In this study, visual speech production in noisy environments is investigated. The authors show that the Lombard effect plays an important role not only in audio speech but also in visual speech production. Experimental results show that when visual speech is produced in noisy environments, the visual parameters of the mouth/face change. As a result, the performance of a visual speech recognizer decreases.},
  file            = {Heracleous2011e.pdf:Heracleous2011e.pdf:PDF;Heracleous.pdf:http\://www.icphs2011.hk/resources/OnlineProceedings/RegularSession/Heracleous/Heracleous.pdf:PDF},
  keywords        = {speech; noisy environments; Lombard effect; lipreading},
}
Carlos T. Ishi, Chaoran Liu, Hiroshi Ishiguro, Norihiro Hagita, "Speech-driven lip motion generation for tele-operated humanoid robots", In the International Conference on Audio-Visual Speech Processing 2011, Volterra, Italy, pp. 131-135, August, 2011.
Abstract: In order to generate natural lip motions for a tele-operated humanoid robot (such as an android) from the utterances of the operator, we developed a speech-driven lip motion generation method. The proposed method is based on the rotation of the vowel space, given by the first and second formants, around the center vowel, and on a mapping to lip opening degrees. The method requires the calibration of only one parameter for speaker normalization, so no other model training is required. In a pilot experiment, the proposed audio-based method was perceived as more natural than vision-based approaches, regardless of the language.
BibTeX:
@Inproceedings{Ishi2011a,
  author          = {Carlos T. Ishi and Chaoran Liu and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {Speech-driven lip motion generation for tele-operated humanoid robots},
  booktitle       = {the International Conference on Audio-Visual Speech Processing 2011},
  year            = {2011},
  pages           = {131-135},
  address         = {Volterra, Italy},
  month           = Aug,
  day             = {31-3},
  abstract        = {In order to generate natural lip motions for a tele-operated humanoid robot (such as an android) from the utterances of the operator, we developed a speech-driven lip motion generation method. The proposed method is based on the rotation of the vowel space, given by the first and second formants, around the center vowel, and on a mapping to lip opening degrees. The method requires the calibration of only one parameter for speaker normalization, so no other model training is required. In a pilot experiment, the proposed audio-based method was perceived as more natural than vision-based approaches, regardless of the language.},
  file            = {Ishi2011a.pdf:pdf/Ishi2011a.pdf:PDF},
  keywords        = {lip motion; formant; humanoid robot; tele-operation; synchronization},
}
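To make the formant-based mapping above concrete, the sketch below expresses an input vowel as an angle in the (F1, F2) plane around a center vowel and maps that angle to a lip-opening degree with a single calibration parameter. The center-vowel formant values and the cosine mapping are illustrative guesses; the paper's exact formulation may differ.
Example (Python, illustrative):
import math

F1_CENTER, F2_CENTER = 500.0, 1500.0  # assumed "center vowel" formants in Hz

def lip_opening(f1, f2, calib=1.0):
    # angle of the vowel around the center of the (F1, F2) vowel space
    theta = math.atan2(f2 - F2_CENTER, f1 - F1_CENTER)
    # open vowels (high F1) should yield a large opening; map cos(theta)
    # from [-1, 1] to [0, 1], scaled by one per-speaker calibration value
    return calib * (math.cos(theta) + 1.0) / 2.0

print(f"/a/-like vowel: {lip_opening(800, 1300):.2f}")
print(f"/i/-like vowel: {lip_opening(300, 2300):.2f}")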
Panikos Heracleous, Norihiro Hagita, "Automatic Recognition of Speech without any audio information", In IEEE International Conference on Acoustics, Speech and Signal Processing, Prague, Czech Republic, pp. 2392-2395, May, 2011.
Abstract: This article introduces automatic recognition of speech without any audio information. Movements of the tongue, lips, and jaw are tracked by an Electro-Magnetic Articulography (EMA) device and are used as features to create hidden Markov models (HMMs) and conduct automatic speech recognition in a conventional way. The results obtained are promising, confirming that phonetic features characterizing articulation are as discriminating as those characterizing acoustics (except for voicing). The results also show that using tongue parameters results in a higher accuracy compared with using the lip parameters.
BibTeX:
@Inproceedings{Heracleous2011a,
  author    = {Panikos Heracleous and Norihiro Hagita},
  title     = {Automatic Recognition of Speech without any audio information},
  booktitle = {{IEEE} International Conference on Acoustics, Speech and Signal Processing},
  year      = {2011},
  pages     = {2392--2395},
  address   = {Prague, Czech Republic},
  month     = May,
  day       = {22-27},
  doi       = {10.1109/ICASSP.2011.5946965},
  url       = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5946965},
  abstract  = {This article introduces automatic recognition of speech without any audio information. Movements of the tongue, lips, and jaw are tracked by an Electro-Magnetic Articulography ({EMA}) device and are used as features to create hidden Markov models ({HMM}s) and conduct automatic speech recognition in a conventional way. The results obtained are promising, confirming that phonetic features characterizing articulation are as discriminating as those characterizing acoustics (except for voicing). The results also show that using tongue parameters results in a higher accuracy compared with using the lip parameters.},
  file      = {Heracleous2011a.pdf:Heracleous2011a.pdf:PDF},
}
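The recognition scheme above follows the conventional HMM recipe, only with articulatory rather than acoustic features. The sketch below shows that recipe with one Gaussian HMM per phoneme, trained and scored using the hmmlearn library; the random six-dimensional "EMA" features and the library choice are assumptions made for illustration.
Example (Python, illustrative):
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def train_phoneme_model(sequences):
    # concatenate training sequences; hmmlearn takes per-sequence lengths
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    model.fit(X, lengths)
    return model

# stand-in 6-D articulatory features (e.g. x/y of tongue tip, lip, jaw coils)
models = {
    "a": train_phoneme_model([rng.normal(0, 1, (30, 6)) for _ in range(5)]),
    "i": train_phoneme_model([rng.normal(2, 1, (30, 6)) for _ in range(5)]),
}

# classify a test sequence by maximum log-likelihood over phoneme models
test = rng.normal(2, 1, (30, 6))
print(max(models, key=lambda p: models[p].score(test)))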
Panikos Heracleous, Hiroshi Ishiguro, Norihiro Hagita, "Visual-speech to text conversion applicable to telephone communication for deaf individuals", In International Conference on Telecommunications, Ayia Napa, Cyprus, pp. 130-133, May, 2011.
Abstract: Access to communication technologies has become essential for handicapped people. This study introduces the initial step of an automatic translation system able to translate the visual speech used by deaf individuals into text or auditory speech. Such a system would enable deaf users to communicate with each other and with normal-hearing people through telephone networks or the Internet using only telephone devices equipped with simple cameras. In particular, this paper introduces automatic recognition and conversion to text of Cued Speech for French. Cued Speech is a visual mode of communication used in the deaf community. Using hand shapes placed in different positions near the face as a complement to lipreading, all the sounds of a spoken language can be visually distinguished and perceived. Experimental results show high recognition rates for both isolated word and continuous phoneme recognition experiments in Cued Speech for French.
BibTeX:
@Inproceedings{Heracleous2011f,
  author    = {Panikos Heracleous and Hiroshi Ishiguro and Norihiro Hagita},
  title     = {Visual-speech to text conversion applicable to telephone communication for deaf individuals},
  booktitle = {International Conference on Telecommunications},
  year      = {2011},
  pages     = {130--133},
  address   = {Ayia Napa, Cyprus},
  month     = May,
  day       = {8-11},
  doi       = {10.1109/CTS.2011.5898904},
  url       = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5898904},
  abstract  = {Access to communication technologies has become essential for handicapped people. This study introduces the initial step of an automatic translation system able to translate the visual speech used by deaf individuals into text or auditory speech. Such a system would enable deaf users to communicate with each other and with normal-hearing people through telephone networks or the Internet using only telephone devices equipped with simple cameras. In particular, this paper introduces automatic recognition and conversion to text of Cued Speech for French. Cued Speech is a visual mode of communication used in the deaf community. Using hand shapes placed in different positions near the face as a complement to lipreading, all the sounds of a spoken language can be visually distinguished and perceived. Experimental results show high recognition rates for both isolated word and continuous phoneme recognition experiments in Cued Speech for French.},
  file      = {Heracleous2011f.pdf:Heracleous2011f.pdf:PDF},
}
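Cued Speech recognition rests on fusing lip features with hand-shape and hand-position cues. The small sketch below illustrates one simple fusion strategy: concatenating a one-hot hand-shape code and hand coordinates with lip parameters into a single observation vector for a downstream recognizer. All feature names and dimensions are hypothetical.
Example (Python, illustrative):
import numpy as np

def fuse(lip_feats, hand_shape_id, hand_pos_xy, n_shapes=8):
    # Cued Speech uses a small set of discrete hand shapes; one-hot encode it
    shape = np.zeros(n_shapes)
    shape[hand_shape_id] = 1.0
    return np.concatenate([lip_feats, shape, hand_pos_xy])

lip = np.array([0.4, 0.7, 0.1])  # e.g. lip opening, width, protrusion
obs = fuse(lip, hand_shape_id=3, hand_pos_xy=np.array([0.2, -0.1]))
print(obs.shape)  # (13,) fused observation, e.g. one frame for an HMM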
Panikos Heracleous, Miki Sato, Carlos Toshinori Ishi, Hiroshi Ishiguro, Norihiro Hagita, "The effect of environmental noise to automatic lip-reading", In Spring Meeting Acoustical Society of Japan, Waseda University, Tokyo, Japan, pp. 5-8, March, 2011.
Abstract: In automatic visual speech recognition, verbal messages can be interpreted by monitoring a talker's lip and facial movements using automated tools based on statistical methods (i.e., automatic visual speech recognition). Automatic visual speech recognition has applications in audiovisual speech recognition and in lip shape synthesis. This study investigates the automatic visual and audiovisual speech recognition in the presence of noise. The authors show that the Lombard effect plays an important role not only in audio, but also in automatic visual speech recognition. Experimental results of a multispeaker continuous phoneme recognition experiment show that the performance of a visual and an audiovisual speech recognition system further increases when the visual Lombard effect is also considered.
BibTeX:
@Inproceedings{Heracleous2011c,
  author          = {Panikos Heracleous and Miki Sato and Carlos Toshinori Ishi and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {The effect of environmental noise to automatic lip-reading},
  booktitle       = {Spring Meeting Acoustical Society of Japan},
  year            = {2011},
  series          = {1-5-3},
  pages           = {5--8},
  address         = {Waseda University, Tokyo, Japan},
  month           = Mar,
  abstract        = {In automatic visual speech recognition, verbal messages can be interpreted by monitoring a talker's lip and facial movements using automated tools based on statistical methods (i.e., automatic visual speech recognition). Automatic visual speech recognition has applications in audiovisual speech recognition and in lip shape synthesis. This study investigates the automatic visual and audiovisual speech recognition in the presence of noise. The authors show that the Lombard effect plays an important role not only in audio, but also in automatic visual speech recognition. Experimental results of a multispeaker continuous phoneme recognition experiment show that the performance of a visual and an audiovisual speech recognition system further increases when the visual Lombard effect is also considered.},
  file            = {Heracleous2011c.pdf:Heracleous2011c.pdf:PDF},
}
Astrid M. von der Pütten, Nicole C. Krämer, Christian Becker-Asano, Hiroshi Ishiguro, "An Android in the Field", In the 6th ACM/IEEE International Conference on Human-Robot Interaction, Lausanne, Switzerland, pp. 283-284, March, 2011.
Abstract: Since most robots are not easily deployable in real-life scenarios, only a few studies have investigated users' behavior towards humanoids or androids in a natural environment. We present an observational field study and data on unscripted interactions between humans and the android robot "Geminoid HI-1". First results show that almost half of the subjects mistook Geminoid HI-1 for a human. Even those who recognized the android as a robot showed interest rather than negative emotions and explored the robot's capabilities.
BibTeX:
@Inproceedings{Putten2011,
  author    = {Astrid M. von der P\"{u}tten and Nicole C. Kr\"{a}mer and Christian Becker-Asano and Hiroshi Ishiguro},
  title     = {An Android in the Field},
  booktitle = {the 6th {ACM/IEEE} International Conference on Human-Robot Interaction},
  year      = {2011},
  pages     = {283--284},
  address   = {Lausanne, Switzerland},
  month     = Mar,
  day       = {6-9},
  doi       = {10.1145/1957656.1957772},
  abstract  = {Since most robots are not easily deployable in real-life scenarios, only a few studies have investigated users' behavior towards humanoids or androids in a natural environment. We present an observational field study and data on unscripted interactions between humans and the android robot "Geminoid HI-1". First results show that almost half of the subjects mistook Geminoid HI-1 for a human. Even those who recognized the android as a robot showed interest rather than negative emotions and explored the robot's capabilities.},
}
Ilona Straub, Shuichi Nishio, Hiroshi Ishiguro, "Incorporated identity in interaction with a teleoperated android robot: A case study", In IEEE International Symposium on Robot and Human Interactive Communication, Viareggio, Italy, pp. 139-144, September, 2010.
Abstract: In the near future, artificial social agents, embodied as virtual agents or as robots with humanoid appearance, will be placed in public settings and used as interaction tools. Considering the uncanny-valley effect and images of robots as a threat to humanity, a study of the acceptance and handling of such an interaction tool by the broad public is of great interest. The following study is based on qualitative methods of interaction analysis, focusing on the ways people control or perceive a teleoperated android robot in an open public space. This field study shows the tendency of users to ascribe an identity of its own to the teleoperated android robot Geminoid HI-1, independent of the identity of the controlling person. Both sides of the interaction unit were analyzed for 1) verbal cues about identity presentation on the side of the teleoperator controlling the robot, and 2) verbal cues about identity perception of Geminoid HI-1 on the side of the interlocutor talking to the robot. The study unveils identity creation, identity switching, identity mediation and identity imitation of the teleoperators' own identity cues, as well as the interlocutors' use of metaphorical language showing forms of anthropomorphizing and mentalizing the android robot during interaction. Both sides of the interaction unit thus confer an `incorporated identity' on the android robot Geminoid HI-1 and reveal tendencies to treat the android robot as a social agent.
BibTeX:
@Inproceedings{Straub2010a,
  author          = {Ilona Straub and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Incorporated identity in interaction with a teleoperated android robot: A case study},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2010},
  pages           = {139--144},
  address         = {Viareggio, Italy},
  month           = Sep,
  doi             = {10.1109/ROMAN.2010.5598695},
  url             = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5598695},
  abstract        = {In the near future, artificial social agents, embodied as virtual agents or as robots with humanoid appearance, will be placed in public settings and used as interaction tools. Considering the uncanny-valley effect and images of robots as a threat to humanity, a study of the acceptance and handling of such an interaction tool by the broad public is of great interest. The following study is based on qualitative methods of interaction analysis, focusing on the ways people control or perceive a teleoperated android robot in an open public space. This field study shows the tendency of users to ascribe an identity of its own to the teleoperated android robot Geminoid HI-1, independent of the identity of the controlling person. Both sides of the interaction unit were analyzed for 1) verbal cues about identity presentation on the side of the teleoperator controlling the robot, and 2) verbal cues about identity perception of Geminoid HI-1 on the side of the interlocutor talking to the robot. The study unveils identity creation, identity switching, identity mediation and identity imitation of the teleoperators' own identity cues, as well as the interlocutors' use of metaphorical language showing forms of anthropomorphizing and mentalizing the android robot during interaction. Both sides of the interaction unit thus confer an `incorporated identity' on the android robot Geminoid HI-1 and reveal tendencies to treat the android robot as a social agent.},
  file            = {Straub2010a.pdf:Straub2010a.pdf:PDF},
  issn            = {1944-9445},
  keywords        = {Geminoid HI-1;artificial social agent robot;identity-creation;identity-imitation;identity-mediation;identity-switching;interaction tool analysis;metaphorical language;qualitative methods;teleoperated android robot;virtual agents;human-robot interaction;humanoid robots;telerobotics;},
}
Christian Becker-Asano, Kohei Ogawa, Shuichi Nishio, Hiroshi Ishiguro, "Exploring the uncanny valley with Geminoid HI-1 in a real-world application", In IADIS International Conference on Interfaces and Human Computer Interaction, Freiburg, Germany, pp. 121-128, July, 2010.
Abstract: This paper presents a qualitative analysis of 24 interviews with visitors to the ARS Electronica festival in September 2009 in Linz, Austria, who interacted with the android robot Geminoid HI-1 while it was tele-operated by the first author. Only 37.5% of the interviewed visitors reported an uncanny feeling, and 29% even enjoyed the conversation. In five cases the interviewees' feelings changed during the interaction with Geminoid HI-1. A number of possible improvements regarding Geminoid's bodily movements, facial expressivity, and ability to direct its gaze became apparent, which inform our future research with and development of android robots.
BibTeX:
@Inproceedings{Becker-Asano2010,
  author    = {Christian Becker-Asano and Kohei Ogawa and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Exploring the uncanny valley with Geminoid {HI}-1 in a real-world application},
  booktitle = {{IADIS} International Conference on Interfaces and Human Computer Interaction},
  year      = {2010},
  pages     = {121--128},
  address   = {Freiburg, Germany},
  month     = Jul,
  url       = {http://www.iadisportal.org/digital-library/exploring-the-uncanny-valley-with-geminoid-hi-1-in-a-real-world-application},
  abstract  = {This paper presents a qualitative analysis of 24 interviews with visitors of the ARS Electronica festival in September 2009 in Linz, Austria, who interacted with the android robot Geminoid {HI-1}, while it was tele-operated by the first author. Only 37.5\% of the interviewed visitors reported an uncanny feeling with 29\% even enjoying the conversation. In five cases the interviewees' feelings even changed during the interaction with Geminoid {HI-1}. A number of possible improvements regarding Geminoid's bodily movements, facial expressivity, and ability to direct its gaze became apparent, which inform our future research with and development of android robots.},
  file      = {Becker-Asano2010.pdf:Becker-Asano2010.pdf:PDF},
}
Ilona Straub, Shuichi Nishio, Hiroshi Ishiguro, "Incorporated Identity in Interaction with a Teleoperated Android Robot: A Case Study", In International Conference on Culture and Computing, Kyoto, Japan, pp. 63-75, February, 2010.
Abstract: In the near future, artificial social agents, embodied as virtual agents or as robots with humanoid appearance, will be placed in public settings and used as interaction tools. Considering the uncanny-valley effect and images of robots as a threat to humanity, a study of the acceptance and handling of such an interaction tool by the broad public is of great interest. The following study is based on qualitative methods of interaction analysis, focusing on the ways people control or perceive a teleoperated android robot in an open public space. This field study shows the tendency of users to ascribe an identity of its own to the teleoperated android robot Geminoid HI-1, independent of the identity of the controlling person. Both sides of the interaction unit were analyzed for 1) verbal cues about identity presentation on the side of the teleoperator controlling the robot, and 2) verbal cues about identity perception of Geminoid HI-1 on the side of the interlocutor talking to the robot. The study unveils identity creation, identity switching, identity mediation and identity imitation of the teleoperators' own identity cues, as well as the interlocutors' use of metaphorical language showing forms of anthropomorphizing and mentalizing the android robot during interaction. Both sides of the interaction unit thus confer an `incorporated identity' on the android robot Geminoid HI-1 and reveal tendencies to treat the android robot as a social agent.
BibTeX:
@Inproceedings{Straub2010,
  author    = {Ilona Straub and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Incorporated Identity in Interaction with a Teleoperated Android Robot: A Case Study},
  booktitle = {International Conference on Culture and Computing},
  year      = {2010},
  pages     = {63--75},
  address   = {Kyoto, Japan},
  month     = Feb,
  abstract  = {In the near future, artificial social agents, embodied as virtual agents or as robots with humanoid appearance, will be placed in public settings and used as interaction tools. Considering the uncanny-valley effect and images of robots as a threat to humanity, a study of the acceptance and handling of such an interaction tool by the broad public is of great interest. The following study is based on qualitative methods of interaction analysis, focusing on the ways people control or perceive a teleoperated android robot in an open public space. This field study shows the tendency of users to ascribe an identity of its own to the teleoperated android robot Geminoid HI-1, independent of the identity of the controlling person. Both sides of the interaction unit were analyzed for 1) verbal cues about identity presentation on the side of the teleoperator controlling the robot, and 2) verbal cues about identity perception of Geminoid HI-1 on the side of the interlocutor talking to the robot. The study unveils identity creation, identity switching, identity mediation and identity imitation of the teleoperators' own identity cues, as well as the interlocutors' use of metaphorical language showing forms of anthropomorphizing and mentalizing the android robot during interaction. Both sides of the interaction unit thus confer an `incorporated identity' on the android robot Geminoid HI-1 and reveal tendencies to treat the android robot as a social agent.},
  file      = {Straub2010.pdf:Straub2010.pdf:PDF},
}
Christian Becker-Asano, Hiroshi Ishiguro, "Laughter in Social Robotics - no laughing matter", In International Workshop on Social Intelligence Design, Kyoto, Japan, pp. 287-300, November, 2009.
Abstract: In this paper we describe our work in progress on investigating an understudied aspect of social interaction, namely laughter. In social interaction between humans, laughter occurs in a variety of contexts featuring diverse meanings and connotations. Thus, we started to investigate the usefulness of this auditory and behavioral signal applied to social robotics. We first report on results of two surveys conducted to assess the subjectively evaluated naturalness of different types of laughter applied to two humanoid robots. Then we describe the effects of laughter when combined with an android's motion and presented to uninformed participants during playful interaction with another human. In essence, we learned that the social effect of laughter heavily depends on at least the following three factors: first, the situational context, which is determined not only by the task at hand but also by linguistic content as well as non-verbal expressions; second, the type and quality of laughter synthesis in combination with an artificial laugher's outer appearance; and third, the interaction dynamics, which partly depend on a perceiver's gender, personality, and cultural as well as educational background.
BibTeX:
@Inproceedings{Becker-Asano2009,
  author          = {Christian Becker-Asano and Hiroshi Ishiguro},
  title           = {Laughter in Social Robotics - no laughing matter},
  booktitle       = {International Workshop on Social Intelligence Design},
  year            = {2009},
  pages           = {287--300},
  address         = {Kyoto, Japan},
  month           = Nov,
  url             = {http://www.becker-asano.de/SID09_LaughterInSocialRoboticsCameraReady.pdf},
  abstract        = {In this paper we describe our work in progress on investigating an understudied aspect of social interaction, namely laughter. In social interaction between humans, laughter occurs in a variety of contexts featuring diverse meanings and connotations. Thus, we started to investigate the usefulness of this auditory and behavioral signal applied to social robotics. We first report on results of two surveys conducted to assess the subjectively evaluated naturalness of different types of laughter applied to two humanoid robots. Then we describe the effects of laughter when combined with an android's motion and presented to uninformed participants during playful interaction with another human. In essence, we learned that the social effect of laughter heavily depends on at least the following three factors: first, the situational context, which is determined not only by the task at hand but also by linguistic content as well as non-verbal expressions; second, the type and quality of laughter synthesis in combination with an artificial laugher's outer appearance; and third, the interaction dynamics, which partly depend on a perceiver's gender, personality, and cultural as well as educational background.},
  file            = {Becker-Asano2009.pdf:Becker-Asano2009.pdf:PDF},
  keywords        = {Affective Computing; Natural Interaction; Laughter; Social Robotics.},
}
Kohei Ogawa, Christoph Bartneck, Daisuke Sakamoto, Takayuki Kanda, Tetsuo Ono, Hiroshi Ishiguro, "Can an android persuade you?", In IEEE International Symposium on Robot and Human Interactive Communication, Toyama, Japan, pp. 516-521, September, 2009.
Abstract: The first robotic copies of real humans have become available. They enable their users to be physically present in multiple locations at the same time. This study investigates what influence the embodiment of an agent has on its persuasiveness and its perceived personality. Is a robotic copy as persuasive as its human counterpart? Does it have the same personality? We performed an experiment in which the embodiment of the agent was the independent variable and the persuasiveness and perceived personality were the dependent measurements. The persuasive agent advertised a Bluetooth headset. The results show that the android is as persuasive as a real human or a video recording of a real human. The personality of the participant had a considerable influence on the measurements. Participants who were more open to new experiences rated the persuasive agent lower on agreeableness and extroversion. They were also more willing to spend money on the advertised product.
BibTeX:
@Inproceedings{Ogawa2009,
  author    = {Kohei Ogawa and Christoph Bartneck and Daisuke Sakamoto and Takayuki Kanda and Tetsuo Ono and Hiroshi Ishiguro},
  title     = {Can an android persuade you?},
  booktitle = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year      = {2009},
  pages     = {516--521},
  address   = {Toyama, Japan},
  month     = Sep,
  doi       = {10.1109/ROMAN.2009.5326352},
  url       = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5326352},
  abstract  = {The first robotic copies of real humans have become available. They enable their users to be physically present in multiple locations at the same time. This study investigates what influence the embodiment of an agent has on its persuasiveness and its perceived personality. Is a robotic copy as persuasive as its human counterpart? Does it have the same personality? We performed an experiment in which the embodiment of the agent was the independent variable and the persuasiveness and perceived personality were the dependent measurements. The persuasive agent advertised a Bluetooth headset. The results show that an android is found to be as persuasive as a real human or a video recording of a real human. The personality of the participant had a considerable influence on the measurements. Participants who were more open to new experiences rated the persuasive agent lower on agreeableness and extroversion. They were also more willing to spend money on the advertised product.},
  file      = {Ogawa2009.pdf:Ogawa2009.pdf:PDF},
  issn      = {1944-9445},
  keywords  = {Bluetooth headset;human counterpart;persuasive agent;persuasive android robot;robotic copy;Bluetooth;humanoid robots;},
}
Shuichi Nishio, Hiroshi Ishiguro, Miranda Anderson, Norihiro Hagita, "Expressing individuality through teleoperated android: a case study with children", In IASTED International Conference on Human Computer Interaction, ACTA Press, Innsbruck, Austria, pp. 297-302, March, 2008.
Abstract: When utilizing robots as a communication interface medium, the appearance of the robots and the atmosphere or sense of presence they express will be one of the key issues in their design. Just as each person gives his/her own individual impression when having a conversation with others, it might be effective for robots to hold a suitable sense of individuality in order to communicate effectively with humans. In this paper, we report our investigation on the key elements for representing personal presence, which we define as the sense of being with a certain individual, and eventually implement them into robots. A case study is reported in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. Different responses to the geminoid and the original person are examined, especially concentrating on the case where the target child was the daughter of the geminoid source. Results showed that children gradually became adapted to conversation with the geminoid, but the operator's personal presence was not completely represented. Further research topics on the adaptation process to androids and on the key elements of personal presence are discussed.
BibTeX:
@Inproceedings{Nishio2008,
  author    = {Shuichi Nishio and Hiroshi Ishiguro and Miranda Anderson and Norihiro Hagita},
  title     = {Expressing individuality through teleoperated android: a case study with children},
  booktitle = {{IASTED} International Conference on Human Computer Interaction},
  year      = {2008},
  pages     = {297--302},
  address   = {Innsbruck, Austria},
  month     = Mar,
  publisher = {{ACTA} Press},
  url       = {http://dl.acm.org/citation.cfm?id=1722359.1722414},
  abstract  = {When utilizing robots as a communication interface medium, the appearance of the robots and the atmosphere or sense of presence they express will be one of the key issues in their design. Just as each person gives his/her own individual impression when having a conversation with others, it might be effective for robots to hold a suitable sense of individuality in order to communicate effectively with humans. In this paper, we report our investigation on the key elements for representing personal presence, which we define as the sense of being with a certain individual, and eventually implement them into robots. A case study is reported in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. Different responses to the geminoid and the original person are examined, especially concentrating on the case where the target child was the daughter of the geminoid source. Results showed that children gradually became adapted to conversation with the geminoid, but the operator's personal presence was not completely represented. Further research topics on the adaptation process to androids and on the key elements of personal presence are discussed.},
  file      = {Nishio2008.pdf:Nishio2008.pdf:PDF},
  keywords  = {android; human individuality; human-robot interaction; personal presence},
}
Shuichi Nishio, Hiroshi Ishiguro, Miranda Anderson, Norihiro Hagita, "Representing Personal Presence with a Teleoperated Android: A Case Study with Family", In AAAI Spring Symposium on Emotion, Personality, and Social Behavior, Stanford University, Palo Alto, California, USA, March, 2008.
Abstract: Our purpose is to investigate the key elements for representing personal presence, which we define as the sense of being with a certain individual, and eventually implement them into robots. In this research, a case study is reported in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. Different responses to the geminoid and the original person are examined, especially concentrating on the case where the target child was the daughter of the geminoid source. Results showed that children gradually became adapted to conversation with the geminoid, but the operator's personal presence was not completely represented. Further research topics on the adaptation process to androids and on the key elements of personal presence are discussed.
BibTeX:
@Inproceedings{Nishio2008a,
  author          = {Shuichi Nishio and Hiroshi Ishiguro and Miranda Anderson and Norihiro Hagita},
  title           = {Representing Personal Presence with a Teleoperated Android: A Case Study with Family},
  booktitle       = {{AAAI} Spring Symposium on Emotion, Personality, and Social Behavior},
  year            = {2008},
  address         = {Stanford University, Palo Alto, California, {USA}},
  month           = Mar,
  abstract        = {Our purpose is to investigate the key elements for representing personal presence, which we define as the sense of being with a certain individual, and eventually implement them into robots. In this research, a case study is reported in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. Different responses to the geminoid and the original person are examined, especially concentrating on the case where the target child was the daughter of the geminoid source. Results showed that children gradually became adapted to conversation with the geminoid, but the operator's personal presence was not completely represented. Further research topics on the adaptation process to androids and on the key elements of personal presence are discussed.},
  file            = {Nishio2008a.pdf:Nishio2008a.pdf:PDF},
}
Freerk P. Wilbers, Carlos T. Ishi, Hiroshi Ishiguro, "A Blendshape Model for Mapping Facial Motions to an Android", In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 542-547, October, 2007.
Abstract: An important part of natural, and therefore effective, communication is facial motion. The android Repliee Q2 should therefore display realistic facial motion. In computer graphics animation, such motion is created by mapping human motion to the animated character. This paper proposes a method for mapping human facial motion to the android. This is done using a linear model of the android, based on blendshape models used in computer graphics. The model is derived from motion capture of the android and therefore also models the android's physical limitations. The paper shows that the blendshape method can be successfully applied to the android. Also, it is shown that a linear model is sufficient for representing android facial motion, which means control can be very straightforward. Measurements of the produced motion identify the physical limitations of the android and allow identifying the main areas for improvement of the android design.
BibTeX:
@Inproceedings{Wilbers2007,
  author    = {Freerk P. Wilbers and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {A Blendshape Model for Mapping Facial Motions to an Android},
  booktitle = {{IEEE/RSJ} International Conference on Intelligent Robots and Systems},
  year      = {2007},
  pages     = {542--547},
  month     = Oct,
  doi       = {10.1109/IROS.2007.4399394},
  url       = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4399394},
  abstract  = {An important part of natural, and therefore effective, communication is facial motion. The android Repliee Q2 should therefore display realistic facial motion. In computer graphics animation, such motion is created by mapping human motion to the animated character. This paper proposes a method for mapping human facial motion to the android. This is done using a linear model of the android, based on blendshape models used in computer graphics. The model is derived from motion capture of the android and therefore also models the android's physical limitations. The paper shows that the blendshape method can be successfully applied to the android. Also, it is shown that a linear model is sufficient for representing android facial motion, which means control can be very straightforward. Measurements of the produced motion identify the physical limitations of the android and allow identifying the main areas for improvement of the android design.},
  file      = {Wilbers2007.pdf:Wilbers2007.pdf:PDF},
  keywords  = {Repliee Q2;android;animated character;blendshape model;computer graphics animation;facial motions mapping;computer animation;face recognition;motion compensation;},
}
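The blendshape mapping described above is, at its core, a constrained linear least-squares problem: captured marker positions are approximated as a neutral pose plus a weighted sum of basis shapes, with the weights kept inside the android's actuator range. A minimal sketch of that formulation in Python, with illustrative dimensions and synthetic data (none of the names below come from the paper):

import numpy as np

def fit_blendshape_weights(markers, neutral, basis, w_min=0.0, w_max=1.0):
    # Solve markers ~= neutral + basis @ w in the least-squares sense,
    # then clip w to the (hypothetical) actuator range [w_min, w_max].
    delta = markers - neutral                      # captured marker offsets
    w, *_ = np.linalg.lstsq(basis, delta, rcond=None)
    return np.clip(w, w_min, w_max)

# basis: (3M, K) matrix whose column k holds the marker offsets of blendshape k
rng = np.random.default_rng(0)
basis = rng.normal(size=(30, 5))                   # 10 markers x 3 coords, 5 shapes
neutral = rng.normal(size=30)
true_w = np.array([0.2, 0.8, 0.0, 0.5, 1.0])
markers = neutral + basis @ true_w
print(fit_blendshape_weights(markers, neutral, basis))  # recovers true_w

In the paper, the basis itself is derived from motion capture of the android, so the physical limitations are built into the model; the explicit clipping step above is a simplification.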
Carlos T. Ishi, Judith Haas, Freerk P. Wilbers, Hiroshi Ishiguro, Norihiro Hagita, "Analysis of head motions and speech, and head motion control in an android", In IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, California, USA, pp. 548-553, October, 2007.
Abstract: With the aim of automatically generating head motions during speech utterances, analyses are conducted to verify the relations between head motions and the linguistic and paralinguistic information carried by speech utterances. Motion capture data are recorded during natural dialogue, and the rotation angles are estimated from the head marker data. Analysis results showed that nods frequently occur during speech utterances, not only for expressing specific dialog acts such as agreement and affirmation, but also as indicators of syntactic or semantic units, appearing at the last syllable of phrases at strong phrase boundaries. Analyses are also conducted on how other head motions, like shakes and tilts, depend on linguistic, prosodic and voice quality information, and we discuss their potential use in the automatic generation of head motions. The paper also proposes a method for controlling the head actuators of an android based on the rotation angles, and evaluates the mapping from the human head motions.
BibTeX:
@Inproceedings{Ishi2007,
  author    = {Carlos T. Ishi and Judith Haas and Freerk P. Wilbers and Hiroshi Ishiguro and Norihiro Hagita},
  title     = {Analysis of head motions and speech, and head motion control in an android},
  booktitle = {{IEEE/RSJ} International Conference on Intelligent Robots and Systems},
  year      = {2007},
  pages     = {548--553},
  address   = {San Diego, California, USA},
  month     = Oct,
  doi       = {10.1109/IROS.2007.4399335},
  url       = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4399335},
  abstract  = {With the aim of automatically generating head motions during speech utterances, analyses are conducted to verify the relations between head motions and the linguistic and paralinguistic information carried by speech utterances. Motion capture data are recorded during natural dialogue, and the rotation angles are estimated from the head marker data. Analysis results showed that nods frequently occur during speech utterances, not only for expressing specific dialog acts such as agreement and affirmation, but also as indicators of syntactic or semantic units, appearing at the last syllable of phrases at strong phrase boundaries. Analyses are also conducted on how other head motions, like shakes and tilts, depend on linguistic, prosodic and voice quality information, and we discuss their potential use in the automatic generation of head motions. The paper also proposes a method for controlling the head actuators of an android based on the rotation angles, and evaluates the mapping from the human head motions.},
  file      = {Ishi2007.pdf:Ishi2007.pdf:PDF},
  keywords  = {android;head motion control;natural dialogue;paralinguistic information;phrase boundaries;speech analysis;speech utterances;voice quality information;humanoid robots;motion control;speech synthesis;},
}
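The control method proposed above drives the android's head actuators from estimated rotation angles. A minimal sketch of one plausible mapping, with hypothetical joint limits and a linear normalization; the paper's actual actuator mapping is not reproduced here:

import numpy as np

# Hypothetical joint limits in degrees; the android's real limits are not given here.
LIMITS_DEG = {"yaw": (-45, 45), "pitch": (-30, 30), "roll": (-20, 20)}

def angles_to_commands(yaw, pitch, roll, gain=1.0):
    # Clip each estimated human head angle to the android's joint range,
    # then normalize it to a [0, 1] actuator command.
    cmds = {}
    for name, angle in (("yaw", yaw), ("pitch", pitch), ("roll", roll)):
        lo, hi = LIMITS_DEG[name]
        clipped = float(np.clip(gain * angle, lo, hi))
        cmds[name] = (clipped - lo) / (hi - lo)
    return cmds

print(angles_to_commands(yaw=60.0, pitch=-10.0, roll=5.0))
# {'yaw': 1.0, 'pitch': 0.333..., 'roll': 0.625}  (yaw saturates at the limit)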
Daisuke Sakamoto, Takayuki Kanda, Tetsuo Ono, Hiroshi Ishiguro, Norihiro Hagita, "Android as a telecommunication medium with a human-like presence", In ACM/IEEE International Conference on Human Robot Interaction, Arlington, Virginia, USA, pp. 193-200, March, 2007.
Abstract: In this research, we realize human telepresence by developing a remote-controlled android system called Geminoid HI-1. Experimental results confirm that participants felt stronger presence of the operator when he talked through the android than when he appeared on a video monitor in a video conference system. In addition, participants talked with the robot naturally and evaluated its human likeness as equal to a man on a video monitor. At this paper's conclusion, we will discuss a remote-control system for telepresence that uses a human-like android robot as a new telecommunication medium.
BibTeX:
@Inproceedings{Sakamoto2007,
  author    = {Daisuke Sakamoto and Takayuki Kanda and Tetsuo Ono and Hiroshi Ishiguro and Norihiro Hagita},
  title     = {Android as a telecommunication medium with a human-like presence},
  booktitle = {{ACM/IEEE} International Conference on Human Robot Interaction},
  year      = {2007},
  pages     = {193--200},
  address   = {Arlington, Virginia, {USA}},
  month     = Mar,
  doi       = {10.1145/1228716.1228743},
  url       = {http://doi.acm.org/10.1145/1228716.1228743},
  abstract  = {In this research, we realize human telepresence by developing a remote-controlled android system called Geminoid HI-1. Experimental results confirm that participants felt stronger presence of the operator when he talked through the android than when he appeared on a video monitor in a video conference system. In addition, participants talked with the robot naturally and evaluated its human likeness as equal to a man on a video monitor. At this paper's conclusion, we will discuss a remote-control system for telepresence that uses a human-like android robot as a new telecommunication medium.},
  keywords  = {android science; humanoid robot; telecommunication; telepresence},
  numpages  = {8},
}
Non-Reviewed Conference Papers
Takuto Akiyoshi, Hidenobu Sumioka, Junya Nakanishi, Hirokazu Kato, Masahiro Shiomi, "Modeling of Touch Gestures during Human Hugging Interactions and Implementing on a Huggable Robot", In SIGDIAL 2024 Workshop on Spoken Dialogue Systems for Cybernetic Avatars (SDS4CA), Kyoto University, Kyoto, September, 2024.
Abstract: As a first step toward the realization of huggable cybernetic avatars, this study modeled social touch gestures during hugs, such as patting and stroking, according to the flow of hugging interaction performed by the supporter to provide mental support to the client. Previous studies have developed huggable robots and implemented the ability to perform gestures. On the other hand, there have been few studies on how to perform gestures in response to the flow of hugging interaction. Since social touch at inappropriate times or inappropriate amounts can lead to negative effects, we focused on gestures performed by humans during hugging interaction as a first step toward clarifying the appropriate gestures according to the flow of hugging interaction. In this study, we collected gesture data during hugging interaction performed by participants in the role of supporter using a mannequin in the role of client. The hugging interaction scenarios in the data collection were designed based on the cognitive reconstruction method used to organize thoughts for mental health care. In this hugging interaction, participants asked questions about negative concerns and positive goals, and were organized by the following items: contents, triggers, emotions, acts, thoughts, alternative ideas, and awareness. After the participant asked a question, the participant listened to the mannequin's response, the participant provided an empathic response, and then repeated the process of asking questions about the next item. The hugging interaction was recorded by two video cameras, and the gesture data were recorded by two coders in terms of occurrence, type, area, start timing, and duration of the gesture. Since participants gestured freely during the hug interaction, the number of data obtained from each participant was not consistent and varied from person to person. Therefore, we analyzed the size of the influence of the flow of the dialogue and other gesture parameters...
BibTeX:
@InProceedings{Akiyoshi2024,
  author    = {Takuto Akiyoshi and Hidenobu Sumioka and Junya Nakanishi and Hirokazu Kato and Masahiro Shiomi},
  booktitle = {SIGDIAL 2024 Workshop on Spoken Dialogue Systems for Cybernetic Avatars (SDS4CA)},
  title     = {Modeling of Touch Gestures during Human Hugging Interactions and Implementing on a Huggable Robot},
  year      = {2024},
  address   = {Kyoto University, Kyoto},
  day       = {17-20},
  month     = sep,
  url       = {http://www.sap.ist.i.kyoto-u.ac.jp/seminar/sds4ca/},
  abstract  = {As a first step toward the realization of huggable cybernetic avatars, this study modeled social touch gestures during hugs, such as patting and stroking, according to the flow of hugging interaction performed by the supporter to provide mental support to the client. Previous studies have developed huggable robots and implemented the ability to perform gestures. On the other hand, there have been few studies on how to perform gestures in response to the flow of hugging interaction. Since social touch at inappropriate times or inappropriate amounts can lead to negative effects, we focused on gestures performed by humans during hugging interaction as a first step toward clarifying the appropriate gestures according to the flow of hugging interaction. In this study, we collected gesture data during hugging interaction performed by participants in the role of supporter using a mannequin in the role of client. The hugging interaction scenarios in the data collection were designed based on the cognitive reconstruction method used to organize thoughts for mental health care. In this hugging interaction, participants asked questions about negative concerns and positive goals, and were organized by the following items: contents, triggers, emotions, acts, thoughts, alternative ideas, and awareness. After the participant asked a question, the participant listened to the mannequin's response, the participant provided an empathic response, and then repeated the process of asking questions about the next item. The hugging interaction was recorded by two video cameras, and the gesture data were recorded by two coders in terms of occurrence, type, area, start timing, and duration of the gesture. Since participants gestured freely during the hug interaction, the number of data obtained from each participant was not consistent and varied from person to person. Therefore, we analyzed the size of the influence of the flow of the dialogue and other gesture parameters...},
}
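The coding scheme described in the abstract records each touch gesture by occurrence, type, area, start timing, and duration. A minimal sketch of how such coded events might be represented for analysis; the field names follow the abstract, while the class name and values are invented:

from dataclasses import dataclass

@dataclass
class TouchGesture:
    kind: str          # gesture type, e.g. "pat" or "stroke"
    area: str          # body area touched, e.g. "back", "shoulder"
    start_s: float     # start timing within the interaction, in seconds
    duration_s: float  # how long the touch lasted

events = [
    TouchGesture("pat", "back", start_s=12.4, duration_s=1.1),
    TouchGesture("stroke", "shoulder", start_s=30.0, duration_s=3.5),
]
total_contact = sum(e.duration_s for e in events)
print(f"{len(events)} gestures, {total_contact:.1f} s of contact")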
David Achanccaray, Hidenobu Sumioka, Javier Andreu-Perez, "Neural profile of social robot's operator in teleoperation applications", In The IEEE World Congress on Computational Intelligence (IEEE WCCI 2024) FUZZ-IEEE Workshop; the 1st International Workshop on Computational Intelligence in Human Informatics, Pacifico Yokohama, Kanagawa, June, 2024.
Abstract: Teleoperation conditions can affect the operator’s performance by altering his/her workload and mental state. Decoding the neural profile of the robot's operator might help mitigate these effects and provide assistance through the teleoperation interface. We have developed simulations and real experiments of teleoperated social tasks to evaluate the neural profile of the operator interacting with another individual through a robotic avatar. This presentation will show the findings of our studies.
BibTeX:
@InProceedings{Achanccaray2024b,
  author    = {David Achanccaray and Hidenobu Sumioka and Javier Andreu-Perez},
  booktitle = {The IEEE World Congress on Computational Intelligence (IEEE WCCI 2024) FUZZ-IEEE Workshop; the 1st International Workshop on Computational Intelligence in Human Informatics},
  title     = {Neural profile of social robot's operator in teleoperation applications},
  year      = {2024},
  address   = {Pacifico Yokohama, Kanagawa},
  day       = {30},
  month     = jun,
  url       = {https://csee.essex.ac.uk/research/SmartHealthTech/workshop-cihi/},
  abstract  = {Teleoperation conditions can affect the operator’s performance by altering his/her workload and mental state. Decoding the neural profile of the robot's operator might help mitigate these effects and provide assistance through the teleoperation interface. We have developed simulations and real experiments of teleoperated social tasks to evaluate the neural profile of the operator interacting with another individual through a robotic avatar. This presentation will show the findings of our studies.},
}
東中竜一郎, 高橋哲朗, 稲葉通将, 斉志揚, 佐々木裕多, 船越孝太郎, 守屋彰二, 佐藤志貴, 港隆史, 境くりま, 船山智, 小室允人, 西川寛之, 牧野遼作, 菊池浩史, 宇佐美まゆみ, "Dialogue System Live Competition Goes Multimodal: Analyzing the Effects of Multimodal Information in Situated Dialogue Systems", In The 14th International Workshop on Spoken Dialogue Systems Technology (IWSDS2024), Sapporo, Hokkaido, pp. 1-15, March, 2024.
Abstract: The Dialogue System Live Competition series is an annual event in Japan that showcases the challenges and limitations inherent in human-computer dialogue within a live event context. Traditionally focused on text-based dialogue systems, the competition last year transitioned to encompass multimodal dialogue systems. This paper presents findings from the preliminary round of the most recent event, Dialogue System Live Competition 6, which featured situated multimodal dialogue systems. In the preliminary round, eight systems from participating teams competed alongside three baseline systems. This paper details the performance of these systems and analyzes the effect of multimodal information, as demonstrated by these systems, on subjective ratings. We also briefly touch on the results of the final round.
BibTeX:
@InProceedings{東中竜一郎2024,
  author    = {東中竜一郎 and 高橋哲朗 and 稲葉通将 and 斉志揚 and 佐々木裕多 and 船越孝太郎 and 守屋彰二 and 佐藤志貴 and 港隆史 and 境くりま and 船山智 and 小室允人 and 西川寛之 and 牧野遼作 and 菊池浩史 and 宇佐美まゆみ},
  booktitle = {The 14th International Workshop on Spoken Dialogue Systems Technology (IWSDS2024)},
  title     = {Dialogue System Live Competition Goes Multimodal: Analyzing the Effects of Multimodal Information in Situated Dialogue Systems},
  year      = {2024},
  address   = {Sapporo, Hokkaido},
  day       = {4-6},
  etitle    = {Dialogue System Live Competition Goes Multimodal: Analyzing the Effects of Multimodal Information in Situated Dialogue Systems},
  month     = mar,
  pages     = {1-15},
  url       = {https://sites.google.com/grp.riken.jp/iwsds2024},
  abstract  = {The Dialogue System Live Competition series is an annual event in Japan that showcases the challenges and limitations inherent in human-computer dialogue within a live event context. Traditionally focused on text-based dialogue systems, the competition last year transitioned to encompass multimodal dialogue systems. This paper presents findings from the preliminary round of the most recent event, Dialogue System Live Competition 6, which featured situated multimodal dialogue systems. In the preliminary round, eight systems from participating teams competed alongside three baseline systems. This paper details the performance of these systems and analyzes the effect of multimodal information, as demonstrated by these systems, on subjective ratings. We also briefly touch on the results of the final round.},
}
Hiroshi Ishiguro, "Symbiotic Society with Avatars : Social Acceptance, Ethics, and Technologies (SSA)", In 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2022), Naples, Italy (hybrid), August, 2022.
Abstract: Part of Morning Workshop (SALA ARAGONESE), hybrid. This workshop aims to provide an opportunity for researchers in communication robots, avatars, psychology, ethics, and law to come together and discuss the relevant issues in order to realize a symbiotic society with avatars.
BibTeX:
@InProceedings{Ishiguro2022b,
  author    = {Hiroshi Ishiguro},
  booktitle = {31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2022)},
  title     = {Symbiotic Society with Avatars : Social Acceptance, Ethics, and Technologies (SSA)},
  year      = {2022},
  address   = {Naples, Italy (hybrid)},
  day       = {29-02},
  month     = aug,
  url       = {http://www.smile.unina.it/ro-man2022/2-september-2022/},
  abstract  = {Part of Morning Workshop (SALA ARAGONESE), hybrid. This workshop aims to provide an opportunity for researchers in communication robots, avatars, psychology, ethics, and law to come together and discuss the relevant issues in order to realize a symbiotic society with avatars.},
}
Hiroshi Ishiguro, "Realisation of the Avatar Symbiotic Society: The Concept and Technologies", In ROBOPHILOSOPHY CONFERENCE 2022, WORKSHOP 3: ELSI of the Avatar Symbiotic Society, University of Helsinki, Finland (online), August, 2022.
Abstract: Part of WORKSHOP 3: ELSI of the Avatar Symbiotic Society The author has long been engaged in research and development on robots that act as human surrogates. Moreover, the author has been addressing the issues of how to give robots a sense of presence, how to make them look and feel alive, how to enrich human-robot interaction, and how to design a society where humans and robots coexist. Recently, based on this research and development, the author is leading a project to realize the Avatar Symbiotic Society in which one can easily manipulate multiple avatars as one wishes and participate in various social activities through them. In this presentation, the author will introduce some of the technologies being developed in this research and introduce the concept of an avatar symbiotic society.
BibTeX:
@InProceedings{Ishiguro2022a,
  author    = {Hiroshi Ishiguro},
  booktitle = {ROBOPHILOSOPHY CONFERENCE 2022, WORKSHOP 3: ELSI of the Avatar Symbiotic Society},
  title     = {Realisation of the Avatar Symbiotic Society: The Concept and Technologies},
  year      = {2022},
  address   = {University of Helsinki, Finland (online)},
  day       = {16-19},
  month     = aug,
  url       = {https://cas.au.dk/robophilosophy/conferences/rpc2022/program/workshop-3-elsi-of-the-avatar-symbiotic-society},
  abstract  = {Part of WORKSHOP 3: ELSI of the Avatar Symbiotic Society
The author has long been engaged in research and development on robots that act as human surrogates. Moreover, the author has been addressing the issues of how to give robots a sense of presence, how to make them look and feel alive, how to enrich human-robot interaction, and how to design a society where humans and robots coexist. Recently, based on this research and development, the author is leading a project to realize the Avatar Symbiotic Society in which one can easily manipulate multiple avatars as one wishes and participate in various social activities through them. In this presentation, the author will introduce some of the technologies being developed in this research and introduce the concept of an avatar symbiotic society.},
}
李歆玥, 石井カルロス寿憲, 林良子, "日本語自然会話におけるフィラーの音響分析 -日本語母語話者および中国語を母語とする日本語学習者を対象に-", In 2022年3月日本音響学会音声コミュニケーション研究会, vol. 2, no. 2, online, pp. 27-30, March, 2022.
Abstract: The present study documents (1) how Japanese native speakers and L1-Chinese learners of L2 Japanese differ in the production of filled pauses during spontaneous conversations, and (2) how the vowels of filled pauses and ordinary lexical items differ in spontaneous conversation. Prosodic and voice quality measurements were extracted and the results of acoustic analyses indicated that there are significant differences in prosodic and voice quality measurements including duration, F0mean, intensity, spectral tilt-related indices, jitter and shimmer, (1) between Japanese native speakers and Chinese learners of L2 Japanese, as well as (2) between filled pauses and ordinary lexical items. Furthermore, results of random forest classification analysis indicate that duration and intensity play the most significant role, while voice quality related features make a secondary contribution to the classification.
BibTeX:
@InProceedings{Li2022,
  author    = {李歆玥 and 石井カルロス寿憲 and 林良子},
  booktitle = {2022年3月日本音響学会音声コミュニケーション研究会},
  title     = {日本語自然会話におけるフィラーの音響分析 -日本語母語話者および中国語を母語とする日本語学習者を対象に-},
  year      = {2022},
  address   = {online},
  day       = {21},
  etitle    = {Prosodic and Voice Quality Analyses of Filled Pauses in Japanese Spontaneous Conversation -Japanese Native Speakers and L1-Chinese learners of L2 Japanese-},
  month     = mar,
  number    = {2},
  pages     = {27-30},
  url       = {https://asj-sccom.acoustics.jp/},
  volume    = {2},
  abstract  = {The present study documents (1) how Japanese native speakers and L1-Chinese learners of L2 Japanese differ in the production of filled pauses during spontaneous conversations, and (2) how the vowels of filled pauses and ordinary lexical items differ in spontaneous conversation. Prosodic and voice quality measurements were extracted and the results of acoustic analyses indicated that there are significant differences in prosodic and voice quality measurements including duration, F0mean, intensity, spectral tilt-related indices, jitter and shimmer, (1) between Japanese native speakers and Chinese learners of L2 Japanese, as well as (2) between filled pauses and ordinary lexical items. Furthermore, results of random forest classification analysis indicate that duration and intensity play the most significant role, while voice quality related features make a secondary contribution to the classification.},
  keywords  = {Spontaneous conversation, Second language acquisition, Random Forest, Disfluency},
}
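The random forest analysis described above ranks acoustic features by their contribution to classifying filled pauses against ordinary lexical items. A minimal scikit-learn sketch of such a pipeline, using the feature set named in the abstract and synthetic data in place of the real measurements:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

FEATURES = ["duration", "F0mean", "intensity", "spectral_tilt", "jitter", "shimmer"]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(FEATURES)))  # stand-in for real acoustic measurements
y = rng.integers(0, 2, size=200)           # 0 = lexical item, 1 = filled pause

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# Rank features by their learned importance, as in the paper's analysis.
for name, importance in sorted(zip(FEATURES, clf.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name:14s} {importance:.3f}")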
Yoji Kohda, Nobuo Yamato, Hidenobu Sumioka, "Role of Artificial Intelligence (AI) to Provide Quality Public Health Services", In International Conference On Sustainable Development : Opportunities And Challenges, American International University Bangladesh, Bangladesh (online), January, 2022.
Abstract: In this talk, I would like to talk about the role of AI in general from a knowledge science perspective, using the health care sector as an example. I discuss the role of AI to answer two questions: "Can doctors learn from AI?" and "Will patients listen to AI?".
BibTeX:
@InProceedings{Kohda2022,
  author    = {Yoji Kohda and Nobuo Yamato and Hidenobu Sumioka},
  booktitle = {International Conference On Sustainable Development : Opportunities And Challenges},
  title     = {Role of Artificial Intelligence (AI) to Provide Quality Public Health Services},
  year      = {2022},
  address   = {American International University Bangladesh, Bangladesh (online)},
  day       = {12-13},
  month     = jan,
  url       = {https://aicss.aiub.edu/},
  abstract  = {In this talk, I would like to talk about the role of AI in general from a knowledge science perspective, using the health care sector as an example. I discuss the role of AI to answer two questions: "Can doctors learn from AI?" and "Will patients listen to AI?".},
}
石井カルロス寿憲, "3者対話における視線の理由と視線逸らしの分析", In 日本音響学会2021年秋季研究発表会, no. 3-3-15, online, pp. 1281-1282, September, 2021.
Abstract: Using a three-party dialogue database, we investigated the reasons for gaze directed at the speaker's face during utterances and for gaze aversion directed away from the speaker's face. For gaze aversion, we also analyzed the distribution of iris movements.
BibTeX:
@InProceedings{石井カルロス寿憲2021_,
  author    = {石井カルロス寿憲},
  booktitle = {日本音響学会2021年秋季研究発表会},
  title     = {3者対話における視線の理由と視線逸らしの分析},
  year      = {2021},
  address   = {online},
  day       = {7-9},
  month     = sep,
  number    = {3-3-15},
  pages     = {1281-1282},
  url       = {https://acoustics.jp/annualmeeting/},
  abstract  = {Using a three-party dialogue database, we investigated the reasons for gaze directed at the speaker's face during utterances and for gaze aversion directed away from the speaker's face. For gaze aversion, we also analyzed the distribution of iris movements.},
}
内田貴久, 港隆史, 石黒浩, "Autonomous Robots for Daily Dialogue Based on Preference and Experience Models", In The 3rd International Symposium on Symbiotic Intelligent Systems: "A New Era towards Responsible Robotics and Innovation" (3rd SISReC Symposium), online, November, 2020.
Abstract: This study develops robots that people want to engage in daily dialogue with. In this study, we hypothesize that “a dialogue robot that tries to understand human relationships improves both its human-likeness and the user’s motivation to talk with it.” In this presentation, we first explain a dialogue robot that estimates others’ preference models from its own preference model. Next, we propose a dialogue robot based on the similarity of personal preference models. Finally, we propose a dialogue robot based on the similarity of personal experience models. The experimental results of the three studies support the hypothesis. Future work needs to develop a human relationship model that considers cultural differences and types of desires.
BibTeX:
@InProceedings{内田貴久2020,
  author    = {内田貴久 and 港隆史 and 石黒浩},
  booktitle = {The 3rd International Symposium on Symbiotic Intelligent Systems: "A New Era towards Responsible Robotics and Innovation" (3rd SISReC Symposium)},
  title     = {Autonomous Robots for Daily Dialogue Based on Preference and Experience Models},
  year      = {2020},
  address   = {online},
  day       = {19-20},
  month     = nov,
  url       = {https://sisrec.otri.osaka-u.ac.jp/the-3rd-international-symposium-on-symbiotic-intelligent-systems/},
  abstract  = {This study develops robots that people want to engage in daily dialogue with. In this study, we hypothesize that “a dialogue robot that tries to understand human relationships improves both its human-likeness and the user’s motivation to talk with it.” In this presentation, we first explain a dialogue robot that estimates others’ preference models from its own preference model. Next, we propose a dialogue robot based on the similarity of personal preference models. Finally, we propose a dialogue robot based on the similarity of personal experience models. The experimental results of the three studies support the hypothesis. Future work needs to develop a human relationship model that considers cultural differences and types of desires.},
}
Bowen Wu, Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, "Improving Conditional-GAN using Unrolled-GAN for the Generation of Co-speech Upper Body Gesture", In 第57回人工知能学会 AI チャレンジ研究会, no. 057-15, online, pp. 92-99, November, 2020.
Abstract: Co-speech gesture is a crucial non-verbal modality for humans to express ideas. Social agents also need such a capability to be more human-like and comprehensive. This work aims to model the distribution of gesture conditioned on human speech features for generation, instead of finding an injective mapping function from speech to gesture. We propose a novel conditional GAN-based generative model to not only realize the conversion from speech to gesture but also to approximate the distribution of gesture conditioned on speech through parameterization. Objective evaluations show that the proposed model outperforms the existing deterministic model in terms of distribution, indicating that generative models can approximate the real patterns of co-speech gestures more closely than the existing deterministic model. Our result suggests that it is critical to consider the nature of randomness when modeling co-speech gestures.
BibTeX:
@InProceedings{Wu2020,
  author    = {Bowen Wu and Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro},
  booktitle = {第57回人工知能学会 AI チャレンジ研究会},
  title     = {Improving Conditional-GAN using Unrolled-GAN for the Generation of Co-speech Upper Body Gesture},
  year      = {2020},
  address   = {online},
  day       = {20-21},
  month     = nov,
  number    = {057-15},
  pages     = {92-99},
  url       = {http://www.osaka-kyoiku.ac.jp/~challeng/SIG-Challenge-057/program.html},
  abstract  = {Co-speech gesture is a crucial non-verbal modality for humans to express ideas. Social agents also need such a capability to be more human-like and comprehensive. This work aims to model the distribution of gesture conditioned on human speech features for generation, instead of finding an injective mapping function from speech to gesture. We propose a novel conditional GAN-based generative model to not only realize the conversion from speech to gesture but also to approximate the distribution of gesture conditioned on speech through parameterization. Objective evaluations show that the proposed model outperforms the existing deterministic model in terms of distribution, indicating that generative models can approximate the real patterns of co-speech gestures more closely than the existing deterministic model. Our result suggests that it is critical to consider the nature of randomness when modeling co-speech gestures.},
}
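The model described above combines a conditional GAN (a generator conditioned on speech features plus noise, making the speech-to-gesture mapping one-to-many) with unrolled-GAN training, where the generator is optimized against a discriminator that has taken a few look-ahead update steps. A minimal PyTorch sketch of that idea, with illustrative dimensions and a first-order simplification that does not backpropagate through the inner discriminator updates (the full unrolled-GAN objective does):

import copy
import torch
import torch.nn as nn

SPEECH_DIM, NOISE_DIM, GESTURE_DIM = 32, 16, 24

G = nn.Sequential(nn.Linear(SPEECH_DIM + NOISE_DIM, 64), nn.ReLU(),
                  nn.Linear(64, GESTURE_DIM))
D = nn.Sequential(nn.Linear(SPEECH_DIM + GESTURE_DIM, 64), nn.ReLU(),
                  nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def d_loss(disc, speech, real, fake):
    # Conditional discriminator: scores (speech, gesture) pairs.
    real_logit = disc(torch.cat([speech, real], dim=1))
    fake_logit = disc(torch.cat([speech, fake], dim=1))
    return (bce(real_logit, torch.ones_like(real_logit)) +
            bce(fake_logit, torch.zeros_like(fake_logit)))

speech = torch.randn(8, SPEECH_DIM)   # stand-in speech features
real = torch.randn(8, GESTURE_DIM)    # stand-in motion-capture gestures
noise = torch.randn(8, NOISE_DIM)     # noise input makes the mapping one-to-many
fake = G(torch.cat([speech, noise], dim=1))

# Unrolling: update a *copy* of D for k look-ahead steps, then optimize G
# against the updated copy so G anticipates the discriminator's response.
D_unrolled = copy.deepcopy(D)
opt_u = torch.optim.SGD(D_unrolled.parameters(), lr=1e-3)
for _ in range(3):  # k = 3 unrolled steps
    opt_u.zero_grad()
    d_loss(D_unrolled, speech, real, fake.detach()).backward()
    opt_u.step()

g_logit = D_unrolled(torch.cat([speech, fake], dim=1))
g_loss = bce(g_logit, torch.ones_like(g_logit))
g_loss.backward()  # gradients reach G through `fake`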
Hidenobu Sumioka, "A minimal design for intimate touch interaction toward interactive doll therapy", In Workshop on Socialware in human-robot collaboration and physical interaction (in the international conference on robot and human interactive communication), Online workshop (zoom), September, 2020.
BibTeX:
@Inproceedings{Sumioka2020a,
  author    = {Hidenobu Sumioka},
  title     = {A minimal design for intimate touch interaction toward interactive doll therapy},
  booktitle = {Workshop on Socialware in human-robot collaboration and physical interaction (in the international conference on robot and human interactive communication)},
  year      = {2020},
  address   = {Online workshop (zoom)},
  month     = sep,
  day       = {1},
  url       = {https://dil.atr.jp/crest2018_STI/socialware-in-roman2020/page.html},
}
Soheil Keshmiri, "Higher Specificity of Multiscale Entropy than Permutation Entropy in Quantification of the Brain Activity in Response to Naturalistic Stimuli: a Comparative Study", In The 1st International Symposium on Human InformatiX: X-Dimensional Human Informatics and Biology, ATR, Kyoto, February, 2020.
Abstract: I provide results on the comparative analyses of these measures with the entropy of EEG recordings of human subjects who watched short movie clips that elicited negative, neutral, and positive affect. The analysis results identified significant anti-correlations between all MSE scales and the entropy of these EEG recordings that were stronger in the negative than the positive and the neutral states. They also showed that MSE significantly differentiated between the brain responses to these affective states. On the other hand, these results indicated that PE failed to identify such significant correlations and differences between the negative, neutral, and positive affective states. These results provide insights on the level of association between the entropy, the MSE, and the PE of the brain variability in response to naturalistic stimuli, thereby enabling researchers to draw more informed conclusions on quantification of the brain variability by these measures.
BibTeX:
@InProceedings{Keshmiri2020a,
  author    = {Soheil Keshmiri},
  booktitle = {The 1st International Symposium on Human InformatiX: X-Dimensional Human Informatics and Biology},
  title     = {Higher Specificity of Multiscale Entropy than Permutation Entropy in Quantification of the Brain Activity in Response to Naturalistic Stimuli: a Comparative Study},
  year      = {2020},
  address   = {ATR, Kyoto},
  day       = {27-28},
  month     = feb,
  abstract  = {I provide results on the comparative analyses of these measures with the entropy of EEG recordings of human subjects who watched short movie clips that elicited negative, neutral, and positive affect. The analysis results identified significant anti-correlations between all MSE scales and the entropy of these EEG recordings that were stronger in the negative than the positive and the neutral states. They also showed that MSE significantly differentiated between the brain responses to these affective states. On the other hand, these results indicated that PE failed to identify such significant correlations and differences between the negative, neutral, and positive affective states. These results provide insights on the level of association between the entropy, the MSE, and the PE of the brain variability in response to naturalistic stimuli, thereby enabling researchers to draw more informed conclusions on quantification of the brain variability by these measures.},
}
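Multiscale entropy (MSE), the measure compared above, is sample entropy computed on progressively coarse-grained copies of a signal. A minimal sketch of the textbook formulation; for simplicity the tolerance here is taken relative to each coarse-grained series, whereas the standard formulation (Costa et al.) fixes it to the original series' standard deviation:

import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # SampEn = -log(A/B), where B counts template pairs matching at length m
    # and A counts pairs still matching at length m+1 (Chebyshev distance).
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def match_pairs(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        dist = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        return ((dist <= tol).sum() - len(templates)) / 2  # drop self-matches
    b, a = match_pairs(m), match_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3, 4, 5)):
    out = []
    for s in scales:
        n = len(x) // s
        coarse = np.asarray(x[:n * s]).reshape(n, s).mean(axis=1)  # coarse-grain
        out.append(sample_entropy(coarse))
    return out

rng = np.random.default_rng(0)
print(multiscale_entropy(rng.normal(size=500)))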
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Hiroshi Ishiguro, "Decoding the Perceived Difficulty of Communicated Contents by Older People: Toward Conversational Robot-Assistive Elderly Care", In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), The Venetian Macau, China, November, 2019.
Abstract: In this study, we propose a semi-supervised learning model for decoding of the perceived difficulty of communicated content by older people. Our model is based on mapping of the older people’s prefrontal cortex (PFC) activity during their verbal communication onto fine-grained cluster spaces of a working memory (WM) task that induces loads on the human PFC through modulation of its difficulty level. This allows for differential quantification of the observed changes in pattern of PFC activation during verbal communication with respect to the difficulty level of the WM task. We show that such a quantification establishes a reliable basis for categorization and subsequently learning of the PFC responses to more naturalistic contents such as story comprehension. Our contribution is to present evidence of the effectiveness of our method for estimating the older people’s perceived difficulty of the communicated contents during an online storytelling scenario.
BibTeX:
@InProceedings{Keshmiri2019a,
  author    = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Hiroshi Ishiguro},
  booktitle = {2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019)},
  title     = {Decoding the Perceived Difficulty of Communicated Contents by Older People: Toward Conversational Robot-Assistive Elderly Care},
  year      = {2019},
  address   = {The Venetian Macau, China},
  day       = {3-8},
  month     = nov,
  url       = {https://www.iros2019.org/},
  abstract  = {In this study, we propose a semi-supervised learning model for decoding of the perceived difficulty of communicated content by older people. Our model is based on mapping of the older people’s prefrontal cortex (PFC) activity during their verbal communication onto fine-grained cluster spaces of a working memory (WM) task that induces loads on the human PFC through modulation of its difficulty level. This allows for differential quantification of the observed changes in pattern of PFC activation during verbal communication with respect to the difficulty level of the WM task. We show that such a quantification establishes a reliable basis for categorization and subsequently learning of the PFC responses to more naturalistic contents such as story comprehension. Our contribution is to present evidence of the effectiveness of our method for estimating the older people’s perceived difficulty of the communicated contents during an online storytelling scenario.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
}
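The mapping idea described in the abstract can be pictured as follows: cluster PFC features recorded under a working-memory task whose difficulty is known, then label conversation-time PFC samples by their nearest cluster. A minimal sketch under that reading, with synthetic data and illustrative dimensions (not the authors' model):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
wm_features = rng.normal(size=(300, 10))      # PFC activity under the WM task
wm_difficulty = rng.integers(0, 3, size=300)  # known load level per sample (0-2)

km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(wm_features)
# Label each cluster with the majority difficulty level of its WM samples.
cluster_label = np.array([
    np.bincount(wm_difficulty[km.labels_ == c]).argmax() for c in range(6)
])

conv_features = rng.normal(size=(20, 10))     # PFC activity during conversation
perceived = cluster_label[km.predict(conv_features)]
print(perceived)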
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Hiroshi Ishiguro, "Older People Prefrontal Cortex Activation Estimates Their Perceived Difficulty of a Humanoid-Mediated Conversation", In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), The Venetian Macau, China, November, 2019.
Abstract: In this article, we extend our recent results on prediction of the older people’s perceived difficulty of verbal communication during a humanoid-mediated storytelling experiment to the case of a longitudinal conversation that was conducted over a four-week period and included a battery of conversational topics. For this purpose, we used our model that estimates the older people’s perceived difficulty by mapping their prefrontal cortex (PFC) activity during the verbal communication onto fine-grained cluster spaces of a working memory (WM) task that induces loads on the human PFC through modulation of its difficulty level. This enables us to differentially quantify the observed changes in PFC activity during the conversation based on the difficulty level of the WM task. We show that such a quantification forms a reliable basis for learning the PFC activation patterns in response to conversational contents. Our results indicate the ability of our model to predict the older people’s perceived difficulty of a wide range of humanoid-mediated tele-conversations, regardless of their type, topic, and duration.
BibTeX:
@InProceedings{Keshmiri2019b,
  author    = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Hiroshi Ishiguro},
  booktitle = {2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019)},
  title     = {Older People Prefrontal Cortex Activation Estimates Their Perceived Difficulty of a Humanoid-Mediated Conversation},
  year      = {2019},
  address   = {The Venetian Macau, China},
  day       = {3-8},
  month     = nov,
  url       = {https://www.iros2019.org/},
  abstract  = {In this article, we extend our recent results on prediction of the older people’s perceived difficulty of verbal communication during a humanoid-mediated storytelling experiment to the case of a longitudinal conversation that was conducted over a four-week period and included a battery of conversational topics. For this purpose, we used our model that estimates the older people’s perceived difficulty by mapping their prefrontal cortex (PFC) activity during the verbal communication onto fine-grained cluster spaces of a working memory (WM) task that induces loads on the human PFC through modulation of its difficulty level. This enables us to differentially quantify the observed changes in PFC activity during the conversation based on the difficulty level of the WM task. We show that such a quantification forms a reliable basis for learning the PFC activation patterns in response to conversational contents. Our results indicate the ability of our model to predict the older people’s perceived difficulty of a wide range of humanoid-mediated tele-conversations, regardless of their type, topic, and duration.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
}
劉超然, 石井カルロス寿憲, "マイクロフォンアレイおよびデプスセンサーのオンラインキャリブレーションに関する考察", In 第55回人工知能学会 AI チャレンジ研究会, Keio University Yagami Campus, Kanagawa, pp. 18-23, November, 2019.
Abstract: RGB-D sensors and microphone arrays are widely used to provide an instantaneous representation of the current visual and auditory environment. Sensor poses are needed for sharing and combining sensing results. However, manual calibration of different types of sensors is tedious and time-consuming. In this paper, we propose an online calibration framework that can estimate sensors' 3D poses and works with RGB-D sensors and microphone arrays. In the proposed framework, the calibration problem is described as a factor graph inference problem and solved with a Graph Neural Network (GNN). Instead of the frequently used visual markers, we use multiple moving people as reference objects to achieve automatic calibration.
BibTeX:
@InProceedings{劉超然2019b,
  author    = {劉超然 and 石井カルロス寿憲},
  booktitle = {第55回人工知能学会 AI チャレンジ研究会},
  title     = {マイクロフォンアレイおよびデプスセンサーのオンラインキャリブレーションに関する考察},
  year      = {2019},
  address   = {Keio University Yagami Campus, Kanagawa},
  day       = {22},
  etitle    = {Online calibration of microphone array and depth sensors},
  month     = nov,
  pages     = {18-23},
  url       = {http://www.osaka-kyoiku.ac.jp/~challeng/SIG-Challenge-055/},
  abstract  = {RGB-D sensors and microphone arrays are widely used to provide an instantaneous representation of the current visual and auditory environment. Sensor poses are needed for sharing and combining sensing results. However, manual calibration of different types of sensors is tedious and time-consuming. In this paper, we propose an online calibration framework that can estimate sensors' 3D poses and works with RGB-D sensors and microphone arrays. In the proposed framework, the calibration problem is described as a factor graph inference problem and solved with a Graph Neural Network (GNN). Instead of the frequently used visual markers, we use multiple moving people as reference objects to achieve automatic calibration.},
}
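The paper casts the calibration as factor-graph inference solved with a graph neural network. The geometric core, however, is recovering a relative sensor pose from matched observations of the same moving people. A minimal sketch of that step as a rigid (Kabsch) alignment of matched 3D tracks, with synthetic data standing in for real detections:

import numpy as np

def kabsch(p, q):
    # Find the rotation R and translation t minimizing ||R @ p + t - q||
    # over matched 3D point sets p, q of shape (3, N).
    pc, qc = p.mean(axis=1, keepdims=True), q.mean(axis=1, keepdims=True)
    H = (p - pc) @ (q - qc).T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, (qc - R @ pc).ravel()

rng = np.random.default_rng(0)
tracks_a = rng.uniform(-2, 2, size=(3, 50))  # people tracked in sensor A's frame
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 0.2])
tracks_b = R_true @ tracks_a + t_true[:, None]  # same people in sensor B's frame

R_est, t_est = kabsch(tracks_a, tracks_b)
print(np.allclose(R_est, R_true), np.round(t_est, 3))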
Xiqian Zheng, Masahiro Shiomi, Takashi Minato, Hiroshi Ishiguro, "What Kinds of Robot's Touch Will Match Expressed Emotions?", In The 2019 IEEE-RAS International Conference on Humanoid Robots, Toronto, Canada, pp. 755-762, October, 2019.
Abstract: This study investigated the effects of touch characteristics that change the strengths and the naturalness of the emotions perceived by people in human-robot touch interaction with an android robot that has a feminine, human-like appearance. Past studies on human-robot touch interaction mainly focused on understanding what kinds of human touches conveyed emotion to robots; the robot's own touch characteristics that can affect people's perceived emotions received less focus. In this study, we focused on three kinds of touch characteristics (length, type, and part) based on arousal/valence perspectives, and their effects on the perceived strength/naturalness of a commonly used emotion in human-robot interaction (i.e., happy) and its counterpart emotion (i.e., sad), based on Ekman's definitions. Our results showed that the touch length and its type are useful to change the perceived strengths and the naturalness of the expressed emotions based on the arousal/valence perspective, although the touch part did not fit such perspective assumptions. Finally, our results suggested that a brief pat and a longer touch by the fingers are better combinations to express happy and sad emotions with our robot.
BibTeX:
@InProceedings{Zheng2019,
  author    = {Xiqian Zheng and Masahiro Shiomi and Takashi Minato and Hiroshi Ishiguro},
  booktitle = {The 2019 IEEE-RAS International Conference on Humanoid Robots},
  title     = {What Kinds of Robot's Touch Will Match Expressed Emotions?},
  year      = {2019},
  address   = {Toronto, Canada},
  day       = {15-17},
  month     = oct,
  pages     = {755-762},
  url       = {http://humanoids2019.loria.fr/},
  abstract  = {This study investigated the effects of touch characteristics that change the strengths and the naturalness of the emotions perceived by people in human-robot touch interaction with an android robot that has a feminine, human-like appearance. Past studies on human-robot touch interaction mainly focused on understanding what kinds of human touches conveyed emotion to robots; the robot's own touch characteristics that can affect people's perceived emotions received less focus. In this study, we focused on three kinds of touch characteristics (length, type, and part) based on arousal/valence perspectives, and their effects on the perceived strength/naturalness of a commonly used emotion in human-robot interaction (i.e., happy) and its counterpart emotion (i.e., sad), based on Ekman's definitions. Our results showed that the touch length and its type are useful to change the perceived strengths and the naturalness of the expressed emotions based on the arousal/valence perspective, although the touch part did not fit such perspective assumptions. Finally, our results suggested that a brief pat and a longer touch by the fingers are better combinations to express happy and sad emotions with our robot.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
}
Hidenobu Sumioka, Soheil Keshmiri, Masahiro Shiomi, "The influence of virtual Hug in human-human interaction", In Workshop on Socialware in human-robot interaction for symbiotic society in 7th annual International Conference on Human-Agent Interaction (HAI 2019), Kyoto Institute of Technology, Kyoto, October, 2019.
Abstract: In this presentation, we will talk about what is required to achieve social touch between a human and a robot.
BibTeX:
@InProceedings{Sumioka2019c,
  author    = {Hidenobu Sumioka and Soheil Keshmiri and Masahiro Shiomi},
  booktitle = {Workshop on Socialware in human-robot interaction for symbiotic society in 7th annual International Conference on Human-Agent Interaction (HAI 2019)},
  title     = {The influence of virtual Hug in human-human interaction},
  year      = {2019},
  address   = {Kyoto Institute of Technology, Kyoto},
  month     = oct,
  url       = {http://hai-conference.net/hai2019/},
  abstract  = {In this presentation, we will talk about what is required to achieve social touch between a human and a robot.},
}
Soheil Keshmiri, "HRI and the Aging Society: Recent Findings on the Utility of Embodied Media for Stimulating the Brain Functioning", In Workshop on Socialware in human-robot interaction for symbiotic society in 7th annual International Conference on Human-Agent Interaction (HAI 2019), Kyoto Institute of Technology, Kyoto, pp. 1-28, October, 2019.
Abstract: Physical embodiment of the media plays a crucial role in generating detectable brain responses to conversational interaction. Entropic measures appear to be reliable mathematical tools for the quantification of such brain responses.
BibTeX:
@InProceedings{Keshmiri2019k,
  author    = {Soheil Keshmiri},
  booktitle = {Workshop on Socialware in human-robot interaction for symbiotic society in 7th annual International Conference on Human-Agent Interaction (HAI 2019)},
  title     = {HRI and the Aging Society: Recent Findings on the Utility of Embodied Media for Stimulating the Brain Functioning},
  year      = {2019},
  address   = {Kyoto Institute of Technology, Kyoto},
  day       = {6},
  month     = oct,
  pages     = {1-28},
  url       = {http://hai-conference.net/hai2019/},
  abstract  = {Physical embodiment of the media plays a crucial role in generating detectable brain responses to conversational interaction. Entropic measures appear to be reliable mathematical tools for the quantification of such brain responses.},
}
Hidenobu Sumioka, "Mediated Social Touch to Build Human Intimate Relationship", In Emotional Attachment to Machines: New Ways of Relationship-Building in Japan, Freie Universität, Germany, October, 2019.
Abstract: Interpersonal touch is a fundamental component of emotional attachment in social interaction and shows several effects such as stress reduction, a calming effect, and impression formation. Despite such effects on humans, research on human-robot interaction has mainly focused on visual-auditory information. Although studies in machine-mediated interaction are developing various devices that provide tactile stimuli to human users, serious validation studies are scarce. In my talk, I present how touch interaction with our teleoperated robot and huggable communication medium affects our feelings, behavior, and physiological states, and discuss the potential for intimate interaction between human and robot at close distance.
BibTeX:
@Inproceedings{Sumioka2019d,
  author    = {Hidenobu Sumioka},
  title     = {Mediated Social Touch to Build Human Intimate Relationship},
  booktitle = {Emotional Attachment to Machines: New Ways of Relationship-Building in Japan},
  year      = {2019},
  address   = {Freie Universität, Germany},
  month     = oct,
  day       = {25-26},
  abstract  = {Interpersonal touch is a fundamental component of emotional attachment in social interaction and shows several effects such as stress reduction, a calming effect, and impression formation. Despite such effects on humans, research on human-robot interaction has mainly focused on visual-auditory information. Although studies in machine-mediated interaction are developing various devices that provide tactile stimuli to human users, serious validation studies are scarce. In my talk, I present how touch interaction with our teleoperated robot and huggable communication medium affects our feelings, behavior, and physiological states, and discuss the potential for intimate interaction between human and robot at close distance.},
}
Sara Invitto, Alberto Grasso, Fabio Bona, Soheil Keshmiri, Hidenobu Sumioka, Masahiro Shiomi, Hiroshi Ishiguro, "Embodied communication through social odor, cortical spectral power and co-presence technology", In XXV Congresso AIP Sezione Sperimentale, Milano, Italy, September, 2019.
Abstract: Embodied communication (EC) happens through multisensory channels, involving not only linguistic and cognitive processes but also complex cross-modal perceptive pathways. This type of bidirectional communication applies both to human interactions and to human-robot interaction (HRI). A cross-modal technological interface can increase the interaction and the feeling of co-presence (CP), which is highly related to an interactive relationship. Information and Communication Technology (ICT) has developed some embodied ‘communicative’ senses in virtual interfaces, while paying little attention to the olfactory sense, which is instead developmentally and evolutionarily linked to social and affective relations. The purpose of this work is to investigate EC through social odor (SO), EEG cortical spectral power, and CP technology.
BibTeX:
@InProceedings{Invitto2019,
  author    = {Sara Invitto and Alberto Grasso and Fabio Bona and Soheil Keshmiri and Hidenobu Sumioka and Masahiro Shiomi and Hiroshi Ishiguro},
  booktitle = {XXV Congresso AIP Sezione Sperimentale},
  title     = {Embodied communication through social odor, cortical spectral power and co-presence technology},
  year      = {2019},
  address   = {Milano, Italy},
  day       = {18-20},
  month     = sep,
  url       = {https://aipass.org/xxv-congresso-aip-sezione-sperimentale-milano-san-raffaele-18-20-settembre-2019},
  abstract  = {Embodied communication (EC) happens through multisensory channels, involving not only linguistic and cognitive processes but also complex cross-modal perceptive pathways. This type of bidirectional communication applies both to human interactions and to human-robot interaction (HRI). A cross-modal technological interface can increase the interaction and the feeling of co-presence (CP), which is highly related to an interactive relationship. Information and Communication Technology (ICT) has developed some embodied ‘communicative’ senses in virtual interfaces, while paying little attention to the olfactory sense, which is instead developmentally and evolutionarily linked to social and affective relations. The purpose of this work is to investigate EC through social odor (SO), EEG cortical spectral power, and CP technology.},
}
Takashi Minato, Kurima Sakai, Hiroshi Ishiguro, "Design of a robot's conversational capability based on desire and intention", In IoT Enabling Sensing/Network/AI and Photonics Conference 2019 (IoT-SNAP2019) at OPTICS & PHOTONICS International Congress 2019, パシフィコ横浜, 神奈川, pp. 1-6, April, 2019.
Abstract: A growing number of devices around us are connected to the network and can verbally provide services. Such devices should proactively interact with us, since it is difficult for us to set all of their control parameters ourselves. To this end, designing the desires and intentions of a device is a promising approach. This paper focuses on a conversational robot and describes the design of the robot's dialogue control based on its desires and intentions.
BibTeX:
@InProceedings{Minato2019,
  author    = {Takashi Minato and Kurima Sakai and Hiroshi Ishiguro},
  booktitle = {IoT Enabling Sensing/Network/AI and Photonics Conference 2019 (IoT-SNAP2019) at OPTICS \& PHOTONICS International Congress 2019},
  title     = {Design of a robot's conversational capability based on desire and intention},
  year      = {2019},
  address   = {パシフィコ横浜, 神奈川},
  day       = {23-25},
  month     = apr,
  pages     = {1-6},
  series    = {IoT-SNAP2-02},
  url       = {https://opicon.jp/ja/conferences/iot},
  abstract  = {A growing number of devices around us are connected to the network and can verbally provide services. Such devices should proactively interact with us, since it is difficult for us to set all of their control parameters ourselves. To this end, designing the desires and intentions of a device is a promising approach. This paper focuses on a conversational robot and describes the design of the robot's dialogue control based on its desires and intentions.},
}
Hidenobu Sumioka, Soheil Keshmiri, Hiroshi Ishiguro, "Brain Healthcare through iterated conversations with a teleoperated robot", In Toward Brain Health -The Present and the Future of Brain Data Sharing-, ITU, Geneva, Switzerland, March, 2019.
Abstract: In this presentation, we show how a communication robot helps elderly people maintain their brain health.
BibTeX:
@InProceedings{Sumioka2019a,
  author    = {Hidenobu Sumioka and Soheil Keshmiri and Hiroshi Ishiguro},
  booktitle = {Toward Brain Health -The Present and the Future of Brain Data Sharing-},
  title     = {Brain Healthcare through iterated conversations with a teleoperated robot},
  year      = {2019},
  address   = {ITU, Geneva, Switzerland},
  day       = {20},
  month     = mar,
  abstract  = {In this presentation, we show how a communication robot helps elderly people maintain their brain health.},
}
Shuichi Nishio, "Portable android robots for aged citizens: overview and current results", In Dementia & Technology, Seoul, Korea, December, 2018.
Abstract: I introduce our research activities on Telenoid and Bonoid.
BibTeX:
@Inproceedings{Nishio2018a,
  author    = {Shuichi Nishio},
  title     = {Portable android robots for aged citizens: overview and current results},
  booktitle = {Dementia \& Technology},
  year      = {2018},
  address   = {Seoul, Korea},
  month     = Dec,
  day       = {17},
  url       = {http://www.docdocdoc.co.kr/event/event20.html},
  abstract  = {I introduce our research activities on Telenoid and Bonoid.},
}
Carlos T. Ishi, Daichi Machiyashiki, Ryusuke Mikata, Hiroshi Ishiguro, "A speech-driven hand gesture generation method and evaluation in android robots", In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018), Madrid, Spain, October, 2018.
Abstract: Hand gestures commonly occur in daily dialogue interactions and have important functions in communication. We first analyzed multimodal human-human dialogue data and found relations between the occurrence of hand gestures and dialogue act categories. We also conducted a clustering analysis on gesture motion data and associated text information with the gesture motion clusters through gesture function categories. Using the analysis results, we proposed a speech-driven gesture generation method that takes text, prosody, and dialogue act information into account. We then implemented hand motion control on an android robot and evaluated the effectiveness of the proposed gesture generation method through subjective experiments. The gesture motions generated by the proposed method were judged to be relatively natural even under the robot hardware constraints.
BibTeX:
@InProceedings{Ishi2018b,
  author    = {Carlos T. Ishi and Daichi Machiyashiki and Ryusuke Mikata and Hiroshi Ishiguro},
  booktitle = {2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018)},
  title     = {A speech-driven hand gesture generation method and evaluation in android robots},
  year      = {2018},
  address   = {Madrid, Spain},
  day       = {1-5},
  month     = Oct,
  url       = {https://www.iros2018.org/},
  abstract  = {Hand gestures commonly occur in daily dialogue interactions and have important functions in communication. We first analyzed multimodal human-human dialogue data and found relations between the occurrence of hand gestures and dialogue act categories. We also conducted a clustering analysis on gesture motion data and associated text information with the gesture motion clusters through gesture function categories. Using the analysis results, we proposed a speech-driven gesture generation method that takes text, prosody, and dialogue act information into account. We then implemented hand motion control on an android robot and evaluated the effectiveness of the proposed gesture generation method through subjective experiments. The gesture motions generated by the proposed method were judged to be relatively natural even under the robot hardware constraints.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
}
Masahiro Shiomi, Kodai Shatani, Takashi Minato, Hiroshi Ishiguro, "How should a Robot React before People's Touch?: Modeling a Pre-Touch Reaction Distance for a Robot’s Face", In the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018), Madrid, Spain, October, 2018.
Abstract: This study addresses pre-touch reaction distance effects in human-robot touch interaction with an android named ERICA that has a feminine, human-like appearance. Past studies on human-robot interaction, which enabled social robots to react to being touched by developing several sensing systems and designing reaction behaviors, mainly focused on after-touch situations; before-touch situations have received less attention. In this study, we conducted a data collection to investigate the minimum comfortable distance to another’s touch by observing a dataset of 48 human-human touch interactions, modeled its distance relationships, and implemented the model on our robot. We experimentally investigated the effectiveness of the modeled minimum comfortable distance to being touched with 30 participants. The experimental results showed that participants highly evaluated a robot that reacts to being touched based on the modeled minimum comfortable distance.
BibTeX:
@InProceedings{Shiomi2018b,
  author    = {Masahiro Shiomi and Kodai Shatani and Takashi Minato and Hiroshi Ishiguro},
  booktitle = {the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018)},
  title     = {How should a Robot React before People's Touch?: Modeling a Pre-Touch Reaction Distance for a Robot’s Face},
  year      = {2018},
  address   = {Madrid, Spain},
  day       = {1-5},
  month     = oct,
  url       = {https://www.iros2018.org/},
  abstract  = {This study addresses pre-touch reaction distance effects in human-robot touch interaction with an android named ERICA that has a feminine, human-like appearance. Past studies on human-robot interaction, which enabled social robots to react to being touched by developing several sensing systems and designing reaction behaviors, mainly focused on after-touch situations; before-touch situations have received less attention. In this study, we conducted a data collection to investigate the minimum comfortable distance to another’s touch by observing a dataset of 48 human-human touch interactions, modeled its distance relationships, and implemented the model on our robot. We experimentally investigated the effectiveness of the modeled minimum comfortable distance to being touched with 30 participants. The experimental results showed that participants highly evaluated a robot that reacts to being touched based on the modeled minimum comfortable distance.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
}
Hidenobu Sumioka, "Robotics for elderly society", In Summer school at Osaka University 2018 : Long term care system & scientific tecnology in Japan aging society, Osaka University, Osaka, August, 2018.
Abstract: In this talk, I will introduce several possibilities how social robot help human caregivers in elderly care.
BibTeX:
@Inproceedings{Sumioka2018a,
  author    = {Hidenobu Sumioka},
  title     = {Robotics for elderly society},
  booktitle = {Summer school at Osaka University 2018 : Long term care system \& scientific technology in Japan aging society},
  year      = {2018},
  address   = {Osaka University, Osaka},
  month     = Aug,
  day       = {7},
  abstract  = {In this talk, I will introduce several ways in which social robots can help human caregivers in elderly care.},
}
Takashi Minato, "Development of an autonomous android that can naturally talk with people", In World Symposium on Digital Intelligence for Systems and Machines (DISA2018), Technical University of Kosice, Slovakia, pp. 19-21, August, 2018.
Abstract: Our research group has been developing a very humanlike android robot that can talk with people in a humanlike manner, involving not only verbal but also non-verbal behavior such as gestures, facial expressions, and gaze, while exploring the essential mechanisms for generating natural conversation. Humans interact most effectively with other humans; hence, very humanlike androids can be promising communication media to support people's daily lives. Existing spoken dialogue services have mainly focused on task-oriented communication, like voice search on smartphones and traffic information services, to serve information through natural verbal interaction. However, such a dialogue system has no intention or agency of its own and cannot be a conversation partner for casual conversation. A conversation essentially involves mutual understanding of each other's intentions and opinions between the participants; therefore, we introduced a hierarchical model of decision-making for dialogue generation in our android that is based on the android's desires and intentions. Furthermore, it is also important to express humanlike bodily movements for natural conversation, so we have developed a method to automatically generate humanlike motions that are synchronized with the android's utterances. So far, we have studied human-android interaction in both verbal and non-verbal aspects, and this talk will introduce some research topics related to those studies.
BibTeX:
@Inproceedings{Minato2018,
  author    = {Takashi Minato},
  title     = {Development of an autonomous android that can naturally talk with people},
  booktitle = {World Symposium on Digital Intelligence for Systems and Machines (DISA2018)},
  year      = {2018},
  pages     = {19-21},
  address   = {Technical University of Kosice, Slovakia},
  month     = Aug,
  day       = {23-25},
  url       = {http://www.disa2018.org},
  abstract  = {Our research group has been developing a very humanlike android robot that can talk with people in a humanlike manner, involving not only verbal but also non-verbal behavior such as gestures, facial expressions, and gaze, while exploring the essential mechanisms for generating natural conversation. Humans interact most effectively with other humans; hence, very humanlike androids can be promising communication media to support people's daily lives. Existing spoken dialogue services have mainly focused on task-oriented communication, like voice search on smartphones and traffic information services, to serve information through natural verbal interaction. However, such a dialogue system has no intention or agency of its own and cannot be a conversation partner for casual conversation. A conversation essentially involves mutual understanding of each other's intentions and opinions between the participants; therefore, we introduced a hierarchical model of decision-making for dialogue generation in our android that is based on the android's desires and intentions. Furthermore, it is also important to express humanlike bodily movements for natural conversation, so we have developed a method to automatically generate humanlike motions that are synchronized with the android's utterances. So far, we have studied human-android interaction in both verbal and non-verbal aspects, and this talk will introduce some research topics related to those studies.},
}
Malcolm Doering, Dylan F. Glas, Hiroshi Ishiguro, "Modeling Interaction Structure for Robot Imitation Learning of Human Social Behavior", In The 1st International Symposium on Systems Intelligence Division, A&H Hall, Osaka, January, 2018.
Abstract: We present a learning-by-imitation technique that learns social robot interaction behaviors from natural human-human interaction data and requires minimal input from a designer. In particular, we focus on the problems of responding to ambiguous human actions and interpretability of the learned behaviors. To solve these problems, we introduce a novel topic clustering algorithm based on action co-occurrence frequencies to discover the topics of conversation in the training data and incorporate them into a rule learning system. The system learns human-readable rules that dictate which action the robot should take in response to a human action, given the current topic of conversation. We demonstrated our technique in a travel agent scenario where the robot learns to play the role of the travel agent. Our proposed technique outperformed several baseline techniques in qualitative and quantitative evaluations. The results showed that the proposed system responded more accurately to ambiguous questions, and participants found that the proposed system was easier to understand, provided more information, and required less effort to interact with.
BibTeX:
@Inproceedings{Doering2018,
  author    = {Malcolm Doering and Dylan F. Glas and Hiroshi Ishiguro},
  title     = {Modeling Interaction Structure for Robot Imitation Learning of Human Social Behavior},
  booktitle = {The 1st International Symposium on Systems Intelligence Division},
  year      = {2018},
  address   = {A\&H Hall, Osaka},
  month     = Jan,
  day       = {21-22},
  url       = {http://sid-osaka-u.org/2017/12/08/the-1st-international-symposium-on-systems-intelligence-division/},
  abstract  = {We present a learning-by-imitation technique that learns social robot interaction behaviors from natural human-human interaction data and requires minimal input from a designer. In particular, we focus on the problems of responding to ambiguous human actions and interpretability of the learned behaviors. To solve these problems, we introduce a novel topic clustering algorithm based on action co-occurrence frequencies to discover the topics of conversation in the training data and incorporate them into a rule learning system. The system learns human-readable rules that dictate which action the robot should take in response to a human action, given the current topic of conversation. We demonstrated our technique in a travel agent scenario where the robot learns to play the role of the travel agent. Our proposed technique outperformed several baseline techniques in qualitative and quantitative evaluations. The results showed that the proposed system responded more accurately to ambiguous questions, and participants found that the proposed system was easier to understand, provided more information, and required less effort to interact with.},
  file      = {Doering2018.pdf:pdf/Doering2018.pdf:PDF},
}
Xiqian Zheng, Dylan F. Glas, Hiroshi Ishiguro, "Robot Social Memory System: Memory-Based Interaction Strategies for a Social Robot", In The 1st International Symposium on Systems Intelligence Division, A&H Hall, Osaka, January, 2018.
Abstract: Open and Transdisciplinary Research Initiatives (OTRI) at Osaka University is a new research organization that started in April 2017, when the “Cognitive Neuroscience Robotics Division (CNR)" of the Institute for Academic Initiatives (IAI) was reorganized into the “Systems Intelligence Division (SID)" of OTRI. This symposium was held as its kick-off event.
BibTeX:
@Inproceedings{Zheng2018,
  author    = {Xiqian Zheng and Dylan F. Glas and Hiroshi Ishiguro},
  title     = {Robot Social Memory System: Memory-Based Interaction Strategies for a Social Robot},
  booktitle = {The 1st International Symposium on Systems Intelligence Division},
  year      = {2018},
  address   = {A\&H Hall, Osaka},
  month     = Jan,
  day       = {20-21},
  url       = {http://sid-osaka-u.org/2017/12/08/the-1st-international-symposium-on-systems-intelligence-division/},
  abstract  = {Open and Transdisciplinary Research Initiatives (OTRI) at Osaka University is a new research organization that started in April 2017, when the “Cognitive Neuroscience Robotics Division (CNR)" of the Institute for Academic Initiatives (IAI) was reorganized into the “Systems Intelligence Division (SID)" of OTRI. This symposium was held as its kick-off event.},
}
Jani Even, Carlos T. Ishi, Hiroshi Ishiguro, "DNN Based Pitch Estimation Using Microphone Array", In 第49回人工知能学会 AI チャレンジ研究会, 慶応義塾大学 矢上キャンパス, 神奈川, pp. 43-46, November, 2017.
Abstract: This paper presents preliminary experiments on pitch classification of distant speech recorded with a microphone array. The pitch classification is performed by a deep neural network. Using the microphone array to perform beamforming is beneficial to the pitch classification. However, it requires a larger amount of data for training the network. The network seems to be robust to data mismatch, as pre-training with close-talk speech data improved the results for distant speech.
BibTeX:
@Inproceedings{Even2017b,
  author    = {Jani Even and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {DNN Based Pitch Estimation Using Microphone Array},
  booktitle = {第49回人工知能学会 AI チャレンジ研究会},
  year      = {2017},
  pages     = {43-46},
  address   = {慶応義塾大学 矢上キャンパス, 神奈川},
  month     = Nov,
  day       = {25},
  url       = {http://www.osaka-kyoiku.ac.jp/~challeng/SIG-Challenge-049/program.html},
  abstract  = {This paper presents preliminary experiments on pitch classification of distant speech recorded with a microphone array. The pitch classification is performed by a deep neural network. Using the microphone array to perform beamforming is beneficial to the pitch classification. However, it requires a larger amount of data for training the network. The network seems to be robust to data mismatch, as pre-training with close-talk speech data improved the results for distant speech.},
}
Jani Even, Carlos T. Ishi, Hiroshi Ishiguro, "Effect of Utterance Synchronized Gaze Pattern on Response Time during Human-Robot Interaction.", In 日本音響学会2017年秋季研究発表会, vol. 3-P-20, 愛媛大学城北キャンパス, 愛媛, pp. 373-374, September, 2017.
Abstract: This paper describes an experiment where the gaze pattern of a robot is modulated during speech production in order to influence the response time of the person interacting with the robot.
BibTeX:
@Inproceedings{Even2017a,
  author    = {Jani Even and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {Effect of Utterance Synchronized Gaze Pattern on Response Time during Human-Robot Interaction.},
  booktitle = {日本音響学会2017年秋季研究発表会},
  year      = {2017},
  volume    = {3-P-20},
  pages     = {373-374},
  address   = {愛媛大学城北キャンパス, 愛媛},
  month     = Sep,
  day       = {25-27},
  url       = {http://www.asj.gr.jp/annualmeeting/index.html},
  abstract  = {This paper describes an experiment where the gaze pattern of a robot is modulated during speech production in order to influence the response time of the person interacting with the robot.},
  file      = {Even2017a.pdf:pdf/Even2017a.pdf:PDF},
}
Carlos T. Ishi, Takashi Minato, Hiroshi Ishiguro, "Motion analysis in vocalized surprise expressions and motion generation in android robots", In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September, 2017.
Abstract: Surprise expressions often occur in dialogue interactions, and they are often accompanied by verbal interjectional utterances. We are dealing with the challenge of generating natural human-like motions during speech in android robots that have a highly human-like appearance. In this study, we focus on the analysis and motion generation of vocalized surprise expression. We first analyze facial, head and body motions during vocalized surprise appearing in human-human dialogue interactions. Analysis results indicate differences in the motion types for different types of surprise expression as well as different degrees of surprise expression. Consequently, we propose motion-generation methods based on the analysis results and evaluate the different modalities (eyebrows/eyelids, head and body torso) and different motion control levels for the proposed method. This work is carried out through subjective experiments. Evaluation results indicate the importance of each modality in the perception of surprise degree, naturalness, and the spontaneous vs. intentional expression of surprise.
BibTeX:
@InProceedings{Ishi2017a,
  author    = {Carlos T. Ishi and Takashi Minato and Hiroshi Ishiguro},
  booktitle = {2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017)},
  title     = {Motion analysis in vocalized surprise expressions and motion generation in android robots},
  year      = {2017},
  address   = {Vancouver, Canada},
  day       = {24-28},
  month     = Sep,
  url       = {http://www.iros2017.org/},
  abstract  = {Surprise expressions often occur in dialogue interactions, and they are often accompanied by verbal interjectional utterances. We are dealing with the challenge of generating natural human-like motions during speech in android robots that have a highly human-like appearance. In this study, we focus on the analysis and motion generation of vocalized surprise expression. We first analyze facial, head and body motions during vocalized surprise appearing in human-human dialogue interactions. Analysis results indicate differences in the motion types for different types of surprise expression as well as different degrees of surprise expression. Consequently, we propose motion-generation methods based on the analysis results and evaluate the different modalities (eyebrows/eyelids, head and body torso) and different motion control levels for the proposed method. This work is carried out through subjective experiments. Evaluation results indicate the importance of each modality in the perception of surprise degree, naturalness, and the spontaneous vs. intentional expression of surprise.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
  file      = {Ishi2017a.pdf:pdf/Ishi2017a.pdf:PDF},
}
Hidenobu Sumioka, "Brain and soft body in Human-Robot interaction", In The Human Brain Project Symposium on Building Bodies for Brains & Brains for Bodies, Geneva, Switzerland, June, 2017.
Abstract: This is a one-day symposium in the field of neurorobotics with the goal of improving robot behavior by exploiting ideas from neuroscience and investigating brain function using real physical robots or simulations thereof. Contributions to this workshop will focus on (but are not limited to) the relation between neural systems - artificial or biological - and soft-material robotic platforms, in particular the “control" of such systems by capitalizing on their intrinsic dynamical characteristics like stiffness, viscosity and compliance.
BibTeX:
@Inproceedings{Sumioka2017,
  author    = {Hidenobu Sumioka},
  title     = {Brain and soft body in Human-Robot interaction},
  booktitle = {The Human Brain Project Symposium on Building Bodies for Brains \& Brains for Bodies},
  year      = {2017},
  address   = {Geneva, Switzerland},
  month     = Jun,
  day       = {16},
  abstract  = {This is a one-day symposium in the field of neurorobotics with the goal of improving robot behavior by exploiting ideas from neuroscience and investigating brain function using real physical robots or simulations thereof. Contributions to this workshop will focus on (but are not limited to) the relation between neural systems - artificial or biological - and soft-material robotic platforms, in particular the “control" of such systems by capitalizing on their intrinsic dynamical characteristics like stiffness, viscosity and compliance.},
  file      = {Sumioka2017.pdf:pdf/Sumioka2017.pdf:PDF},
}
Jani Even, Carlos T. Ishi, Hiroshi Ishiguro, "Automatic labelling for DNN pitch classification", In 日本音響学会2017年春季研究発表会 (ASJ2017 Spring), vol. 1-P-32, 明治大学生田キャンパス, 神奈川, pp. 595-596, March, 2017.
Abstract: This paper presents a framework for gathering audio data and training a deep neural network for pitch classification. The goal is to obtain a large amount of labeled data to train the network. A throat microphone is used alongside conventional microphones while recording the training set. The throat microphone signal is not contaminated by the background noise; consequently, a conventional pitch estimation algorithm gives a satisfactory pitch estimate. That pitch estimate is used as the label to train the network to classify the pitch directly from the conventional microphones. Preliminary experiments show that the proposed automatic labelling produces enough data to train the network.
BibTeX:
@Inproceedings{Even2017,
  author    = {Jani Even and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {Automatic labelling for DNN pitch classification},
  booktitle = {日本音響学会2017年春季研究発表会 (ASJ2017 Spring)},
  year      = {2017},
  volume    = {1-P-32},
  pages     = {595-596},
  address   = {明治大学生田キャンパス, 神奈川},
  month     = mar,
  day       = {15},
  url       = {http://www.asj.gr.jp/annualmeeting/index.html},
  abstract  = {This paper presents a framework for gathering audio data and training a deep neural network for pitch classification. The goal is to obtain a large amount of labeled data to train the network. A throat microphone is used alongside conventional microphones while recording the training set. The throat microphone signal is not contaminated by the background noise; consequently, a conventional pitch estimation algorithm gives a satisfactory pitch estimate. That pitch estimate is used as the label to train the network to classify the pitch directly from the conventional microphones. Preliminary experiments show that the proposed automatic labelling produces enough data to train the network.},
  file      = {Even2017.pdf:pdf/Even2017.pdf:PDF},
}
Jani Even, Carlos T. Ishi, Hiroshi Ishiguro, "Using utterance timing to generate gaze pattern", In 第46回 人工知能学会 AIチャレンジ研究会(SIG-Challenge 2016), vol. SIG-Challenge-046-09, 慶応義塾大学 日吉キャンパス 來往舎, 神奈川, pp. 50-55, November, 2016.
Abstract: This paper presents a method for generating the gaze pattern of a robot while it is talking. The goal is to prevent the robot's conversational partner from interrupting the robot at inappropriate moments. The proposed approach has two steps: first, the robot's utterances are split into meaningful parts; then, for each of these parts, the robot makes or avoids eye contact with the partner. The generated gaze pattern indicates to the conversational partner whether or not the robot has finished talking. To measure the efficiency of the approach, we propose to use the speech overlap during conversations and the average response time. Preliminary results showed that setting a gaze pattern for a robot with a very human-like appearance is not straightforward, as we did not find satisfactory parameters.
BibTeX:
@Inproceedings{Even2016,
  author    = {Jani Even and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {Using utterance timing to generate gaze pattern},
  booktitle = {第46回 人工知能学会 AIチャレンジ研究会(SIG-Challenge 2016)},
  year      = {2016},
  volume    = {SIG-Challenge-046-09},
  pages     = {50-55},
  address   = {慶応義塾大学 日吉キャンパス 來往舎, 神奈川},
  month     = Nov,
  day       = {9},
  url       = {http://www.osaka-kyoiku.ac.jp/~challeng/SIG-Challenge-046/program.html},
  abstract  = {This paper presents a method for generating the gaze pattern of a robot while it is talking. The goal is to prevent the robot's conversational partner from interrupting the robot at inappropriate moments. The proposed approach has two steps: first, the robot's utterances are split into meaningful parts; then, for each of these parts, the robot makes or avoids eye contact with the partner. The generated gaze pattern indicates to the conversational partner whether or not the robot has finished talking. To measure the efficiency of the approach, we propose to use the speech overlap during conversations and the average response time. Preliminary results showed that setting a gaze pattern for a robot with a very human-like appearance is not straightforward, as we did not find satisfactory parameters.},
}
Jani Even, Carlos T. Ishi, Hiroshi Ishiguro, "Using Sensor Network for Android gaze control", In 第43回 人工知能学会 AIチャレンジ研究会, 慶応義塾大学 日吉キャンパス 來往舎, 神奈川, November, 2015.
Abstract: This paper presents the approach developed for controlling the gaze of an android robot. A sensor network composed of RGB-D cameras and microphone arrays is in charge of tracking the person interacting with the android and determining the speech activity. The information provided by the sensor network makes it possible for the robot to establish eye contact with the person. A subjective evaluation of the performance was made by the subjects who interacted with the android robot.
BibTeX:
@Inproceedings{Even2015a,
  author    = {Jani Even and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {Using Sensor Network for Android gaze control},
  booktitle = {第43回 人工知能学会 AIチャレンジ研究会},
  year      = {2015},
  address   = {慶応義塾大学 日吉キャンパス 來往舎, 神奈川},
  month     = Nov,
  abstract  = {This paper presents the approach developed for controlling the gaze of an android robot. A sensor network composed of RGB-D cameras and microphone arrays is in charge of tracking the person interacting with the android and determining the speech activity. The information provided by the sensor network makes it possible for the robot to establish eye contact with the person. A subjective evaluation of the performance was made by the subjects who interacted with the android robot.},
  file      = {Even2015a.pdf:pdf/Even2015a.pdf:PDF},
}
石井カルロス寿憲, エヴァンイアニ, モラレスサイキルイスヨウイチ, 渡辺敦志, "複数のマイクロホンアレイの連携による音環境知能技術の研究開発", In ICTイノベーションフォーラム2015, 幕張メッセ, 千葉, October, 2015.
Abstract: We report the results of the MIC (Ministry of Internal Affairs and Communications) SCOPE project "Research and development of sound environment intelligence technology through cooperation among multiple microphone arrays," conducted from FY2012 to FY2014. "By coordinating multiple fixed and mobile microphone arrays with groups of LRFs, we advance conventional sound source localization, separation, and classification techniques to develop technology for generating sound environment maps that represent the spatial and acoustic characteristics of the sound sources in an environment with a positional accuracy of 20 cm and a temporal resolution of 100 ms. The prior knowledge of the sound environment obtained with this technology is used for noise estimation according to the location and time of day within a facility. The technology has a wide range of applications, including sound visualization for hearing-impaired people, intelligent hearing aids for the elderly, sound zooming functions, and abnormal-sound detection for security."
BibTeX:
@Inproceedings{石井カルロス寿憲2015c,
  author    = {石井カルロス寿憲 and エヴァンイアニ and モラレスサイキルイスヨウイチ and 渡辺敦志},
  title     = {複数のマイクロホンアレイの連携による音環境知能技術の研究開発},
  booktitle = {ICTイノベーションフォーラム2015},
  year      = {2015},
  address   = {幕張メッセ, 千葉},
  month     = OCT,
  abstract  = {We report the results of the MIC (Ministry of Internal Affairs and Communications) SCOPE project "Research and development of sound environment intelligence technology through cooperation among multiple microphone arrays," conducted from FY2012 to FY2014. "By coordinating multiple fixed and mobile microphone arrays with groups of LRFs, we advance conventional sound source localization, separation, and classification techniques to develop technology for generating sound environment maps that represent the spatial and acoustic characteristics of the sound sources in an environment with a positional accuracy of 20 cm and a temporal resolution of 100 ms. The prior knowledge of the sound environment obtained with this technology is used for noise estimation according to the location and time of day within a facility. The technology has a wide range of applications, including sound visualization for hearing-impaired people, intelligent hearing aids for the elderly, sound zooming functions, and abnormal-sound detection for security."},
  file      = {石井カルロス寿憲2015c.pdf:pdf/石井カルロス寿憲2015c.pdf:PDF},
}
Carlos T. Ishi, Takashi Minato, Hiroshi Ishiguro, "Investigation of motion generation in android robots during laughing speech", In International Workshop on Speech Robotics, Dresden, Germany, September, 2015.
Abstract: In the present work, we focused on motion generation during laughing speech. We analyzed how humans behave during laughing speech, and proposed a method for motion generation in our android robot, based on the main trends from the analysis results. The proposed method for laughter motion generation was evaluated through subjective experiments.
BibTeX:
@Inproceedings{Ishi2015c,
  author    = {Carlos T. Ishi and Takashi Minato and Hiroshi Ishiguro},
  title     = {Investigation of motion generation in android robots during laughing speech},
  booktitle = {International Workshop on Speech Robotics},
  year      = {2015},
  address   = {Dresden, Germany},
  month     = SEP,
  url       = {https://register-tubs.de/interspeech},
  abstract  = {In the present work, we focused on motion generation during laughing speech. We analyzed how humans behave during laughing speech, and proposed a method for motion generation in our android robot, based on the main trends from the analysis results. The proposed method for laughter motion generation was evaluated through subjective experiments.},
  file      = {Ishi2015c.pdf:pdf/Ishi2015c.pdf:PDF},
}
Jani Even, Jonas Furrer Michael, Carlos Toshinori Ishi, Norihiro Hagita, "In situ automated impulse response measurement with a mobile robot", In 日本音響学会 2015年春季研究発表会, 中央大学後楽園キャンパス(東京都文京区), March, 2015.
Abstract: This paper presents a framework for measuring the impulse responses from different positions for a microphone array using a mobile robot. The automated measurement method makes it possible to estimate the impulse response at a large number of positions. Moreover, this approach enables the impulse responses to be measured in the environment where the system is to be used. The effectiveness of the proposed approach is demonstrated by using it to set up a beamforming system in an experiment room.
BibTeX:
@Inproceedings{Jani2015,
  author    = {Jani Even and Jonas Furrer Michael and Carlos Toshinori Ishi and Norihiro Hagita},
  title     = {In situ automated impulse response measurement with a mobile robot},
  booktitle = {日本音響学会 2015年春季研究発表会},
  year      = {2015},
  address   = {中央大学後楽園キャンパス(東京都文京区)},
  month     = Mar,
  abstract  = {This paper presents a framework for measuring the impulse responses from different positions for a microphone array using a mobile robot. The automated measurement method makes it possible to estimate the impulse response at a large number of positions. Moreover, this approach enables the impulse responses to be measured in the environment where the system is to be used. The effectiveness of the proposed approach is demonstrated by using it to set up a beamforming system in an experiment room.},
  file      = {Even2015.pdf:pdf/Even2015.pdf:PDF},
}
劉超然, 石井カルロス寿憲, 石黒浩, 萩田紀博, "臨場感の伝わる遠隔操作システムのデザイン ~マイクロホンアレイ処理を用いた音環境の再構築~", In 第41回 人工知能学会 AIチャレンジ研究会, 慶應義塾大学日吉キャンパス 来住舎(東京), pp. 26-32, November, 2014.
Abstract: This paper proposes a system that localizes and separates the sound environment around a remote robot through microphone array processing and renders it at virtual positions.
BibTeX:
@Inproceedings{劉超然2014,
  author    = {劉超然 and 石井カルロス寿憲 and 石黒浩 and 萩田紀博},
  title     = {臨場感の伝わる遠隔操作システムのデザイン ~マイクロホンアレイ処理を用いた音環境の再構築~},
  booktitle = {第41回 人工知能学会 AIチャレンジ研究会},
  year      = {2014},
  pages     = {26-32},
  address   = {慶應義塾大学日吉キャンパス 来住舎(東京)},
  month     = Nov,
  abstract  = {This paper proposes a system that localizes and separates the sound environment around a remote robot through microphone array processing and renders it at virtual positions.},
  file      = {劉超然2014.pdf:pdf/劉超然2014.pdf:PDF},
}
Ryuji Yamazaki, Marco Nørskov, "Self-alteration in HRI", Poster presentation at International Conference : Going Beyond the Laboratory - Ethical and Societal Challenges for Robotics, Hanse Wissenschaftskolleg (HWK) - Institute for Advanced Study, Delmenhorst, Germany, February, 2014.
BibTeX:
@Inproceedings{Yamazaki2014,
  author    = {Ryuji Yamazaki and Marco N{\o}rskov},
  title     = {Self-alteration in HRI},
  booktitle = {International Conference : Going Beyond the Laboratory - Ethical and Societal Challenges for Robotics},
  year      = {2014},
  address   = {Hanse Wissenschaftskolleg (HWK) - Institute for Advanced Study, Delmenhorst, Germany},
  month     = Feb,
  day       = {13-15},
  file      = {Yamazaki2014.pdf:pdf/Yamazaki2014.pdf:PDF},
}
Ryuji Yamazaki, Shuichi Nishio, Kaiko Kuwamura, "Identity Construction of the Hybrid of Robot and Human", In 22nd IEEE International Symposium on Robot and Human Interactive Communication, Workshop on Enhancement/Training of Social Robotics Teleoperation and its Applications, Gyeongju, Korea, August, 2013.
BibTeX:
@Inproceedings{Yamazaki2013,
  author    = {Ryuji Yamazaki and Shuichi Nishio and Kaiko Kuwamura},
  title     = {Identity Construction of the Hybrid of Robot and Human},
  booktitle = {22nd IEEE International Symposium on Robot and Human Interactive Communication, Workshop on Enhancement/Training of Social Robotics Teleoperation and its Applications},
  year      = {2013},
  address   = {Gyeongju, Korea},
  month     = Aug,
  day       = {26-29},
}
Astrid M. von der Pütten, Christian Becker-Asano, Kohei Ogawa, Shuichi Nishio, Hiroshi Ishiguro, "Exploration and Analysis of People's Nonverbal Behavior Towards an Android", In the annual meeting of the International Communication Association, Phoenix, USA, May, 2012.
BibTeX:
@Inproceedings{Putten2012,
  author    = {Astrid M. von der P\"{u}tten and Christian Becker-Asano and Kohei Ogawa and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Exploration and Analysis of People's Nonverbal Behavior Towards an Android},
  booktitle = {the annual meeting of the International Communication Association},
  year      = {2012},
  address   = {Phoenix, USA},
  month     = May,
}
Carlos T. Ishi, Chaoran Liu, Hiroshi Ishiguro, Norihiro Hagita, "Tele-operating the lip motion of humanoid robots from the operator's voice", In 第29回日本ロボット学会学術講演会, 芝浦工業大学豊洲キャンパス, 東京, pp. C1J3-6, September, 2011.
BibTeX:
@Inproceedings{Ishi2011,
  author          = {Carlos T. Ishi and Chaoran Liu and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {Tele-operating the lip motion of humanoid robots from the operator's voice},
  booktitle       = {第29回日本ロボット学会学術講演会},
  year            = {2011},
  pages           = {C1J3-6},
  address         = {芝浦工業大学豊洲キャンパス, 東京},
  month           = Sep,
  day             = {7-9},
  file            = {Ishi2011.pdf:pdf/Ishi2011.pdf:PDF},
}
Astrid M. von der Pütten, Nicole C. Krämer, Christian Becker-Asano, Hiroshi Ishiguro, "An android in the field. How people react towards Geminoid HI-1 in a real world scenario", In the 7th Conference of the Media Psychology Division of the German Psychological Society, Jacobs University, Bremen, Germany, August, 2011.
BibTeX:
@Inproceedings{Putten2011a,
  author    = {Astrid M. von der P\"{u}tten and Nicole C. Kr\"{a}mer and Christian Becker-Asano and Hiroshi Ishiguro},
  title     = {An android in the field. How people react towards Geminoid HI-1 in a real world scenario},
  booktitle = {the 7th Conference of the Media Psychology Division of the German Psychological Society},
  year      = {2011},
  address   = {Jacobs University, Bremen, Germany},
  month     = Aug,
  day       = {10-11},
}
Panikos Heracleous, Norihiro Hagita, "A visual mode for communication in the deaf society", In Spring Meeting of Acoustical Society of Japan, Waseda University, Tokyo, Japan, pp. 57-60, March, 2011.
Abstract: In this article, automatic recognition of Cued Speech in French based on hidden Markov models (HMMs) is presented. Cued Speech is a visual mode which uses hand shapes in different positions and, in combination with the lip patterns of speech, makes all the sounds of spoken language clearly understandable to deaf and hearing-impaired people. The aim of Cued Speech is to overcome the problems of lip-reading and thus enable deaf children and adults to understand full spoken language. In this study, the lip shape component is fused with the hand component using multi-stream HMM decision fusion to realize Cued Speech recognition, and continuous phoneme recognition experiments using data from a normal-hearing and a deaf cuer were conducted. In the case of the normal-hearing cuer, the phoneme correctness obtained was 87.3%, and in the case of the deaf cuer, 84.3%. The current study also includes a description of Cued Speech in Japanese.
BibTeX:
@Inproceedings{Heracleous2011d,
  author          = {Panikos Heracleous and Norihiro Hagita},
  title           = {A visual mode for communication in the deaf society},
  booktitle       = {Spring Meeting of Acoustical Society of Japan},
  year            = {2011},
  series          = {2-5-6},
  pages           = {57--60},
  address         = {Waseda University, Tokyo, Japan},
  month           = Mar,
  abstract        = {In this article, automatic recognition of Cued Speech in French based on hidden Markov models ({HMM}s) is presented. Cued Speech is a visual mode which uses hand shapes in different positions and, in combination with the lip patterns of speech, makes all the sounds of spoken language clearly understandable to deaf and hearing-impaired people. The aim of Cued Speech is to overcome the problems of lip-reading and thus enable deaf children and adults to understand full spoken language. In this study, the lip shape component is fused with the hand component using multi-stream HMM decision fusion to realize Cued Speech recognition, and continuous phoneme recognition experiments using data from a normal-hearing and a deaf cuer were conducted. In the case of the normal-hearing cuer, the phoneme correctness obtained was 87.3%, and in the case of the deaf cuer, 84.3%. The current study also includes a description of Cued Speech in Japanese.},
  file            = {Heracleous2011d.pdf:Heracleous2011d.pdf:PDF},
}