Publications
Books
石黒浩, "アバターと共生する未来社会", 集英社, June, 2023.
Abstract: アバター(分身)を使って、メタバースの世界だけでなく、実社会でも、別のキャラクターとして遠隔地で仕事をしたり、家にいながらにして趣味の仲間と旅行をしたり、AIと協業したり…姿や年齢を超えた多彩な人生を体験できる時代がやって来る。新しい未来の幕開けだ! 【目次】 第一章 アバターとは何か──実世界でも稼働する遠隔操作が可能な分身 第二章 アバター共生社会が目指すもの 第三章 ムーンショットが進めるアバター研究 第四章 技術の社会実装──AVITAの取り組み 第五章 仮想化実世界とアバターの倫理問題 第六章 さらなる未来──大阪・関西万博とアバター
BibTeX:
@Book{石黒浩2023d,
  author     = {石黒浩},
  publisher  = {集英社},
  title      = {アバターと共生する未来社会},
  year       = {2023},
  abstract   = {アバター(分身)を使って、メタバースの世界だけでなく、実社会でも、別のキャラクターとして遠隔地で仕事をしたり、家にいながらにして趣味の仲間と旅行をしたり、AIと協業したり…姿や年齢を超えた多彩な人生を体験できる時代がやって来る。新しい未来の幕開けだ!

【目次】
第一章 アバターとは何か──実世界でも稼働する遠隔操作が可能な分身
第二章 アバター共生社会が目指すもの
第三章 ムーンショットが進めるアバター研究
第四章 技術の社会実装──AVITAの取り組み
第五章 仮想化実世界とアバターの倫理問題
第六章 さらなる未来──大阪・関西万博とアバター},
  day        = {26},
  etitle     = {Future Society in Harmony with Avatars},
  isbn       = {978-4-08-786136-5},
  month      = jun,
  price      = {¥2,090},
  totalpages = {296},
  url        = {https://www.shueisha.co.jp/books/items/contents.html?isbn=978-4-08-786136-5},
}
石黒浩, "ロボットと人間 人とは何か", 岩波新書, no. 新赤版 1901, November, 2021.
Abstract: ロボットを研究することは、人間を深く知ることでもある。ロボット学の世界的第一人者である著者は、長年の研究を通じて、人間にとって自律、心、存在、対話、体、進化、生命などは何かを問い続ける。ロボットと人間の未来に向けての関係性にも言及。人と関わるロボットがますます身近になる今こそ、必読の書。
BibTeX:
@Book{石黒浩2021q,
  author     = {石黒浩},
  publisher  = {岩波新書},
  title      = {ロボットと人間 人とは何か},
  year       = {2021},
  abstract   = {ロボットを研究することは、人間を深く知ることでもある。ロボット学の世界的第一人者である著者は、長年の研究を通じて、人間にとって自律、心、存在、対話、体、進化、生命などは何かを問い続ける。ロボットと人間の未来に向けての関係性にも言及。人と関わるロボットがますます身近になる今こそ、必読の書。},
  day        = {19},
  isbn       = {9784004319016},
  month      = nov,
  number     = {新赤版 1901},
  price      = {¥1,034},
  totalpages = {286},
  url        = {https://www.iwanami.co.jp/book/b593235.html},
}
漱石アンドロイド共同研究プロジェクト, "アンドロイド基本原則", 日刊工業新聞社, January, 2019.
Abstract: 近い未来、さまざまなアンドロイドが誕生することが考えられるが、我々に、人間存在をアンドロイドとして甦らせる権利などあるのだろうか?夏目漱石のアンドロイドを製作する過程を通じて見えてきた、人間を複製することに関しての課題、疑問について考える。
BibTeX:
@Book{漱石アンドロイド共同研究プロジェクト2019,
  title     = {アンドロイド基本原則},
  publisher = {日刊工業新聞社},
  year      = {2019},
  author    = {漱石アンドロイド共同研究プロジェクト},
  month     = Jan,
  abstract  = {近い未来、さまざまなアンドロイドが誕生することが考えられるが、我々に、人間存在をアンドロイドとして甦らせる権利などあるのだろうか?夏目漱石のアンドロイドを製作する過程を通じて見えてきた、人間を複製することに関しての課題、疑問について考える。},
  day       = {29},
}
石黒浩, "僕がロボットをつくる理由-未来の生き方を日常からデザインする", 世界思想社, March, 2018.
Abstract: ロボットやAIで、私たちの生活はどう変わるか? 衣食住から恋愛・仕事・創造の方法まで、 ロボット研究の第一人者・石黒浩が、 自身の経験や日々の過ごし方を交えて、 「新しい世界を拓く楽しさ」と人生、そして 「ロボットと生きる未来」を率直に語る。 〇全編語り下ろし。「未来の生き方」を考えるヒントが見つかる1冊です。 〇カバーと本編のイラストは、マンガ『孤食ロボット』の岩岡ヒサエ先生 〇世界思想社創業70周年記念新シリーズ「教養みらい選書」第1弾
BibTeX:
@Book{石黒浩2018d,
  title     = {僕がロボットをつくる理由-未来の生き方を日常からデザインする},
  publisher = {世界思想社},
  year      = {2018},
  author    = {石黒浩},
  month     = Mar,
  abstract  = {ロボットやAIで、私たちの生活はどう変わるか?
衣食住から恋愛・仕事・創造の方法まで、
ロボット研究の第一人者・石黒浩が、
自身の経験や日々の過ごし方を交えて、
「新しい世界を拓く楽しさ」と人生、そして
「ロボットと生きる未来」を率直に語る。
〇全編語り下ろし。「未来の生き方」を考えるヒントが見つかる1冊です。
〇カバーと本編のイラストは、マンガ『孤食ロボット』の岩岡ヒサエ先生
〇世界思想社創業70周年記念新シリーズ「教養みらい選書」第1弾},
  day       = {8},
}
石黒浩, "人間とロボットの法則", 日刊工業新聞社, July, 2017.
Abstract: これまでの研究のもとになった着想や研究でわかった人間の本質、ロボットの在り方、頭の中のアイデアなどを、文章と図版の見開き構成で紹介する。
BibTeX:
@Book{石黒浩2017n,
  title      = {人間とロボットの法則},
  publisher  = {日刊工業新聞社},
  year       = {2017},
  author     = {石黒浩},
  month      = Jul,
  isbn       = {9784526077319},
  abstract   = {これまでの研究のもとになった着想や研究でわかった人間の本質、ロボットの在り方、頭の中のアイデアなどを、文章と図版の見開き構成で紹介する。},
  totalpages = {144},
  price      = {¥1,620},
}
石黒浩, "枠を壊して自分を生きる。: 自分の頭で考えて動くためのヒント", 三笠書房, April, 2017.
Abstract: もっと自由に生きるための考え方のヒント。 全てのバイアスを取り払ってみると…… ◆夢――本当に必要なものか? それが将来を制限するかもしれない ◆友達――必ずしも必要ではない。なぜなら…… ◆自分らしさ――ひとつに絞るな、無限につくれ ◆人づきあい――「好きな人」より「嫌いな人」があなたの財産になる ◆生き甲斐――社会に自分をどう活かすか、を考える ……etc. 世界の見え方、自分を見る目がガラリと変わる!
BibTeX:
@Book{石黒浩2017g,
  title      = {枠を壊して自分を生きる。: 自分の頭で考えて動くためのヒント},
  publisher  = {三笠書房},
  year       = {2017},
  author     = {石黒浩},
  month      = Apr,
  isbn       = {9784837926672},
  abstract   = {もっと自由に生きるための考え方のヒント。

全てのバイアスを取り払ってみると……

◆夢――本当に必要なものか? それが将来を制限するかもしれない
◆友達――必ずしも必要ではない。なぜなら……
◆自分らしさ――ひとつに絞るな、無限につくれ
◆人づきあい――「好きな人」より「嫌いな人」があなたの財産になる
◆生き甲斐――社会に自分をどう活かすか、を考える ……etc.

世界の見え方、自分を見る目がガラリと変わる!},
  totalpages = {240},
  price      = {¥1,512},
}
石黒浩, 飯田一史, "人はアンドロイドになるために", 筑摩書房, March, 2017.
Abstract: アンドロイドと人間が共存する世界で、「人間とはなにか」を問う――アンドロイド研究の鬼才・石黒浩が挑む初の近未来フィクション。 人間とアンドロイドの未来をめぐる5つの思考実験。アンドロイド研究の第一人者が、最先端の研究をステップボードに大胆に想像力をはばたかせた初の小説集。
BibTeX:
@Book{2017,
  title      = {人はアンドロイドになるために},
  publisher  = {筑摩書房},
  year       = {2017},
author     = {石黒浩 and 飯田一史},
  month      = Mar,
  isbn       = {9784480804693},
  abstract   = {アンドロイドと人間が共存する世界で、「人間とはなにか」を問う――アンドロイド研究の鬼才・石黒浩が挑む初の近未来フィクション。 
人間とアンドロイドの未来をめぐる5つの思考実験。アンドロイド研究の第一人者が、最先端の研究をステップボードに大胆に想像力をはばたかせた初の小説集。},
  totalpages = {317},
  price      = {¥2,052},
}
Shuichi Nishio, Hideyuki Nakanishi, Tsutomu Fujinami, "Investigating Human Nature and Communication through Robots", Frontiers Media, January, 2017.
Abstract: The development of information technology enabled us to exchange more items of information among us no matter how far we are apart from each other. It also changed our way of communication. Various types of robots recently promoted to be sold to general public hint that these robots may further influence our daily life as they physically interact with us and handle objects in environment. We may even recognize a feel of presence similar to that of human beings when we talk to a robot or when a robot takes part in our conversation. The impact will be strong enough for us to think about the meaning of communication. This e-book consists of various studies that examine our communication influenced by robots. Topics include our attitudes toward robot behaviors, designing robots for better communicating with people, and how people can be affected by communicating through robots.
BibTeX:
@Book{Nishio2017,
  title     = {Investigating Human Nature and Communication through Robots},
  publisher = {Frontiers Media},
  year      = {2017},
editor    = {Shuichi Nishio and Hideyuki Nakanishi and Tsutomu Fujinami},
  month     = Jan,
  isbn      = {9782889450862},
  abstract  = {The development of information technology enabled us to exchange more items of information among us no matter how far we are apart from each other. It also changed our way of communication. Various types of robots recently promoted to be sold to general public hint that these robots may further influence our daily life as they physically interact with us and handle objects in environment. We may even recognize a feel of presence similar to that of human beings when we talk to a robot or when a robot takes part in our conversation. The impact will be strong enough for us to think about the meaning of communication. This e-book consists of various studies that examine our communication influenced by robots. Topics include our attitudes toward robot behaviors, designing robots for better communicating with people, and how people can be affected by communicating through robots.},
  file      = {Nishio2017.pdf:pdf/Nishio2017.pdf:PDF},
  url       = {http://www.frontiersin.org/books/Investigating_Human_Nature_and_Communication_through_Robots/1098},
}
石黒浩, "糞袋の内と外", 朝日新聞出版, April, 2013.
BibTeX:
@Book{石黒浩2013b,
  title      = {糞袋の内と外},
  publisher  = {朝日新聞出版},
  year       = {2013},
  author     = {石黒浩},
  month      = Apr,
  isbn       = {9784023311800},
  totalpages = {257},
  price      = {¥ 1,575},
}
石黒浩, "人と芸術とアンドロイド-- 私はなぜロボットを作るのか", 日本評論社, September, 2012.
BibTeX:
@Book{石黒浩2012,
  title      = {人と芸術とアンドロイド-- 私はなぜロボットを作るのか},
  publisher  = {日本評論社},
  year       = {2012},
  author     = {石黒浩},
  month      = Sep,
  isbn       = {9784535586246},
  totalpages = {190},
  price      = {¥ 1,575},
}
石黒浩, "アンドロイドを造る", オーム社, August, 2011.
BibTeX:
@Book{石黒浩2011b,
  title      = {アンドロイドを造る},
  publisher  = {オーム社},
  year       = {2011},
  author     = {石黒浩},
  month      = Aug,
  isbn       = {9784274210686},
  totalpages = {112},
  price      = {¥ 2,100},
}
石黒浩, "どうすれば「人」を創れるか アンドロイドになった私", 新潮社, April, 2011.
BibTeX:
@Book{石黒浩2011a,
  title      = {どうすれば「人」を創れるか アンドロイドになった私},
  publisher  = {新潮社},
  year       = {2011},
  author     = {石黒浩},
  month      = Apr,
  isbn       = {9784103294214},
  totalpages = {217},
  price      = {¥ 1,400},
  etitle     = {How can we create a Human Android ?},
}
石黒浩, 鷲田清一, "生きるってなんやろか?", 毎日新聞社, March, 2011.
BibTeX:
@Book{石黒浩2011,
  title      = {生きるってなんやろか?},
  publisher  = {毎日新聞社},
  year       = {2011},
  author     = {石黒浩 and 鷲田清一},
  month      = Mar,
  isbn       = {9784620320199},
  totalpages = {208},
  price      = {¥ 1,260},
  url        = {http://amazon.co.jp/o/ASIN/4620320196/},
}
石黒浩, "ロボットとは何か -- 人の心を映す鏡 --", 講談社, November, 2009.
BibTeX:
@Book{石黒浩2009,
  title      = {ロボットとは何か -- 人の心を映す鏡 --},
  publisher  = {講談社},
  year       = {2009},
  author     = {石黒浩},
  series     = {現代新書},
  month      = Nov,
  isbn       = {9784062880237},
  totalpages = {240},
  price      = {¥ 777},
  url        = {http://amazon.co.jp/o/ASIN/4062880237/},
}
石黒浩, "アンドロイドサイエンス  人間を知るためのロボット研究 ", 毎日コミュニケーションズ, September, 2007.
BibTeX:
@Book{石黒浩2007,
  title      = {アンドロイドサイエンス ~人間を知るためのロボット研究~},
  publisher  = {毎日コミュニケーションズ},
  year       = {2007},
  author     = {石黒浩},
  month      = Sep,
  isbn       = {9784839923846},
  totalpages = {320},
  price      = {¥ 2,940},
  url        = {http://amazon.co.jp/o/ASIN/4839923841/},
}
Book Chapters
李歆玥, 石井カルロス寿憲, 傅昌鋥, 林良子, "中国語を母語とする日本語学習者と母語話者を対象とする非流暢性発話フィラーの音声分析", ひつじ書房, pp. 417-428, February, 2024.
Abstract: 本研究では、中国語を母語とする日本語学習者による日本語自然会話に見られるフィラーの母音を対象とした音響的特徴を検討し、日本語母語話者によるフィラーの母音との比較検証を行なった。次に、自然会話におけるフィラーの母音と通常語彙項目の母音の相違について検討した。その結果、duration、F0mean、intensity、スペクトル傾斜関連特徴、jitter and shimmerに関して、中国人日本語学習者と日本語母語話者ともに、フィラーの母音と通常語彙項目の母音の間に顕著な差が観察された。さらに、random forestを用いた分類分析を行なったところ、フィラーの母音か通常語彙項目の母音かという分類には、duration と intensityは最も貢献しており、声質的特徴はその次に貢献していることが示された。
BibTeX:
@InBook{李歆玥2024,
  author    = {李歆玥 and 石井カルロス寿憲 and 傅昌鋥 and 林良子},
  booktitle = {流暢性と非流暢性},
  chapter   = {第6部 言語障害からみた(非)流暢性 第2章},
  pages     = {417-428},
  publisher = {ひつじ書房},
  title     = {中国語を母語とする日本語学習者と母語話者を対象とする非流暢性発話フィラーの音声分析},
  year      = {2024},
  abstract  = {本研究では、中国語を母語とする日本語学習者による日本語自然会話に見られるフィラーの母音を対象とした音響的特徴を検討し、日本語母語話者によるフィラーの母音との比較検証を行なった。次に、自然会話におけるフィラーの母音と通常語彙項目の母音の相違について検討した。その結果、duration、F0mean、intensity、スペクトル傾斜関連特徴、jitter and shimmerに関して、中国人日本語学習者と日本語母語話者ともに、フィラーの母音と通常語彙項目の母音の間に顕著な差が観察された。さらに、random forestを用いた分類分析を行なったところ、フィラーの母音か通常語彙項目の母音かという分類には、duration と intensityは最も貢献しており、声質的特徴はその次に貢献していることが示された。},
  date      = {2024-02-22},
  isbn      = {978-4-8234-1208-0},
  month     = feb,
  url       = {https://www.hituzi.co.jp/hituzibooks/ISBN978-4-8234-1208-0.htm},
  comment   = {y},
}
Hidenobu Sumioka, Junya Nakanishi, Masahiro Shiomi, Hiroshi Ishiguro, "Abbracci virtuali per l’educazione: studio pilota sul co-sleeping con un huggable communication medium e considerazioni di progettazione per applicazioni educative", pp. 169-190, July, 2023.
Abstract: In this chapter, we report two experiments to propose the application of virtual hug for educational contexts. In the first experiment, we report an experiment where we introduced huggable communication media into daytime sleep in a co-sleeping situation. In the second experiment, we investigated the effect of the gender perception from Hugvie on user’s touch perception.
BibTeX:
@InBook{Sumioka2020b,
  author    = {Hidenobu Sumioka and Junya Nakanishi and Masahiro Shiomi and Hiroshi Ishiguro},
  booktitle = {Robot sociali e educazione. Interazioni, applicazioni e nuove frontiere},
  chapter   = {11},
  pages     = {169-190},
  title     = {Abbracci virtuali per l’educazione: studio pilota sul co-sleeping con un huggable communication medium e considerazioni di progettazione per applicazioni educative},
  year      = {2023},
  abstract  = {In this chapter, we report two experiments to propose the application of virtual hug for educational contexts. In the first experiment, we report an experiment where we introduced huggable communication media into daytime sleep in a co-sleeping situation. In the second experiment, we investigated the effect of the gender perception from Hugvie on user’s touch perception.},
  date      = {2023-07-14},
  isbn      = {978-88-3285-557-9},
  month     = jul,
}
Carlos T. Ishi, "Motion generation during vocalized emotional expressions and evaluation in android robots", IntechOpen, pp. 1-20, August, 2019.
Abstract: Vocalized emotional expressions such as laughter and surprise often occur in natural dialogue interactions and are important factors to be considered in order to achieve smooth robot-mediated communication. Miscommunication may be caused if there is a mismatch between audio and visual modalities, especially in android robots, which have a highly humanlike appearance. In this chapter, motion generation methods are introduced for laughter and vocalized surprise events, based on analysis results of human behaviors during dialogue interactions. The effectiveness of controlling different modalities of the face, head, and upper body (eyebrow raising, eyelid widening/narrowing, lip corner/cheek raising, eye blinking, head motion, and torso motion control) and different motion control levels are evaluated using an android robot. Subjective experiments indicate the importance of each modality in the perception of motion naturalness (humanlikeness) and the degree of emotional expression.
BibTeX:
@Inbook{Ishi2019d,
  chapter   = {1},
  pages     = {1-20},
  title     = {Motion generation during vocalized emotional expressions and evaluation in android robots},
  publisher = {IntechOpen},
  year      = {2019},
  author    = {Carlos T. Ishi},
  booktitle = {Future of Robotics - Becoming Human with Humanoid or Emotional Intelligence},
  month     = aug,
  isbn      = {978-1-78985-484-8},
  abstract  = {Vocalized emotional expressions such as laughter and surprise often occur in natural dialogue interactions and are important factors to be considered in order to achieve smooth robot-mediated communication. Miscommunication may be caused if there is a mismatch between audio and visual modalities, especially in android robots, which have a highly humanlike appearance. In this chapter, motion generation methods are introduced for laughter and vocalized surprise events, based on analysis results of human behaviors during dialogue interactions. The effectiveness of controlling different modalities of the face, head, and upper body (eyebrow raising, eyelid widening/narrowing, lip corner/cheek raising, eye blinking, head motion, and torso motion control) and different motion control levels are evaluated using an android robot. Subjective experiments indicate the importance of each modality in the perception of motion naturalness (humanlikeness) and the degree of emotional expression.},
  url       = {https://www.intechopen.com/books/becoming-human-with-humanoid-from-physical-interaction-to-social-intelligence/motion-generation-during-vocalized-emotional-expressions-and-evaluation-in-android-robots},
  comment   = {y},
  doi       = {10.5772/intechopen.88457},
  keywords  = {emotion expression; laughter; surprise; motion generation; human-robot interaction; nonverbal information},
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "Brain-computer interface and motor imagery training: The role of visual feedback and embodiment", Chapter in Evolving BCI Therapy - Engaging Brain State Dynamics, pp. 73-88, October, 2018.
Abstract: We review the impact of humanlike visual feedback in optimized modulation of brain activity by the BCI users.
BibTeX:
@Incollection{Alimardani2018,
  author    = {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Brain-computer interface and motor imagery training: The role of visual feedback and embodiment},
  booktitle = {Evolving BCI Therapy - Engaging Brain State Dynamics},
  year      = {2018},
  chapter   = {5},
  pages     = {73-88},
  month     = oct,
  isbn      = {978-1-78984-070-4},
  abstract  = {We review the impact of humanlike visual feedback in optimized modulation of brain activity by the BCI users.},
}
小川浩平, 港隆史, 石黒浩, "自律会話可能なアンドロイド開発", Chapter in 人と協働するロボット革命最前線基盤技術から用途、デザイン、利用者心理、ISO13482、安全対策まで, 株式会社エヌ・ティー・エス, pp. 87-95, May, 2016.
Abstract: 自律対話可能なアンドロイドの研究開発について,タッチパネルを用いたアンドロイドとの対話システムとそれを用いた実証実験(大阪大学)や,石黒ERATOプロジェクトにおいて開発しているアンドロイドシステムとその研究課題について紹介する.
BibTeX:
@Incollection{小川浩平2016,
  author    = {小川浩平 and 港隆史 and 石黒浩},
  title     = {自律会話可能なアンドロイド開発},
  booktitle = {人と協働するロボット革命最前線基盤技術から用途、デザイン、利用者心理、ISO13482、安全対策まで},
  publisher = {株式会社エヌ・ティー・エス},
  year      = {2016},
  editor    = {佐藤知正},
  pages     = {87-95},
  month     = May,
  abstract  = {自律対話可能なアンドロイドの研究開発について,タッチパネルを用いたアンドロイドとの対話システムとそれを用いた実証実験(大阪大学)や,石黒ERATOプロジェクトにおいて開発しているアンドロイドシステムとその研究課題について紹介する.},
  file      = {小川浩平2016.pdf:pdf/小川浩平2016.pdf:PDF},
  url       = {http://www.nts-book.co.jp/index.html},
}
西尾修一, "アンドロイドへの身体感覚転移とニューロフィードバック", Chapter in ロボットと共生する社会脳 ーー 神経社会ロボット学, 新曜社, no. 第9巻, pp. 175-208, 2016.
BibTeX:
@Incollection{西尾修一2016a,
  author    = {西尾修一},
  title     = {アンドロイドへの身体感覚転移とニューロフィードバック},
  booktitle = {ロボットと共生する社会脳 ーー 神経社会ロボット学},
  publisher = {新曜社},
  year      = {2016},
  number    = {第9巻},
  series    = {社会脳シリーズ},
  pages     = {175-208},
isbn      = {978-4-7885-1456-0},
  file      = {西尾修一2016a.pdf:pdf/西尾修一2016a.pdf:PDF},
}
西尾修一, "遠隔操作アンドロイドを通じて感じる他者の存在", Chapter in ロボットと共生する社会脳 ーー 神経社会ロボット学, 新曜社, no. 第9巻, pp. 141-169, 2016.
BibTeX:
@Incollection{西尾修一2016,
  author    = {西尾修一},
  title     = {遠隔操作アンドロイドを通じて感じる他者の存在},
  booktitle = {ロボットと共生する社会脳 ーー 神経社会ロボット学},
  publisher = {新曜社},
  year      = {2016},
  number    = {第9巻},
  series    = {社会脳シリーズ},
  pages     = {141-169},
isbn      = {978-4-7885-1456-0},
  file      = {西尾修一2016.pdf:pdf/西尾修一2016.pdf:PDF},
}
坊農真弓, 石黒浩, "ロボット演劇が魅せるもの", Chapter in ロボットと共生する社会脳 ーー 神経社会ロボット学, 新曜社, no. 第9巻, pp. 43-73, 2016.
BibTeX:
@Incollection{石黒浩2016,
  author    = {坊農真弓 and 石黒浩},
  title     = {ロボット演劇が魅せるもの},
  booktitle = {ロボットと共生する社会脳 ーー 神経社会ロボット学},
  publisher = {新曜社},
  year      = {2016},
  number    = {第9巻},
  series    = {社会脳シリーズ},
  pages     = {43-73},
isbn      = {978-4-7885-1456-0},
}
Panikos Heracleous, Denis Beautemps, Hiroshi Ishiguro, Norihiro Hagita, "Towards Augmentative Speech Communication", Chapter in Speech and Language Technologies, InTech, Vukovar, Croatia, pp. 303-318, June, 2011.
Abstract: Speech is the most natural form of communication for human beings and is often described as a uni-modal communication channel. However, it is well known that speech is multi-modal in nature and includes the auditive, visual, and tactile modalities (i.e., as in Tadoma communication). Other less natural modalities such as electromyographic signal, invisible articulator display, or brain electrical activity or electromagnetic activity can also be considered. Therefore, in situations where audio speech is not available or is corrupted because of disability or adverse environmental condition, people may resort to alternative methods such as augmented speech.
BibTeX:
@Incollection{Heracleous2011,
  author    = {Panikos Heracleous and Denis Beautemps and Hiroshi Ishiguro and Norihiro Hagita},
  title     = {Towards Augmentative Speech Communication},
  booktitle = {Speech and Language Technologies},
  publisher = {{InT}ech},
  year      = {2011},
  editor    = {Ivo Ipsic},
  pages     = {303--318},
  address   = {Vukovar, Croatia},
  month     = Jun,
  abstract  = {Speech is the most natural form of communication for human beings and is often described as a uni-modal communication channel. However, it is well known that speech is multi-modal in nature and includes the auditive, visual, and tactile modalities (i.e., as in Tadoma communication \cite{TADOMA}). Other less natural modalities such as electromyographic signal, invisible articulator display, or brain electrical activity or electromagnetic activity can also be considered. Therefore, in situations where audio speech is not available or is corrupted because of disability or adverse environmental condition, people may resort to alternative methods such as augmented speech.},
  file      = {Heracleous2011.pdf:Heracleous2011.pdf:PDF;InTech-Towards_augmentative_speech_communication.pdf:http\://www.intechopen.com/source/pdfs/15082/InTech-Towards_augmentative_speech_communication.pdf:PDF},
  url       = {http://www.intechopen.com/articles/show/title/towards-augmentative-speech-communication},
}
Shuichi Nishio, Hiroshi Ishiguro, Norihiro Hagita, "Geminoid: Teleoperated Android of an Existing Person", Chapter in Humanoid Robots: New Developments, I-Tech Education and Publishing, Vienna, Austria, pp. 343-352, June, 2007.
BibTeX:
@Incollection{Nishio2007a,
  author          = {Shuichi Nishio and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {Geminoid: Teleoperated Android of an Existing Person},
  booktitle       = {Humanoid Robots: New Developments},
  publisher       = {I-Tech Education and Publishing},
  year            = {2007},
  editor          = {Armando Carlos de Pina Filho},
  pages           = {343--352},
  address         = {Vienna, Austria},
  month           = Jun,
  file            = {Nishio2007a.pdf:Nishio2007a.pdf:PDF;InTech-Geminoid_teleoperated_android_of_an_existing_person.pdf:http\://www.intechopen.com/source/pdfs/240/InTech-Geminoid_teleoperated_android_of_an_existing_person.pdf:PDF},
  url             = {http://www.intechopen.com/articles/show/title/geminoid__teleoperated_android_of_an_existing_person},
}
Review Articles
住岡英信, "明るく楽しいロボット共生介護施設の実現へ ~赤ちゃんのミニマルデザインで切り拓く~", 電波技術協会報誌 FORN, no. 346, pp. 18-21, May, 2022.
Abstract: 認知症は今や日本だけでなく、世界的な問題になっています。認知症患者がしばしば示す暴言や暴行、徘徊といった行動・心理症状は、本人だけでなく、介護者の負担を増加させ、ひいては社会全体の経済負担をも増加させる原因になっています。この問題に対し、私達は赤ちゃん型対話ロボットを用いて認知症高齢者だけでなく、介護者にとっても効果的な支援に取り組んでいます。その鍵となるのが赤ちゃんのようなロボットを実現するための最小限の特徴を検討するミニマルデザインアプローチです。
BibTeX:
@Article{住岡英信2022b,
  author   = {住岡英信},
  journal  = {電波技術協会報誌 FORN},
  title    = {明るく楽しいロボット共生介護施設の実現へ ~赤ちゃんのミニマルデザインで切り拓く~},
  year     = {2022},
  abstract = {認知症は今や日本だけでなく、世界的な問題になっています。認知症患者がしばしば示す暴言や暴行、徘徊といった行動・心理症状は、本人だけでなく、介護者の負担を増加させ、ひいては社会全体の経済負担をも増加させる原因になっています。この問題に対し、私達は赤ちゃん型対話ロボットを用いて認知症高齢者だけでなく、介護者にとっても効果的な支援に取り組んでいます。その鍵となるのが赤ちゃんのようなロボットを実現するための最小限の特徴を検討するミニマルデザインアプローチです。},
  day      = {10},
  month    = may,
  number   = {346},
  pages    = {18-21},
  url      = {https://reea.or.jp/report/3949/},
}
住岡英信, "抱擁型通信メディアによる不安やストレスの軽減", 週刊 医学のあゆみ, vol. 278, no. 11, pp. 962-966, September, 2021.
Abstract: メンタルヘルスケアはストレス社会といわれる現代において重要な課題であるが、新型コロナ禍においてはさらに重要性を増している。新型コロナ感染予防対策として対面での対話が難しくなり、電話やWeb会議システムなどを用いた遠隔対話が当たり前になってきている。しかし、遠隔対話においてはこれまで人間が、他者とのコミュニケーションにおいて孤独感や不安、精神的ストレスを軽減するために利用してきた物理的な触れ合いが失われており、それに代わる方法を検討する必要性が叫ばれている。本稿では、遠隔対話に、仮想的な他者との抱擁を導入することを目指して開発された抱擁型通信メディア「ハグビー」について紹介し、それを使用することで得られる不安やストレスの軽減効果について心理的、生理的、脳科学的側面から調べた研究について紹介する。また、その実応用として児童に対する読み聞かせ支援について紹介し、感情的な制御が難しい人々への支援につながる可能性について述べる。
BibTeX:
@Article{住岡英信2021a,
  author   = {住岡英信},
  journal  = {週刊 医学のあゆみ},
  title    = {抱擁型通信メディアによる不安やストレスの軽減},
  year     = {2021},
  abstract = {メンタルヘルスケアはストレス社会といわれる現代において重要な課題であるが、新型コロナ禍においてはさらに重要性を増している。新型コロナ感染予防対策として対面での対話が難しくなり、電話やWeb会議システムなどを用いた遠隔対話が当たり前になってきている。しかし、遠隔対話においてはこれまで人間が、他者とのコミュニケーションにおいて孤独感や不安、精神的ストレスを軽減するために利用してきた物理的な触れ合いが失われており、それに代わる方法を検討する必要性が叫ばれている。本稿では、遠隔対話に、仮想的な他者との抱擁を導入することを目指して開発された抱擁型通信メディア「ハグビー」について紹介し、それを使用することで得られる不安やストレスの軽減効果について心理的、生理的、脳科学的側面から調べた研究について紹介する。また、その実応用として児童に対する読み聞かせ支援について紹介し、感情的な制御が難しい人々への支援につながる可能性について述べる。},
  day      = {11},
  etitle   = {Reducing anxiety and stress through a huggable communication medium},
  month    = sep,
  number   = {11},
  pages    = {962-966},
  url      = {https://www.ishiyaku.co.jp/magazines/ayumi/AyumiBookDetail.aspx?BC=927811},
  volume   = {278},
}
東中竜一郎, 港隆史, 境くりま, 船山智, 西崎博光, 長井隆行, "対話ロボットコンペティションにおける音声対話システム構築", 日本音響学会誌, vol. 77, no. 8, pp. 512-520, August, 2021.
Abstract: 近年,スマートフォン上の音声エージェント,AIスピーカ,コミュニケーションロボットという形で身の回りに対話デバイスが増加している。こういった対話デバイスの究極の形の一つが,人間のようなロボットとの対話であろう。そうした対話ロボットのコンペティションとして,我々は対話ロボットコンペティション(ロボットコンペ)1を主催している。対話システムに関するコンペティションはこれまでに幾つも開催されている。しかし,人型の対話ロボットによるコミュニケーションを対象としたものは他に類を見ない。その理由として,人型ロボットを準備できない,人型ロボット上の対話システムの実装に必要なソフトウェアが多すぎて敷居が高い,といったものが挙げられる。この対処として,ロボットコンペでは,主催者側でアンドロイドを用意するほか,シミュレータや音声認識,音声合成,身体を動作させるためのプログラムなど対話システムのコアである対話制御以外のソフトウェアをすべて提供する。もちろん,主催者側が準備するソフトウェアの代替や追加のソフトウェアとして,自身で用意したものを用いることも可能である。本稿では,ロボットコンペの趣旨や提供するシステム構築環境について述べたあと,音声対話を行う対話ロボットに関するソフトウェアとして,対話システム関連ツール,音声認識・音声合成ツール,ロボット関連ツールを紹介する。これらのツールについては,可能な限り関連URL(執筆時点のもの)を記載した。本稿を読んで,「対話ロボット構築も意外と簡単そう」,「対話ロボットを作ってみたい」,「ロボットコンペに参加してみたい」という方を増やすことが本稿の目的である。
BibTeX:
@Article{東中竜一郎2021,
  author   = {東中竜一郎 and 港隆史 and 境くりま and 船山智 and 西崎博光 and 長井隆行},
  title    = {対話ロボットコンペティションにおける音声対話システム構築},
  journal  = {日本音響学会誌},
  year     = {2021},
  volume   = {77},
  number   = {8},
  pages    = {512-520},
  month    = aug,
  abstract = {近年,スマートフォン上の音声エージェント,AIスピーカ,コミュニケーションロボットという形で身の回りに対話デバイスが増加している。こういった対話デバイスの究極の形の一つが,人間のようなロボットとの対話であろう。そうした対話ロボットのコンペティションとして,我々は対話ロボットコンペティション(ロボットコンペ)1を主催している。対話システムに関するコンペティションはこれまでに幾つも開催されている。しかし,人型の対話ロボットによるコミュニケーションを対象としたものは他に類を見ない。その理由として,人型ロボットを準備できない,人型ロボット上の対話システムの実装に必要なソフトウェアが多すぎて敷居が高い,といったものが挙げられる。この対処として,ロボットコンペでは,主催者側でアンドロイドを用意するほか,シミュレータや音声認識,音声合成,身体を動作させるためのプログラムなど対話システムのコアである対話制御以外のソフトウェアをすべて提供する。もちろん,主催者側が準備するソフトウェアの代替や追加のソフトウェアとして,自身で用意したものを用いることも可能である。本稿では,ロボットコンペの趣旨や提供するシステム構築環境について述べたあと,音声対話を行う対話ロボットに関するソフトウェアとして,対話システム関連ツール,音声認識・音声合成ツール,ロボット関連ツールを紹介する。これらのツールについては,可能な限り関連URL(執筆時点のもの)を記載した。本稿を読んで,「対話ロボット構築も意外と簡単そう」,「対話ロボットを作ってみたい」,「ロボットコンペに参加してみたい」という方を増やすことが本稿の目的である。},
  day      = {1},
  url      = {https://www.jstage.jst.go.jp/article/jasj/77/8/77_512/_article/-char/ja},
  doi      = {10.20697/jasj.77.8_512},
}
石黒浩, 港隆史, 小山虎, "意図欲求を持つ自律対話アンドロイドの研究開発", 日本ロボット学会誌, vol. 37, no. 4, pp. 312-317, May, 2019.
Abstract: 自律的に対話するロボットの研究開発は,近年ますます重要になってきている.しかしながら,多くの研究は音声認識やインタフェース等の研究にとどまり,意図や欲求を持ちながら,多様なモダリティで対話するロボットの開発に至っていない.これに対して,筆者等はJST ERATO石黒共生ヒューマンロボットインタラクションプロジェクトにおいて,意図や欲求を持ちながら多様なモダリティを通して,人間と人間らしく対話できるロボットの研究開発に取り組んできた.本稿では,ロボットのアーキテクチャの歴史を振り返りながら,ロボットがより人間らしく人間と対話するためのアーキテクチャを考察するとともに,実際に実装したアーキテクチャについて紹介する.
BibTeX:
@Article{石黒浩2019,
  author   = {石黒浩 and 港隆史 and 小山虎},
  title    = {意図欲求を持つ自律対話アンドロイドの研究開発},
  journal  = {日本ロボット学会誌},
  year     = {2019},
  volume   = {37},
  number   = {4},
  pages    = {312-317},
  month    = May,
  abstract = {自律的に対話するロボットの研究開発は,近年ますます重要になってきている.しかしながら,多くの研究は音声認識やインタフェース等の研究にとどまり,意図や欲求を持ちながら,多様なモダリティで対話するロボットの開発に至っていない.これに対して,筆者等はJST ERATO石黒共生ヒューマンロボットインタラクションプロジェクトにおいて,意図や欲求を持ちながら多様なモダリティを通して,人間と人間らしく対話できるロボットの研究開発に取り組んできた.本稿では,ロボットのアーキテクチャの歴史を振り返りながら,ロボットがより人間らしく人間と対話するためのアーキテクチャを考察するとともに,実際に実装したアーキテクチャについて紹介する.},
  day      = {15},
  url      = {https://www.jstage.jst.go.jp/article/jrsj/37/4/37_37_312/_article/-char/ja},
  doi      = {10.7210/jrsj.37.312},
  etitle   = {Development of an Autonomous Android with Conversational Capability based on Intention and Desire},
}
小川浩平, 住岡英信, 石黒浩, "感情でつながる,感情でつなげるロボット対話システム", 人工知能学会 人工知能 31巻5号 特集「人工知能とEmotion」, vol. 31, no. 5, pp. 650-655, September, 2016.
Abstract: 人は自分の中において一貫性を保つような理由付けを無意識のうちに働かせる性質がある.また,その際人は自分の都合の良いポジティブな想像を働かせる傾向にある.我々はこれまでこれら仮説を踏まえ,さまざまな人と関わるロボットを開発してきた.具体的には,自律的に人と関わることにより社会的な役割を果たすロボットと,人と人をつなぐロボットである.本稿では,人と感情でつながる,また,人同士を感情でつなげるロボットをどのように設計すればよいか,またその際,ロボットは人間社会において具体的にどのような役割を果たすことができるかについて,具体的な研究事例をあげながら議論を行う.
BibTeX:
@Article{小川浩平2016a,
  author   = {小川浩平 and 住岡英信 and 石黒浩},
  title    = {感情でつながる,感情でつなげるロボット対話システム},
  journal  = {人工知能学会 人工知能 31巻5号 特集「人工知能とEmotion」},
  year     = {2016},
  volume   = {31},
  number   = {5},
  pages    = {650-655},
  month    = Sep,
  abstract = {人は自分の中において一貫性を保つような理由付けを無意識のうちに働かせる性質がある.また,その際人は自分の都合の良いポジティブな想像を働かせる傾向にある.我々はこれまでこれら仮説を踏まえ,さまざまな人と関わるロボットを開発してきた.具体的には,自律的に人と関わることにより社会的な役割を果たすロボットと,人と人をつなぐロボットである.本稿では,人と感情でつながる,また,人同士を感情でつなげるロボットをどのように設計すればよいか,またその際,ロボットは人間社会において具体的にどのような役割を果たすことができるかについて,具体的な研究事例をあげながら議論を行う.},
  etitle   = {Robot Communication System That Connects Humans and Robots with Emotions},
  file     = {小川浩平2016a.pdf:pdf/小川浩平2016a.pdf:PDF},
}
住岡英信, 中江文, 石黒浩, "ロボットが医療にもたらすコミュニケーション支援の可能性", 大阪保険医雑誌, vol. 2016年2月号, no. 593, pp. 27-32, February, 2016.
Abstract: 本論文では、医療の現場におけるロボットを介したコミュニケーション支援の可能性をこれまでの研究を例にあげながら議論する。
BibTeX:
@Article{住岡英信2016a,
  author   = {住岡英信 and 中江文 and 石黒浩},
  title    = {ロボットが医療にもたらすコミュニケーション支援の可能性},
  journal  = {大阪保険医雑誌},
  year     = {2016},
  volume   = {2016年2月号},
  number   = {593},
  pages    = {27-32},
  month    = Feb,
  abstract = {本論文では、医療の現場におけるロボットを介したコミュニケーション支援の可能性をこれまでの研究を例にあげながら議論する。},
  url      = {https://osaka-hk.org/about/katsudou/publish/},
  file     = {住岡英信2016a.pdf:pdf/住岡英信2016a.pdf:PDF},
}
Shuichi Nishio, Takashi Minato, Hiroshi Ishiguro, "Using Androids to Provide Communication Support for the Elderly", New Breeze, vol. 27, no. 4, pp. 14-17, October, 2015.
BibTeX:
@Article{Nishio2015c,
  author   = {Shuichi Nishio and Takashi Minato and Hiroshi Ishiguro},
  title    = {Using Androids to Provide Communication Support for the Elderly},
  journal  = {New Breeze},
  year     = {2015},
  volume   = {27},
  number   = {4},
  pages    = {14-17},
  month    = Oct,
  day      = {9},
  url      = {https://www.ituaj.jp/wp-content/uploads/2015/10/nb27-4_web_05_ROBOTS_usingandroids.pdf},
  file     = {Nishio2015c.pdf:pdf/Nishio2015c.pdf:PDF},
}
西尾修一, 港隆史, 石黒浩, "アンドロイドによる高齢者のコミュニケーション支援", ITUジャーナル, vol. 45, no. 9, pp. 18-21, September, 2015.
BibTeX:
@Article{西尾修一2015b,
  author   = {西尾修一 and 港隆史 and 石黒浩},
  title    = {アンドロイドによる高齢者のコミュニケーション支援},
  journal  = {ITUジャーナル},
  year     = {2015},
  volume   = {45},
  number   = {9},
  pages    = {18-21},
  month    = SEP,
  url      = {https://www.ituaj.jp/?itujournal=2015_09},
  file     = {Nishio2015b.pdf:pdf/Nishio2015b.pdf:PDF},
}
中江文, 住岡英信, 力石武信, 吉川雄一郎, 柴田政彦, 石黒浩, 眞下節, "アンドロイドによる医療支援の可能性", 整形・災害外科 2015年07月号, vol. 58, no. 8, pp. 1057-1061, July, 2015.
Abstract: 医療現場・介護現場では、人の手間をかけられる環境が理想である。しかしわが国のような高齢化社会では、最小限の人員配置の中で出来るだけ機械の助けを借りていくことが現実的である。そんな中、わが国のアンドロイド開発技術は目覚しいものがあり、人にしか出来ないと思われた役割も担っていける可能性がある。医師の現場のニーズを開発者に伝え、多くの人がハッピーになれるロボットと共存共栄できる社会の構築が望まれる。
BibTeX:
@Article{中江文2015,
  author   = {中江文 and 住岡英信 and 力石武信 and 吉川雄一郎 and 柴田政彦 and 石黒浩 and 眞下節},
  title    = {アンドロイドによる医療支援の可能性},
  journal  = {整形・災害外科 2015年07月号},
  year     = {2015},
  volume   = {58},
  number   = {8},
  pages    = {1057-1061},
month    = jul,
  abstract = {医療現場・介護現場では、人の手間をかけられる環境が理想である。しかしわが国のような高齢化社会では、最小限の人員配置の中で出来るだけ機械の助けを借りていくことが現実的である。そんな中、わが国のアンドロイド開発技術は目覚しいものがあり、人にしか出来ないと思われた役割も担っていける可能性がある。医師の現場のニーズを開発者に伝え、多くの人がハッピーになれるロボットと共存共栄できる社会の構築が望まれる。},
  etitle   = {Possibility of medical support by Androids},
  file     = {中江文2015.pdf:pdf/中江文2015.pdf:PDF},
}
港隆史, 石黒浩, "エルフォイド:人のミニマルデザインを持つロボット型通信メディア", 日本ロボット学会誌, vol. 32, no. 8, pp. 704-708, October, 2014.
Abstract: 本稿では,人と人を親密に結びつける新たなコミュニケーション技術として,人のミニマルデザインを持つロボット型通信メディアについて,著者らが開発してきた「テレノイド」,「エルフォイド」,「ハグビー」の研究を紹介しながら,解説する.
BibTeX:
@Article{港隆史2014,
  author   = {港隆史 and 石黒浩},
  title    = {エルフォイド:人のミニマルデザインを持つロボット型通信メディア},
  journal  = {日本ロボット学会誌},
  year     = {2014},
  volume   = {32},
  number   = {8},
  pages    = {704-708},
  month    = Oct,
abstract = {本稿では,人と人を親密に結びつける新たなコミュニケーション技術として,人のミニマルデザインを持つロボット型通信メディアについて,著者らが開発してきた「テレノイド」,「エルフォイド」,「ハグビー」の研究を紹介しながら,解説する.},
  etitle   = {Elfoid: A Robotic Communication Media with a Minimalistic Human Design},
file     = {港隆史2014a.pdf:pdf/港隆史2014a.pdf:PDF},
}
西尾修一, 石黒浩, Ayse P. Saygin, "アンドロイドと不気味の谷", Clinical Neuroscience, vol. 33, no. 2, pp. 175-176, 2014.
Abstract: アンドロイドと不気味の谷仮説について解読する。
BibTeX:
@Article{西尾修一2014,
  author    = {西尾修一 and 石黒浩 and Ayse P. Saygin},
  title     = {アンドロイドと不気味の谷},
  journal   = {Clinical Neuroscience},
  year      = {2014},
  volume    = {33},
  number    = {2},
  pages     = {175-176},
  abstract  = {アンドロイドと不気味の谷仮説について解読する。},
booktitle = {Clinical Neuroscience},
  file      = {西尾修一2015.pdf:pdf/西尾修一2015.pdf:PDF},
}
西尾修一, アリマルダニ マリヤム, 石黒浩, "遠隔操作アンドロイドへの身体感覚転移", 日本ロボット学会誌, vol. 31, no. 9, pp. 26-29, November, 2013.
BibTeX:
@Article{西尾修一2013,
  author          = {西尾修一 and アリマルダニ マリヤム and 石黒浩},
  title           = {遠隔操作アンドロイドへの身体感覚転移},
  journal         = {日本ロボット学会誌},
  year            = {2013},
  volume          = {31},
  number          = {9},
  pages           = {26-29},
  month           = Nov,
  day             = {15},
  doi             = {10.7210/jrsj.31.854},
  etitle          = {Body Ownership Transfer to Teleoperated Android},
  file            = {西尾修一2013.pdf:pdf/西尾修一2013.pdf:PDF},
}
石黒浩, 港隆史, 西尾修一, "人としてのミニマルデザインを持つ遠隔操作型ロボット", 情報処理, vol. 54, no. 7, pp. 694-697, July, 2013.
BibTeX:
@Article{石黒浩2013,
  author   = {石黒浩 and 港隆史 and 西尾修一},
  title    = {人としてのミニマルデザインを持つ遠隔操作型ロボット},
  journal  = {情報処理},
  year     = {2013},
  volume   = {54},
  number   = {7},
  pages    = {694--697},
  month    = Jul,
  day      = {15},
url      = {https://ipsj.ixsq.nii.ac.jp/ej/?action=pages_view_main&active_action=repository_view_main_item_detail&item_id=92591&item_no=1&page_id=13&block_id=8},
  etitle   = {Tele-operated Robot with a Minimal Design of Human},
  file     = {:pdf/石黒浩2013.pdf:PDF},
}
西尾修一, 山崎竜二, 石黒浩, "遠隔操作アンドロイドを用いた認知症高齢者のコミュニケーション支援", システム/制御/情報, vol. 57, no. 1, pp. 31-36, January, 2013.
BibTeX:
@Article{西尾修一2012,
  author          = {西尾修一 and 山崎竜二 and 石黒浩},
  title           = {遠隔操作アンドロイドを用いた認知症高齢者のコミュニケーション支援},
  journal         = {システム/制御/情報},
  year            = {2013},
  volume          = {57},
  number          = {1},
  pages           = {31--36},
  month           = Jan,
etitle          = {Communication Support for Demented Elderly using Teleoperated Android},
  file            = {西尾修一2012.pdf:pdf/西尾修一2012.pdf:PDF},
  keywords        = {dementia; elderly care; teleoperated android; communication support},
}
Kohei Ogawa, Shuichi Nishio, Takashi Minato, Hiroshi Ishiguro, "Android Robots as Tele-presence Media", Biomedical Engineering and Cognitive Neuroscience for Healthcare: Interdisciplinary Applications, Medical Information Science Reference, Pennsylvania, USA, pp. 54-63, September, 2012.
Abstract: In this chapter, the authors describe two human-like android robots, known as Geminoid and Telenoid, which they have developed. Geminoid was developed for two reasons: (1) to explore how humans react or respond the android during face-to-face communication and (2) to investigate the advantages of the android as a communication medium compared to traditional communication media, such as the telephone or the television conference system. The authors conducted two experiments: the first was targeted to an interlocutor of Geminoid, and the second was targeted to an operator of it. The results of these experiments showed that Geminoid could emulate a human's presence in a natural-conversation situation. Additionally, Geminoid could be as persuasive to the interlocutor as a human. The operators of Geminoid were also influenced by the android: during operation, they felt as if their bodies were one and the same with the Geminoid body. The latest challenge has been to develop Telenoid, an android with a more abstract appearance than Geminoid, which looks and behaves as a minimalistic human. At first glance, Telenoid resembles a human; however, its appearance can be interpreted as any sex or any age. Two field experiments were conducted with Telenoid. The results of these experiments showed that Telenoid could be an acceptable communication medium for both young and elderly people. In particular, physical interaction, such as a hug, positively affected the experience of communicating with Telenoid.
BibTeX:
@Article{Ogawa2012b,
  author    = {Kohei Ogawa and Shuichi Nishio and Takashi Minato and Hiroshi Ishiguro},
  title     = {Android Robots as Tele-presence Media},
  journal   = {Biomedical Engineering and Cognitive Neuroscience for Healthcare: Interdisciplinary Applications},
  year      = {2012},
  pages     = {54-63},
  month     = Sep,
  abstract  = {In this chapter, the authors describe two human-like android robots, known as Geminoid and Telenoid, which they have developed. Geminoid was developed for two reasons: (1) to explore how humans react or respond the android during face-to-face communication and (2) to investigate the advantages of the android as a communication medium compared to traditional communication media, such as the telephone or the television conference system. The authors conducted two experiments: the first was targeted to an interlocutor of Geminoid, and the second was targeted to an operator of it. The results of these experiments showed that Geminoid could emulate a human's presence in a natural-conversation situation. Additionally, Geminoid could be as persuasive to the interlocutor as a human. The operators of Geminoid were also influenced by the android: during operation, they felt as if their bodies were one and the same with the Geminoid body. The latest challenge has been to develop Telenoid, an android with a more abstract appearance than Geminoid, which looks and behaves as a minimalistic human. At first glance, Telenoid resembles a human; however, its appearance can be interpreted as any sex or any age. Two field experiments were conducted with Telenoid. The results of these experiments showed that Telenoid could be an acceptable communication medium for both young and elderly people. In particular, physical interaction, such as a hug, positively affected the experience of communicating with Telenoid.},
  url       = {http://www.igi-global.com/chapter/android-robots-telepresence-media/69905},
  doi       = {10.4018/978-1-4666-2113-8.ch006},
  address   = {Pennsylvania, USA},
  chapter   = {6},
  editor    = {Jinglong Wu},
  file      = {Ogawa2012b.pdf:Ogawa2012b.pdf:PDF},
  isbn      = {9781466621138},
  publisher = {Medical Information Science Reference},
}
Daisuke Sakamoto, Hiroshi Ishiguro, "Geminoid: Remote-Controlled Android System for Studying Human Presence", Kansei Engineering International, vol. 8, no. 1, pp. 3-9, 2009.
BibTeX:
@Article{Sakamoto2009,
  author   = {Daisuke Sakamoto and Hiroshi Ishiguro},
  title    = {Geminoid: Remote-Controlled Android System for Studying Human Presence},
  journal  = {Kansei Engineering International},
  year     = {2009},
  volume   = {8},
  number   = {1},
  pages    = {3--9},
  url      = {http://mol.medicalonline.jp/archive/search?jo=dp7keint&ye=2009&vo=8&issue=1},
  file     = {Sakamoto2009.pdf:Sakamoto2009.pdf:PDF},
}
西尾修一, 石黒浩, "人として人とつながるロボット研究", 電子情報通信学会学会誌, vol. 91, no. 5, pp. 411-416, May, 2008.
Abstract: 人間の対象を擬人化するという能力をかんがみれば,人間型や人間酷似型ロボットの研究が,人とかかわるロボットを開発する上で重要な意味を持つことは明らかである.本稿では,筆者らが最近開発したジェミノイドの研究を基に,人とロボットをつなぐ研究がどのように発展していくかを議論する.
BibTeX:
@Article{西尾修一2008,
  author   = {西尾修一 and 石黒浩},
  title    = {人として人とつながるロボット研究},
  journal  = {電子情報通信学会学会誌},
  year     = {2008},
  volume   = {91},
  number   = {5},
  pages    = {411--416},
  month    = May,
  abstract = {人間の対象を擬人化するという能力をかんがみれば,人間型や人間酷似型ロボットの研究が,人とかかわるロボットを開発する上で重要な意味を持つことは明らかである.本稿では,筆者らが最近開発したジェミノイドの研究を基に,人とロボットをつなぐ研究がどのように発展していくかを議論する.},
  url      = {http://ci.nii.ac.jp/naid/110006664712},
  etitle   = {Android Science Research for Bridging Humans and Robots},
  file     = {西尾修一2008.pdf:西尾修一2008.pdf:PDF},
}
石黒浩, "アンドロイド、ジェミノイドと人間の相違", 情報処理, vol. 49, no. 1, pp. 7-14, January, 2008.
BibTeX:
@Article{石黒浩2008,
  author   = {石黒浩},
  title    = {アンドロイド、ジェミノイドと人間の相違},
  journal  = {情報処理},
  year     = {2008},
  volume   = {49},
  number   = {1},
  pages    = {7--14},
  month    = Jan,
  url      = {http://www.bookpark.ne.jp/cm/ipsj/search.asp?flag=6&keyword=IPSJ-MGN490104&mode=PDF},
  etitle   = {Differences among Android, Geminoid, Human},
  file     = {石黒浩2008.pdf:石黒浩2008.pdf:PDF},
}
Invited Talks
石黒浩, "なぜ人間を考えるためにロボットを作るのか?", "二松学舎大学2024年シンポジウム「ロボット学者はなぜ小説を書くのか?――漱石アンドロイドと人間学としてのロボット研究」", 二松学舎大学, 東京, March, 2024.
Abstract: 「人間のようなもの」の存在は、そもそも人間とは何かという問いを突きつける。人間そっくりのアンドロイドの研究を進め漱石アンドロイドの制作も手掛けた石黒浩、人間のように記号を生み出すロボットの研究「記号創発ロボティクス」を展開してきた谷口忠大。これら二人のロボット研究者は、ロボットを通して人間の輪郭を問いつづけてきた。加えて二人は、ロボットにまつわる小説を出版している異色のロボット研究者でもある。ロボット研究と小説の両面から、人間を考えるためのロボットについて討議する。
BibTeX:
@InProceedings{石黒浩2024a,
  author    = {石黒浩},
  booktitle = {"二松学舎大学2024年シンポジウム「ロボット学者はなぜ小説を書くのか?――漱石アンドロイドと人間学としてのロボット研究」"},
  title     = {なぜ人間を考えるためにロボットを作るのか?},
  year      = {2024},
  address   = {二松学舎大学, 東京},
  day       = {2},
  month     = mar,
  url       = {https://www.nishogakusha-u.ac.jp/android/event/20240302.html},
  abstract  = {「人間のようなもの」の存在は、そもそも人間とは何かという問いを突きつける。人間そっくりのアンドロイドの研究を進め漱石アンドロイドの制作も手掛けた石黒浩、人間のように記号を生み出すロボットの研究「記号創発ロボティクス」を展開してきた谷口忠大。これら二人のロボット研究者は、ロボットを通して人間の輪郭を問いつづけてきた。加えて二人は、ロボットにまつわる小説を出版している異色のロボット研究者でもある。ロボット研究と小説の両面から、人間を考えるためのロボットについて討議する。},
}
Hidenobu Sumioka, "Social robots for older people with dementia and care staff toward all-stakeholder-centered care.", In The History & Future of Care Robots, Claremont, USA, March, 2024.
Abstract: This symposium brings together scholars and students across diverse disciplines such as history, anthropology, engineering, technology, information sciences, and Japan studies, along with experts in the care industry, to share their research findings and experiences related to the integration of assistive technologies in elderly and disability care in Japan, Denmark, and the US. We will also discuss strategies for enhancing the practicality and accessibility of care robots and other technological devices.
BibTeX:
@InProceedings{Sumioka2024,
  author    = {Hidenobu Sumioka},
  booktitle = {The History & Future of Care Robots},
  title     = {Social robots for older people with dementia and care staff toward all-stakeholder-centered care.},
  year      = {2024},
  address   = {Claremont, USA},
  day       = {30},
  month     = mar,
abstract  = {This symposium brings together scholars and students across diverse disciplines such as history, anthropology, engineering, technology, information sciences, and Japan studies, along with experts in the care industry, to share their research findings and experiences related to the integration of assistive technologies in elderly and disability care in Japan, Denmark, and the US. We will also discuss strategies for enhancing the practicality and accessibility of care robots and other technological devices.},
}
住岡英信, "認知行動療法対話ロボットを用いたデイケアでの精神疾患患者支援", 第42回日本社会精神医学会, 東北医科薬科大学小松島キャンパス, 宮城, March, 2024.
Abstract: 本発表では、これまで進めてきた対話ロボットによるカウンセリングについての取り組みを紹介する
BibTeX:
@InProceedings{住岡英信2024,
  author    = {住岡英信},
  booktitle = {第42回日本社会精神医学会},
  title     = {認知行動療法対話ロボットを用いたデイケアでの精神疾患患者支援},
  year      = {2024},
  address   = {東北医科薬科大学小松島キャンパス, 宮城},
  day       = {14-15},
  month     = mar,
  url       = {http://jssp42.umin.jp/},
  abstract  = {本発表では、これまで進めてきた対話ロボットによるカウンセリングについての取り組みを紹介する},
}
David Achanccaray, "Brain-Machine Interfaces: From Typical Paradigms to VR/Robot-based Social Applications", In Semana Internacional PUCP (International Week PUCP), Lima, Peru (online), March, 2024.
Abstract: BMI is a technology that provides an alternative way of communication and can augment human abilities. It can assist people to perform daily tasks, which is more beneficial for people with disabilities. This technology requires knowledge of several fields of engineering and neuroscience, which will be given in lectures and hands-on sessions. The knowledge for the development of a BMI application will be approached during this course.
BibTeX:
@InProceedings{Achanccaray2024,
  author    = {David Achanccaray},
booktitle = {Semana Internacional PUCP (International Week PUCP)},
  title     = {Brain-Machine Interfaces: From Typical Paradigms to VR/Robot-based Social Applications},
  year      = {2024},
  address   = {Lima, Peru (online)},
  day       = {11-16},
  month     = mar,
  url       = {https://facultad-derecho.pucp.edu.pe/wp-content/uploads/2024/02/semana-internacional-2024-1.pdf},
  abstract  = {BMI is a technology that provides an alternative way of communication and can augment human abilities. It can assist people to perform daily tasks, which is more beneficial for people with disabilities. This technology requires knowledge of several fields of engineering and neuroscience, which will be given in lectures and hands-on sessions. The knowledge for the development of a BMI application will be approached during this course.},
}
中江文, "痛みの見える化とその先に見えるもの ~人工知能は我々の感覚を代弁できるか?~", 日本ペインクリニック学会 第4回中国・四国支部学術集会, 高知商工会館, 高知, February, 2024.
Abstract: いたみは主観的な感覚で、本人が痛いと言えばそれが尊重されるべきであるが、治療を行う上ではその表出の個人差で判断に迷う例が存在する。我々は誰でも等しく同じ医療が受けられる未来を目指し、痛みを脳波で見える化する試みを行ってきた。それには、出来ることと出来ないこと、将来できることがある。CHAT-GPTをはじめとする生成AIが注目される中、人間の持つ様々な感覚がどのくらい人工知能で代弁できるかなど、楽しみな未来についても議論したい。
BibTeX:
@InProceedings{中江文2024,
  author    = {中江文},
  booktitle = {日本ペインクリニック学会 第4回中国・四国支部学術集会},
  title     = {痛みの見える化とその先に見えるもの ~人工知能は我々の感覚を代弁できるか?~},
  year      = {2024},
  address   = {高知商工会館, 高知},
  day       = {3},
  month     = feb,
  url       = {https://www.jspc.gr.jp/branch/meeting/7},
  abstract  = {いたみは主観的な感覚で、本人が痛いと言えばそれが尊重されるべきであるが、治療を行う上ではその表出の個人差で判断に迷う例が存在する。我々は誰でも等しく同じ医療が受けられる未来を目指し、痛みを脳波で見える化する試みを行ってきた。それには、出来ることと出来ないこと、将来できることがある。CHAT-GPTをはじめとする生成AIが注目される中、人間の持つ様々な感覚がどのくらい人工知能で代弁できるかなど、楽しみな未来についても議論したい。},
}
石黒浩, "アバターと未来社会", 日経SDGsフェス大阪関西 -2025年大阪・関西万博に向けて-, ハービスホール, 大阪, February, 2024.
Abstract: 本講演では、これまでの研究成果を紹介すると共に、2025年大阪・関西万博が拓くいのちの未来や、アバター共生社会の実現について語る。
BibTeX:
@InProceedings{石黒浩2024,
  author    = {石黒浩},
  booktitle = {日経SDGsフェス大阪関西 -2025年大阪・関西万博に向けて-},
  title     = {アバターと未来社会},
  year      = {2024},
  address   = {ハービスホール, 大阪},
  day       = {14},
  month     = feb,
  url       = {https://project.nikkeibp.co.jp/event/sdgs2024/02/},
  abstract  = {本講演では、これまでの研究成果を紹介すると共に、2025年大阪・関西万博が拓くいのちの未来や、アバター共生社会の実現について語る。},
}
中江文, "痛みの見える化を目指して ~脳波と人工知能を用いた挑戦~", 第30回高知いたみの研究会, 高知商工会館, 高知, January, 2024.
Abstract: いたみは主観的なものであり、本人の訴えに耳を傾けることは重要であることは言うまでもないが、本人の訴え方に個人差が大きいのも事実である。我々は公平な医療を提供する助けとなる機器を開発することを目的に痛みの見える化を目指して取り組んでいる。その内容について紹介する。
BibTeX:
@InProceedings{中江文2024a,
  author    = {中江文},
  booktitle = {第30回高知いたみの研究会},
  title     = {痛みの見える化を目指して ~脳波と人工知能を用いた挑戦~},
  year      = {2024},
  address   = {高知商工会館, 高知},
  day       = {20},
  month     = jan,
  abstract  = {いたみは主観的なものであり、本人の訴えに耳を傾けることは重要であることは言うまでもないが、本人の訴え方に個人差が大きいのも事実である。我々は公平な医療を提供する助けとなる機器を開発することを目的に痛みの見える化を目指して取り組んでいる。その内容について紹介する。},
}
石井カルロス寿憲, "音環境知能技術による人とロボットのインタラクション向上", 京都大学「情報通信技術のデザイン」, 京都大学, 京都, December, 2023.
Abstract: これまで発表者の研究グループが進めてきた音環境知能技術および対話ロボットへの応用に関するマルチモーダル音声情報処理の研究について紹介する。
BibTeX:
@InProceedings{石井カルロス寿憲2023a,
  author    = {石井カルロス寿憲},
  booktitle = {京都大学「情報通信技術のデザイン」},
  title     = {音環境知能技術による人とロボットのインタラクション向上},
  year      = {2023},
  address   = {京都大学, 京都},
  day       = {6},
  month     = dec,
  abstract  = {これまで発表者の研究グループが進めてきた音環境知能技術および対話ロボットへの応用に関するマルチモーダル音声情報処理の研究について紹介する。},
}
石黒浩, "ロボットが投げかける問い - 人間性とは何か?", Tokyo Forum 2023, 東京大学安田講堂, 東京 (オンライン), December, 2023.
Abstract: 人間のために反復作業をする用途で考案されたロボットは、いまや人間との境界線が曖昧になるほどの知性と自律性を備えるようになった。今、ロボットが人間に問いかけている。「人間とは?」「人間らしさとは?」人間とロボットの“境界線”で活躍するロボット工学、哲学、パフォーミングアーツの登壇者たちが議論する。
BibTeX:
@InProceedings{石黒浩2023h,
  author    = {石黒浩},
  booktitle = {Tokyo Forum 2023},
  title     = {ロボットが投げかける問い - 人間性とは何か?},
  year      = {2023},
  address   = {東京大学安田講堂, 東京 (オンライン)},
  day       = {1},
  etitle    = {Why Are Robots Questioning Humanity?},
  month     = dec,
  url       = {https://www.tokyoforum.tc.u-tokyo.ac.jp/ja/index.html},
abstract  = {人間のために反復作業をする用途で考案されたロボットは、いまや人間との境界線が曖昧になるほどの知性と自律性を備えるようになった。今、ロボットが人間に問いかけている。「人間とは?」「人間らしさとは?」人間とロボットの“境界線”で活躍するロボット工学、哲学、パフォーミングアーツの登壇者たちが議論する。},
}
石黒浩, "アバターと未来社会", 第58回佛教徒大会, ホテル日航大阪, 大阪, November, 2023.
Abstract: 本講演では、これまでの研究成果を紹介すると共に、2025年大阪・関西万博が拓くいのちの未来や、アバター共生社会の実現について語る。
BibTeX:
@InProceedings{石黒浩2023g,
  author    = {石黒浩},
  booktitle = {第58回佛教徒大会},
  title     = {アバターと未来社会},
  year      = {2023},
  address   = {ホテル日航大阪, 大阪},
  day       = {22},
  month     = nov,
  url       = {https://www.haginotera.or.jp/info/},
  abstract  = {本講演では、これまでの研究成果を紹介すると共に、2025年大阪・関西万博が拓くいのちの未来や、アバター共生社会の実現について語る。},
}
石黒浩, "万博が拓く いのちの未来", 読売SDGsフォーラム2023, オンライン, September, 2023.
Abstract: 本講演では、これまでの研究成果を紹介すると共に、2025年大阪・関西万博が拓くいのちの未来や、アバター共生社会の実現について語る。
BibTeX:
@InProceedings{石黒浩2023f,
  author    = {石黒浩},
  booktitle = {読売SDGsフォーラム2023},
  title     = {万博が拓く いのちの未来},
  year      = {2023},
  address   = {オンライン},
  day       = {12},
  month     = sep,
  url       = {https://yab.yomiuri.co.jp/idomu/sdgs2023/},
  abstract  = {本講演では、これまでの研究成果を紹介すると共に、2025年大阪・関西万博が拓くいのちの未来や、アバター共生社会の実現について語る。},
}
住岡英信, "ロボット×医療の未来", CSPOR-BCオンライン講演会, オンライン, September, 2023.
Abstract: 本発表では、コミュニケーションロボットを用いた我々の取り組みを紹介しながら、未来の医療にどのように関わるのかについて議論する
BibTeX:
@InProceedings{住岡英信2023a,
  author    = {住岡英信},
  booktitle = {CSPOR-BCオンライン講演会},
  title     = {ロボット×医療の未来},
  year      = {2023},
  address   = {オンライン},
  day       = {2},
  month     = sep,
  abstract  = {本発表では、コミュニケーションロボットを用いた我々の取り組みを紹介しながら、未来の医療にどのように関わるのかについて議論する},
}
Hiroshi Ishiguro, "AVATAR AND THE FUTURE SOCIETY", In Italian Tech Week, Torino, Italy, September, 2023.
Abstract: The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one's existence to a remote place and work in a remote place. The speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and the tele-operated robots, called avatars, will coexist in the future.
BibTeX:
@InProceedings{Ishiguro2023e,
  author    = {Hiroshi Ishiguro},
  booktitle = {Italian Tech Week},
  title     = {AVATAR AND THE FUTURE SOCIETY},
  year      = {2023},
  address   = {Torino, Italy},
  day       = {29},
  month     = sep,
  url       = {https://italiantechweek.com/en},
abstract  = {The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one's existence to a remote place and work in a remote place. The speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and the tele-operated robots, called avatars, will coexist in the future.},
}
石黒浩, "アバターと未来社会 -教育分野におけるアバター利用の可能性-", 日本赤ちゃん学会第23回学術集会, 千里ライフサイエンスセンター, 大阪, August, 2023.
Abstract: 本講演では、これまでの研究成果を紹介すると共に、今後の教育分野におけるアバター利用の可能性について語る。
BibTeX:
@InProceedings{石黒浩2023e,
  author    = {石黒浩},
  booktitle = {日本赤ちゃん学会第23回学術集会},
  title     = {アバターと未来社会 -教育分野におけるアバター利用の可能性-},
  year      = {2023},
  address   = {千里ライフサイエンスセンター, 大阪},
  day       = {6},
  month     = aug,
  url       = {https://www-ams.eng.osaka-u.ac.jp/akachan2023/},
  abstract  = {本講演では、これまでの研究成果を紹介すると共に、今後の教育分野におけるアバター利用の可能性について語る。},
}
Hiroshi Ishiguro, "GEMINOID, Avatar and the future society", In AI for Good, Geneva, Switzerland, July, 2023.
Abstract: The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one’s existence to a remote place and work in a remote place. In this lecture, the speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and robots will coexist in the future. Robot on stage in Geneva, controlled by Prof Ishiguro remotely.
BibTeX:
@InProceedings{Ishiguro2023d,
  author    = {Hiroshi Ishiguro},
  booktitle = {AI for Good},
  title     = {GEMINOID, Avatar and the future society},
  year      = {2023},
  address   = {Geneva, Switzerland},
  day       = {7},
  month     = jul,
  url       = {https://aiforgood.itu.int/},
  abstract  = {The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one’s existence to a remote place and work in a remote place. In this lecture, the speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and robots will coexist in the future. Robot on stage in Geneva, controlled by Prof Ishiguro remotely.},
}
石黒浩, "人間を人間たらしめるものは何か", IVS2023 KYOTO, 京都市勧業館 みやこめっせ, 京都, June, 2023.
Abstract: 本講演では、これまでの研究成果を紹介すると共に、今後のアバター共生社会の実現について語る。
BibTeX:
@InProceedings{石黒浩2023c,
  author    = {石黒浩},
  booktitle = {IVS2023 KYOTO},
  title     = {人間を人間たらしめるものは何か},
  year      = {2023},
  address   = {京都市勧業館 みやこめっせ, 京都},
  day       = {28-30},
  month     = jun,
  url       = {https://events.bizzabo.com/IVS/agenda/session/1151864},
  abstract  = {本講演では、これまでの研究成果を紹介すると共に、今後のアバター共生社会の実現について語る。},
}
石黒浩, "アバターと未来社会と医療", 令和5年度東北大学艮陵同窓会定期総会, 江陽グランドホテル, 宮城 (online配信), May, 2023.
Abstract: 本講演では、これまでの研究成果を紹介すると共に、今後のアバター共生社会の実現について語る。
BibTeX:
@InProceedings{石黒浩2023b,
  author    = {石黒浩},
  booktitle = {令和5年度東北大学艮陵同窓会定期総会},
  title     = {アバターと未来社会と医療},
  year      = {2023},
  address   = {江陽グランドホテル, 宮城 (online配信)},
  day       = {27},
  month     = may,
  url       = {http://www.gonryo.alumni.med.tohoku.ac.jp/info.html},
  abstract  = {本講演では、これまでの研究成果を紹介すると共に、今後のアバター共生社会の実現について語る。},
}
Hiroshi Ishiguro, "Avatar and the future society", In 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, May, 2023.
Abstract: The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one's existence to a remote place and work in a remote place. The speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and the tele-operated robots, called avatars, will coexist in the future.
BibTeX:
@InProceedings{Ishiguro2023c,
  author    = {Hiroshi Ishiguro},
  booktitle = {2023 IEEE International Conference on Robotics and Automation (ICRA)},
  title     = {Avatar and the future society},
  year      = {2023},
  address   = {London, UK},
  day       = {31},
  month     = may,
  url       = {https://www.icra2023.org/},
abstract  = {The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one's existence to a remote place and work in a remote place. The speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and the tele-operated robots, called avatars, will coexist in the future.},
}
石黒浩, "アバターと未来社会", 人間拡張技術が成し得るHuman Innovation, ライフサイエンスハブウエスト, 大阪 (online配信), April, 2023.
Abstract: 本講演では、これまでの研究成果を紹介すると共に、今後のアバター共生社会の実現について語る。
BibTeX:
@InProceedings{石黒浩2023a,
  author    = {石黒浩},
  booktitle = {人間拡張技術が成し得るHuman Innovation},
  title     = {アバターと未来社会},
  year      = {2023},
  address   = {ライフサイエンスハブウエスト, 大阪 (online配信)},
  day       = {3},
  month     = apr,
  url       = {https://www.link-j.org/event/post-5824.html},
  abstract  = {本講演では、これまでの研究成果を紹介すると共に、今後のアバター共生社会の実現について語る。},
}
Hiroshi Ishiguro, "Avatars and our future society", In HR Festival Europe, Zurich, Switzerland, March, 2023.
Abstract: The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one’s existence to a remote place and work in a remote place. In this lecture, the speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and robots will coexist in the future.
BibTeX:
@InProceedings{Ishiguro2023b,
  author    = {Hiroshi Ishiguro},
  booktitle = {HR Festival Europe},
  title     = {Avatars and our future society},
  year      = {2023},
  address   = {Zurich, Switzerland},
  day       = {28-29},
  month     = mar,
  url       = {https://www.hrfestival.ch/en/},
  abstract  = {The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one’s existence to a remote place and work in a remote place. In this lecture, the speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and robots will coexist in the future.},
}
Hiroshi Ishiguro, "Me, Myself and AI: AI Avatar world", In DeepFest 2023, Riyadh Front Exhibition&Conference Centre, Saudi Arabia, February, 2023.
Abstract: DeepFest 2023 will be co-located with LEAP Tech Conference in Saudi Arabia 2023. In this interactive big talk, the speaker will talk about the basic ideas on interactive robots and avatars. An android copy of himself will also be on the stage and discuss our future life.
BibTeX:
@InProceedings{Ishiguro2023,
  author    = {Hiroshi Ishiguro},
  booktitle = {DeepFest 2023},
  title     = {Me, Myself and AI: AI Avatar world},
  year      = {2023},
  address   = {Riyadh Front Exhibition \& Conference Centre, Saudi Arabia},
  day       = {7},
  month     = feb,
  url       = {https://deepfest.com},
  abstract  = {DeepFest 2023 will be co-located with LEAP Tech Conference in Saudi Arabia 2023. In this interactive big talk, the speaker will talk about the basic ideas on interactive robots and avatars. An android copy of himself will also be on the stage and discuss our future life.},
}
Hiroshi Ishiguro, "10 Ways Robotics Can Transform Our Future", In World Government Summit 2023, Madinat Jumeirah, Dubai, United Arab Emirates, February, 2023.
Abstract: In this talk, Professor Hiroshi Ishiguro of Osaka University provides insight into the very real threats posed by developments in robotics, avatar creation, and artificial intelligence and its effects on our collective future.
BibTeX:
@InProceedings{Ishiguro2023a,
  author    = {Hiroshi Ishiguro},
  booktitle = {World Government Summit 2023},
  title     = {10 Ways Robotics Can Transform Our Future},
  year      = {2023},
  address   = {Madinat Jumeirah, Dubai, United Arab Emirates},
  day       = {13},
  month     = feb,
  url       = {https://www.worldgovernmentsummit.org/home},
  abstract  = {In this talk, Professor Hiroshi Ishiguro of Osaka University provides insight into the very real threats posed by developments in robotics, avatar creation, and artificial intelligence and its effects on our collective future.},
}
石井カルロス寿憲, "マルチモーダル音声情報処理および対話ロボットへの応用", 富山県立大学 特別講義1, 富山県立大学, 富山, February, 2023.
Abstract: これまでの対話ロボットや音環境知能技術に関するマルチモーダル音声情報処理の研究および対話ロボットへの応用や評価について紹介する。
BibTeX:
@InProceedings{石井カルロス寿憲2023,
  author    = {石井カルロス寿憲},
  booktitle = {富山県立大学 特別講義1},
  title     = {マルチモーダル音声情報処理および対話ロボットへの応用},
  year      = {2023},
  address   = {富山県立大学, 富山},
  day       = {3},
  month     = feb,
  abstract  = {これまでの対話ロボットや音環境知能技術に関するマルチモーダル音声情報処理の研究および対話ロボットへの応用や評価について紹介する。},
}
石黒浩, "アンドロイド・アバター共存社会", 第25回日本ヒト脳機能マッピング学会, ウインクあいち(愛知県産業労働センター), 愛知, February, 2023.
Abstract: 本講演では、これまでの研究成果を紹介すると共に、今後のアバター共生社会の実現について語る。
BibTeX:
@InProceedings{石黒浩2023,
  author    = {石黒浩},
  booktitle = {第25回日本ヒト脳機能マッピング学会},
  title     = {アンドロイド・アバター共存社会},
  year      = {2023},
  address   = {ウインクあいち(愛知県産業労働センター), 愛知},
  day       = {25},
  month     = feb,
  url       = {http://jhbm25.umin.jp/index.html},
  abstract  = {本講演では、これまでの研究成果を紹介すると共に、今後のアバター共生社会の実現について語る。},
}
Hiroshi Ishiguro, "Avatar and the future society", In The 14th International Conference on Social Robotics (ICSR2022), Florence, Italy (hybrid), December, 2022.
Abstract: Part of Half Day Workshop "Realization of Avatar-Symbiotic Society". In this talk, the speaker will talk about the basic ideas on interactive robots and avatars, and discuss our future life.
BibTeX:
@InProceedings{Ishiguro2022e,
  author    = {Hiroshi Ishiguro},
  booktitle = {The 14th International Conference on Social Robotics (ICSR2022)},
  title     = {Avatar and the future society},
  year      = {2022},
  address   = {Florence, Italy (hybrid)},
  day       = {13},
  month     = dec,
  url       = {https://www.icsr2022.it/workshop-program-13th-december/},
  abstract  = {Part of Half Day Workshop "Realization of Avatar-Symbiotic Society". In this talk, the speaker will talk about the basic ideas on interactive robots and avatars, and discuss our future life.},
}
住岡英信, "ロボットアバター技術がもたらす身体とコミュニケーションの拡張", 第12回CiNetシンポジウム, ナレッジキャピタル コングレコンベンションセンター, 大阪(オンライン), November, 2022.
Abstract: 新型コロナ禍において、私達の暮らし方や働き方は直接・対面から間接・遠隔がニューノーマルとなりつつあります。こういった中、ロボットを自分の身代わり(アバター)として遠隔操作することで、これまでできなかったような新しい暮らし方・働き方が提案されてきています。本講演では、ロボットを用いたアバター技術について、特に私達の身体に関する概念やコミュニケーションの方法を拡張する技術について紹介しながら、これからの仮想空間の脳情報通信について議論します。
BibTeX:
@InProceedings{住岡英信2022e,
  author    = {住岡英信},
  booktitle = {第12回CiNetシンポジウム},
  title     = {ロボットアバター技術がもたらす身体とコミュニケーションの拡張},
  year      = {2022},
  address   = {ナレッジキャピタル コングレコンベンションセンター, 大阪(オンライン)},
  day       = {8},
  month     = nov,
  url       = {https://cinet.jp/nict221108/},
  abstract  = {新型コロナ禍において、私達の暮らし方や働き方は直接・対面から間接・遠隔がニューノーマルとなりつつあります。こういった中、ロボットを自分の身代わり(アバター)として遠隔操作することで、これまでできなかったような新しい暮らし方・働き方が提案されてきています。本講演では、ロボットを用いたアバター技術について、特に私達の身体に関する概念やコミュニケーションの方法を拡張する技術について紹介しながら、これからの仮想空間の脳情報通信について議論します。},
}
Carlos Toshinori Ishi, "Analysis and generation of speech-related motions, and evaluation in humanoid robots", In The GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Workshop 2022, Bangalore, India (online), November, 2022.
Abstract: The generation of motions coordinated with speech utterances is important for dialogue robots or avatars, in both autonomous and tele-operated systems, to express humanlikeness and tele-presence. For that purpose, we have been studying the relationships between speech and motion, and methods to generate motions from speech, for example, lip motion from formants, head motion from dialogue functions, facial and upper body motions coordinated with vocalized emotional expressions (such as laughter and surprise), hand gestures from linguistic and prosodic information, and gaze behaviors from dialogue states. In this talk, I will give an overview of our research activities on motion analysis and generation, and evaluation of speech-driven motions generated in several humanoid robots (such as the android ERICA, and a desktop robot CommU).
BibTeX:
@InProceedings{Ishi2022,
  author    = {Carlos Toshinori Ishi},
  booktitle = {The GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Workshop 2022},
  title     = {Analysis and generation of speech-related motions, and evaluation in humanoid robots},
  year      = {2022},
  address   = {Bangalore, India (online)},
  day       = {7},
  month     = nov,
  url       = {https://genea-workshop.github.io/2022/workshop/#workshop-programme},
  abstract  = {The generation of motions coordinated with speech utterances is important for dialogue robots or avatars, in both autonomous and tele-operated systems, to express humanlikeness and tele-presence. For that purpose, we have been studying the relationships between speech and motion, and methods to generate motions from speech, for example, lip motion from formants, head motion from dialogue functions, facial and upper body motions coordinated with vocalized emotional expressions (such as laughter and surprise), hand gestures from linguistic and prosodic information, and gaze behaviors from dialogue states. In this talk, I will give an overview of our research activities on motion analysis and generation, and evaluation of speech-driven motions generated in several humanoid robots (such as the android ERICA, and a desktop robot CommU).},
}
Hiroshi Ishiguro, "Robotics and Health: Avatar technology for supporting our future society", In The 29th Scientific Meeting of the International Society of Hypertension (ISH2022), Kyoto International Conference Center, 京都, October, 2022.
Abstract: The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. Research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one's existence to a remote place and work in a remote place. In this lecture, the speaker will introduce a series of research and development of tele-operated robots, avatars, such as Geminoid and discuss in what kind of society humans and robots will coexist in the future. By using avatars, anyone, including the elderly and people with disabilities, will be able to freely participate in various activities with abilities beyond ordinary people while expanding their physical, cognitive, and perceptual abilities using a large number of avatars. Anyone will be able to work and study anytime, anywhere, minimize commuting to work, and have plenty of free time in the future society.
BibTeX:
@InProceedings{Ishiguro2022d,
  author    = {Hiroshi Ishiguro},
  booktitle = {The 29th Scientific Meeting of the International Society of Hypertension (ISH2022)},
  title     = {Robotics and Health: Avatar technology for supporting our future society},
  year      = {2022},
  address   = {Kyoto International Conference Center, 京都},
  day       = {13},
  month     = oct,
  url       = {https://www.ish2022.org/scientific-information/scientific-program/},
  abstract  = {The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. Research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one's existence to a remote place and work in a remote place. In this lecture, the speaker will introduce a series of research and development of tele-operated robots, avatars, such as Geminoid and discuss in what kind of society humans and robots will coexist in the future. By using avatars, anyone, including the elderly and people with disabilities, will be able to freely participate in various activities with abilities beyond ordinary people while expanding their physical, cognitive, and perceptual abilities using a large number of avatars. Anyone will be able to work and study anytime, anywhere, minimize commuting to work, and have plenty of free time in the future society.},
}
石黒浩, "アバターによるDXと未来社会", やまぐちデジタルソリューション展示会, ニューメディアプラザ山口, 山口, October, 2022.
Abstract: リモートワークが定着しつつある中、今後さらに期待されるのがアバターの利用です。アバターは究極のDXでもあり、今後加速的に普及する可能性があります。本講演ではアバターの技術と、アバターが作る今後の社会について議論します。
BibTeX:
@InProceedings{石黒浩2022k,
  author    = {石黒浩},
  booktitle = {やまぐちデジタルソリューション展示会},
  title     = {アバターによるDXと未来社会},
  year      = {2022},
  address   = {ニューメディアプラザ山口, 山口},
  day       = {26},
  month     = oct,
  url       = {https://www.pref.yamaguchi.lg.jp/press/ybase-digitech/176019.html https://www.pref.yamaguchi.lg.jp/uploaded/attachment/129039.pdf},
  abstract  = {リモートワークが定着しつつある中、今後さらに期待されるのがアバターの利用です。アバターは究極のDXでもあり、今後加速的に普及する可能性があります。本講演ではアバターの技術と、アバターが作る今後の社会について議論します。},
}
石黒浩, "アバターと未来社会", 日本ロボット工業会 創立50周年記念シンポジウム, 東京ビッグサイト, 東京, October, 2022.
Abstract: 本講演では、これまでの研究成果を紹介すると共に、今後のアバター共生社会の実現について語る。
BibTeX:
@InProceedings{石黒浩2022g,
  author    = {石黒浩},
  booktitle = {日本ロボット工業会 創立50周年記念シンポジウム},
  title     = {アバターと未来社会},
  year      = {2022},
  address   = {東京ビッグサイト, 東京},
  day       = {14},
  month     = oct,
  url       = {https://www.jara.jp/about/50th/symposium.html},
  abstract  = {本講演では、これまでの研究成果を紹介すると共に、今後のアバター共生社会の実現について語る。},
}
石黒浩, "ロボットと未来社会", 第48回技術士全国大会, なら100年会館・ホテル日航奈良, 奈良, October, 2022.
Abstract: 本講演では、これまでに開発してきた様々な人と関わるロボットやその関連技術を紹介するとともに、そのロボット技術によってどのような未来社会が実現されるかを議論する。特に近年、遠隔操作ロボット、すなわちアバターの実用化が期待されているが、どのようなアバターを開発し、どのような社会を実現しようとしているかについても議論する。
BibTeX:
@InProceedings{石黒浩2022j,
  author    = {石黒浩},
  booktitle = {第48回技術士全国大会},
  title     = {ロボットと未来社会},
  year      = {2022},
  address   = {なら100年会館・ホテル日航奈良, 奈良},
  day       = {29},
  month     = oct,
  url       = {https://www.engineer.or.jp/c_topics/008/008726.html https://www.engineer.or.jp/c_topics/008/attached/attach_8726_1.pdf},
  abstract  = {本講演では、これまでに開発してきた様々な人と関わるロボットやその関連技術を紹介するとともに、そのロボット技術によってどのような未来社会が実現されるかを議論する。特に近年、遠隔操作ロボット、すなわちアバターの実用化が期待されているが、どのようなアバターを開発し、どのような社会を実現しようとしているかについても議論する。},
}
Hidenobu Sumioka, "Ethical consideration of companion robots for people with dementia", In 3rd joint ERCIM-JST Workshop 2022, Rocquencourt, France, October, 2022.
Abstract: BPSD (Behavioral and Psychological Symptoms of Dementia), often exhibited by older people with dementia, is not only a burden on caregivers but also a major social issue that increases the economic burden on society. Robot therapy is a promising approach to reducing BPSD. However, it also raises ethical and legal issues. In this talk, I will discuss some issues, presenting short- and long-term experiments we have conducted with our baby-like interactive robot. I point out that there are no guidelines on robot therapy for people with dementia and indicate that the efforts made in doll therapy may be helpful. In addition, I will discuss that the caregiver's perspective must also be considered in developing a robot for the elderly with dementia.
BibTeX:
@InProceedings{Sumioka2022a,
  author    = {Hidenobu Sumioka},
  booktitle = {3rd joint ERCIM-JST Workshop 2022},
  title     = {Ethical consideration of companion robots for people with dementia},
  year      = {2022},
  address   = {Rocquencourt, France},
  day       = {20-21},
  month     = oct,
  url       = {https://www.ercim.eu/events/3rd-joint-ercim-jst-workshop},
  abstract  = {BPSD (Behavioral and Psychological Symptoms of Dementia), often exhibited by older people with dementia, is not only a burden on caregivers but also a major social issue that increases the economic burden on society. Robot therapy is a promising approach to reducing BPSD. However, it also raises ethical and legal issues. In this talk, I will discuss some issues, presenting short- and long-term experiments we have conducted with our baby-like interactive robot. I point out that there are no guidelines on robot therapy for people with dementia and indicate that the efforts made in doll therapy may be helpful. In addition, I will discuss that the caregiver's perspective must also be considered in developing a robot for the elderly with dementia.},
}
石黒浩, "人間ロボット共生社会の未来", In 北陸技術交流テクノフェア, 福井生活学習館, 福井, October, 2022.
Abstract: 地方の中小企業はこれからのロボット社会とどう向き合うべきなのか─ 本講演では、これまでの研究成果を紹介すると共に、人間とロボット・アバターが共生するこれからの社会の姿について語る。
BibTeX:
@InProceedings{石黒浩2022i,
  author    = {石黒浩},
  booktitle = {北陸技術交流テクノフェア},
  title     = {人間ロボット共生社会の未来},
  year      = {2022},
  address   = {福井生活学習館, 福井},
  day       = {20},
  month     = oct,
  url       = {https://www.technofair.jp/seminar/},
  abstract  = {地方の中小企業はこれからのロボット社会とどう向き合うべきなのか─ 本講演では、これまでの研究成果を紹介すると共に、人間とロボット・アバターが共生するこれからの社会の姿について語る。},
}
Hidenobu Sumioka, "Humanlike Robots that connect people in Elderly Nursing Home", In 精準智慧照護 國際技術交流論壇, 新竹, 台湾(オンライン), October, 2022.
Abstract: BPSD (Behavioral and Psychological Symptoms of Dementia), often exhibited by older people with dementia, is not only a burden on caregivers but also a major social issue that increases the economic burden on society. Robot therapy is a promising approach to reducing BPSD. In this talk, I will present our study with a humanlike robot for older people with dementia.
BibTeX:
@InProceedings{Sumioka2022b,
  author    = {Hidenobu Sumioka},
  booktitle = {精準智慧照護 國際技術交流論壇},
  title     = {Humanlike Robots that connect people in Elderly Nursing Home},
  year      = {2022},
  address   = {新竹, 台湾(オンライン)},
  day       = {24},
  month     = oct,
  url       = {https://aicspht.org.tw/news/精準健康與智慧照護-國際技術交流論壇/},
  abstract  = {BPSD (Behavioral and Psychological Symptoms of Dementia), often exhibited by older people with dementia, is not only a burden on caregivers but also a major social issue that increases the economic burden on society. Robot therapy is a promising approach to reducing BPSD. In this talk, I will present our study with a humanlike robot for older people with dementia.},
}
石黒浩, "テクノロジーと社会―未来をどうつくる", In 朝日地球会議2022, ハイブリット開催, October, 2022.
Abstract: 近年の人工知能(AI)やロボットの技術は、障害や病気で失われた機能に置き換わるなど、社会をより便利で豊かなものにする一方で、人を殺傷する兵器にも応用されるなど、多様な可能性をはらんでいる。いつか人が老いなどの身体的な制約から解かれ、今と全く違う存在になる兆しすら見えてきた。どこまでの技術の進展を許容すべきか。また、すべての人がその恩恵を享受できるのだろうか。「人とは何か」を、歴史学者・哲学者であるユヴァル・ノア・ハラリ氏とともに語り、一人ひとりがどう未来に携わっていくか考える。
BibTeX:
@InProceedings{石黒浩2022h,
  author    = {石黒浩},
  booktitle = {朝日地球会議2022},
  title     = {テクノロジーと社会―未来をどうつくる},
  year      = {2022},
  address   = {ハイブリッド開催},
  day       = {18},
  month     = oct,
  url       = {https://www.asahi.com/eco/awf/program/?cid=prtimes&program=20},
  abstract  = {近年の人工知能(AI)やロボットの技術は、障害や病気で失われた機能に置き換わるなど、社会をより便利で豊かなものにする一方で、人を殺傷する兵器にも応用されるなど、多様な可能性をはらんでいる。いつか人が老いなどの身体的な制約から解かれ、今と全く違う存在になる兆しすら見えてきた。どこまでの技術の進展を許容すべきか。また、すべての人がその恩恵を享受できるのだろうか。「人とは何か」を、歴史学者・哲学者であるユヴァル・ノア・ハラリ氏とともに語り、一人ひとりがどう未来に携わっていくか考える。},
}
Hiroshi Ishiguro, "The Future of Robotics and Humanoids", In Global AI Summit, Riyadh, Saudi Arabia, September, 2022.
Abstract: In this talk, the speaker will talk about the basic ideas on interactive robots and discuss our future life.
BibTeX:
@InProceedings{Ishiguro2022c,
  author    = {Hiroshi Ishiguro},
  booktitle = {Global AI Summit},
  title     = {The Future of Robotics and Humanoids},
  year      = {2022},
  address   = {Riyadh, Saudi Arabia},
  day       = {14},
  month     = sep,
  url       = {https://globalaisummit.org/en/default.aspx},
  abstract  = {In this talk, the speaker will talk about the basic ideas on interactive robots and discuss our future life.},
}
住岡英信, "ロボット技術でめざす優しいケア", 第4回日本ユマニチュード学会総会, 京都大学国際科学イノベーション棟シンポジウムホール, 京都, September, 2022.
Abstract: 『「優しい介護」インタラクションの計算的・脳科学解明』は2017年に科学技術振興機構の戦略的創造研究推進事業(CREST)の研究テーマとして採択され、国の研究プロジェクトとして、情報学・工学・心理学・医学・看護学などさまざまな分野の専門家が「ユマニチュードはなぜ有効なのか?」に関する研究を進めてきました。 1日目の市民公開講座では、このプロジェクトの歩みと成果についてご紹介します。研究チームが開発した、仮想現実によるユマニチュード・トレーニングシステムや、触れる技術を搭載したロボット、ケア技術を評価する計測システムなども会場で体験できます。 2日目の学会総会では、ユマニチュードの実践、教育、研修効果、家族介護などに関する研究成果について、口頭発表やポスター発表、シンポジウムを通して学びを深めていきます。加えて、4月から開始した『ユマニチュード認証制度』に取り組むパイロット施設20事業所の進捗の報告と、第3回学会定時社員総会も開催いたします。
BibTeX:
@InProceedings{住岡英信2022c,
  author    = {住岡英信},
  booktitle = {第4回日本ユマニチュード学会総会},
  title     = {ロボット技術でめざす優しいケア},
  year      = {2022},
  address   = {京都大学国際科学イノベーション棟シンポジウムホール, 京都},
  day       = {24-25},
  month     = sep,
  url       = {https://jhuma.org/soukai4/},
  abstract  = {『「優しい介護」インタラクションの計算的・脳科学解明』は2017年に科学技術振興機構の戦略的創造研究推進事業(CREST)の研究テーマとして採択され、国の研究プロジェクトとして、情報学・工学・心理学・医学・看護学などさまざまな分野の専門家が「ユマニチュードはなぜ有効なのか?」に関する研究を進めてきました。 1日目の市民公開講座では、このプロジェクトの歩みと成果についてご紹介します。研究チームが開発した、仮想現実によるユマニチュード・トレーニングシステムや、触れる技術を搭載したロボット、ケア技術を評価する計測システムなども会場で体験できます。 2日目の学会総会では、ユマニチュードの実践、教育、研修効果、家族介護などに関する研究成果について、口頭発表やポスター発表、シンポジウムを通して学びを深めていきます。加えて、4月から開始した『ユマニチュード認証制度』に取り組むパイロット施設20事業所の進捗の報告と、第3回学会定時社員総会も開催いたします。},
}
石黒浩, "デジタルで実現する総活躍社会 ~その課題を、希望に変える~", 日本青年会議所 第55回ブロック大会明石大会, アワーズホール、兵庫, August, 2022.
Abstract: 内閣府の科学技術政策「ムーンショット目標」である研究開発プロジェクト「誰もが自在に活躍できるアバター共生社会の実現」のプロジェクトマネージャーである石黒浩氏を招き、現在講演者が取り組むアバター関連プロジェクトについて紹介しながら、デジタルを活用することにより実現する未来社会について議論する。
BibTeX:
@InProceedings{石黒浩2022d,
  author    = {石黒浩},
  booktitle = {日本青年会議所 第55回ブロック大会明石大会},
  title     = {デジタルで実現する総活躍社会 ~その課題を、希望に変える~},
  year      = {2022},
  address   = {アワーズホール、兵庫},
  day       = {21},
  month     = aug,
  url       = {https://www.jaycee.or.jp/2022/kinki/hyogo/?p=733},
  abstract  = {内閣府の科学技術政策「ムーンショット目標」である研究開発プロジェクト「誰もが自在に活躍できるアバター共生社会の実現」のプロジェクトマネージャーである石黒浩氏を招き、現在講演者が取り組むアバター関連プロジェクトについて紹介しながら、デジタルを活用することにより実現する未来社会について議論する。},
}
Hiroshi Ishiguro, "Avatar and the future society", In The 65th IEEE International Midwest Symposium on Circuits and Systems (MWSCAS 2022), online, August, 2022.
Abstract: Prof. Hiroshi Ishiguro has been doing research on teleoperated robots for more than two decades. In his research, he developed a series of avatars, called Geminoids, which resemble himself. The study not only helps to understand humans and apply methods from engineering, cognitive science and neuroscience to various research topics, but also practically allows a person to be physically present and work in different places without travelling. The talk will introduce research and development of teleoperated androids, such as Geminoids, and discuss how humans and robots can coexist in future society.
BibTeX:
@InProceedings{Ishiguro2022,
  author    = {Hiroshi Ishiguro},
  booktitle = {The 65th IEEE International Midwest Symposium on Circuits and Systems (MWSCAS 2022)},
  title     = {Avatar and the future society},
  year      = {2022},
  address   = {online},
  day       = {8},
  month     = aug,
  url       = {https://mwscas2022.org/keynotespeakers.php#speaker7},
  abstract  = {Prof. Hiroshi Ishiguro has been doing research on teleoperated robots for more than two decades. In his research, he developed a series of avatars, called Geminoids, which resemble himself. The study not only helps to understand humans and apply methods from engineering, cognitive science and neuroscience to various research topics, but also practically allows a person to be physically present and work in different places without travelling. The talk will introduce research and development of teleoperated androids, such as Geminoids, and discuss how humans and robots can coexist in future society.},
}
石黒浩, "アバターと未来社会", 情報通信技術研究交流会(AC/Net) 第228回例会, NICT未来ICT研究所 脳情報通信融合研究センター, 大阪, June, 2022.
Abstract: コロナ禍を経て、リモートでの活動が定着した今後の社会において期待が集まるのが、アバターと呼ばれる遠隔操作CGキャラクターや遠隔操作ロボットを用いた活動である。アバターを用いることで、高齢者や障がい者を含む誰もが、身体的・認知・知覚能力を拡張しながら、何時でも何処でも様々な活動に自在に参加できるようになる。そうした新たな社会を仮想化実世界と呼ぶ。本講演ではアバターの研究開発とそれが実現する仮想化実世界の可能性と問題、さらには、そこから始まる未来社会について議論する。
BibTeX:
@InProceedings{石黒浩2022e,
  author    = {石黒浩},
  booktitle = {情報通信技術研究交流会(AC/Net) 第228回例会},
  title     = {アバターと未来社会},
  year      = {2022},
  address   = {NICT未来ICT研究所 脳情報通信融合研究センター, 大阪},
  day       = {14},
  month     = jun,
  url       = {https://www2.nict.go.jp/advanced_ict/ACnet/ },
  abstract  = {コロナ禍を経て、リモートでの活動が定着した今後の社会において期待が集まるのが、アバターと呼ばれる遠隔操作CGキャラクターや遠隔操作ロボットを用いた活動である。アバターを用いることで、高齢者や障がい者を含む誰もが、身体的・認知・知覚能力を拡張しながら、何時でも何処でも様々な活動に自在に参加できるようになる。そうした新たな社会を仮想化実世界と呼ぶ。本講演ではアバターの研究開発とそれが実現する仮想化実世界の可能性と問題、さらには、そこから始まる未来社会について議論する。},
}
石黒浩, "アバターと未来社会", 令和4年度鳥取西高等学校「著者と語る講演会」, 鳥取県民文化会館, 鳥取, June, 2022.
Abstract: コロナ禍の影響もあり、リモートで活動できる遠隔操作CG エージェントや遠隔操作ロボット、すなわちアバターの研究開発が注目されるようになってきた。本講演では、講演者のこれまでのロボット研究を紹介し、現在講演者が取り組むアバター関連プロジェクトについて紹介しながら、そのプロジェクトが実現する未来社会について議論する。
BibTeX:
@InProceedings{石黒浩2022f,
  author    = {石黒浩},
  booktitle = {令和4年度鳥取西高等学校「著者と語る講演会」},
  title     = {アバターと未来社会},
  year      = {2022},
  address   = {鳥取県民文化会館, 鳥取},
  day       = {29},
  month     = jun,
  url       = {http://db.pref.tottori.jp/pressrelease2.nsf/webview/56ED135CF659B1F44925885D00250563?OpenDocument},
  abstract  = {コロナ禍の影響もあり、リモートで活動できる遠隔操作CG エージェントや遠隔操作ロボット、すなわちアバターの研究開発が注目されるようになってきた。本講演では、講演者のこれまでのロボット研究を紹介し、現在講演者が取り組むアバター関連プロジェクトについて紹介しながら、そのプロジェクトが実現する未来社会について議論する。},
}
石黒浩, "アバターと未来社会", 第126回 日本眼科学会総会, 大阪国際会議場, 大阪, April, 2022.
Abstract: 本講演では、これまでの研究成果を紹介すると共に、今後のアバター共生社会の実現について語る。
BibTeX:
@InProceedings{石黒浩2022c,
  author    = {石黒浩},
  booktitle = {第126回 日本眼科学会総会},
  title     = {アバターと未来社会},
  year      = {2022},
  address   = {大阪国際会議場, 大阪},
  day       = {15},
  month     = apr,
  url       = {http://www.congre.co.jp/jos2022/index.html},
  abstract  = {本講演では、これまでの研究成果を紹介すると共に、今後のアバター共生社会の実現について語る。},
}
石黒浩, "アバターと未来社会", 第122回 日本外科学会定期学術集会, 熊本城ホール, 熊本, April, 2022.
Abstract: 本講演では、これまでの研究成果を紹介すると共に、今後のアバター共生社会の実現について語る。
BibTeX:
@Inproceedings{石黒浩2022b,
  author    = {石黒浩},
  title     = {アバターと未来社会},
  booktitle = {第122回 日本外科学会定期学術集会},
  year      = {2022},
  address   = {熊本城ホール, 熊本},
  month     = apr,
  day       = {14},
  url       = {https://jp.jssoc.or.jp/jss122/},
  abstract  = {本講演では、これまでの研究成果を紹介すると共に、今後のアバター共生社会の実現について語る。},
}
石黒浩, "大阪・関西万博とロボット", 2022国際ロボット展 INTERNATIONAL ROBOT EXHIBITION 2022(iREX2022), 東京ビッグサイト, 東京 / オンライン会場(iREX ONLINE), March, 2022.
Abstract: 2025大阪・関西万博に向けて、万博プロデューサーである石黒浩が、「いのち輝く未来社会のデザイン」という大阪・関西万博のテーマを実現するための会場デザインやテーマ事業について語る。
BibTeX:
@InProceedings{石黒浩2022a,
  author    = {石黒浩},
  booktitle = {2022国際ロボット展 INTERNATIONAL ROBOT EXHIBITION 2022(iREX2022)},
  title     = {大阪・関西万博とロボット},
  year      = {2022},
  address   = {東京ビッグサイト, 東京 / オンライン会場(iREX ONLINE)},
  day       = {9-12},
  month     = mar,
  url       = {https://biz.nikkan.co.jp/eve/irex/seminars.html},
  abstract  = {2025大阪・関西万博に向けて、万博プロデューサーである石黒浩が、「いのち輝く未来社会のデザイン」という大阪・関西万博のテーマを実現するための会場デザインやテーマ事業について語る。},
}
石井カルロス寿憲, "声質の科学:音響特徴,EGG特性およびパラ言語的機能", 日本音響学会2022年1月音声研究会, vol. 2, no. 1, オンライン, pp. 71-74, January, 2022.
Abstract: 声帯振動の様式に関連する声質の特性について,著者がこれまで研究してきた結果を中心に紹介する.声質には,通常発声に対し,フライ発声,気息音発声,りきみ発声などが有り,これらは話者のなんらか意図・態度・感情などのパラ言語情報をもたらす機能を持つ場合が多い.これらの声質において,どのような音響特性を持ち,EGG(Electro-glottograph)を用いて声帯振動と音響特性を解明した結果,これらの特性とパラ言語的機能との関連性などについて紹介する.
BibTeX:
@InProceedings{石井カルロス寿憲2022,
  author    = {石井カルロス寿憲},
  booktitle = {日本音響学会2022年1月音声研究会},
  title     = {声質の科学:音響特徴,EGG特性およびパラ言語的機能},
  year      = {2022},
  address   = {オンライン},
  day       = {29-30},
  etitle    = {The science of voice quality: Acoustic and EGG features, and paralinguistic functions},
  month     = jan,
  number    = {1},
  pages     = {71-74},
  url       = {https://asj-spcom.acoustics.jp/2021/11/22/2022%e5%b9%b41%e6%9c%88%e9%9f%b3%e5%a3%b0%e7%a0%94%e7%a9%b6%e4%bc%9a%e3%83%97%e3%83%ad%e3%82%b0%e3%83%a9%e3%83%a0/},
  volume    = {2},
  abstract  = {声帯振動の様式に関連する声質の特性について,著者がこれまで研究してきた結果を中心に紹介する.声質には,通常発声に対し,フライ発声,気息音発声,りきみ発声などが有り,これらは話者のなんらか意図・態度・感情などのパラ言語情報をもたらす機能を持つ場合が多い.これらの声質において,どのような音響特性を持ち,EGG(Electro-glottograph)を用いて声帯振動と音響特性を解明した結果,これらの特性とパラ言語的機能との関連性などについて紹介する.},
  keywords  = {Voice quality, Prosody, Acoustic features, EGG, Paralinguistic information},
}
石黒浩, "コロナ後の社会のアバターの果たす役割", 生産技術振興協会 2022新春トップセミナー  いのち輝く未来社会の実現に向けて ~アバターの果たす役割と大阪パビリオンを考える~, January, 2022.
Abstract: コロナウイルスの感染拡大によって、人との接触をできるだけ避ける新しい生活様式となった今、ロボットがアバターとなり自分の代わりに会社へ出勤したり、旅行に行ったりすることが現実味を帯びてきた。 ロボットやAIなどの技術が進歩し、発展していくこれからの社会について、「いのち広げる」をテーマに語る。
BibTeX:
@Inproceedings{石黒浩2022,
  author    = {石黒浩},
  title     = {コロナ後の社会のアバターの果たす役割},
  booktitle = {生産技術振興協会 2022新春トップセミナー  いのち輝く未来社会の実現に向けて ~アバターの果たす役割と大阪パビリオンを考える~},
  year      = {2022},
  month     = jan,
  day       = {19},
  abstract  = {コロナウイルスの感染拡大によって、人との接触をできるだけ避ける新しい生活様式となった今、ロボットがアバターとなり自分の代わりに会社へ出勤したり、旅行に行ったりすることが現実味を帯びてきた。 ロボットやAIなどの技術が進歩し、発展していくこれからの社会について、「いのち広げる」をテーマに語る。},
}
石黒浩, "アバターと未来社会", 製造業DXの標-しるべ-~ものづくりと営業のデジタル改革がもたらす未来~, オンライン, December, 2021.
Abstract: 本講演では、これまでの研究成果を紹介すると共に、今後の未来社会について語る。
BibTeX:
@Inproceedings{石黒浩2021p,
  author    = {石黒浩},
  title     = {アバターと未来社会},
  booktitle = {製造業DXの標-しるべ-~ものづくりと営業のデジタル改革がもたらす未来~},
  year      = {2021},
  address   = {オンライン},
  month     = dec,
  day       = {2},
  url       = {https://satori.marketing/events/seminar_20211202_manufacturing/},
  abstract  = {本講演では、これまでの研究成果を紹介すると共に、今後の未来社会について語る。},
}
Hiroshi Ishiguro, "Interactive Intelligent Robots and Our Future", In The 9th RSI International Conference on Robotics and Mechatronics (ICRoM 2021), online, November, 2021.
Abstract: In this talk, the speaker will talk about the basic ideas on interactive robots and discuss our future life.
BibTeX:
@InProceedings{Ishiguro2021b,
  author    = {Hiroshi Ishiguro},
  booktitle = {The 9th RSI International Conference on Robotics and Mechatronics (ICRoM 2021)},
  title     = {Interactive Intelligent Robots and Our Future},
  year      = {2021},
  address   = {online},
  day       = {18},
  month     = nov,
  url       = {https://icrom.ir/},
  abstract  = {In this talk, the speaker will talk about the basic ideas on interactive robots and discuss our future life.},
}
石黒浩, "私たちの明日の生活 ロボットと未来社会", 第27期 関西市民文化塾, no. 第8回, 大阪市中央公会堂, 大阪, November, 2021.
Abstract: 本講演では、これまでの研究成果を紹介すると共に、今後の未来社会について語る。
BibTeX:
@InProceedings{石黒浩2021m,
  author    = {石黒浩},
  booktitle = {第27期 関西市民文化塾},
  title     = {私たちの明日の生活 ロボットと未来社会},
  year      = {2021},
  address   = {大阪市中央公会堂, 大阪},
  day       = {13},
  month     = nov,
  number    = {第8回},
  url       = {https://www.mainichi-ok.co.jp/develop/kansai/001.html},
  abstract  = {本講演では、これまでの研究成果を紹介すると共に、今後の未来社会について語る。},
}
石黒浩, "アバターと未来社会", 日本官能評価学会2021年大会, オンライン, November, 2021.
Abstract: 本講演では、これまでの研究成果を紹介すると共に、今後の未来社会について語る。
BibTeX:
@Inproceedings{石黒浩2021n,
  author    = {石黒浩},
  title     = {アバターと未来社会},
  booktitle = {日本官能評価学会2021年大会},
  year      = {2021},
  address   = {オンライン},
  month     = nov,
  day       = {28},
  url       = {https://www.jsse.net/taikai/index.html},
  abstract  = {本講演では、これまでの研究成果を紹介すると共に、今後の未来社会について語る。},
}
石黒浩, "アバターと未来社会", けいはんなR&Dフェア2021, オンライン, November, 2021.
Abstract: コロナ禍の影響もあり、リモートで活動できる遠隔操作CGエージェントや遠隔操作ロボット、すなわちアバターの研究開発が注目されるようになってきた。本講演では、講演者のこれまでのロボット研究を紹介し、あわせて現在講演者が取り組むアバター関連プロジェクトについても紹介しながら、そのプロジェクトが実現する未来社会について議論する。
BibTeX:
@InProceedings{石黒浩2021l,
  author    = {石黒浩},
  booktitle = {けいはんなR\&Dフェア2021},
  title     = {アバターと未来社会},
  year      = {2021},
  address   = {オンライン},
  day       = {11},
  month     = nov,
  url       = {https://keihanna-fair.jp/},
  abstract  = {コロナ禍の影響もあり、リモートで活動できる遠隔操作CGエージェントや遠隔操作ロボット、すなわちアバターの研究開発が注目されるようになってきた。本講演では、講演者のこれまでのロボット研究を紹介し、あわせて現在講演者が取り組むアバター関連プロジェクトについても紹介しながら、そのプロジェクトが実現する未来社会について議論する。},
}
石黒浩, "アバターと未来社会", 2021年度 生存科学研究所 自主研究事業 第4回研究会, オンライン, November, 2021.
Abstract: 本講演では、これまでの研究成果を紹介すると共に、今後の未来社会について語る。
BibTeX:
@Inproceedings{石黒浩2021o,
  author    = {石黒浩},
  title     = {アバターと未来社会},
  booktitle = {2021年度 生存科学研究所 自主研究事業 第4回研究会},
  year      = {2021},
  address   = {オンライン},
  month     = nov,
  day       = {25},
  url       = {https://www.internet.ac.jp/news/news-4939/},
  abstract  = {本講演では、これまでの研究成果を紹介すると共に、今後の未来社会について語る。},
}
石黒浩, "誰もが自在に活躍できるアバター共生社会の実現", 第37回 日本義肢装具学会学術大会, オンライン, October, 2021.
Abstract: 本講演では、これまでの研究成果を紹介すると共に、今後のアバター共生社会の実現について語る。
BibTeX:
@InProceedings{石黒浩2021k,
  author    = {石黒浩},
  booktitle = {第37回 日本義肢装具学会学術大会},
  title     = {誰もが自在に活躍できるアバター共生社会の実現},
  year      = {2021},
  address   = {オンライン},
  day       = {16},
  month     = oct,
  url       = {https://jspo2021.com/},
  abstract  = {本講演では、これまでの研究成果を紹介すると共に、今後のアバター共生社会の実現について語る。},
}
Hidenobu Sumioka, "Human-Robot Deep interaction", In CiNET Friday Lunch Seminar, online, October, 2021.
Abstract: Communication robots are expected to provide a variety of support services through interaction with people. They have been reported to be especially effective for the elderly and patients with mental illness. In the past, research on human-robot interaction has examined the effects of actual interaction with robots using psychological scales and motor information such as gaze and movement. However, in recent years, researchers have started to focus on brain activity during interaction to investigate the effects of actual interaction on the brain and control robot behavior based on brain activity to facilitate smooth interaction with humans. In this presentation, I will introduce our ongoing research to realize human-robot interaction using brain activity during the interaction. First, we will report the effect of the robot’s appearance on brain activity. Next, we will present a method for detecting subjective difficulty based on the cognitive load during a working memory task. Finally, we will introduce our ongoing efforts to investigate how humans are affected by robot interaction from multi-layer information among human behavior, brain activity, and metabolites.
BibTeX:
@Inproceedings{Sumioka2021b,
  author    = {Hidenobu Sumioka},
  title     = {Human-Robot Deep interaction},
  booktitle = {CiNET Friday Lunch Seminar},
  year      = {2021},
  address   = {online},
  month     = oct,
  day       = {1},
  url       = {https://cinet.jp/japanese/event/20211001_4027/},
  abstract  = {Communication robots are expected to provide a variety of support services through interaction with people. They have been reported to be especially effective for the elderly and patients with mental illness.
In the past, research on human-robot interaction has examined the effects of actual interaction with robots using psychological scales and motor information such as gaze and movement. However, in recent years, researchers have started to focus on brain activity during interaction to investigate the effects of actual interaction on the brain and control robot behavior based on brain activity to facilitate smooth interaction with humans.
In this presentation, I will introduce our ongoing research to realize human-robot interaction using brain activity during the interaction.
First, we will report the effect of the robot’s appearance on brain activity. Next, we will present a method for detecting subjective difficulty based on the cognitive load during a working memory task.
Finally, we will introduce our ongoing efforts to investigate how humans are affected by robot interaction from multi-layer information among human behavior, brain activity, and metabolites.},
}
石黒浩, "アバターとコロナ後の社会", 中央大学フェア2021, オンライン開催, September, 2021.
Abstract: コロナ禍後の社会においてはテレワークが定着しアバター利用が進んでいくと期待される。 本講演では、アバターを世界に先駆けて研究開発してきた講演者が、 これまでの研究成果を紹介すると共に、今後の未来社会について議論する。 また、2025年の大阪・関西万博においても、アバターを利用して多くの人が参加すると期待されるが、どのような万博になるか紹介する。
BibTeX:
@InProceedings{石黒浩2021i,
  author    = {石黒浩},
  booktitle = {中央大学フェア2021},
  title     = {アバターとコロナ後の社会},
  year      = {2021},
  address   = {オンライン開催},
  day       = {16},
  month     = sep,
  url       = {https://www.chubu-univ.jp/fair2021},
  abstract  = {コロナ禍後の社会においてはテレワークが定着しアバター利用が進んでいくと期待される。 本講演では、アバターを世界に先駆けて研究開発してきた講演者が、 これまでの研究成果を紹介すると共に、今後の未来社会について議論する。 また、2025年の大阪・関西万博においても、アバターを利用して多くの人が参加すると期待されるが、どのような万博になるか紹介する。},
}
石黒浩, "ロボットがあなたに「共感」してくれたら、うれしいですか?", 日本科学未来館 オンラインイベント, 日本科学未来館, 東京(YouTube Live), September, 2021.
Abstract: ロボットと人間が信頼関係をつくるうえで重要な「共感」について語る。
BibTeX:
@Inproceedings{石黒浩2021h,
  author    = {石黒浩},
  title     = {ロボットがあなたに「共感」してくれたら、うれしいですか?},
  booktitle = {日本科学未来館 オンラインイベント},
  year      = {2021},
  address   = {日本科学未来館, 東京(YouTube Live)},
  month     = sep,
  day       = {24},
  url       = {https://www.miraikan.jst.go.jp/events/202109242130.html},
  abstract  = {ロボットと人間が信頼関係をつくるうえで重要な「共感」について語る。},
}
石黒浩, "ロボットと未来社会", GIGAスクール時代のロボット活用 〜対話型ロボットとの新しい学習のかたち〜, 津リージョンプラザ お城ホール, 三重(オンライン), September, 2021.
Abstract: 本講演では、これまでの研究成果を紹介すると共に、今後の未来社会について語る。 また、対話型ロボットとの学習について、子ども達との対話を行う。
BibTeX:
@InProceedings{石黒浩2021j,
  author    = {石黒浩},
  booktitle = {GIGAスクール時代のロボット活用 〜対話型ロボットとの新しい学習のかたち〜},
  title     = {ロボットと未来社会},
  year      = {2021},
  address   = {津リージョンプラザ お城ホール, 三重(オンライン)},
  day       = {17},
  month     = sep,
  url       = {https://www.fuzoku.edu.mie-u.ac.jp/sho/2021/08/post-397.html},
  abstract  = {本講演では、これまでの研究成果を紹介すると共に、今後の未来社会について語る。 また、対話型ロボットとの学習について、子ども達との対話を行う。},
}
石黒浩, "アバターと万博と未来社会", 「公認会計士の日」記念セミナー, オンライン, July, 2021.
Abstract: テレワークが定着しアバター(自分の分身のキャラクター)利用が進んでいくこれからの社会。2025年の大阪・関西万博においても、多くの人がアバターを利用して参加すると期待される。ロボット研究の第一人者である石黒浩氏が描く「アバターと万博と未来社会」について語る。
BibTeX:
@InProceedings{石黒浩2021e,
  author    = {石黒浩},
  booktitle = {「公認会計士の日」記念セミナー},
  title     = {アバターと万博と未来社会},
  year      = {2021},
  address   = {オンライン},
  day       = {1},
  month     = jul,
  url       = {https://www.jicpa-knk.ne.jp/news/2021/003117.html},
  abstract  = {テレワークが定着しアバター(自分の分身のキャラクター)利用が進んでいくこれからの社会。2025年の大阪・関西万博においても、多くの人がアバターを利用して参加すると期待される。ロボット研究の第一人者である石黒浩氏が描く「アバターと万博と未来社会」について語る。},
}
石黒浩, "人間がロボットになることを阻むもの", 日本学術会議 情報学委員会 環境知能分科会シンポジウム, オンライン開催, July, 2021.
Abstract: 環境知能分科会では、以前から自然・社会環境の変化や新たな社会問題に対して、生存に関する様々な心理的・精神的不安を解消し、新しい生活様式に適応し、経済活動を発展させるための生存情報学の必要性に着目して、検討してきた。本シンポジウムでは、ダイバシティ(多様性)&インクルージョン(包摂性)を考慮して、生存情報学による新たな価値を生み出すために、ロボットの分野の専門家の視点から生存情報学のあるべき姿と今後やるべき課題について語る。
BibTeX:
@InProceedings{石黒浩2021g,
  author    = {石黒浩},
  booktitle = {日本学術会議 情報学委員会 環境知能分科会シンポジウム},
  title     = {人間がロボットになることを阻むもの},
  year      = {2021},
  address   = {オンライン開催},
  day       = {19},
  month     = jul,
  url       = {https://www.nadasemi.jp/kankyo/},
  abstract  = {環境知能分科会では、以前から自然・社会環境の変化や新たな社会問題に対して、生存に関する様々な心理的・精神的不安を解消し、新しい生活様式に適応し、経済活動を発展させるための生存情報学の必要性に着目して、検討してきた。本シンポジウムでは、ダイバシティ(多様性)\&インクルージョン(包摂性)を考慮して、生存情報学による新たな価値を生み出すために、ロボットの分野の専門家の視点から生存情報学のあるべき姿と今後やるべき課題について語る。},
}
石黒浩, "AIと医療", 札幌市立大学オンライン公開講座 -とおくのAIをちかくで見よう-, オンライン, July, 2021.
Abstract: AIはあらゆる産業分野において活用が期待されている。医療も例外ではない。松浦和代(札幌市立大学 副学長・看護学部長)と石黒浩(大阪大学 栄誉教授)による本対談では、AIの活用で、今後のヘルスプロモーションや医療・看護はどのように変化するのか、その可能性と課題を探る。
BibTeX:
@InProceedings{石黒浩2021f,
  author    = {石黒浩},
  booktitle = {札幌市立大学オンライン公開講座 -とおくのAIをちかくで見よう-},
  title     = {AIと医療},
  year      = {2021},
  address   = {オンライン},
  day       = {2},
  month     = jul,
  url       = {https://www.scu.ac.jp/campus/crc/news/#pdn-76760},
  abstract  = {AIはあらゆる産業分野において活用が期待されている。医療も例外ではない。松浦和代(札幌市立大学 副学長・看護学部長)と石黒浩(大阪大学 栄誉教授)による本対談では、AIの活用で、今後のヘルスプロモーションや医療・看護はどのように変化するのか、その可能性と課題を探る。},
}
石黒浩, "アバターと未来社会", 第80回日本医学放射線学会総会, パシフィコ横浜, 神奈川(オンライン), April, 2021.
Abstract: ロボット研究の成果の紹介を交え、アバターロボットと未来社会について語る。
BibTeX:
@InProceedings{石黒浩2021a,
  author    = {石黒浩},
  booktitle = {第80回日本医学放射線学会総会},
  title     = {アバターと未来社会},
  year      = {2021},
  address   = {パシフィコ横浜, 神奈川(オンライン)},
  day       = {15-18},
  month     = apr,
  url       = {https://site2.convention.co.jp/jrs80/},
  abstract  = {ロボット研究の成果の紹介を交え、アバターロボットと未来社会について語る。},
}
石黒浩, "人と関わるボットと未来社会", 第107回日本消化器病学会総会, 京王プラザホテル, 東京(オンライン), April, 2021.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。
BibTeX:
@Inproceedings{石黒浩2021,
  author    = {石黒浩},
  title     = {人と関わるロボットと未来社会},
  booktitle = {第107回日本消化器病学会総会},
  year      = {2021},
  address   = {京王プラザホテル, 東京(オンライン)},
  month     = apr,
  day       = {15-17},
  url       = {https://site.convention.co.jp/jsge107/},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。},
}
石黒浩, "進化するコミュニケーション~食と笑いの共有~", 超異分野学会 大阪大会2021, 丸善インテックアリーナ大阪, 大阪(オンライン), April, 2021.
Abstract: 2020年、新型コロナウイルスの影響により、人と人との接触は減り、これまで当たり前に行ってきた会議、対面での食事や雑談の頻度は減少し、その大切さをより実感することにつながった。人と人とのコミュニケーションの在り方はこれからどのように変化していくのだろうか。アンドロイド研究を通して、「人間」を追求する石黒が、大阪の文化の特徴とも言える「食」そして「笑い」に焦点を当て、人とコミュニケーションに関する講演を行う。 テクノロジーにより、「人間」や「コミュニケーション」はどこまで明らかになり、テクノロジーの介入によってコミュニケーションそして人の生活にどのようなアップデートがかかるのか、自身の研究内容を含めて語る。
BibTeX:
@InProceedings{石黒浩2021b,
  author    = {石黒浩},
  booktitle = {超異分野学会 大阪大会2021},
  title     = {進化するコミュニケーション~食と笑いの共有~},
  year      = {2021},
  address   = {丸善インテックアリーナ大阪, 大阪(オンライン)},
  day       = {24},
  month     = apr,
  url       = {https://hic.lne.st/},
  abstract  = {2020年、新型コロナウイルスの影響により、人と人との接触は減り、これまで当たり前に行ってきた会議、対面での食事や雑談の頻度は減少し、その大切さをより実感することにつながった。人と人とのコミュニケーションの在り方はこれからどのように変化していくのだろうか。アンドロイド研究を通して、「人間」を追求する石黒が、大阪の文化の特徴とも言える「食」そして「笑い」に焦点を当て、人とコミュニケーションに関する講演を行う。 テクノロジーにより、「人間」や「コミュニケーション」はどこまで明らかになり、テクノロジーの介入によってコミュニケーションそして人の生活にどのようなアップデートがかかるのか、自身の研究内容を含めて語る。},
}
石黒浩, "人と関わるボットと未来社会ーアバターで変わるコロナ禍後の社会ー", 第61回日本呼吸器学会学術講演会, 東京国際フォーラム, 東京(オンライン), April, 2021.
Abstract: ロボット研究の成果の紹介を交え、アバターロボットと未来社会について語る。
BibTeX:
@Inproceedings{石黒浩2021c,
  author    = {石黒浩},
  title     = {人と関わるロボットと未来社会ーアバターで変わるコロナ禍後の社会ー},
  booktitle = {第61回日本呼吸器学会学術講演会},
  year      = {2021},
  address   = {東京国際フォーラム, 東京(オンライン)},
  month     = apr,
  day       = {23-25},
  url       = {https://www.jrs.or.jp/jrs61/index.html},
  abstract  = {ロボット研究の成果の紹介を交え、アバターロボットと未来社会について語る。},
}
石黒浩, "人間とバーチャルの接点-ロボット研究から見えてきた共生・共創・共感の未来 ~アバターが変える近未来社会~", 文藝春秋 トータルエクスペリエンス カンファレンス, オンライン, April, 2021.
Abstract: ロボット研究の成果の紹介を交え、アバターロボットと未来社会について語る。
BibTeX:
@InProceedings{石黒浩2021d,
  author    = {石黒浩},
  booktitle = {文藝春秋 トータルエクスペリエンス カンファレンス},
  title     = {人間とバーチャルの接点-ロボット研究から見えてきた共生・共創・共感の未来 ~アバターが変える近未来社会~},
  year      = {2021},
  address   = {オンライン},
  day       = {26},
  month     = apr,
  url       = {https://bunshun.jp/articles/-/43917?utm_source=twitter.com&utm_medium=social&utm_campaign=socialLink},
  abstract  = {ロボット研究の成果の紹介を交え、アバターロボットと未来社会について語る。},
}
Hiroshi Ishiguro, "Constructive Approach for Interactive Robots and the Fundamental Issues", In ACM/IEEE International Conference on Human-Robot Interaction (HRI2021), virtual, March, 2021.
Abstract: In this talk, the speaker will talk about the basic ideas on interactive robots and discuss our future life.
BibTeX:
@InProceedings{Ishiguro2021a,
  author    = {Hiroshi Ishiguro},
  booktitle = {ACM/IEEE International Conference on Human-Robot Interaction (HRI2021)},
  title     = {Constructive Approach for Interactive Robots and the Fundamental Issues},
  year      = {2021},
  address   = {virtual},
  day       = {9},
  month     = mar,
  url       = {https://humanrobotinteraction.org/2021/},
  abstract  = {In this talk, the speaker will talk about the basic ideas on interactive robots and discuss our future life.},
}
Hiroshi Ishiguro, "Studies on avatars and our future society", In the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI 2020), Yokohama, Japan (virtual), January, 2021.
Abstract: The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one’s existence to a remote place and work in a remote place. In this lecture, the speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and robots will coexist in the future.
BibTeX:
@InProceedings{Ishiguro2021,
  author    = {Hiroshi Ishiguro},
  booktitle = {the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI 2020)},
  title     = {Studies on avatars and our future society},
  year      = {2021},
  address   = {Yokohama, Japan (virtual)},
  url       = {https://ijcai20.org/},
  month     = jan,
  abstract  = {The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one’s existence to a remote place and work in a remote place. In this lecture, the speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and robots will coexist in the future.},
}
Hiroshi Ishiguro, "Studies on interactive robots", In IEEE TALE2020, virtual, December, 2020.
Abstract: The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one’s existence to a remote place and work in a remote place. In this lecture, the speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and robots will coexist in the future.
BibTeX:
@Inproceedings{Ishiguro2020,
  author    = {Hiroshi Ishiguro},
  title     = {Studies on interactive robots},
  booktitle = {IEEE TALE2020},
  year      = {2020},
  address   = {virtual},
  month     = dec,
  day       = {8-11},
  url       = {http://tale2020.org/},
  abstract  = {The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid modeled on oneself is not only scientific research that understands the feeling of presence of human beings, but also practical research that allows one to move one’s existence to a remote place and work in a remote place. In this lecture, the speaker will introduce a series of research and development of tele-operated robots such as Geminoid, and discuss in what kind of society humans and robots will coexist in the future.},
}
港隆史, "アンドロイドは日常会話の話し相手になれるか?", 日立返仁会フォーラム, オンライン開催, December, 2020.
Abstract: 石黒ERATOプロジェクトにおいて行っている日常対話アンドロイドの研究において,人々がロボットを日常対話相手とみなすようになるための必要な要因についての知見を紹介する.
BibTeX:
@InProceedings{港隆史2020,
  author    = {港隆史},
  booktitle = {日立返仁会フォーラム},
  title     = {アンドロイドは日常会話の話し相手になれるか?},
  year      = {2020},
  address   = {オンライン開催},
  day       = {4},
  month     = dec,
  abstract  = {石黒ERATOプロジェクトにおいて行っている日常対話アンドロイドの研究において,人々がロボットを日常対話相手とみなすようになるための必要な要因についての知見を紹介する.},
}
石黒浩, "渋沢栄一、夏目漱石アンドロイドが目指すもの   未来を現実にするイマジネーション~", 朝日教育会議2020 二松学舎大学, オンライン開催, December, 2020.
Abstract: 新一万円札の肖像となることが決定した渋沢栄一は、二松学舎の創立者・三島中洲との縁が深く、第三代舎長も務めました。渋沢の著書『論語と算盤』に焦点をあて、見えない未来を豊かにするために私たちはそこから何を学び、どう生きるべきかを考えます。“渋沢栄一アンドロイド”も登場します。
BibTeX:
@InProceedings{石黒浩2020h,
  author    = {石黒浩},
  booktitle = {朝日教育会議2020 二松学舎大学},
  title     = {渋沢栄一、夏目漱石アンドロイドが目指すもの ~ 未来を現実にするイマジネーション~},
  year      = {2020},
  address   = {オンライン開催},
  day       = {12},
  month     = dec,
  url       = {https://aef.asahi.com/2020/nishogakusha.html},
  abstract  = {新一万円札の肖像となることが決定した渋沢栄一は、二松学舎の創立者・三島中洲との縁が深く、第三代舎長も務めました。渋沢の著書『論語と算盤』に焦点をあて、見えない未来を豊かにするために私たちはそこから何を学び、どう生きるべきかを考えます。“渋沢栄一アンドロイド”も登場します。},
}
石黒浩, "アンドロイド演劇 「さようなら」 &平田オリザと石黒浩の対談", ロボット演劇プロジェクト, 豊田市民文化会館, 愛知, December, 2020.
Abstract: 劇団青年団を率いる劇作家・演出家の平田オリザと、ロボット研究の第一人者である石黒浩により、大阪大学で始まったロボット演劇プロジェクト。アンドロイド演劇『さようなら』は、アンドロイドと人間の関わりの中に、「人間にとって、ロボットにとって、生とは、そして死とは…」を鋭く問う、衝撃の短編作品を上映。平田オリザと石黒浩により質疑応答を含む対談を上映後に行う。
BibTeX:
@InProceedings{石黒浩2020i,
  author    = {石黒浩},
  booktitle = {ロボット演劇プロジェクト},
  title     = {アンドロイド演劇 「さようなら」 &平田オリザと石黒浩の対談},
  year      = {2020},
  address   = {豊田市民文化会館, 愛知},
  day       = {6},
  month     = dec,
  url       = {http://www.cul-toyota.or.jp/eventda/event_20201206simin_b-android.html},
  abstract  = {劇団青年団を率いる劇作家・演出家の平田オリザと、ロボット研究の第一人者である石黒浩により、大阪大学で始まったロボット演劇プロジェクト。アンドロイド演劇『さようなら』は、アンドロイドと人間の関わりの中に、「人間にとって、ロボットにとって、生とは、そして死とは…」を鋭く問う、衝撃の短編作品を上映。平田オリザと石黒浩により質疑応答を含む対談を上映後に行う。},
}
石黒浩, "遠隔操作対話ロボットとコロナ後の社会", フレキシブル3次元実装コンソーシアム「5G/6Gが拓く未来社会」シンポジウム, オンライン開催, November, 2020.
Abstract: ポストコロナ社会と遠隔操作可能な対話型ロボットについて語る
BibTeX:
@InProceedings{石黒浩2020g,
  author    = {石黒浩},
  booktitle = {フレキシブル3次元実装コンソーシアム「5G/6Gが拓く未来社会」シンポジウム},
  title     = {遠隔操作対話ロボットとコロナ後の社会},
  year      = {2020},
  address   = {オンライン開催},
  day       = {13},
  month     = nov,
  url       = {https://www.kansai.meti.go.jp/3jisedai/project/elctronics/press/jisedai_electronics_press_event2.html},
  abstract  = {ポストコロナ社会と遠隔操作可能な対話型ロボットについて語る},
}
石黒浩, "ポストコロナ時代の人間と幸福", INNOVATION GARDEN 2020, オンライン開催, October, 2020.
Abstract: ポストコロナの「持続可能な未来」に向けて、アイデアやビジョンを共有する。
BibTeX:
@Inproceedings{石黒浩2020d,
  author    = {石黒浩},
  title     = {ポストコロナ時代の人間と幸福},
  booktitle = {INNOVATION GARDEN 2020},
  year      = {2020},
  address   = {オンライン開催},
  month     = oct,
  day       = {9},
  url       = {https://innovation-garden.com},
  abstract  = {ポストコロナの「持続可能な未来」に向けて、アイデアやビジョンを共有する。},
}
石黒浩, "宗教家とロボット研究者が見る未来 ~人間とは何か~", 京都スマートシティエキスポ 2020, 国際電気通信基礎技術研究所(ATR), 京都 (YouTube Live), October, 2020.
Abstract: 「宗教とロボット」一見相容れないように見える両者は、近い将来、というより今目の前にある未来で、正面から向き合わざるを得ないテーマになる。「人間とは何か」という根源的な問題と私たちの未来について考える。
BibTeX:
@InProceedings{石黒浩2020e,
  author    = {石黒浩},
  booktitle = {京都スマートシティエキスポ 2020},
  title     = {宗教家とロボット研究者が見る未来 ~人間とは何か~},
  year      = {2020},
  address   = {国際電気通信基礎技術研究所(ATR), 京都 (YouTube Live)},
  day       = {27},
  month     = oct,
  url       = {https://www.youtube.com/watch?v=Zr251rG8OlY&feature=youtu.be},
  abstract  = {「宗教とロボット」一見相容れないように見える両者は、近い将来、というより今目の前にある未来で、正面から向き合わざるを得ないテーマになる。「人間とは何か」という根源的な問題と私たちの未来について考える。},
}
石黒浩, "バーチャルなキャラに「権利」は必要?", 日本科学未来館 オンラインイベント, 日本科学未来館, 東京(YouTube Live), September, 2020.
Abstract: バーチャルなキャラクターの「権利」をあえて考えてみることで、バーチャルとリアルのはざまに生きる私たちの未来について語る。(コロナウイルス感染症の感染拡大防止のため、オンラインにて実施)
BibTeX:
@InProceedings{石黒浩2020c,
  author    = {石黒浩},
  booktitle = {日本科学未来館 オンラインイベント},
  title     = {バーチャルなキャラに「権利」は必要?},
  year      = {2020},
  address   = {日本科学未来館, 東京(YouTube Live)},
  day       = {26},
  month     = sep,
  url       = {https://www.miraikan.jst.go.jp/events/202009261564.html},
  abstract  = {バーチャルなキャラクターの「権利」をあえて考えてみることで、バーチャルとリアルのはざまに生きる私たちの未来について語る。(コロナウイルス感染症の感染拡大防止のため、オンラインにて実施)},
}
石黒浩, "人間機械共生社会を目指した対話認知システム学「対話知能学」", 一般財団法人マルチメディア振興センター「世界のAI戦略―各国が描く未来創造のビジョン」出版記念講演会, オンライン, August, 2020.
Abstract: AI・ロボットと人の共生社会における対話知能学の可能性について語る
BibTeX:
@Inproceedings{石黒浩2020f,
  author    = {石黒浩},
  title     = {人間機械共生社会を目指した対話認知システム学「対話知能学」},
  booktitle = {一般財団法人マルチメディア振興センター「世界のAI戦略―各国が描く未来創造のビジョン」出版記念講演会},
  year      = {2020},
  address   = {オンライン},
  month     = aug,
  day       = {28},
  url       = {https://www.fmmc.or.jp/Portals/0/resources/ann/pdf/news/kouenkai_20200828.pdf},
  abstract  = {AI・ロボットと人の共生社会における対話知能学の可能性について語る},
}
石黒浩, "AI研究の未来", 札幌市立大学公開講座, 札幌市立大学桑園キャンパス, 北海道, July, 2020.
Abstract: 次世代においてAIを活用して社会生活を送ることは不可欠である。AIに係わる最先端情報を共有し、 研究者の観点からAI技術の応用の可能性について語る。
BibTeX:
@Inproceedings{石黒浩2020a,
  author    = {石黒浩},
  title     = {AI研究の未来},
  booktitle = {札幌市立大学公開講座},
  year      = {2020},
  address   = {札幌市立大学桑園キャンパス, 北海道},
  month     = jul,
  day       = {3},
  url       = {https://www.scu.ac.jp/cms/wp-content/uploads/2020/06/d0a78175a6f6aa6626557bd2de61026d-1.pdf},
  abstract  = {次世代においてAIを活用して社会生活を送ることは不可欠である。AIに係わる最先端情報を共有し、
研究者の観点からAI技術の応用の可能性について語る。},
}
石黒浩, "知能ロボットと暮らす未来にはどんなルールが必要ですか?", 日本科学未来館 オンラインイベント, 日本科学未来館, 東京(YouTube Live), July, 2020.
Abstract: 個人情報の保護など人間と知能ロボットが話をするときに守るべきルールはどう変わるのか。人間と知能ロボットが共生する社会にはどんなルールが必要なのかについて語る。 (コロナウイルス感染症の感染拡大防止のため、オンラインにて実施)
BibTeX:
@InProceedings{石黒浩2020b,
  author    = {石黒浩},
  booktitle = {日本科学未来館 オンラインイベント},
  title     = {知能ロボットと暮らす未来にはどんなルールが必要ですか?},
  year      = {2020},
  address   = {日本科学未来館, 東京(YouTube Live)},
  day       = {24},
  month     = jul,
  url       = {https://www.miraikan.jst.go.jp/events/202007241465.html},
  abstract  = {個人情報の保護など人間と知能ロボットが話をするときに守るべきルールはどう変わるのか。人間と知能ロボットが共生する社会にはどんなルールが必要なのかについて語る。 (コロナウイルス感染症の感染拡大防止のため、オンラインにて実施)},
}
石黒浩, "人と関わるロボットの研究と人間理解", 奈良女子大学附属中等教育学校 2019年度 公開研究会 & SSH成果発表会, 奈良女子大附属中等教育学校, 奈良, February, 2020.
Abstract: 技術開発は生活を豊かにすると同時に、人間や人間社会について新たな理解をもたらす。特に人間らしい姿形で人間と関わるロボットは、日常生活において、身振り手振り、表情、言語を用いて、新たな対話サービスを提供する。その一方で、この対話サービスを提供するロボットと関わることで、人々はロボットに感情や知能や意識を感じる。この感情や知能や意識は、人間にとって非常に重要な問題であり、だれもが感じるものであるにも関わらず、その仕組みは複雑で未だ明らかにされていない。ロボットはこのような人間や人間社会に関する複雑な問題を理解する手段になる。ロボットと関わることで、感情や知能や意識を感じることができたのなら、そのロボットの仕組みをもう一度調べることで、感情や知識や意識とは何かを理解できる可能性があるのである。 このように技術開発とは、人間にとって単に便利な社会を提供するものではなく、人間や人間社会の理解を目的とするものである。豊かな技術に支えられた現代においては、特に、人間や人間社会の理解が重要となる。「技術の時代」から「思考の時代」へと我々の社会は向かっている。 人間の生きる目的とは、人間や人間社会について知ることではないのだろうか。この講演ではロボット研究を通して、どのように人間が理解できるかを議論するとともに、これから来る「思考の時代」に向けた未来思考学会の活動について紹介する。思考の時代においては、特定の枠の中で知識や技術を学ぶことだけでなく、むしろ特定の枠にはまらず自由に発想することが重要になる。 本講演では、ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。
BibTeX:
@InProceedings{石黒浩2020,
  author    = {石黒浩},
  booktitle = {奈良女子大学附属中等教育学校 2019年度 公開研究会 \& SSH成果発表会},
  title     = {人と関わるロボットの研究と人間理解},
  year      = {2020},
  address   = {奈良女子大附属中等教育学校, 奈良},
  day       = {15},
  month     = feb,
  url       = {http://www.nara-wu.ac.jp/fuchuko/kenkyu/kenkyu.html},
  abstract  = {技術開発は生活を豊かにすると同時に、人間や人間社会について新たな理解をもたらす。特に人間らしい姿形で人間と関わるロボットは、日常生活において、身振り手振り、表情、言語を用いて、新たな対話サービスを提供する。その一方で、この対話サービスを提供するロボットと関わることで、人々はロボットに感情や知能や意識を感じる。この感情や知能や意識は、人間にとって非常に重要な問題であり、だれもが感じるものであるにも関わらず、その仕組みは複雑で未だ明らかにされていない。ロボットはこのような人間や人間社会に関する複雑な問題を理解する手段になる。ロボットと関わることで、感情や知能や意識を感じることができたのなら、そのロボットの仕組みをもう一度調べることで、感情や知識や意識とは何かを理解できる可能性があるのである。 このように技術開発とは、人間にとって単に便利な社会を提供するものではなく、人間や人間社会の理解を目的とするものである。豊かな技術に支えられた現代においては、特に、人間や人間社会の理解が重要となる。「技術の時代」から「思考の時代」へと我々の社会は向かっている。 人間の生きる目的とは、人間や人間社会について知ることではないのだろうか。この講演ではロボット研究を通して、どのように人間が理解できるかを議論するとともに、これから来る「思考の時代」に向けた未来思考学会の活動について紹介する。思考の時代においては、特定の枠の中で知識や技術を学ぶことだけでなく、むしろ特定の枠にはまらず自由に発想することが重要になる。 本講演では、ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。},
}
Hidenobu Sumioka, "Social Robots for Touch interaction and Education", In 2019 International Conference on Advances in STEM Education (ASTEM 2019), The Education University of Hong Kong (EdUHK), Hong Kong, December, 2019.
Abstract: In this talk, I will present the potential applications of social robots in education, introducing three aspects. First, social robots can easily change their relationship with us by playing different roles. They can become our teacher, our student, our care-receiver, and our peer, depending on their social contexts. Second, by referring to our field experiment with a teleoperated android, I will show that they can facilitate human-human communication and can also provide opportunities for us to improve the relationship between elderly people and care staff. Finally, I present how the physical embodiment of the robot enables us to overcome our limitations in building social bonds with people and provides us with a new way of making close human relationships.
BibTeX:
@InProceedings{Sumioka2019g,
  author    = {Hidenobu Sumioka},
  booktitle = {2019 International Conference on Advances in STEM Education (ASTEM 2019)},
  title     = {Social Robots for Touch interaction and Education},
  year      = {2019},
  address   = {The Education University of Hong Kong (EdUHK), Hong Kong},
  day       = {18-20},
  month     = dec,
  url       = {https://www.eduhk.hk/astem/},
  abstract  = {In this talk, I will present the potential applications of social robots in education, introducing three aspects. First, social robots can easily change their relationship with us by playing different roles. They can become our teacher, our student, our care-receiver, and our peer, depending on their social contexts. Second, by referring to our field experiment with a teleoperated android, I will show that they can facilitate human-human communication and can also provide opportunities for us to improve the relationship between elderly people and care staff. Finally, I present how the physical embodiment of the robot enables us to overcome our limitations in building social bonds with people and provides us with a new way of making close human relationships.},
}
石黒浩, "人と関わるロボットの研究 - ロボットによる生活・学習支援 -", 日本子ども虐待防止学会第25回学術集会ひょうご大会 (JaSPCAN HYOGO), 神戸ポートピアホテル南館 ポートピアホール, 兵庫, December, 2019.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。
BibTeX:
@InProceedings{石黒浩2019j,
  author    = {石黒浩},
  booktitle = {日本子ども虐待防止学会第25回学術集会ひょうご大会 (JaSPCAN HYOGO)},
  title     = {人と関わるロボットの研究 - ロボットによる生活・学習支援 -},
  year      = {2019},
  address   = {神戸ポートピアホテル南館 ポートピアホール, 兵庫},
  day       = {21-22},
  month     = dec,
  url       = {https://www.jaspcan25.jp/pro.php},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。},
}
石黒浩, "人間対話型ロボットと未来社会", 第61回 京都大学11月祭, November, 2019.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。
BibTeX:
@InProceedings{石黒浩2019i,
  author    = {石黒浩},
  booktitle = {第61回 京都大学11月祭},
  title     = {人間対話型ロボットと未来社会},
  year      = {2019},
  day       = {21-24},
  month     = nov,
  url       = {https://nf.la/},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。},
}
石黒浩, "ロボットと未来社会", 山梨大学全学同窓会講演会, 山梨大学甲府キャンパス, 山梨, November, 2019.
BibTeX:
@InProceedings{石黒浩2019g,
  author    = {石黒浩},
  booktitle = {山梨大学全学同窓会講演会},
  title     = {ロボットと未来社会},
  year      = {2019},
  address   = {山梨大学甲府キャンパス, 山梨},
  day       = {2},
  month     = nov,
  url       = {https://ymu-dousou.jp/Information/Detail.aspx?code=136},
}
Hidenobu Sumioka, "Emerging Education with Social Robots", In The 11th Asian Conference on Education (ACE2019), Toshi Center Hotel, Tokyo, November, 2019.
Abstract: Recent advances in robotic technologies enable robots to support us in our daily activities such as social interactions. Such robots, called social robots, often make us interact in more intuitive and casual ways than a real human because of the lack of nonverbal cues and demographic messages. Thanks to this characteristic, they are just beginning to be applied to various fields of social interaction such as education. In this talk, I will present the potential applications of social robots in education, introducing three aspects. First, social robots can easily change their relationship with us by playing different roles. They can become our teachers, our students, and our peers, depending on their social contexts. Second, by referring to our field experiment with a teleoperated android, I will show that they can facilitate human-human communication and can also provide opportunities for us to improve communication skills. Finally, I will present the physical embodiment of the robot that enables us to overcome our limitation to build social bonds with people and provide us with a new way of making close human relationships.
BibTeX:
@InProceedings{Sumioka2019f,
  author    = {Hidenobu Sumioka},
  booktitle = {The 11th Asian Conference on Education (ACE2019)},
  title     = {Emerging Education with Social Robots},
  year      = {2019},
  address   = {Toshi Center Hotel, Tokyo},
  day       = {1-3},
  month     = nov,
  url       = {https://ace.iafor.org/},
  abstract  = {Recent advances in robotic technologies enable robots to support us in our daily activities such as social interaction. Such robots, called social robots, often make us interact in more intuitive and casual ways than a real human would, because of their lack of nonverbal cues and demographic information. Thanks to this characteristic, they are just beginning to be applied to various fields of social interaction such as education. In this talk, I will present the potential applications of social robots in education, introducing three aspects. First, social robots can easily change their relationship with us by playing different roles. They can become our teachers, our students, and our peers, depending on their social contexts. Second, by referring to our field experiment with a teleoperated android, I will show that they can facilitate human-human communication and can also provide opportunities for us to improve communication skills. Finally, I will show how the physical embodiment of the robot enables us to overcome our limitations in building social bonds with people and provides us with a new way of forming close human relationships.},
}
石黒浩, "アンドロイドロボット「テレノイド」を用いた、新しいケアのアプローチ方法と導入事例", 第4回 CareTEX関西2019, インテックス大阪, 大阪, October, 2019.
Abstract: 1.介護職員のコミュニケーション技術研修の教材 2.高齢者向け個人面談ツールとしての実践事例について、開発者、講師、受講生の3者が解説、報告する。
BibTeX:
@InProceedings{石黒浩2019h,
  author    = {石黒浩},
  booktitle = {第4回 CareTEX関西2019},
  title     = {アンドロイドロボット「テレノイド」を用いた、新しいケアのアプローチ方法と導入事例},
  year      = {2019},
  address   = {インテックス大阪, 大阪},
  day       = {9-11},
  month     = oct,
  url       = {http://caretex.org/info/conference2019},
  abstract  = {1.介護職員のコミュニケーション技術研修の教材 2.高齢者向け個人面談ツールとしての実践事例について、開発者、講師、受講生の3者が解説、報告する。},
}
Soheil Keshmiri, "Human-Robot Physical Interaction: The recent Findings and their Utilities for preventing age-related cognitive decline, improving the quality of child care, and advancing quality of mental disorder services", In Big Data and AI Congress 5th Edition 2019, Barcelona, Spain, pp. 1-33, October, 2019.
BibTeX:
@Inproceedings{Keshmiri2019j,
  author    = {Soheil Keshmiri},
  title     = {Human-Robot Physical Interaction: The recent Findings and their Utilities for preventing age-related cognitive decline, improving the quality of child care, and advancing quality of mental disorder services},
  booktitle = {Big Data and AI Congress 5th Edition 2019},
  year      = {2019},
  pages     = {1-33},
  address   = {Barcelona, Spain},
  month     = oct,
  day       = {17},
  url       = {https://bigdatacongress.barcelona/en/},
}
Hiorshi Ishiguro, "Human Robots and Smart Textiles", In Comfort and Smart Textile International Symposium 2019, Kasugano International Forum, Nara, September, 2019.
Abstract: In this talk, the speaker will talk about the basic ideas on interactive robots and discuss about our future life.
BibTeX:
@InProceedings{Ishiguro2019c,
  author    = {Hiroshi Ishiguro},
  booktitle = {Comfort and Smart Textile International Symposium 2019},
  title     = {Human Robots and Smart Textiles},
  year      = {2019},
  address   = {Kasugano International Forum, Nara},
  day       = {6-7},
  month     = sep,
  url       = {https://cscenter.co.jp/issttcc2019/},
  abstract  = {In this talk, the speaker will talk about the basic ideas on interactive robots and discuss our future life.},
}
石黒浩, "人と関わるロボットの研究開発", AIと人がつくる未来社会, 大阪大学豊中キャンパス, 大阪, August, 2019.
Abstract: AIと人がつくる未来社会について多角的に論じるとともに、未来社会構築における学術の貢献について議論する。
BibTeX:
@Inproceedings{石黒浩2019f,
  author    = {石黒浩},
  title     = {人と関わるロボットの研究開発},
  booktitle = {AIと人がつくる未来社会},
  year      = {2019},
  address   = {大阪大学豊中キャンパス, 大阪},
  month     = aug,
  day       = {1},
  url       = {http://www.scj.go.jp/ja/event/index.html},
  abstract  = {AIと人がつくる未来社会について多角的に論じるとともに、未来社会構築における学術の貢献について議論する。},
}
Hiroshi Ishiguro, "Studies on Interactive Robots", In Living Machines 2019, Kasugano International Forum, Nara, July, 2019.
Abstract: In this talk, he will introduce various interactive personal robots and androids and explain how to study the technologies and scientific issues by using them. In particular, he will focus on the embodiment, emotion, and intention/desire of the robots and androids. Further, he will discuss our future society, where we have symbiotic relationships with them.
BibTeX:
@InProceedings{Ishiguro2019b,
  author    = {Hiroshi Ishiguro},
  booktitle = {Living Machines 2019},
  title     = {Studies on Interactive Robots},
  year      = {2019},
  address   = {Kasugano International Forum, Nara},
  day       = {9-12},
  month     = jul,
  url       = {http://livingmachinesconference.eu/2019/plenarytalks/},
  abstract  = {In this talk, he will introduce various interactive personal robots and androids and explain how to study the technologies and scientific issues by using them. In particular, he will focus on the embodiment, emotion, and intention/desire of the robots and androids. Further, he will discuss our future society, where we have symbiotic relationships with them.},
}
石黒浩, "知能ロボットと共生する社会の実現に向けて", 札幌市立大学学長公開講座 AIとロボットの未来, 札幌市立大学桑園キャンパス, 北海道, July, 2019.
Abstract: 次世代においてAIを活用して社会生活を送ることは不可欠になっていく。このような時代を迎えるにあたっての知識を得るために、ロボットの第一人者を迎え、その観点からAI技術とロボット技術の現在と未来の可能性について報告ならびに提言してもらう。
BibTeX:
@InProceedings{石黒浩2019e,
  author    = {石黒浩},
  booktitle = {札幌市立大学学長公開講座 AIとロボットの未来},
  title     = {知能ロボットと共生する社会の実現に向けて},
  year      = {2019},
  address   = {札幌市立大学桑園キャンパス, 北海道},
  day       = {12},
  month     = jul,
  url       = {http://www.scu.ac.jp/news/pressrelease/53371/},
  abstract  = {次世代においてAIを活用して社会生活を送ることは不可欠になっていく。このような時代を迎えるにあたっての知識を得るために、ロボットの第一人者を迎え、その観点からAI技術とロボット技術の現在と未来の可能性について報告ならびに提言してもらう。},
}
Hidenobu Sumioka, "Robotics For Elderly Society", In Long term care system & scientific technology in Japan aging society, 大阪大学, 大阪, July, 2019.
Abstract: In this talk, I present the current state of elderly care with communication robots in Japan.
BibTeX:
@InProceedings{Sumioka2019b,
  author    = {Hidenobu Sumioka},
  booktitle = {Long term care system \& scientific technology in Japan aging society},
  title     = {Robotics For Elderly Society},
  year      = {2019},
  address   = {大阪大学, 大阪},
  day       = {22},
  month     = jul,
  abstract  = {In this talk, I present the current state of elderly care with communication robots in Japan.},
}
石黒浩, "ロボットによる生活(くらし)・学習(まなび)支援", みえアカデミックセミナー2019, 三重県総合文化センター, 三重, July, 2019.
Abstract: 人と対話するのが苦手な人でもロボットであれば対話できるという事例が数多く報告されています。我々の研究においても、高齢者に対する対話サービスを行うロボット、テレノイドや、自閉症児に対する対話サービスを行うコミュー、支援学校で教育者と児童の間での対話を支援するハグビーを開発してきました。本講演ではこれらのロボットを紹介しながら、ロボットが児童の生活支援や学習支援においてどのように役立つかをお話します。
BibTeX:
@InProceedings{石黒浩2019d,
  author    = {石黒浩},
  booktitle = {みえアカデミックセミナー2019},
  title     = {ロボットによる生活(くらし)・学習(まなび)支援},
  year      = {2019},
  address   = {三重県総合文化センター, 三重},
  day       = {7},
  month     = jul,
  url       = {https://www.center-mie.or.jp/manabi/event/sponsor/detail/27508},
  abstract  = {人と対話するのが苦手な人でもロボットであれば対話できるという事例が数多く報告されています。我々の研究においても、高齢者に対する対話サービスを行うロボット、テレノイドや、自閉症児に対する対話サービスを行うコミュー、支援学校で教育者と児童の間での対話を支援するハグビーを開発してきました。本講演ではこれらのロボットを紹介しながら、ロボットが児童の生活支援や学習支援においてどのように役立つかをお話します。},
}
石黒浩, "人と関わるロボットの研究開発", NICTオープンハウス2019, 国立研究開発法人情報通信研究機構, 東京, June, 2019.
Abstract: ロボット研究の成果の紹介を交え、研究開発について語る。
BibTeX:
@Inproceedings{石黒浩2019c,
  author    = {石黒浩},
  title     = {人と関わるロボットの研究開発},
  booktitle = {NICTオープンハウス2019},
  year      = {2019},
  address   = {国立研究開発法人情報通信研究機構, 東京},
  month     = jun,
  day       = {21},
  abstract  = {ロボット研究の成果の紹介を交え、研究開発について語る。},
}
住岡英信, "ロボットとの対話を用いた健康支援", 第19回日本抗加齢医学会総会, パシフィコ横浜, 神奈川, June, 2019.
Abstract: 近年,対話ロボットによるコミュニケーション支援の研究が盛んに行われている.しかし一方でその効果については質問紙やインタビューによる主観的な評価が多く,脳科学的,生理学的な検証は不十分と言える.本発表では,著者らが近年進めている脳活動やホルモン検査を用いた対話ロボットメディアの効果検証について紹介する.
BibTeX:
@InProceedings{住岡英信2019b,
  author    = {住岡英信},
  booktitle = {第19回日本抗加齢医学会総会},
  title     = {ロボットとの対話を用いた健康支援},
  year      = {2019},
  address   = {パシフィコ横浜, 神奈川},
  day       = {14-16},
  month     = jun,
  url       = {http://www.c-linkage.co.jp/19jaam/index.html},
  abstract  = {近年,対話ロボットによるコミュニケーション支援の研究が盛んに行われている.しかし一方でその効果については質問紙やインタビューによる主観的な評価が多く,脳科学的,生理学的な検証は不十分と言える.本発表では,著者らが近年進めている脳活動やホルモン検査を用いた対話ロボットメディアの効果検証について紹介する.},
}
石黒浩, "アンドロイドの研究", 第36回日本呼吸器外科学会学術集会, 大阪国際会議場, 大阪, May, 2019.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。
BibTeX:
@Inproceedings{石黒浩2019b,
  author    = {石黒浩},
  title     = {アンドロイドの研究},
  booktitle = {第36回日本呼吸器外科学会学術集会},
  year      = {2019},
  address   = {大阪国際会議場, 大阪},
  month     = may,
  day       = {16},
  url       = {http://www.c-linkage.co.jp/jacs36/index.html},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。},
}
石黒浩, "アンドロイドの研究開発", 第35回日本臨床皮膚科医会総会・臨床学術大会, 松山全日空ホテル, 愛媛, April, 2019.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。
BibTeX:
@Inproceedings{石黒浩2019a,
  author    = {石黒浩},
  title     = {アンドロイドの研究開発},
  booktitle = {第35回日本臨床皮膚科医会総会・臨床学術大会},
  year      = {2019},
  address   = {松山全日空ホテル, 愛媛},
  month     = apr,
  day       = {20},
  url       = {http://jocd35.jp/index.html},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。},
}
石井カルロス寿憲, "対話音声に伴うパラ言語・非言語情報の抽出および表出", 日本音響学会2019年春季研究発表会 (ASJ2019 Spring), 電気通信大学, 東京, pp. 1347-1348, March, 2019.
Abstract: 著者の研究背景として、自然発話音声の韻律および声質特徴と、これらがもたらす意図・態度・感情のパラ言語情報について研究を進めてきた。 一方で、人型ロボットとの対話インタラクションにおける研究にも携わっており、ロボットが人並みに社会で活躍できる目標に向かって、人らしい動作生成技術について研究開発を進めている。本発表では、これまでの研究活動について紹介する。
BibTeX:
@Inproceedings{石井カルロス寿憲2019,
  author    = {石井カルロス寿憲},
  title     = {対話音声に伴うパラ言語・非言語情報の抽出および表出},
  booktitle = {日本音響学会2019年春季研究発表会 (ASJ2019 Spring)},
  year      = {2019},
  pages     = {1347-1348},
  address   = {電気通信大学, 東京},
  month     = Mar,
  day       = {5-7},
  url       = {http://www.asj.gr.jp/annualmeeting/index.html},
  abstract  = {著者の研究背景として、自然発話音声の韻律および声質特徴と、これらがもたらす意図・態度・感情のパラ言語情報について研究を進めてきた。 一方で、人型ロボットとの対話インタラクションにおける研究にも携わっており、ロボットが人並みに社会で活躍できる目標に向かって、人らしい動作生成技術について研究開発を進めている。本発表では、これまでの研究活動について紹介する。},
}
Hiroshi Ishiguro, "Studies on Interactive Robots", In PerCom2019, Kyoto International Conference Center, Kyoto, March, 2019.
Abstract: We humans have an innate brain function for recognizing humans. Therefore, very humanlike robots, androids, can be ideal information media for human-robot/computer interaction. The speaker, Ishiguro, has developed various types of interactive robots and androids so far. These robots can be used for studying the technologies and for understanding human nature. With these robots he has contributed to establishing the research area of Human-Robot Interaction. Geminoid, a teleoperated android of an existing person, can transmit the presence of the operator to a distant place. After talking with someone through the geminoid, the operator recognizes the android body as his/her own body and has a virtual feeling of being touched when someone touches the geminoid. However, the geminoid is not the ideal medium for everybody. For example, elderly people often hesitate to talk with adult humans and adult androids. A question is what the ideal medium for everybody is. In order to investigate this, the speaker proposes a minimum design of interactive humanoids, called Telenoid. The geminoid is a perfect copy of an existing person and is the maximum design of interactive humanoids. The minimum design, on the other hand, looks like a human, but we cannot judge its age and gender. Elderly people like to talk with the Telenoid very much. In this talk, the speaker discusses the design principles for these robots and their effects on conversations with humans. Further, Ishiguro has recently been developing and studying autonomous conversational robots and androids, focusing especially on the embodiment, emotion, and intention/desire of the robots and androids. In addition to these robotics studies, he will discuss our future society, where we have symbiotic relationships with them.
BibTeX:
@Inproceedings{Ishiguro2019a,
  author    = {Hiroshi Ishiguro},
  title     = {Studies on Interactive Robots},
  booktitle = {PerCom2019},
  year      = {2019},
  address   = {Kyoto International Conference Center, Kyoto},
  month     = mar,
  day       = {13},
  url       = {http://www.percom.org/},
  abstract  = {We humans have an innate brain function for recognizing humans. Therefore, very humanlike robots, androids, can be ideal information media for human-robot/computer interaction. The speaker, Ishiguro, has developed various types of interactive robots and androids so far. These robots can be used for studying the technologies and for understanding human nature. With these robots he has contributed to establishing the research area of Human-Robot Interaction.

Geminoid, a teleoperated android of an existing person, can transmit the presence of the operator to a distant place. After talking with someone through the geminoid, the operator recognizes the android body as his/her own body and has a virtual feeling of being touched when someone touches the geminoid.

However, the geminoid is not the ideal medium for everybody. For example, elderly people often hesitate to talk with adult humans and adult androids. A question is what the ideal medium for everybody is. In order to investigate this, the speaker proposes a minimum design of interactive humanoids, called Telenoid. The geminoid is a perfect copy of an existing person and is the maximum design of interactive humanoids. The minimum design, on the other hand, looks like a human, but we cannot judge its age and gender. Elderly people like to talk with the Telenoid very much. In this talk, the speaker discusses the design principles for these robots and their effects on conversations with humans.

Further, Ishiguro has recently been developing and studying autonomous conversational robots and androids, focusing especially on the embodiment, emotion, and intention/desire of the robots and androids.

In addition to these robotics studies, he will discuss our future society, where we have symbiotic relationships with them.},
}
住岡英信, "対話ロボットを用いたブレインヘルスケア-ロボットとの長期的な対話がもたらす脳への健康効果-", 2018年第4回B3C会議, TKP東京駅セントラルカンファレンスセンター, 東京, February, 2019.
Abstract: 本発表では、対話ロボットとの長期的な対話が脳に与える効果について検証した実験結果について報告する。
BibTeX:
@InProceedings{住岡英信2019c,
  author    = {住岡英信},
  booktitle = {2018年第4回B3C会議},
  title     = {対話ロボットを用いたブレインヘルスケア-ロボットとの長期的な対話がもたらす脳への健康効果-},
  year      = {2019},
  address   = {TKP東京駅セントラルカンファレンスセンター, 東京},
  day       = {28},
  month     = feb,
  abstract  = {本発表では、対話ロボットとの長期的な対話が脳に与える効果について検証した実験結果について報告する。},
}
住岡英信, "触れ合いを伴うロボットとの共生", アメニティフォーラム23, びわ湖大津プリンスホテル, 滋賀, February, 2019.
Abstract: 本発表では、人とロボットの触れ合いに関する研究を紹介し、ロボットによる社会的弱者支援について議論する
BibTeX:
@Inproceedings{住岡英信2019,
  author    = {住岡英信},
  title     = {触れ合いを伴うロボットとの共生},
  booktitle = {アメニティフォーラム23},
  year      = {2019},
  address   = {びわ湖大津プリンスホテル, 滋賀},
  month     = Feb,
  day       = {8-10},
  url       = {http://amenity-forum-shiga.blogspot.com/},
  abstract  = {本発表では、人とロボットの触れ合いに関する研究を紹介し、ロボットによる社会的弱者支援について議論する},
}
Hidenobu Sumioka, "Robotics for Elderly and Stressful Society", In The Kansai Resilience Forum 2019, The Hyogo Prefectural Museum of Art, 兵庫, February, 2019.
Abstract: The Kansai Resilience Forum 2019 is an event organised by The Government of Japan in collaboration with The International Academic Forum (IAFOR), which re-examines resilience from interdisciplinary perspectives and paradigms, from the abstract concept to the concrete, with contributions from thought leaders in academia, business and government.
BibTeX:
@InProceedings{Sumioka2019,
  author    = {Hidenobu Sumioka},
  booktitle = {The Kansai Resilience Forum 2019},
  title     = {Robotics for Elderly and Stressful Society},
  year      = {2019},
  address   = {The Hyogo Prefectural Museum of Art, 兵庫},
  day       = {22},
  month     = feb,
  url       = {https://kansai-resilience-forum.jp/},
  abstract  = {The Kansai Resilience Forum 2019 is an event organised by The Government of Japan in collaboration with The International Academic Forum (IAFOR), which re-examines resilience from interdisciplinary perspectives and paradigms, from the abstract concept to the concrete, with contributions from thought leaders in academia, business and government.},
}
住岡英信, "人に近づくロボット", 京都工学院高等学校特別講義, 京都, 京都工学院高等学校 , 京都, February, 2019.
Abstract: 本講演では、現在研究の進む人と共存するロボットについて紹介する。
BibTeX:
@InProceedings{住岡英信2019a,
  author    = {住岡英信},
  booktitle = {京都工学院高等学校特別講義},
  title     = {人に近づくロボット},
  year      = {2019},
  address   = {京都工学院高等学校, 京都},
  day       = {6},
  month     = Feb,
  url       = {http://cms.edu.city.kyoto.jp/weblog/index.php?id=300254},
  abstract  = {本講演では、現在研究の進む人と共存するロボットについて紹介する。},
}
Hiroshi Ishiguro, "State-of-the-art and different approaches to robotics research and development", In Roboethics: Humans, Machines and Health, New Synod Hall, Vatican, February, 2019.
Abstract: In this talk, he will introduce interactive and communicative personal robots and androids and discuss the technologies and scientific issues. In particular, he will discuss the intention/desire, experiences, emotion, and consciousness of the robots and androids.
BibTeX:
@Inproceedings{Ishiguro2019,
  author    = {Hiroshi Ishiguro},
  title     = {State-of-the-art and different approaches to robotics research and development},
  booktitle = {Roboethics: Humans, Machines and Health},
  year      = {2019},
  address   = {New Synod Hall, Vatican},
  month     = Feb,
  day       = {25},
  url       = {http://www.academyforlife.va/content/pav/en/news/2018/humans--machines-and-health--workshop-2019.html},
  abstract  = {In this talk, he will introduce interactive and communicative personal robots and androids and discuss the technologies and scientific issues. In particular, he will discuss the intention/desire, experiences, emotion, and consciousness of the robots and androids.},
}
石黒浩, "人間型ロボットと未来社会", 第16回パナソニックOBいちょう会, ホテル・アゴーラ大阪守口, 大阪, December, 2018.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。
BibTeX:
@Inproceedings{石黒浩2018k,
  author    = {石黒浩},
  title     = {人間型ロボットと未来社会},
  booktitle = {第16回パナソニックOBいちょう会},
  year      = {2018},
  address   = {ホテル・アゴーラ大阪守口, 大阪},
  month     = Dec,
  day       = {2},
  url       = {https://panasonicobichokai.jimdo.com/},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。},
}
石黒浩, "人間型ロボットと未来社会", 第54回日本赤十字社医学会総会, 名古屋国際会議場, 愛知, November, 2018.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。
BibTeX:
@Inproceedings{石黒浩2018o,
  author    = {石黒浩},
  title     = {人間型ロボットと未来社会},
  booktitle = {第54回日本赤十字社医学会総会},
  year      = {2018},
  address   = {名古屋国際会議場, 愛知},
  month     = Nov,
  day       = {15},
  url       = {http://www.congre.co.jp/jrcms54/index.html},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。},
}
Hiroshi Ishiguro, "Humanoid Robots and Our Future Society", In 18th ACM International Conference on Intelligent Virtual Agents, Sydney, Australia, November, 2018.
Abstract: In this talk, he will introduce interactive and communicative personal robots and androids and discuss the technologies and scientific issues. In particular, he will discuss the intention/desire, experiences, emotion, and consciousness of the robots and androids.
BibTeX:
@Inproceedings{Ishiguro2018f,
  author    = {Hiroshi Ishiguro},
  title     = {Humanoid Robots and Our Future Society},
  booktitle = {18th ACM International Conference on Intelligent Virtual Agents},
  year      = {2018},
  address   = {Sydney, Australia},
  month     = Nov,
  day       = {7},
  url       = {https://iva2018.westernsydney.edu.au/},
  abstract  = {In this talk, he will introduce interactive and communicative personal robots and androids and discuss the technologies and scientific issues. In particular, he will discuss the intention/desire, experiences, emotion, and consciousness of the robots and androids.},
}
石黒浩, "ロボットと未来社会", Display Innovation CHINA 2018, Crowne Plaza Beijing Lido, China, October, 2018.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。
BibTeX:
@Inproceedings{石黒浩2018m,
  author    = {石黒浩},
  title     = {ロボットと未来社会},
  booktitle = {Display Innovation CHINA 2018},
  year      = {2018},
  address   = {Crowne Plaza Beijing Lido, China},
  month     = Oct,
  day       = {24},
  url       = {https://project.nikkeibp.co.jp/fpd/displaychina2018/index.html},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。},
}
石黒浩, "未来の愛:変化する愛の形と幸福の行方とは?", Innovation City Forum 2018, 六本木アカデミーヒルズ, 東京, October, 2018.
Abstract: 情報技術やバイオテクノロジーの革命によってもたらされる我々の愛のありかたの変化は、何をもたらすのでしょうか?愛の多様化は、家族の形態さえも、今とは違ったものにするのでしょうか?人はその時他者とどのような関係を結ぶのでしょうか?科学技術のもたらす人間の変革と愛の未来を議論します。
BibTeX:
@Inproceedings{石黒浩2018l,
  author    = {石黒浩},
  title     = {未来の愛:変化する愛の形と幸福の行方とは?},
  booktitle = {Innovation City Forum 2018},
  year      = {2018},
  address   = {六本木アカデミーヒルズ, 東京},
  month     = Oct,
  day       = {18},
  url       = {http://icf.academyhills.com/},
  abstract  = {情報技術やバイオテクノロジーの革命によってもたらされる我々の愛のありかたの変化は、何をもたらすのでしょうか?愛の多様化は、家族の形態さえも、今とは違ったものにするのでしょうか?人はその時他者とどのような関係を結ぶのでしょうか?科学技術のもたらす人間の変革と愛の未来を議論します。},
}
石黒浩, "ロボットで変わる未来社会", ICTビジネスフォーラム2018, グランフロント大阪, 大阪, October, 2018.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。
BibTeX:
@Inproceedings{石黒浩2018n,
  author    = {石黒浩},
  title     = {ロボットで変わる未来社会},
  booktitle = {ICTビジネスフォーラム2018},
  year      = {2018},
  address   = {グランフロント大阪, 大阪},
  month     = Oct,
  day       = {31},
  url       = {https://www.starnet.ad.jp/ict-forum/},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。},
}
石黒浩, "遺伝子とアンドロイド 未来は誰のもの?", 春秋会60周年記念講演, 大阪弁護士会館, 大阪, September, 2018.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。
BibTeX:
@Inproceedings{石黒浩2018i,
  author    = {石黒浩},
  title     = {遺伝子とアンドロイド 未来は誰のもの?},
  booktitle = {春秋会60周年記念講演},
  year      = {2018},
  address   = {大阪弁護士会館, 大阪},
  month     = Sep,
  day       = {18},
  url       = {http://osaka-shunjyu-kai.com/},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。},
}
Hiorshi Ishiguro, "I robot faranno parte della nostra società?", In Anteprima del Forum di Cernobbio, Villa d'Este Via Regina, Italy, September, 2018.
Abstract: In this talk, the speaker will talk about the basic ideas on interactive robots and discuss about our future life.
BibTeX:
@Inproceedings{Ishiguro2018e,
  author    = {Hiroshi Ishiguro},
  title     = {I robot faranno parte della nostra società?},
  booktitle = {Anteprima del Forum di Cernobbio},
  year      = {2018},
  address   = {Villa d'Este Via Regina, Italy},
  month     = Sep,
  day       = {6},
  url       = {https://www.aggiornamentopermanente.it/it/incontri/view/7583},
  abstract  = {In this talk, the speaker will talk about the basic ideas on interactive robots and discuss our future life.},
}
石黒浩, "存在の本質", SOCIAL INNOVATION WEEK SHIBUYA, EDGEof, 東京, September, 2018.
Abstract: ロボット研究の成果の紹介を交え、根源的な命題「存在」の本質とは何かについて語る。
BibTeX:
@Inproceedings{石黒浩2018j,
  author    = {石黒浩},
  title     = {存在の本質},
  booktitle = {SOCIAL INNOVATION WEEK SHIBUYA},
  year      = {2018},
  address   = {EDGEof, 東京},
  month     = Sep,
  day       = {16},
  url       = {https://www.social-innovation.jp/events/event/sonzai-no-honshitsu},
  abstract  = {ロボット研究の成果の紹介を交え、根源的な命題「存在」の本質とは何かについて語る。},
}
Hiroshi Ishiguro, "Androids, AI and the Future of Human Creativity", In ALIFE 2018, Miraikan, Tokyo, July, 2018.
Abstract: In this talk, the speaker will talk about the basic ideas on interactive robots and discuss our future life.
BibTeX:
@Inproceedings{Ishiguro2018c,
  author    = {Hiroshi Ishiguro},
  title     = {Androids, AI and the Future of Human Creativity},
  booktitle = {ALIFE 2018},
  year      = {2018},
  address   = {Miraikan, Tokyo},
  month     = Jul,
  day       = {26},
  url       = {http://2018.alife.org/},
  abstract  = {In this talk, the speaker will talk about the basic ideas on interactive robots and discuss our future life.},
}
石黒浩, "人間型ロボットと未来社会", FORTINET SECURITY WORLD 2018 OSAKA, ホテル阪急インターナショナル, 大阪, July, 2018.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。
BibTeX:
@Inproceedings{石黒浩2018h,
  author    = {石黒浩},
  title     = {人間型ロボットと未来社会},
  booktitle = {FORTINET SECURITY WORLD 2018 OSAKA},
  year      = {2018},
  address   = {ホテル阪急インターナショナル, 大阪},
  month     = Jul,
  day       = {3},
  url       = {https://www.sbbit.jp/eventinfo/45980/},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。},
}
Hiroshi Ishiguro, "Fundamental Issues in Symbiotic Human-Robot Interaction", In Robotics: Science and Systems 2018, Carnegie Music Hall, USA, June, 2018.
Abstract: The focus of robotics research is shifting from industrial robots to robots working in daily situations, and one of the most important issues is to develop autonomous social robots capable of interacting with and living together with humans, i.e., robots symbiotic with humans. The aim of this workshop is to introduce research activities in "Symbiotic Human-Robot Interaction" and discuss the future challenges in this research area. One of the goals of this research is to provide communication support for people, such as communication care support robots for elderly people, which is as important as physical support in elderly care. Another aim is to offer a framework for understanding what a human is, by using robots as communication stimulus input devices in actual situations. In this workshop, we will introduce research activities using communication robots, along with a demonstration of an android, one of the most advanced communication robots. We will discuss the future of everyday robots, the key technologies required to make them true companions living together with us, and the ethical and social issues related to this topic.
BibTeX:
@Inproceedings{Ishiguro2018d,
  author    = {Hiroshi Ishiguro},
  title     = {Fundamental Issues in Symbiotic Human-Robot Interaction},
  booktitle = {Robotics: Science and Systems 2018},
  year      = {2018},
  address   = {Carnegie Music Hall, USA},
  month     = Jun,
  day       = {30},
  url       = {http://www.roboticsconference.org/},
  abstract  = {The focus of robotics research is shifting from industrial robots to robots working in daily situations, and one of the most important issues is to develop autonomous social robots capable of interacting with and living together with humans, i.e., robots symbiotic with humans. The aim of this workshop is to introduce research activities in "Symbiotic Human-Robot Interaction" and discuss the future challenges in this research area. One of the goals of this research is to provide communication support for people, such as communication care support robots for elderly people, which is as important as physical support in elderly care. Another aim is to offer a framework for understanding what a human is, by using robots as communication stimulus input devices in actual situations. In this workshop, we will introduce research activities using communication robots, along with a demonstration of an android, one of the most advanced communication robots. We will discuss the future of everyday robots, the key technologies required to make them true companions living together with us, and the ethical and social issues related to this topic.},
}
Hiroshi Ishiguro, "Androids, AI and the Future of Human Creativity", In Cannes Lions 2018, Palais des Festivals, Cannes, June, 2018.
Abstract: In this talk, the speaker will talk about the basic ideas on interactive robots and discuss our future life.
BibTeX:
@Inproceedings{Ishiguro2018b,
  author    = {Hiroshi Ishiguro},
  title     = {Androids, AI and the Future of Human Creativity},
  booktitle = {Cannes Lions 2018},
  year      = {2018},
  address   = {Palais des Festivals, Cannes},
  month     = Jun,
  day       = {18},
  url       = {https://www.canneslions.com},
  abstract  = {In this talk, the speaker will talk about the basic ideas on interactive robots and discuss our future life.},
}
石黒浩, "人と関わるロボットの研究開発", 第8回 CiNetシンポジウム, ナレッジキャピタル, 大阪, June, 2018.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。
BibTeX:
@Inproceedings{石黒浩2018g,
  author    = {石黒浩},
  title     = {人と関わるロボットの研究開発},
  booktitle = {第8回 CiNetシンポジウム},
  year      = {2018},
  address   = {ナレッジキャピタル, 大阪},
  month     = Jun,
  day       = {27},
  url       = {https://cinet.jp/nict180627/#ttl01},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。},
}
Hiroshi Ishiguro, "Connecting with robots", In and& festival, Leuven, Belgium, May, 2018.
Abstract: Hiroshi believes that since we are hardwired to interact with and place our faith in humans, the more humanlike we can make a robot appear, the more open we'll be to sharing our lives with it. Toward this end, his teams are pioneering a young field of research called human-robot interaction, a hybrid discipline that combines engineering, AI, social psychology and cognitive science. Would you trust robots to play a significant role in our future cities? Analyzing and cultivating our evolving relationship with robots, Hiroshi seeks to understand why and when we're willing to interact with, and maybe even feel affection for, a machine. And with each android he produces, Ishiguro believes he is moving closer to building that trust.
BibTeX:
@Inproceedings{Ishiguro2018a,
  author    = {Hiroshi Ishiguro},
  title     = {Connecting with robots},
  booktitle = {and\& festival},
  year      = {2018},
  address   = {Leuven, Belgium},
  month     = May,
  day       = {3},
  url       = {https://www.andleuven.com/en/program/summit/prof-hiroshi-ishiguro},
  abstract  = {Hiroshi believes that since we are hardwired to interact with and place our faith in humans, the more humanlike we can make a robot appear, the more open we'll be to sharing our lives with it. Toward this end, his teams are pioneering a young field of research called human-robot interaction, a hybrid discipline that combines engineering, AI, social psychology and cognitive science. 
Would you trust robots to play a significant role in our future cities? Analyzing and cultivating our evolving relationship with robots, Hiroshi seeks to understand why and when we're willing to interact with, and maybe even feel affection for, a machine. And with each android he produces, Ishiguro believes he is moving closer to building that trust.},
}
Hidenobu Sumioka, "Social touch in human-human telecommunication mediated by a robot", In IoT Enabling Sensing/Network/AI and Photonics Conference 2018 (IoT-SNAP2018), Pacifico Yokohama, Kanagawa, April, 2018.
Abstract: We present how virtual physical contact mediated by an artificial entity affects our quality of life through human-human telecommunication, focusing on elderly care and education.
BibTeX:
@Inproceedings{Sumioka2018,
  author    = {Hidenobu Sumioka},
  title     = {Social touch in human-human telecommunication mediated by a robot},
  booktitle = {IoT Enabling Sensing/Network/AI and Photonics Conference 2018 (IoT-SNAP2018)},
  year      = {2018},
  address   = {Pacifico Yokohama, Kanagawa},
  month     = Apr,
  day       = {24-27},
  url       = {http://iot-snap.opicon.jp/},
  abstract  = {We present how virtual physical contact mediated by an artificial entity affects our quality of life through human-human telecommunication, focusing on elderly care and education.},
}
石黒浩, "CVEM特別講演1", 第22回日本心血管内分泌代謝学会学術総会(CVEM2018), フェニックス・シーガイア・リゾート, 宮崎, April, 2018.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。
BibTeX:
@Inproceedings{石黒浩2018e,
  author    = {石黒浩},
  title     = {CVEM特別講演1},
  booktitle = {第22回日本心血管内分泌代謝学会学術総会(CVEM2018)},
  year      = {2018},
  address   = {フェニックス・シーガイア・リゾート, 宮崎},
  month     = Apr,
  day       = {28},
  url       = {http://www.c-linkage.co.jp/cvem2018/index.html},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。},
}
石黒浩, "ロボットと未来社会", 関西NEC C&Cシステムユーザー会 2018年度総会, ホテルモントレ ラ・スール大阪, 大阪, April, 2018.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。
BibTeX:
@Inproceedings{石黒浩2018f,
  author    = {石黒浩},
  title     = {ロボットと未来社会},
  booktitle = {関西NEC C\&Cシステムユーザー会 2018年度総会},
  year      = {2018},
  address   = {ホテルモントレ ラ・スール大阪, 大阪},
  month     = Apr,
  day       = {13},
  url       = {https://jpn.nec.com/nua/kansai/kaigou/2018/180413/},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。},
}
石黒浩, "人と関わるロボットと未来社会", 近江の国ミライ会議, 希望が丘 青年の城, 滋賀, February, 2018.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。
BibTeX:
@Inproceedings{石黒浩2018a,
  author    = {石黒浩},
  title     = {人と関わるロボットと未来社会},
  booktitle = {近江の国ミライ会議},
  year      = {2018},
  address   = {希望が丘 青年の城, 滋賀},
  month     = Feb,
  day       = {24},
  url       = {https://ssckaname.wixsite.com/miraikaigi2018},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。},
}
Hiorshi Ishiguro, "Studies on Interactive Robots", In International Research Conference Robophilosophy 2018, Vienna, Austria, February, 2018.
Abstract: In this talk, he will introduce interactive and communicative personal robots and androids and discuss the technologies and scientific issues. Especially, he will discuss on intention/desire, experiences, emotion and consciousness of the robots and androids.
BibTeX:
@Inproceedings{Ishiguro2018,
  author    = {Hiroshi Ishiguro},
  title     = {Studies on Interactive Robots},
  booktitle = {International Research Conference Robophilosophy 2018},
  year      = {2018},
  address   = {Vienna, Austria},
  month     = Feb,
  day       = {15},
  url       = {http://conferences.au.dk/robo-philosophy-2018-at-the-university-of-vienna/},
  abstract  = {In this talk, he will introduce interactive and communicative personal robots and androids and discuss the technologies and scientific issues. In particular, he will discuss the intention/desire, experiences, emotion, and consciousness of the robots and androids.},
}
石黒浩, "ロボットが拓く未来社会", スズケン市民講座, NHK文化センター梅田教室, 大阪, January, 2018.
Abstract: 株式会社スズケンとNHK文化センターとの共催で「スズケン市民講座」を開催。ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。
BibTeX:
@Inproceedings{石黒浩2018,
  author    = {石黒浩},
  title     = {ロボットが拓く未来社会},
  booktitle = {スズケン市民講座},
  year      = {2018},
  address   = {NHK文化センター梅田教室, 大阪},
  month     = Jan,
  day       = {28},
  url       = {https://www.nhk-cul.co.jp/programs/program_1138087.html},
  abstract  = {株式会社スズケンとNHK文化センターとの共催で「スズケン市民講座」を開催。ロボット研究の成果の紹介を交え、ロボットと未来社会について語る。},
}
石黒浩, "ロボットから見えてくる「人らしさ」", ケアとソリューション 東京フォーラム「ケアとテクノロジー ~人間らしい思いやりの技術~」, FORUM 8, 東京, January, 2018.
Abstract: 介護・介助や子育てなど、ケアの現場にテクノロジーが入りつつあり、「ケア」という気づかいや思いやりの行為が人の手から離れていくかもしれない現代だからこそ「人らしさ」が問いなおされています。ロボットと人の研究- 人の存在感とは一体何か、人とは何か- から見えてくる「人らしさ」について再考します。
BibTeX:
@Inproceedings{石黒浩2018b,
  author    = {石黒浩},
  title     = {ロボットから見えてくる「人らしさ」},
  booktitle = {ケアとソリューション 東京フォーラム「ケアとテクノロジー ~人間らしい思いやりの技術~」},
  year      = {2018},
  address   = {FORUM 8, 東京},
  month     = Jan,
  day       = {13},
  url       = {http://tanpoponoye.org/news/carecare/2017/12/00036958/},
  abstract  = {介護・介助や子育てなど、ケアの現場にテクノロジーが入りつつあり、「ケア」という気づかいや思いやりの行為が人の手から離れていくかもしれない現代だからこそ「人らしさ」が問いなおされています。ロボットと人の研究- 人の存在感とは一体何か、人とは何か- から見えてくる「人らしさ」について再考します。},
}
石黒浩, "ロボットと未来社会", JREA関西支部講演会, グランフロント大阪, 大阪, January, 2018.
Abstract: 鉄道におけるロボット/アンドロイド活用について語る。
BibTeX:
@Inproceedings{石黒浩2018c,
  author    = {石黒浩},
  title     = {ロボットと未来社会},
  booktitle = {JREA関西支部講演会},
  year      = {2018},
  address   = {グランフロント大阪, 大阪},
  month     = Jan,
  day       = {22},
  url       = {http://www.jrea.or.jp/katsudou/shibu_katsudou/kansai.html},
  abstract  = {鉄道におけるロボット/アンドロイド活用について語る。},
}
住岡英信, "触れ合い対話型ロボットを用いたコミュニケーション支援", 第22回関西大学先端科学技術シンポジウム, 関西大学千里山キャンパス100周年記念会館, 大阪, January, 2018.
Abstract: 本学の教育研究活動並びに社会連携事業の推進のため、本シンポジウムは、関西大学先端科学技術推進機構における1年間の研究成果の発表の場として開催されており、研究者のみならず、多くの企業関係者が参加している。 今年度は「人工知能との共創 -知・人・社会-」をメインテーマに特別講演及び当機構内研究部門による講演が行われ、2日間にわたり24以上のセッションのプログラムが予定されている。 その場において、対話型ロボットを用いたコミュニケーション支援に関する講演を実施する。
BibTeX:
@Inproceedings{住岡英信2018,
  author    = {住岡英信},
  title     = {触れ合い対話型ロボットを用いたコミュニケーション支援},
  booktitle = {第22回関西大学先端科学技術シンポジウム},
  year      = {2018},
  address   = {関西大学千里山キャンパス100周年記念会館, 大阪},
  month     = Jan,
  day       = {18-19},
  url       = {http://www.kansai-u.ac.jp/ordist/symposium/},
  abstract  = {本学の教育研究活動並びに社会連携事業の推進のため、本シンポジウムは、関西大学先端科学技術推進機構における1年間の研究成果の発表の場として開催されており、研究者のみならず、多くの企業関係者が参加している。 今年度は「人工知能との共創 -知・人・社会-」をメインテーマに特別講演及び当機構内研究部門による講演が行われ、2日間にわたり24以上のセッションのプログラムが予定されている。 その場において、対話型ロボットを用いたコミュニケーション支援に関する講演を実施する。},
}
Hiroshi Ishiguro, "Conversational Robots and the Fundamental Issues", In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2017), Okinawa, Japan, December, 2017.
Abstract: This talk introduces the robots and discusses fundamental issues. In particular, it focuses on the feeling of presence, so-called "sonzaikan" in Japanese, and dialogue as the fundamental issues.
BibTeX:
@Inproceedings{Ishiguro2017k,
  author    = {Hiroshi Ishiguro},
  title     = {Conversational Robots and the Fundamental Issues},
  booktitle = {2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2017)},
  year      = {2017},
  address   = {Okinawa, Japan},
  month     = Dec,
  day       = {20},
  url       = {https://asru2017.org/default.asp},
  abstract  = {This talk introduces the robots and discusses fundamental issues. In particular, it focuses on the feeling of presence, so-called "sonzaikan" in Japanese, and dialogue as the fundamental issues.},
}
石黒浩, "人と関わるロボットと未来社会", 2017 国際ロボット展, 東京ビッグサイト, 東京, November, 2017.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。
BibTeX:
@Inproceedings{石黒浩2017v,
  author    = {石黒浩},
  title     = {人と関わるロボットと未来社会},
  booktitle = {2017 国際ロボット展},
  year      = {2017},
  address   = {東京ビッグサイト, 東京},
  month     = Nov,
  day       = {29},
  url       = {http://biz.nikkan.co.jp/eve/irex/},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。},
}
Hiroshi Ishiguro, "Humanoid Robots and Our Future Society", In INCmty, Monterrey N.L., Mexico, November, 2017.
Abstract: Hiroshi Ishiguro is an innovator like no other in the world of robotics, redefining standards of quality and creativity in the field. His passion and dedication for the subject has led him to create robots called androids that resemble humans both physically and mentally, giving them a sense of realism like never before. The Intelligent Robotics Laboratory of the School of Engineering Sciences of Osaka University is the place where Ishiguro's ideas are born, developed and turned into reality.
BibTeX:
@Inproceedings{Ishiguro2017j,
  author    = {Hiroshi Ishiguro},
  title     = {Humanoid Robots and Our Future Society},
  booktitle = {INCmty},
  year      = {2017},
  address   = {Monterrey N.L., Mexico},
  month     = Nov,
  day       = {16},
  url       = {http://incmty.com/},
  abstract  = {Hiroshi Ishiguro is an innovator like no other in the world of robotics, redefining standards of quality and creativity in the field. His passion and dedication for the subject has led him to create robots called androids that resemble humans both physically and mentally, giving them a sense of realism like never before. The Intelligent Robotics Laboratory of the School of Engineering Sciences of Osaka University is the place where Ishiguro's ideas are born, developed and turned into reality.},
}
石黒浩, ""Connected Industries"時代における技術進化と人間の幸せ", G1経営者会議 2017, グロービス経営大学院, 東京, November, 2017.
Abstract: アンドロイド(人間酷似型ロボット)研究の世界的な権威、石黒氏は、「アンドロイドは人の心を映す鏡」だという。石黒氏の研究は、認知科学や脳科学、哲学にまで研究の幅を広げ「人間とはなにか」という真理の探究にも等しい。技術の進化、時代の変化に伴い、従来の常識、規則などあらゆる枠組みは壊され、新たな創造を繰り返す一方、「人は人を知るために生きている」ということは普遍ともいえる。"Connected Industries"時代における「人間とはなにか、人間の幸せとはなにか」を考え自分に問うことの重要性を語る。
BibTeX:
@Inproceedings{石黒浩2017t,
  author    = {石黒浩},
  title     = {"Connected Industries"時代における技術進化と人間の幸せ},
  booktitle = {G1経営者会議 2017},
  year      = {2017},
  address   = {グロービス経営大学院, 東京},
  month     = Nov,
  day       = {3},
  url       = {http://g1summit.com/g1executive/},
  abstract  = {アンドロイド(人間酷似型ロボット)研究の世界的な権威、石黒氏は、「アンドロイドは人の心を映す鏡」だという。石黒氏の研究は、認知科学や脳科学、哲学にまで研究の幅を広げ「人間とはなにか」という真理の探究にも等しい。技術の進化、時代の変化に伴い、従来の常識、規則などあらゆる枠組みは壊され、新たな創造を繰り返す一方、「人は人を知るために生きている」ということは普遍ともいえる。"Connected Industries"時代における「人間とはなにか、人間の幸せとはなにか」を考え自分に問うことの重要性を語る。},
}
石黒浩, "人と関わるロボットと未来社会", Converge 2017, ザ・ガーデンルーム 恵比寿, 東京, November, 2017.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。
BibTeX:
@Inproceedings{石黒浩2017s,
  author    = {石黒浩},
  title     = {人と関わるロボットと未来社会},
  booktitle = {Converge 2017},
  year      = {2017},
  address   = {ザ・ガーデンルーム 恵比寿, 東京},
  month     = Nov,
  day       = {22},
  url       = {http://event.converge2017.com/japan},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。},
}
石黒浩, "ロボットとの対話から生まれるビジネスチャンスとは?", TREND EXPO TOKYO 2017, ベルサール東京日本橋, 東京, November, 2017.
Abstract: マツコロイドなど人間に外見が酷似した数々のアンドロイドを開発・監修し、「ロボットとの対話が人間の意思決定に及ぼす影響」を研究テーマの一つに掲げている大阪大学の石黒浩教授。本講演では、石黒研究室が開発したロボットとの対話システムや、その対話システムを活用したさまざまな企業との実証実験から得られた知見を紹介する。実際にロボットが接客業務を担当したときに顧客の心情はどのように変化するのか? ロボットを通じて人間の心理を知ることは、新しいビジネスの創造にたくさんの示唆をもたらすはずだ。
BibTeX:
@Inproceedings{石黒浩2017u,
  author    = {石黒浩},
  title     = {ロボットとの対話から生まれるビジネスチャンスとは?},
  booktitle = {TREND EXPO TOKYO 2017},
  year      = {2017},
  address   = {ベルサール東京日本橋, 東京},
  month     = Nov,
  day       = {3},
  url       = {http://trendy.nikkeibp.co.jp/atcl/pickup/15/1008498/091500917/},
  abstract  = {マツコロイドなど人間に外見が酷似した数々のアンドロイドを開発・監修し、「ロボットとの対話が人間の意思決定に及ぼす影響」を研究テーマの一つに掲げている大阪大学の石黒浩教授。本講演では、石黒研究室が開発したロボットとの対話システムや、その対話システムを活用したさまざまな企業との実証実験から得られた知見を紹介する。実際にロボットが接客業務を担当したときに顧客の心情はどのように変化するのか? ロボットを通じて人間の心理を知ることは、新しいビジネスの創造にたくさんの示唆をもたらすはずだ。},
}
港隆史, "ヒューマノイドロボットと共生する社会へ", 兵庫県立加古川東高校SSH講演会, 加古川市民会館, 兵庫, November, 2017.
Abstract: 石黒ERATOプロジェクトで取り組んでいる自律型対話アンドロイドの研究開発等の活動について紹介する.
BibTeX:
@Inproceedings{港隆史2017,
  author    = {港隆史},
  title     = {ヒューマノイドロボットと共生する社会へ},
  booktitle = {兵庫県立加古川東高校SSH講演会},
  year      = {2017},
  address   = {加古川市民会館, 兵庫},
  month     = Nov,
  day       = {27},
  url       = {http://www.hyogo-c.ed.jp/~kakohigashi-hs/},
  abstract  = {石黒ERATOプロジェクトで取り組んでいる自律型対話アンドロイドの研究開発等の活動について紹介する.},
}
住岡英信, "介護現場で働くコミュニケーションロボット", 隆生・HANAKO国際交流セミナー, リーガロイヤルホテル, 大阪, November, 2017.
Abstract: 本講演では、介護現場で対話型ロボットがどのように役立つかについていくつかの研究成果を元に紹介する。
BibTeX:
@Inproceedings{住岡英信2017c,
  author    = {住岡英信},
  title     = {介護現場で働くコミュニケーションロボット},
  booktitle = {隆生・HANAKO国際交流セミナー},
  year      = {2017},
  address   = {リーガロイヤルホテル, 大阪},
  month     = Nov,
  day       = {22},
  url       = {http://www.smile-yume.com/corporateblog/%E9%9A%86%E7%94%9F%E3%83%BBhanako-%E5%9B%BD%E9%9A%9B%E4%BA%A4%E6%B5%81%E3%82%BB%E3%83%9F%E3%83%8A%E3%83%BC%E3%82%92%E9%96%8B%E5%82%AC%E3%81%84%E3%81%9F%E3%81%97%E3%81%BE%E3%81%99/},
  abstract  = {本講演では、介護現場で対話型ロボットがどのように役立つかについていくつかの研究成果を元に紹介する。},
}
石黒浩, "汎用人工知能の現状", AI and Society Symposium, 虎ノ門ヒルズ, 東京, October, 2017.
Abstract: これから人類が人工知能の能力を向上させていく道筋について考え、社会にどのようなパラダイムの転換が引き起こされるのかについて考察します。そして、汎用人工知能を使うことで、人類にとって有益な未来を実現していくには何が大事なのかを議論します。AIが社会に浸透していく過程で、人間性の欠如などの課題はあるが、AIは同時に新たな機会をもたらします。このセッションでは、思いやりのあるロボットから道徳の問題や生きることの価値に対する考え方の再評価など、さまざまな視点からAIが社会にもたらす新しい可能性について探ります。
BibTeX:
@Inproceedings{石黒浩2017r,
  author    = {石黒浩},
  title     = {汎用人工知能の現状},
  booktitle = {AI and Society Symposium},
  year      = {2017},
  address   = {虎ノ門ヒルズ, 東京},
  month     = Oct,
  day       = {11},
  abstract  = {これから人類が人工知能の能力を向上させていく道筋について考え、社会にどのようなパラダイムの転換が引き起こされるのかについて考察します。そして、汎用人工知能を使うことで、人類にとって有益な未来を実現していくには何が大事なのかを議論します。AIが社会に浸透していく過程で、人間性の欠如などの課題はあるが、AIは同時に新たな機会をもたらします。このセッションでは、思いやりのあるロボットから道徳の問題や生きることの価値に対する考え方の再評価など、さまざまな視点からAIが社会にもたらす新しい可能性について探ります。},
}
石黒浩, "人と関わるロボットと未来社会", 立命館大学経済学部同窓会 講演会, ホテルグランヴィア京都, 京都, October, 2017.
BibTeX:
@Inproceedings{石黒浩2017q,
  author    = {石黒浩},
  title     = {人と関わるロボットと未来社会},
  booktitle = {立命館大学経済学部同窓会 講演会},
  year      = {2017},
  address   = {ホテルグランヴィア京都, 京都},
  month     = Oct,
  day       = {21},
  url       = {http://www.ritsumei.ac.jp/acd/cg/ec/dousoukaihp/web/event.html},
}
石黒浩, "コミュニケーションロボットがもたらすイノベーションの可能性", ICTイノベーションフォーラム2017, 幕張メッセ, 千葉, October, 2017.
Abstract: 人と関わるロボットの研究開発がどのようなイノベーションへの影響をもたらすかについて語る。
BibTeX:
@Inproceedings{石黒浩2017p,
  author    = {石黒浩},
  title     = {コミュニケーションロボットがもたらすイノベーションの可能性},
  booktitle = {ICTイノベーションフォーラム2017},
  year      = {2017},
  address   = {幕張メッセ, 千葉},
  month     = Oct,
  day       = {3},
  url       = {http://www.soumu.go.jp/menu_news/s-news/01tsushin03_02000221.html},
  abstract  = {人と関わるロボットの研究開発がどのようなイノベーションへの影響をもたらすかについて語る。},
}
石黒浩, "人と関わるロボットの研究開発は医療分野に何をもたらすのか?", 第56回全国自治体病院学会, 幕張メッセ, 千葉, October, 2017.
Abstract: 人と関わるロボットの研究開発が医療分野においてどういった影響をもたらすかについて語る。
BibTeX:
@Inproceedings{石黒浩2017o,
  author    = {石黒浩},
  title     = {人と関わるロボットの研究開発は医療分野に何をもたらすのか?},
  booktitle = {第56回全国自治体病院学会},
  year      = {2017},
  address   = {幕張メッセ, 千葉},
  month     = Oct,
  day       = {20},
  url       = {http://www2.c-linkage.co.jp/56jmha/},
  abstract  = {人と関わるロボットの研究開発が医療分野においてどういった影響をもたらすかについて語る。},
}
Hiroshi Ishiguro, "Robotics for understanding humans", In 第114回医学物理学会学術大会, 8th Japan-Korea Joint Meeting on Medical Physics, 大阪大学コンベンションセンター, 大阪, September, 2017.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。
BibTeX:
@Inproceedings{Ishiguro2017i,
  author    = {Hiroshi Ishiguro},
  title     = {Robotics for understanding humans},
  booktitle = {第114回医学物理学会学術大会, 8th Japan-Korea Joint Meeting on Medical Physics},
  year      = {2017},
  address   = {大阪大学コンベンションセンター, 大阪},
  month     = Sep,
  day       = {16},
  url       = {http://www.jsmp.org/conf/114/index.html},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。},
}
Hiroshi Ishiguro, "Androids, Robots, and Our Future Life", In 2970°The Boiling Point, The Arts Centre Gold Coast, Australia, September, 2017.
Abstract: In this talk, the speaker will talk about the basic ideas on interactive robots and give a demonstration with the robot.
BibTeX:
@Inproceedings{Ishiguro2017g,
  author    = {Hiroshi Ishiguro},
  title     = {Androids, Robots, and Our Future Life},
  booktitle = {2970°The Boiling Point},
  year      = {2017},
  address   = {The Arts Centre Gold Coast, Australia},
  month     = Sep,
  day       = {9},
  url       = {http://www.2970degrees.com.au/},
  abstract  = {In this talk, the speaker will talk about the basic ideas on interactive robots and give a demonstration with the robot.},
}
Hiroshi Ishiguro, "Studies on Interactive Robots - Principles of conversation", In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver Convention Centre, Canada, September, 2017.
Abstract: This talk introduces the robots and androids and discusses our future society supported by them. In addition, this talk discusses the fundamentals of human-robot interaction and conversation, focusing on the feeling of presence given by robots and androids and on conversations with two robots and touch panels.
BibTeX:
@Inproceedings{Ishiguro2017h,
  author    = {Hiroshi Ishiguro},
  title     = {Studies on Interactive Robots - Principles of conversation},
  booktitle = {2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017)},
  year      = {2017},
  address   = {Vancouver Convention Centre, Canada},
  month     = Sep,
  day       = {26},
  url       = {http://www.iros2017.org/},
  abstract  = {This talk introduces the robots and androids and discusses our future society supported by them. In addition, this talk discusses the fundamentals of human-robot interaction and conversation, focusing on the feeling of presence given by robots and androids and on conversations with two robots and touch panels.},
}
石黒浩, "未来社会を支える知的システムの実現", 第18回アジア太平洋フォーラム・淡路会議, 兵庫県立淡路夢舞台国際会議場メインホール, 兵庫, August, 2017.
Abstract: 「テクノロジー・カルチャー・フューチャー」をメインテーマに、「技術」と「文化」を軸に真の豊かさを兼ね備えたアジア太平洋地域の未来を切り拓くにはどうすればよいのか、技術と文化の力はどのような未来社会を構築し得るものなのか幅広い観点から考える。
BibTeX:
@Inproceedings{石黒浩2017m,
  author    = {石黒浩},
  title     = {未来社会を支える知的システムの実現},
  booktitle = {第18回アジア太平洋フォーラム・淡路会議},
  year      = {2017},
  address   = {兵庫県立淡路夢舞台国際会議場メインホール, 兵庫},
  month     = Aug,
  day       = {4},
  url       = {http://www.hemri21.jp/awaji-conf/project/symposium/2017/index_announce.html},
  abstract  = {「テクノロジー・カルチャー・フューチャー」をメインテーマに、「技術」と「文化」を軸に真の豊かさを兼ね備えたアジア太平洋地域の未来を切り拓くにはどうすればよいのか、技術と文化の力はどのような未来社会を構築し得るものなのか幅広い観点から考える。},
}
住岡英信, "脳情報とホルモン", 2017年第1回B3C会議, JST東京本部別館, 東京, July, 2017.
Abstract: 本研究では脳情報とホルモンについて対話ロボットを用いた実験の結果にもとづきながら紹介する。
BibTeX:
@Inproceedings{住岡英信2017b,
  author    = {住岡英信},
  title     = {脳情報とホルモン},
  booktitle = {2017年第1回B3C会議},
  year      = {2017},
  address   = {JST東京本部別館, 東京},
  month     = Jul,
  day       = {7},
  abstract  = {本研究では脳情報とホルモンについて対話ロボットを用いた実験の結果にもとづきながら紹介する。},
}
石黒浩, "AI・ロボット・クラウドはバズワードを脱皮できるか/したか?", IT連携フォーラムOACIS 第32回シンポジウム「情報技術が生み出す人間と機械の共創」, 大阪大学中之島センター, 大阪, July, 2017.
Abstract: 「機械/情報と人間の共生」というテーマで4名の講演者(産業界2名,大学・教育機関2名)がパネル討論を行う。
BibTeX:
@Inproceedings{石黒浩2017l,
  author    = {石黒浩},
  title     = {AI・ロボット・クラウドはバズワードを脱皮できるか/したか?},
  booktitle = {IT連携フォーラムOACIS 第32回シンポジウム「情報技術が生み出す人間と機械の共創」},
  year      = {2017},
  address   = {大阪大学中之島センター, 大阪},
  month     = Jul,
  day       = {7},
  url       = {http://www.oacis.jp/symposium/symposium170707.htm},
  abstract  = {「機械/情報と人間の共生」というテーマで4名の講演者(産業界2名,大学・教育機関2名)がパネル討論を行う。},
}
石黒浩, "アンドロイドと近未来社会", 夕学五十講, 丸ビルホール, 東京, June, 2017.
Abstract: 本講演ではまず、人が人やロボットに関する存在感の基本問題と対話の本質について議論を行い、ロボットと人との関わりに関する理解を深める。次に、これらの理解に基づき開発したロボットの具体的な応用について議論をする。特に児童の生活支援や学習支援におけるロボットの応用可能性について、実証実験の結果を交えながら紹介する。最後に、今後5年以内に実現できるであろう対話型ロボットを紹介して、来たるロボット社会が我々に何をもたらすかを議論する。
BibTeX:
@Inproceedings{石黒浩2017f,
  author    = {石黒浩},
  title     = {アンドロイドと近未来社会},
  booktitle = {夕学五十講},
  year      = {2017},
  address   = {丸ビルホール, 東京},
  month     = Jun,
  day       = {29},
  url       = {https://www.sekigaku.net/Sekigaku/Default/Schedule/LectureList.aspx},
  abstract  = {本講演ではまず、人が人やロボットに関する存在感の基本問題と対話の本質について議論を行い、ロボットと人との関わりに関する理解を深める。次に、これらの理解に基づき開発したロボットの具体的な応用について議論をする。特に児童の生活支援や学習支援におけるロボットの応用可能性について、実証実験の結果を交えながら紹介する。最後に、今後5年以内に実現できるであろう対話型ロボットを紹介して、来たるロボット社会が我々に何をもたらすかを議論する。},
}
石黒浩, "ノンタイトル", ITisKANSAI 5周年 スペシャルトーク, 中央会計株式会社, 大阪, June, 2017.
Abstract: イノヴェイションの仕掛人にしてビジネスデザイナー、monogotoのCEO・濱口秀司氏と、「世界の100人の生きている天才のランキング」で日本人最高位の26位に選出されたあの世界的権威石黒浩教授の二人の天才による対談。
BibTeX:
@Inproceedings{石黒浩2017j,
  author    = {石黒浩},
  title     = {ノンタイトル},
  booktitle = {ITisKANSAI 5周年 スペシャルトーク},
  year      = {2017},
  address   = {中央会計株式会社, 大阪},
  month     = Jun,
  day       = {17},
  url       = {http://itiskansai.com/v47/},
  abstract  = {イノヴェイションの仕掛人にしてビジネスデザイナー、monogotoのCEO・濱口秀司氏と、「世界の100人の生きている天才のランキング」で日本人最高位の26位に選出されたあの世界的権威石黒浩教授の二人の天才による対談。},
}
Hiroshi Ishiguro, "Studies on humanlike robots", In Computer Graphics International 2017 (CGI2017), Keio University Hiyoshi Campus, Yokohama, June, 2017.
Abstract: In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.
BibTeX:
@Inproceedings{Ishiguro2017e,
  author    = {Hiroshi Ishiguro},
  title     = {Studies on humanlike robots},
  booktitle = {Computer Graphics International 2017 (CGI2017)},
  year      = {2017},
  address   = {Keio University Hiyoshi Campus, Yokohama},
  month     = Jun,
  url       = {http://fj.ics.keio.ac.jp/cgi17/},
  abstract  = {In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.},
}
石黒浩, "コミュニケーションロボットの可能性", 第2回 次世代の人工知能技術に関する合同シンポジウム, 大阪大学コンベンションセンター, 大阪, May, 2017.
Abstract: 人工知能技術の研究開発と社会実装を加速化するために政府の司令塔として設置した「人工知能技術戦略会議」にて取りまとめた「人工知能の研究開発目標と産業化のロードマップ」の検討を踏まえて、最近の人工知能技術の動向や、研究開発、社会実装、人材育成、データ整備、ベンチャー支援等に関して議論する。
BibTeX:
@Inproceedings{石黒浩2017k,
  author    = {石黒浩},
  title     = {コミュニケーションロボットの可能性},
  booktitle = {第2回 次世代の人工知能技術に関する合同シンポジウム},
  year      = {2017},
  address   = {大阪大学コンベンションセンター, 大阪},
  month     = May,
  day       = {22},
  url       = {https://www.d-wks.net/nict170522/},
  abstract  = {人工知能技術の研究開発と社会実装を加速化するために政府の司令塔として設置した「人工知能技術戦略会議」にて取りまとめた「人工知能の研究開発目標と産業化のロードマップ」の検討を踏まえて、最近の人工知能技術の動向や、研究開発、社会実装、人材育成、データ整備、ベンチャー支援等に関して議論する。},
}
石黒浩, "人間型ロボットと未来の社会", 第55回 IBMユーザー・シンポジウム, 国立京都国際会館, 京都, May, 2017.
Abstract: 近年の人口減少・少子高齢化など社会の急激な変化に伴い、教育や福祉・介護などの分野でのロボットの活用に、一層注目が集まっている。特に介護の分野では、アンドロイドによる認知症改善などの効果が期待されており、このように、人間とロボットが共生する社会実現への期待が日増しに膨らむ中、今回、アンドロイド研究の第一人者で世界的にも注目を集める、石黒浩・大阪大学大学院基礎工学研究科教授が「人間型ロボットと未来の社会」をテーマに語る。
BibTeX:
@Inproceedings{石黒浩2017i,
  author    = {石黒浩},
  title     = {人間型ロボットと未来の社会},
  booktitle = {第55回 IBMユーザー・シンポジウム},
  year      = {2017},
  address   = {国立京都国際会館, 京都},
  month     = May,
  day       = {19},
  url       = {http://www.uken.or.jp/symp/symp55/program/closing.shtml},
  abstract  = {近年の人口減少・少子高齢化など社会の急激な変化に伴い、教育や福祉・介護などの分野でのロボットの活用に、一層注目が集まっている。特に介護の分野では、アンドロイドによる認知症改善などの効果が期待されており、このように、人間とロボットが共生する社会実現への期待が日増しに膨らむ中、今回、アンドロイド研究の第一人者で世界的にも注目を集める、石黒浩・大阪大学大学院基礎工学研究科教授が「人間型ロボットと未来の社会」をテーマに語る。},
}
石黒浩, "人間型ロボットと未来社会", CONTOUR 2017, 講談社講堂, 東京, April, 2017.
Abstract: コントゥールは「輪郭」「アウトライン」などを意味する英語・フランス語の言葉。「リベラルアーツ」をテーマに「テクノロジー」について語り、「知」の輪郭を創る。
BibTeX:
@Inproceedings{石黒浩2017e,
  author    = {石黒浩},
  title     = {人間型ロボットと未来社会},
  booktitle = {CONTOUR 2017},
  year      = {2017},
  address   = {講談社講堂, 東京},
  month     = Apr,
  day       = {15},
  url       = {http://courrierjapon026.peatix.com/},
  abstract  = {コントゥールは「輪郭」「アウトライン」などを意味する英語・フランス語の言葉。「リベラルアーツ」をテーマに「テクノロジー」について語り、「知」の輪郭を創る。},
}
Hiroshi Ishiguro, "Studies on Humanlike Robots", In Academia Film Olomouc (AFO52), Olomouc, Czech, April, 2017.
Abstract: In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.
BibTeX:
@Inproceedings{Ishiguro2017f,
  author    = {Hiroshi Ishiguro},
  title     = {Studies on Humanlike Robots},
  booktitle = {Academia Film Olomouc (AFO52)},
  year      = {2017},
  address   = {Olomouc, Czech},
  month     = Apr,
  day       = {28},
  url       = {http://www.afo.cz/programme/3703/},
  abstract  = {In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.},
}
石黒浩, "アンドロイドと未来社会", 初等社 創立40周年特別講演会, 国際文化会館, 東京, April, 2017.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。
BibTeX:
@Inproceedings{石黒浩2017h,
  author    = {石黒浩},
  title     = {アンドロイドと未来社会},
  booktitle = {初等社 創立40周年特別講演会},
  year      = {2017},
  address   = {国際文化会館, 東京},
  month     = Apr,
  day       = {21},
  url       = {http://www.shotousha.com/news/},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。},
}
Hiroshi Ishiguro, "Humans and Robots in a Free-for-All Discussion", In The South by Southwest (SXSW) Conference & Festivals 2017, Austin Convention Center, USA, March, 2017.
Abstract: Robots now equal, if not surpass, humans in many skill sets: games, driving, and musical performance. Now they are able to maintain logical conversations rather than merely responding to simple questions. Famed roboticist Dr. Ishiguro, who created an android that is the spitting image of himself, Japanese communication giant NTT's Dr. Higashinaka, who spearheads the development of the latest spoken dialogue technology, and two robots will engage in lively banter. Are robots now our conversational companions?
BibTeX:
@Inproceedings{Ishiguro2017c,
  author    = {Hiroshi Ishiguro},
  title     = {Humans and Robots in a Free-for-All Discussion},
  booktitle = {The South by Southwest (SXSW) Conference \& Festivals 2017},
  year      = {2017},
  address   = {Austin Convention Center, USA},
  month     = Mar,
  day       = {12},
  url       = {http://schedule.sxsw.com/2017/events/PP95381},
  abstract  = {Robots now equal, if not surpass, humans in many skill sets: games, driving, and musical performance. Now they are able to maintain logical conversations rather than merely responding to simple questions. Famed roboticist Dr. Ishiguro, who created an android that is the spitting image of himself, Japanese communication giant NTT's Dr. Higashinaka, who spearheads the development of the latest spoken dialogue technology, and two robots will engage in lively banter. Are robots now our conversational companions?},
}
Hiroshi Ishiguro, "AI, Labour, Creativity and Authorship", In AI in Asia: AI for Social Good, Waseda University, Tokyo, March, 2017.
Abstract: In this talk, the speaker discusses AI (artificial intelligence) and humanoid robots and how they will affect society.
BibTeX:
@Inproceedings{Ishiguro2017a,
  author    = {Hiroshi Ishiguro},
  title     = {AI, Labour, Creativity and Authorship},
  booktitle = {AI in Asia: AI for Social Good},
  year      = {2017},
  address   = {Waseda University, Tokyo},
  month     = Mar,
  day       = {6},
  url       = {https://www.digitalasiahub.org/2017/02/27/ai-in-asia-ai-for-social-good/},
  abstract  = {In this talk, the speaker discusses AI (artificial intelligence) and humanoid robots and how they will affect society.},
}
Hiroshi Ishiguro, "Androids, Robots, and Our Future Life", In CeBIT 2017, Hannover, Germany, March, 2017.
Abstract: We, humans, have an innate brain function to recognize humans. Therefore, humanlike robots, androids, can be ideal information media for human-robot/computer interaction. In this talk, the speaker introduces the robots developed in his laboratories and their practical applications, and discusses how robots will change our lives in the future.
BibTeX:
@Inproceedings{Ishiguro2017b,
  author    = {Hiroshi Ishiguro},
  title     = {Androids, Robots, and Our Future Life},
  booktitle = {CeBIT 2017},
  year      = {2017},
  address   = {Hannover, Germany},
  month     = Mar,
  day       = {21},
  url       = {http://www.cebit.de/en/},
  abstract  = {We, humans, have an innate brain function to recognize humans. Therefore, humanlike robots, androids, can be ideal information media for human-robot/computer interaction. In this talk, the speaker introduces the robots developed in his laboratories and their practical applications, and discusses how robots will change our lives in the future.},
}
Hiroshi Ishiguro, "Uncanny Valleys: Thinking and Feeling in the Age of Synthetic Humans", In USC Visions and Voices, Doheny Memorial Library, USA, March, 2017.
Abstract: A discussion with leading robotics experts, including Hiroshi Ishiguro, Yoshio Matsumoto, Travis Deyle, and Jonathan Gratch of the USC Institute for Creative Technologies, and science historian Jessica Riskin (The Restless Clock) about the future of artificial life and new pathways for human-machine interactions. You'll also have a chance to explore an interactive showcase that reveals how roboticists are replicating human locomotion, facial expressions, and intelligence as they assemble walking, talking, thinking, and feeling machines.
BibTeX:
@Inproceedings{Ishiguro2017d,
  author    = {Hiroshi Ishiguro},
  title     = {Uncanny Valleys: Thinking and Feeling in the Age of Synthetic Humans},
  booktitle = {USC Visions and Voices},
  year      = {2017},
  address   = {Doheny Memorial Library, USA},
  month     = Mar,
  day       = {23},
  url       = {https://calendar.usc.edu/event/uncanny_valleys_thinking_and_feeling_in_the_age_of_synthetic_humans#.WNDWQz96pGZ},
  abstract  = {A discussion with leading robotics experts, including Hiroshi Ishiguro, Yoshio Matsumoto, Travis Deyle, and Jonathan Gratch of the USC Institute for Creative Technologies, and science historian Jessica Riskin (The Restless Clock) about the future of artificial life and new pathways for human-machine interactions. You'll also have a chance to explore an interactive showcase that reveals how roboticists are replicating human locomotion, facial expressions, and intelligence as they assemble walking, talking, thinking, and feeling machines.},
}
石黒浩, "ロボット・AIは、テレビと生活者の関係をどう変えるのか!?", クリエイティブテクノロジーラボ, 汐留・日本テレビタワー, 東京, March, 2017.
Abstract: ロボットやAIがあたりまえのように活用される時代に、テレビは情報発信をどう進化させることができるのか?また生活者はそれをどう受け止めるのか?マツコロイドの開発でもお馴染み、大阪大学石黒教授が研究する「人間とロボットの対話」からそのヒントを探り、メディアのロボット・AI活用について学びます。
BibTeX:
@Inproceedings{石黒浩2017c,
  author    = {石黒浩},
  title     = {ロボット・AIは、テレビと生活者の関係をどう変えるのか!?},
  booktitle = {クリエイティブテクノロジーラボ},
  year      = {2017},
  address   = {汐留・日本テレビタワー, 東京},
  month     = Mar,
  day       = {7},
  url       = {http://www.ntv.co.jp/ctl/},
  abstract  = {ロボットやAIがあたりまえのように活用される時代に、テレビは情報発信をどう進化させることができるのか?また生活者はそれをどう受け止めるのか?マツコロイドの開発でもお馴染み、大阪大学石黒教授が研究する「人間とロボットの対話」からそのヒントを探り、メディアのロボット・AI活用について学びます。},
}
石黒浩, "人間型ロボットと未来の社会", フォーリン・プレスセンター(FPCJ)プレス・ブリーフィング, 公益財団法人フォーリン・プレスセンター, 東京, February, 2017.
Abstract: 近年の人口減少・少子高齢化など社会の急激な変化に伴い、教育や福祉・介護などの分野でのロボットの活用に、一層注目が集まっている。特に介護の分野では、アンドロイドによる認知症改善などの効果が期待されている。 そこで、期待が膨らむ人間とロボットが共生する社会実現をテーマに語る。
BibTeX:
@Inproceedings{石黒浩2017b,
  author    = {石黒浩},
  title     = {人間型ロボットと未来の社会},
  booktitle = {フォーリン・プレスセンター(FPCJ)プレス・ブリーフィング},
  year      = {2017},
  address   = {公益財団法人フォーリン・プレスセンター, 東京},
  month     = Feb,
  day       = {2},
  url       = {http://fpcj.jp/assistance/briefings_notice/p=51061/},
  abstract  = {近年の人口減少・少子高齢化など社会の急激な変化に伴い、教育や福祉・介護などの分野でのロボットの活用に、一層注目が集まっている。特に介護の分野では、アンドロイドによる認知症改善などの効果が期待されている。 そこで、期待が膨らむ人間とロボットが共生する社会実現をテーマに語る。},
}
石黒浩, "ロボットと協創する未来", 第1回けいはんなRC異分野交流セミナー, サントリーワールドリサーチセンター, 京都, February, 2017.
Abstract: 日々、進化を続けるロボットたちとプロデュースしたい未来とは? けいはんなRCで協創しうる“ロボットと人間の未来イメージ”を考えるセミナー
BibTeX:
@Inproceedings{石黒浩2017d,
  author    = {石黒浩},
  title     = {ロボットと協創する未来},
  booktitle = {第1回けいはんなRC異分野交流セミナー},
  year      = {2017},
  address   = {サントリーワールドリサーチセンター, 京都},
  month     = Feb,
  day       = {22},
  url       = {http://keihanna-rc.jp/news/20170222hubseminar1st_robo/},
  abstract  = {日々、進化を続けるロボットたちとプロデュースしたい未来とは? けいはんなRCで協創しうる“ロボットと人間の未来イメージ”を考えるセミナー},
}
石黒浩, "対話型ロボットの基本問題", NTT R&Dフォーラム 2017, NTT武蔵野研究開発センタ, 東京, February, 2017.
Abstract: 石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。
BibTeX:
@Inproceedings{石黒浩2017a,
  author    = {石黒浩},
  title     = {対話型ロボットの基本問題},
  booktitle = {NTT R\&Dフォーラム 2017},
  year      = {2017},
  address   = {NTT武蔵野研究開発センタ, 東京},
  month     = Feb,
  day       = {17},
  url       = {https://labevent.ecl.ntt.co.jp/forum2017/info/lecture.html},
  abstract  = {石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。},
}
Hiroshi Ishiguro, "Studies on humanlike robots", In IVA seminar, IVA Konferenscenter, Sweden, January, 2017.
Abstract: Most of us are used to seeing robots portrayed in movies, either as good or bad characters, with humanlike abilities: they can conduct dialog, interact with the environment and collaborate with humans and each other. How far are we from having such advanced systems among us, helping us with daily activities in our homes and at our jobs?
BibTeX:
@Inproceedings{Ishiguro2017,
  author    = {Hiroshi Ishiguro},
  title     = {Studies on humanlike robots},
  booktitle = {IVA seminar},
  year      = {2017},
  address   = {IVA Konferenscenter, Sweden},
  month     = Jan,
  day       = {24},
  url       = {http://www.iva.se/en/tidigare-event/social-and-humanlike-robots/},
  abstract  = {Most of us are used to seeing robots portrayed in movies, either as good or bad characters, with humanlike abilities: they can conduct dialog, interact with the environment and collaborate with humans and each other. How far are we from having such advanced systems among us, helping us with daily activities in our homes and at our jobs?},
}
石黒浩, "人と関わるロボットと未来社会", 第1回 ロボデックス, 東京ビッグサイト, 東京, January, 2017.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。
BibTeX:
@Inproceedings{石黒浩2017,
  author    = {石黒浩},
  title     = {人と関わるロボットと未来社会},
  booktitle = {第1回 ロボデックス},
  year      = {2017},
  address   = {東京ビッグサイト, 東京},
  month     = Jan,
  day       = {20},
  url       = {http://www.robodex.jp/seminar/},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。},
}
石黒浩, "アンロドイドと未来社会", OSAKAビジネスフェアものづくり展2016, マイドームおおさか, 大阪, November, 2016.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。
BibTeX:
@Inproceedings{石黒浩2016at,
  author    = {石黒浩},
  title     = {アンドロイドと未来社会},
  booktitle = {OSAKAビジネスフェアものづくり展2016},
  year      = {2016},
  address   = {マイドームおおさか, 大阪},
  month     = Nov,
  day       = {22},
  url       = {http://www.cgc-osaka.jp/event/29},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。},
}
Hiroshi Ishiguro, "Humanlike robots and our future society", In ROMAEUROPA FESTIVAL 2016, Auditorium MACRO, Italy, November, 2016.
Abstract: In this talk, the speaker discusses AI (Artificial Intelligence) and humanoid robots and how they will affect society in the near future.
BibTeX:
@Inproceedings{Ishiguro2016i,
  author    = {Hiroshi Ishiguro},
  title     = {Humanlike robots and our future society},
  booktitle = {ROMAEUROPA FESTIVAL 2016},
  year      = {2016},
  address   = {Auditorium MACRO, Italy},
  month     = Nov,
  day       = {24},
  url       = {http://romaeuropa.net/festival-2016/ishiguro/},
  abstract  = {In this talk, the speaker discusses AI (Artificial Intelligence) and humanoid robots and how they will affect society in the near future.},
}
石黒浩, "対話型ロボットと福祉", 第20回大分大学福祉フォーラム, 大分オアシスタワーホテル, 大分, November, 2016.
Abstract: ロボット研究の成果の紹介を交え、ロボットと福祉について講演する。
BibTeX:
@Inproceedings{石黒浩2016as,
  author    = {石黒浩},
  title     = {対話型ロボットと福祉},
  booktitle = {第20回大分大学福祉フォーラム},
  year      = {2016},
  address   = {大分オアシスタワーホテル, 大分},
  month     = Nov,
  day       = {12},
  url       = {http://www.hwrc.oita-u.ac.jp/},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと福祉について講演する。},
}
石黒浩, "人間型ロボットと未来社会", ビジネスEXPO「第30回 北海道 技術・ビジネス交流会」, アクセスサッポロ, 北海道, November, 2016.
Abstract: 石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。
BibTeX:
@Inproceedings{石黒浩2016ao,
  author    = {石黒浩},
  title     = {人間型ロボットと未来社会},
  booktitle = {ビジネスEXPO「第30回 北海道 技術・ビジネス交流会」},
  year      = {2016},
  address   = {アクセスサッポロ, 北海道},
  month     = Nov,
  day       = {11},
  url       = {http://www.business-expo.jp/},
  abstract  = {石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。},
}
石黒浩, "ロボットと未来社会", 山梨テクノICTメッセ2016, アイメッセ山梨, 山梨, November, 2016.
Abstract: 石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。
BibTeX:
@Inproceedings{石黒浩2016an,
  author    = {石黒浩},
  title     = {ロボットと未来社会},
  booktitle = {山梨テクノICTメッセ2016},
  year      = {2016},
  address   = {アイメッセ山梨, 山梨},
  month     = Nov,
  day       = {10},
  url       = {http://yamanashi-technoict.jp/overview},
  abstract  = {石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。},
}
Hiroshi Ishiguro, "What can we learn from very human-like robots & androids?", In Creative Innovation Asia Pacific 2016, Sofitel Melbourne on Collins, Australia, November, 2016.
Abstract: Interactive robots and their role as social partners for humans. Ishiguro will talk on the principles of conversation. Does a robot need voice recognition for verbal conversation? He will propose two approaches for realizing human-robot conversation without voice recognition.
BibTeX:
@Inproceedings{Ishiguro2016e,
  author    = {Hiroshi Ishiguro},
  title     = {What can we learn from very human-like robots \& androids?},
  booktitle = {Creative Innovation Asia Pacific 2016},
  year      = {2016},
  address   = {Sofitel Melbourne on Collins, Australia},
  month     = Nov,
  day       = {9},
  url       = {http://www.creativeinnovationglobal.com.au/Ci2016/},
  abstract  = {Interactive robots and their role as social partners for humans. Ishiguro will talk on the principles of conversation. Does a robot need voice recognition for verbal conversation? He will propose two approaches for realizing human-robot conversation without voice recognition.},
}
Hiroshi Ishiguro, "Robotics", In Microsoft Research Asia Faculty Summit 2016, Yonsei University, Korea, November, 2016.
Abstract: This session examines the future direction of robotics research. As a background movement, AI is sparking great interest and exploration. In order to realize AI in human society, it is necessary to embody such AI in physical forms. Under such circumstances, this session explores and clarifies the current direction of basic robotics research. A thorough examination of which research components are missing, and how such capability development affects the directional paths of research, will be highlighted.
BibTeX:
@Inproceedings{Ishiguro2016k,
  author    = {Hiroshi Ishiguro},
  title     = {Robotics},
  booktitle = {Microsoft Research Asia Faculty Summit 2016},
  year      = {2016},
  address   = {Yonsei University, Korea},
  month     = Nov,
  day       = {5},
  url       = {https://www.microsoft.com/en-us/research/event/asia-faculty-summit-2016/},
  abstract  = {This session examines the future direction of robotics research. As a background movement, AI is sparking great interest and exploration. In order to realize AI in human society, it is necessary to embody such AI in physical forms. Under such circumstances, this session explores and clarifies the current direction of basic robotics research. A thorough examination of which research components are missing, and how such capability development affects the directional paths of research, will be highlighted.},
}
石黒浩, "AIとVRによって本格化するロボットの普及", 第1回 NVCCテクノロジーセミナー「ロボットが街を歩く未来」, Global Business Hub Tokyo, 東京, October, 2016.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。
BibTeX:
@Inproceedings{石黒浩2016ar,
  author    = {石黒浩},
  title     = {AIとVRによって本格化するロボットの普及},
  booktitle = {第1回 NVCCテクノロジーセミナー「ロボットが街を歩く未来」},
  year      = {2016},
  address   = {Global Business Hub Tokyo, 東京},
  month     = Oct,
  day       = {27},
  url       = {http://1027nvcc.peatix.com/},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。},
}
石黒浩, "人間はアンドロイドと恋愛できるか?", 朝日地球会議 2016, イイノホール, 東京, October, 2016.
Abstract: 見た目は人間と変わらないアンドロイドが日常に溶け込むように存在する――ロボットとのコミュニケーションは社会に何をもたらすのか。人間とロボットとの境界を考えることは、人間とはいったい何かといった根源的、哲学的な課題を内包している。
BibTeX:
@Inproceedings{石黒浩2016ap,
  author    = {石黒浩},
  title     = {人間はアンドロイドと恋愛できるか?},
  booktitle = {朝日地球会議 2016},
  year      = {2016},
  address   = {イイノホール, 東京},
  month     = Oct,
  day       = {2},
  url       = {http://www.asahi.com/eco/awf/},
  abstract  = {見た目は人間と変わらないアンドロイドが日常に溶け込むように存在する――ロボットとのコミュニケーションは社会に何をもたらすのか。人間とロボットとの境界を考えることは、人間とはいったい何かといった根源的、哲学的な課題を内包している。},
}
石黒浩, "ロボット社会の到来とその可能性 ~ロボットと未来社会~", 平成28年度(第55回)公務能率研究会議 ~進化する自治体経営~ 第4分科会「IoT・コラボレーション」, NOMAホール, 東京, October, 2016.
Abstract: 石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。
BibTeX:
@Inproceedings{石黒浩2016ak,
  author    = {石黒浩},
  title     = {ロボット社会の到来とその可能性 ~ロボットと未来社会~},
  booktitle = {平成28年度(第55回)公務能率研究会議 ~進化する自治体経営~ 第4分科会「IoT・コラボレーション」},
  year      = {2016},
  address   = {NOMAホール, 東京},
  month     = Oct,
  day       = {21},
  url       = {https://www.noma-tokyo-gyosei.jp/seminar/konoken/#hi01},
  abstract  = {石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。},
}
石黒浩, "人間型ロボットと未来社会", Hitachi Social Innovation Forum 2016 TOKYO, 東京国際フォーラム, 東京, October, 2016.
Abstract: パーソナルコンピュータとスマートフォンに続いて、新たな情報メディアとなるのが、パーソナルロボットである。このパーソナルロボットは、パーソナルコンピュータが情報化社会をもたらしたのと同様に、ロボット化社会をもたらす可能性がある。本講演では講演者のこれまでの研究を紹介しながら、ロボット化社会の可能性と、その社会において我々人間が学ぶことを議論する。
BibTeX:
@Inproceedings{石黒浩2016aq,
  author    = {石黒浩},
  title     = {人間型ロボットと未来社会},
  booktitle = {Hitachi Social Innovation Forum 2016 TOKYO},
  year      = {2016},
  address   = {東京国際フォーラム, 東京},
  month     = Oct,
  day       = {28},
  url       = {http://hsiftokyo.hitachi/outline/index.html},
  abstract  = {パーソナルコンピュータとスマートフォンに続いて、新たな情報メディアとなるのが、パーソナルロボットである。このパーソナルロボットは、パーソナルコンピュータが情報化社会をもたらしたのと同様に、ロボット化社会をもたらす可能性がある。本講演では講演者のこれまでの研究を紹介しながら、ロボット化社会の可能性と、その社会において我々人間が学ぶことを議論する。},
}
石黒浩, "人間型ロボットと未来社会", 第20回 実験社会科学カンファレンス, 同志社大学今出川キャンパス, 京都, October, 2016.
Abstract: ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。
BibTeX:
@Inproceedings{石黒浩2016ag,
  author    = {石黒浩},
  title     = {人間型ロボットと未来社会},
  booktitle = {第20回 実験社会科学カンファレンス},
  year      = {2016},
  address   = {同志社大学今出川キャンパス, 京都},
  month     = Oct,
  day       = {29},
  url       = {http://www.geocities.jp/staguchi74/expss2016/201610expss20th_v4.pdf},
  abstract  = {ロボット研究の成果の紹介を交え、ロボットと未来社会について講演する。},
}
Hiroshi Ishiguro, "Studies on Humanoids and Androids", In CEDI 2016, University of Salamanca, Spain, September, 2016.
Abstract: Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. The operator recognizes the android body as his/her own body after talking with someone through the geminoid, and has a virtual feeling of being touched when someone touches the geminoid. However, the geminoid is not the ideal medium for everybody. For example, elderly people hesitate to talk with adult humans and adult androids. The question is: what is the ideal medium for everybody? In order to investigate this, we propose the minimum design of interactive humanoids, called Telenoid. The geminoid is the perfect copy of an existing person and is the maximum design of interactive humanoids. On the other hand, the minimum design looks like a human, but we cannot know its age or gender. Elderly people like to talk with the Telenoid. In this talk, we discuss the design principles and their effect on conversation.
BibTeX:
@Inproceedings{Ishiguro2016h,
  author    = {Hiroshi Ishiguro},
  title     = {Studies on Humanoids and Androids},
  booktitle = {CEDI 2016},
  year      = {2016},
  address   = {University of Salamanca, Spain},
  month     = Sep,
  day       = {13},
  url       = {http://www.congresocedi.es/en/ponentes-invitados},
  abstract  = {Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. The operator recognizes the android body as his/her own body after talking with someone through the geminoid, and has a virtual feeling of being touched when someone touches the geminoid. However, the geminoid is not the ideal medium for everybody. For example, elderly people hesitate to talk with adult humans and adult androids. The question is: what is the ideal medium for everybody? In order to investigate this, we propose the minimum design of interactive humanoids, called Telenoid. The geminoid is the perfect copy of an existing person and is the maximum design of interactive humanoids. On the other hand, the minimum design looks like a human, but we cannot know its age or gender. Elderly people like to talk with the Telenoid. In this talk, we discuss the design principles and their effect on conversation.},
}
Hiroshi Ishiguro, "Interactive robots and our future life", In MarkeThing, Alten Teppichfabrik Berlin, Germany, September, 2016.
Abstract: In this talk, the speaker discusses AI (Artificial Intelligence) and humanoid robots and how they will affect society in the near future.
BibTeX:
@Inproceedings{Ishiguro2016g,
  author    = {Hiroshi Ishiguro},
  title     = {Interactive robots and our future life},
  booktitle = {MarkeThing},
  year      = {2016},
  address   = {Alten Teppichfabrik Berlin, Germany},
  month     = Sep,
  day       = {28},
  url       = {http://www.markething.de/},
  abstract  = {In this talk, the speaker discusses AI (Artificial Intelligence) and humanoid robots and how they will affect society in the near future.},
}
石黒浩, "アンドロイド(知能ロボット)と未来社会", 未来を学ぶサイエンスフォーラム, 佐賀市文化会館中ホール, 佐賀, August, 2016.
Abstract: 最先端のロボット研究者として世界的に注目されている工学博士の石黒浩氏(大阪大学大学院教授)が「アンドロイド(知能ロボット)と未来社会」をテーマに、科学や人類の進歩、ものづくりの魅力などを語る。
BibTeX:
@Inproceedings{石黒浩2016am,
  author    = {石黒浩},
  title     = {アンドロイド(知能ロボット)と未来社会},
  booktitle = {未来を学ぶサイエンスフォーラム},
  year      = {2016},
  address   = {佐賀市文化会館中ホール, 佐賀},
  month     = Aug,
  day       = {27},
  url       = {http://www.saga-s.co.jp/android.html},
  abstract  = {最先端のロボット研究者として世界的に注目されている工学博士の石黒浩氏(大阪大学大学院教授)が「アンドロイド(知能ロボット)と未来社会」をテーマに、科学や人類の進歩、ものづくりの魅力などを語る。},
}
石黒浩, "人と関わるロボットとその基本問題", IEEE Metro Area Workshop in Kansai, 2016, 同志社大学 今出川キャンパス, 京都, August, 2016.
Abstract: 石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。
BibTeX:
@Inproceedings{石黒浩2016ai,
  author    = {石黒浩},
  title     = {人と関わるロボットとその基本問題},
  booktitle = {IEEE Metro Area Workshop in Kansai, 2016},
  year      = {2016},
  address   = {同志社大学 今出川キャンパス, 京都},
  month     = Aug,
  day       = {5},
  url       = {http://www.ieee-jp.org/section/kansai/maw2016/},
  etitle    = {Interactive robots and the fundamental issues},
  abstract  = {石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。},
}
石黒浩, "正気と狂気", ITisKANSAI vol.39 夏祭りスペシャル対談, 中央会計株式会社, 大阪, August, 2016.
Abstract: USBフラッシュメモリの発明、日本初の商用イントラネットの開発、イオンドライヤーの発明…。世界中で“0から1”のスイッチを押し続けることを仕事にしているイノヴェイションの仕掛人にしてビジネスデザイナー、monogotoのCEO・濱口秀司氏と、「世界の100人の生きている天才のランキング」で日本人最高位の26位に選出されたあの世界的権威石黒浩教授の二人の天才による対談。
BibTeX:
@Inproceedings{石黒浩2016al,
  author    = {石黒浩},
  title     = {正気と狂気},
  booktitle = {ITisKANSAI vol.39 夏祭りスペシャル対談},
  year      = {2016},
  address   = {中央会計株式会社, 大阪},
  month     = Aug,
  day       = {25},
  url       = {http://itiskansai.com/vol-39/},
  abstract  = {USBフラッシュメモリの発明、日本初の商用イントラネットの開発、イオンドライヤーの発明…。世界中で“0から1”のスイッチを押し続けることを仕事にしているイノヴェイションの仕掛人にしてビジネスデザイナー、monogotoのCEO・濱口秀司氏と、「世界の100人の生きている天才のランキング」で日本人最高位の26位に選出されたあの世界的権威石黒浩教授の二人の天才による対談。},
}
住岡英信, "存在感メディアによる触れ合いの効果", 日本ハグ協会 ハグの日イベント2016 8月9日はハグの日ですよ, 名古屋逓信会館, 愛知, August, 2016.
Abstract: 本発表では存在感メディアハグビーがもたらすふれあいの効果について紹介する
BibTeX:
@Inproceedings{住岡英信2016,
  author    = {住岡英信},
  title     = {存在感メディアによる触れ合いの効果},
  booktitle = {日本ハグ協会 ハグの日イベント2016 8月9日はハグの日ですよ},
  year      = {2016},
  address   = {名古屋逓信会館, 愛知},
  month     = Aug,
  day       = {9},
  url       = {http://hug.sc/event/2016%E5%B9%B4/%E8%AC%9B%E6%BC%94-%E3%83%91%E3%83%BC%E3%83%86%E3%82%A3-%E3%83%8F%E3%82%B0%E3%81%AE%E6%97%A5%E3%82%A4%E3%83%99%E3%83%B3%E3%83%882016/},
  abstract  = {本発表では存在感メディアハグビーがもたらすふれあいの効果について紹介する},
}
石黒浩, "ロボットと未来社会", SoftBank World 2016 特別講演, ザ・プリンス パークタワー東京, 東京, July, 2016.
Abstract: パーソナルコンピュータとスマートフォンに続いて、新たな情報メディアとなるのが、パーソナルロボットである。 このパーソナルロボットは、パーソナルコンピュータが情報化社会をもたらしたのと同様に、ロボット化社会をもたらす可能性がある。 本講演では講演者のこれまでの研究を紹介しながら、ロボット化社会の可能性と、その社会において我々人間が学ぶことを議論する。
BibTeX:
@Inproceedings{石黒浩2016af,
  author    = {石黒浩},
  title     = {ロボットと未来社会},
  booktitle = {SoftBank World 2016 特別講演},
  year      = {2016},
  address   = {ザ・プリンス パークタワー東京, 東京},
  month     = Jul,
  day       = {22},
  url       = {https://softbankworld.com/keynote/},
  abstract  = {パーソナルコンピュータとスマートフォンに続いて、新たな情報メディアとなるのが、パーソナルロボットである。
このパーソナルロボットは、パーソナルコンピュータが情報化社会をもたらしたのと同様に、ロボット化社会をもたらす可能性がある。
本講演では講演者のこれまでの研究を紹介しながら、ロボット化社会の可能性と、その社会において我々人間が学ぶことを議論する。},
}
石黒浩, "愛のあるアートの未来~人を動かす力とは~", 川口ダム自然エネルギーミュージアム トークイベント, 大塚ヴェガホール, 徳島, July, 2016.
Abstract: 川口ダム自然エネルギーミュージアムのオープンを記念して、ロボット学者の石黒浩氏、ウルトラテクノロジスト集団チームラボの猪子寿之氏、モデレーターとして日本科学未来館キュレーターの内田まほろ氏を招いたトークイベントを開催。 テーマは「愛のあるアートの未来~人を動かす力とは~」。 ミュージアムに展示された「コミュニケーションロボット」や「お絵かきスマートタウン」にはじまり、アートや最先端技術に寄せる二人の思いを紹介する。
BibTeX:
@Inproceedings{石黒浩2016aj,
  author    = {石黒浩},
  title     = {愛のあるアートの未来~人を動かす力とは~},
  booktitle = {川口ダム自然エネルギーミュージアム トークイベント},
  year      = {2016},
  address   = {大塚ヴェガホール, 徳島},
  month     = Jul,
  day       = {30},
  url       = {https://www.kre-museum.jp/archives/category/_event},
  abstract  = {川口ダム自然エネルギーミュージアムのオープンを記念して、ロボット学者の石黒浩氏、ウルトラテクノロジスト集団チームラボの猪子寿之氏、モデレーターとして日本科学未来館キュレーターの内田まほろ氏を招いたトークイベントを開催。 テーマは「愛のあるアートの未来~人を動かす力とは~」。 ミュージアムに展示された「コミュニケーションロボット」や「お絵かきスマートタウン」にはじまり、アートや最先端技術に寄せる二人の思いを紹介する。},
}
Hiroshi Ishiguro, "Communication Robots", In International Symposium of "Empathetic systems", "ICP2016" and "JNS2016/Elsevier". Brain and Social Mind: The Origin of Empathy and Morality, PACIFICO Yokohama, Yokohama, July, 2016.
Abstract: Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. The operator recognizes the android body as his/her own body after talking with someone through the geminoid, and has a virtual feeling of being touched when someone touches the geminoid. However, the geminoid is not the ideal medium for everybody. For example, elderly people hesitate to talk with adult humans and adult androids. The question is: what is the ideal medium for everybody? In order to investigate this, we propose the minimum design of interactive humanoids, called Telenoid. The geminoid is the perfect copy of an existing person and is the maximum design of interactive humanoids. On the other hand, the minimum design looks like a human, but we cannot know its age or gender. Elderly people like to talk with the Telenoid. In this talk, we discuss the design principles and their effect on conversation.
BibTeX:
@Inproceedings{Ishiguro2016f,
  author    = {Hiroshi Ishiguro},
  title     = {Communication Robots},
  booktitle = {International Symposium of "Empathetic systems", "ICP2016" and "JNS2016/Elsevier". Brain and Social Mind: The Origin of Empathy and Morality},
  year      = {2016},
  address   = {PACIFICO Yokohama, Yokohama},
  month     = Jul,
  day       = {23},
  url       = {http://darwin.c.u-tokyo.ac.jp/empathysymposium2016/ja/},
  abstract  = {Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. The operator recognizes the android body as his/her own body after talking with someone through the geminoid, and has a virtual feeling of being touched when someone touches the geminoid. However, the geminoid is not the ideal medium for everybody. For example, elderly people hesitate to talk with adult humans and adult androids. The question is: what is the ideal medium for everybody? In order to investigate this, we propose the minimum design of interactive humanoids, called Telenoid. The geminoid is the perfect copy of an existing person and is the maximum design of interactive humanoids. On the other hand, the minimum design looks like a human, but we cannot know its age or gender. Elderly people like to talk with the Telenoid. In this talk, we discuss the design principles and their effect on conversation.},
}
石黒浩, "今こそ、ロボットとコンテンツが結びつく時!", 第2回 先端コンテンツ技術展 専門セミナー (コンテンツ東京2016), 東京ビッグサイト, 東京, July, 2016.
Abstract: コミュニケーションロボットは、あらゆる情報を現実の空間で発信する「メディア」である。ロボットの普及のためには、ロボットにストーリーを与える事ができるコンテンツクリエイターの力が必要だ。自身のアンドロイドやマツコロイドを産み出した世界的ロボット研究者が、コンテンツ業界に求めることを語る。
BibTeX:
@Inproceedings{石黒浩2016w,
  author    = {石黒浩},
  title     = {今こそ、ロボットとコンテンツが結びつく時!},
  booktitle = {第2回 先端コンテンツ技術展 専門セミナー (コンテンツ東京2016)},
  year      = {2016},
  address   = {東京ビッグサイト, 東京},
  month     = Jul,
  day       = {1},
  url       = {http://www.ct-next.jp/Conference_Event/seminar-event02/},
  abstract  = {コミュニケーションロボットは、あらゆる情報を現実の空間で発信する「メディア」である。ロボットの普及のためには、ロボットにストーリーを与える事ができるコンテンツクリエイターの力が必要だ。自身のアンドロイドやマツコロイドを産み出した世界的ロボット研究者が、コンテンツ業界に求めることを語る。},
}
石黒浩, "ロボットと人が共存する未来の医療社会", Centricity LIVE Tokyo 2016 GE ヘルスケア IT リーダーシップ・ミーティング 『今、ここにある医療ITの未来をお客様と共に語る会』, 紀尾井カンファレンス, 東京, July, 2016.
Abstract: 今後、さらに進む高齢化社会において医療と介護が益々融合していくと言われています。また、増え続ける高齢者に対して肉体に限らず認知機能をできる限り維持させることや、施設や在宅において介護労働力の不足やコミュニケーションをサポートするためのロボットの活用も非常に重要な課題となっています。人間としての見かけやコミュニケーションができるヒューマン型ロボットには人間は親近感と安心感を抱くことができ、高齢者の認知症の症状を改善や問題行動の抑制にもつながると言われています。ロボットと人が豊かな関係を築き共存する未来への医療社会の展望を開く一助となればと考えております。
BibTeX:
@Inproceedings{石黒浩2016ah,
  author    = {石黒浩},
  title     = {ロボットと人が共存する未来の医療社会},
  booktitle = {Centricity LIVE Tokyo 2016 GE ヘルスケア IT リーダーシップ・ミーティング 『今、ここにある医療ITの未来をお客様と共に語る会』},
  year      = {2016},
  address   = {紀尾井カンファレンス, 東京},
  month     = Jul,
  day       = {23},
  url       = {http://seminar.jp/C-LIVE2016/},
  abstract  = {今後、さらに進む高齢化社会において医療と介護が益々融合していくと言われています。また、増え続ける高齢者に対して肉体に限らず認知機能をできる限り維持させることや、施設や在宅において介護労働力の不足やコミュニケーションをサポートするためのロボットの活用も非常に重要な課題となっています。人間としての見かけやコミュニケーションができるヒューマン型ロボットには人間は親近感と安心感を抱くことができ、高齢者の認知症の症状を改善や問題行動の抑制にもつながると言われています。ロボットと人が豊かな関係を築き共存する未来への医療社会の展望を開く一助となればと考えております。},
}
Hiroshi Ishiguro, "Adaptation to Teleoperate Robots", In The 31st International Congress of Psychology, PACIFICO Yokohama, Yokohama, July, 2016.
Abstract: We, humans, have an innate brain function to recognize humans. Therefore, very humanlike robots, androids, can be ideal information media for human-robot/computer interactions. In the near future, the use of humanlike robots will increase. To realize a robot society, the speaker has developed various types of interactive robots and androids. Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. However, the geminoid is not the ideal medium for everybody. People enjoy talking to Telenoids. In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.
BibTeX:
@Inproceedings{Ishiguro2016d,
  author    = {Hiroshi Ishiguro},
  title     = {Adaptation to Teleoperate Robots},
  booktitle = {The 31st International Congress of Psychology},
  year      = {2016},
  address   = {PACIFICO Yokohama, Yokohama},
  month     = Jul,
  day       = {24},
  url       = {http://www.icp2016.jp/index.html},
  abstract  = {We, humans, have an innate brain function to recognize humans. Therefore, very humanlike robots, androids, can be ideal information media for human-robot/computer interactions. In the near future, the use of humanlike robots will increase. To realize a robot society, the speaker has developed various types of interactive robots and androids. Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. However, the geminoid is not the ideal medium for everybody. People enjoy talking to Telenoids. In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.},
}
石黒浩, "アンドロイドと人が共存する未来の社会 ~人型ロボットはどこまで進化するのか~", 第57回 全国IE年次大会, 名古屋国際会議場, 愛知, July, 2016.
Abstract: 自分自身のコピーロボットである「ジェミノイド」や、人間の体系を模した「テレノイド」などを開発した経緯や、これからの時代においてロボットが社会に果たす役割などについて講話する。
BibTeX:
@Inproceedings{石黒浩2016y,
  author    = {石黒浩},
  title     = {アンドロイドと人が共存する未来の社会 ~人型ロボットはどこまで進化するのか~},
  booktitle = {第57回 全国IE年次大会},
  year      = {2016},
  address   = {名古屋国際会議場, 愛知},
  month     = Jul,
  day       = {13},
  url       = {www.cpc.or.jp/pdf/2016ietk26.pdf},
  abstract  = {自分自身のコピーロボットである「ジェミノイド」や、人間の体系を模した「テレノイド」などを開発した経緯や、これからの時代においてロボットが社会に果たす役割などについて講話する。},
}
石黒浩, "ロボットによる生活支援・学習支援", 第5回日本小児診療多職種研究会, パシフィコ横浜, 神奈川, July, 2016.
Abstract: 人と対話するのが苦手な人でもロボットであれば対話できるという事例が数多く報告されている。我々の研究においても、高齢者に対する対話サービスを行うロボット、テレノイドや、自閉症児に対する対話サービスを行うコミュー、支援学校で教育者と児童の間での対話を支援するハグビーを開発してきた。本講演ではこれらのロボットを紹介しながら、ロボットが児童の生活支援や学習支援においてどのように役立つかを議論する。
BibTeX:
@Inproceedings{石黒浩2016x,
  author    = {石黒浩},
  title     = {ロボットによる生活支援・学習支援},
  booktitle = {第5回日本小児診療多職種研究会},
  year      = {2016},
  address   = {パシフィコ横浜, 神奈川},
  month     = Jul,
  day       = {31},
  url       = {http://web.apollon.nta.co.jp/tashokusyu2016/gaiyou_program.html},
  abstract  = {人と対話するのが苦手な人でもロボットであれば対話できるという事例が数多く報告されている。我々の研究においても、高齢者に対する対話サービスを行うロボット、テレノイドや、自閉症児に対する対話サービスを行うコミュー、支援学校で教育者と児童の間での対話を支援するハグビーを開発してきた。本講演ではこれらのロボットを紹介しながら、ロボットが児童の生活支援や学習支援においてどのように役立つかを議論する。},
}
石黒浩, "ロボットと未来社会", JISA関西イベント「デジタル革命時代の想像と創造」, グランフロント大阪ナレッジキャピタル, 大阪, July, 2016.
Abstract: 石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。
BibTeX:
@Inproceedings{石黒浩2016ac,
  author    = {石黒浩},
  title     = {ロボットと未来社会},
  booktitle = {JISA関西イベント「デジタル革命時代の想像と創造」},
  year      = {2016},
  address   = {グランフロント大阪ナレッジキャピタル, 大阪},
  month     = Jul,
  day       = {26},
  url       = {http://www.jisa.or.jp/event/tabid/152/pdid/902/Default.aspx},
  abstract  = {石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。},
}
石黒浩, "ロボットと未来社会 -ヒトと共生するロボットの研究開発-", 中産連会員総会特別会員懇話会, 名古屋東急ホテル, 愛知, June, 2016.
Abstract: 第三次AI(人工知能)ブームに沸くなか、囲碁AIがプロ棋士に圧勝したことは、昨年ビジネス誌に掲載された「機械に奪われそうな仕事ランキング」と相まって、ヒトとロボットの関係はどうあるべきかを考える機会となりました。そこで、今回は高齢者から子供まで社会的状況で自然に関われる自律型ロボットの実現をめざし、身振り手振り、表情、視線、触れ合いなど、人間のように多様な情報伝達手段を用いて対話できる共生ヒューマンロボットインタラクション(人間とロボットの相互作用)の研究に取り組む大阪大学の石黒先生を招き、ロボット化社会の可能性とその社会において、われわれ人間が何を学ぶのか考察します。
BibTeX:
@Inproceedings{石黒浩2016ae,
  author    = {石黒浩},
  title     = {ロボットと未来社会 -ヒトと共生するロボットの研究開発-},
  booktitle = {中産連会員総会特別会員懇話会},
  year      = {2016},
  address   = {名古屋東急ホテル, 愛知},
  month     = Jun,
  day       = {15},
  url       = {http://www.chusanren.or.jp/sc/sdata/3891.html},
  abstract  = {第三次AI(人工知能)ブームに沸くなか、囲碁AIがプロ棋士に圧勝したことは、昨年ビジネス誌に掲載された「機械に奪われそうな仕事ランキング」と相まって、ヒトとロボットの関係はどうあるべきかを考える機会となりました。そこで、今回は高齢者から子供まで社会的状況で自然に関われる自律型ロボットの実現をめざし、身振り手振り、表情、視線、触れ合いなど、人間のように多様な情報伝達手段を用いて対話できる共生ヒューマンロボットインタラクション(人間とロボットの相互作用)の研究に取り組む大阪大学の石黒先生を招き、ロボット化社会の可能性とその社会において、われわれ人間が何を学ぶのか考察します。},
}
石黒浩, "アンドロイド開発を通した人間理解", 日本生理人類学会第73回大会, 大阪市立大学 学術情報総合センター, 大阪, June, 2016.
Abstract: 石黒浩ロボット研究のこれまでの研究成果を紹介しながら、今後の生活の中で問われる人間とロボットとの共存を考えることに焦点をあて、「人間とは何か」という基本問題を議論する。
BibTeX:
@Inproceedings{石黒浩2016t,
  author    = {石黒浩},
  title     = {アンドロイド開発を通した人間理解},
  booktitle = {日本生理人類学会第73回大会},
  year      = {2016},
  address   = {大阪市立大学 学術情報総合センター, 大阪},
  month     = Jun,
  day       = {4},
  url       = {http://jspa.net/congress_73},
  abstract  = {石黒浩ロボット研究のこれまでの研究成果を紹介しながら、今後の生活の中で問われる人間とロボットとの共存を考えることに焦点をあて、「人間とは何か」という基本問題を議論する。},
}
Hiroshi Ishiguro, "Humanoids: Future Robots for Service", In RoboBusiness Europe 2016, Odense Congress Center, Denmark, June, 2016.
Abstract: Interactive robots and their role as social partners for humans. Ishiguro will talk on the principles of conversation. Does a robot need voice recognition for verbal conversation? He will propose two approaches for realizing human-robot conversation without voice recognition.
BibTeX:
@Inproceedings{Ishiguro2016,
  author    = {Hiroshi Ishiguro},
  title     = {Humanoids: Future Robots for Service},
  booktitle = {RoboBusiness Europe 2016},
  year      = {2016},
  address   = {Odense Congress Center, Denmark},
  month     = Jun,
  day       = {2},
  url       = {http://www.robobusiness.eu/rb/},
  abstract  = {Interactive robots and their role as social partners for humans. Ishiguro will talk on the principles of conversation. Does a robot need voice recognition for verbal conversation? He will propose two approaches for realizing human-robot conversation without voice recognition.},
}
石黒浩, "ロボットと未来社会", 四国生産性本部 設立60周年記念講演会, JRホテルクレメント高松, 香川, June, 2016.
Abstract: 石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。
BibTeX:
@Inproceedings{石黒浩2016z,
  author    = {石黒浩},
  title     = {ロボットと未来社会},
  booktitle = {四国生産性本部 設立60周年記念講演会},
  year      = {2016},
  address   = {JRホテルクレメント高松, 香川},
  month     = Jun,
  day       = {6},
  url       = {http://www.spc21.jp/business/general/detail.html},
  abstract  = {石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。},
}
石黒浩, "人間型ロボットと未来社会", 原子力安全技術研究所 サイエンス・フォーラム, 御前崎市民会館, 静岡, June, 2016.
Abstract: 石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。
BibTeX:
@Inproceedings{石黒浩2016ab,
  author    = {石黒浩},
  title     = {人間型ロボットと未来社会},
  booktitle = {原子力安全技術研究所 サイエンス・フォーラム},
  year      = {2016},
  address   = {御前崎市民会館, 静岡},
  month     = Jun,
  day       = {11},
  url       = {https://www.chuden.co.jp/corporate/publicity/pub_oshirase/topics/3260133_21498.html},
  abstract  = {石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。},
}
Hiroshi Ishiguro, "The Power of Presence", In The Power of Presence:Preconference of International Communication Association 2016 in Japan, Kyoto Research Park, Kyoto, June, 2016.
Abstract: A keynote address from renowned Professor Hiroshi Ishiguro of Osaka University, creator of amazing humanoid robots and co-author of “Human-Robot Interaction in Social Robotics” (2012, CRC Press).
BibTeX:
@Inproceedings{Ishiguro2016c,
  author    = {Hiroshi Ishiguro},
  title     = {The Power of Presence},
  booktitle = {The Power of Presence:Preconference of International Communication Association 2016 in Japan},
  year      = {2016},
  address   = {Kyoto Research Park, Kyoto},
  month     = Jun,
  day       = {8},
  url       = {https://ispr.info/presence-conferences/the-power-of-presence-preconference-of-international-communication-association-2016-in-japan/},
  abstract  = {A keynote address from renowned Professor Hiroshi Ishiguro of Osaka University, creator of amazing humanoid robots and co-author of “Human-Robot Interaction in Social Robotics” (2012, CRC Press).},
}
石黒浩, "ロボットと未来社会", TIRIクロスミーティング2016, 東京都立産業技術研究センター本部, 東京, June, 2016.
Abstract: 石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。
BibTeX:
@Inproceedings{石黒浩2016s,
  author    = {石黒浩},
  title     = {ロボットと未来社会},
  booktitle = {TIRIクロスミーティング2016},
  year      = {2016},
  address   = {東京都立産業技術研究センター本部, 東京},
  month     = Jun,
  day       = {9},
  url       = {https://www.iri-tokyo.jp/joho/event/h28/0608-10crossmeeting.html},
  abstract  = {石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。},
}
石黒浩, "ロボットと未来社会", LS研総合発表会2016, ホテルグランパシフィックLE DAIBA, 東京, June, 2016.
Abstract: 石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。
BibTeX:
@Inproceedings{石黒浩2016r,
  author    = {石黒浩},
  title     = {ロボットと未来社会},
  booktitle = {LS研総合発表会2016},
  year      = {2016},
  address   = {ホテルグランパシフィックLE DAIBA, 東京},
  month     = Jun,
  day       = {9},
  url       = {http://jp.fujitsu.com/family/lsken/activity/annual/16/index.html},
  abstract  = {石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が学ぶことを議論する。},
}
石黒浩, "人とロボットの未来 ~ロボットは、人間にどこまで近づけるか~", 神戸商工会議所 5月経営トップセミナー, 神戸商工会議所, 兵庫, May, 2016.
Abstract: 自分自身のコピーロボットである「ジェミノイド」や、人間の体系を模した「テレノイド」などを開発した経緯や、これからの時代においてロボットが社会に果たす役割などについて講話する。
BibTeX:
@Inproceedings{石黒浩2016v,
  author    = {石黒浩},
  title     = {人とロボットの未来 ~ロボットは、人間にどこまで近づけるか~},
  booktitle = {神戸商工会議所 5月経営トップセミナー},
  year      = {2016},
  address   = {神戸商工会議所, 兵庫},
  month     = May,
  day       = {10},
  url       = {https://www.kobe-cci.or.jp/category/news/event/},
  abstract  = {自分自身のコピーロボットである「ジェミノイド」や、人間の体系を模した「テレノイド」などを開発した経緯や、これからの時代においてロボットが社会に果たす役割などについて講話する。},
}
石黒浩, "ロボットと未来社会", JB Group IT Forum 2016, ホテル阪急インターナショナル, 大阪, May, 2016.
Abstract: 石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が何を学ぶのか考察する。
BibTeX:
@Inproceedings{石黒浩2016q,
  author    = {石黒浩},
  title     = {ロボットと未来社会},
  booktitle = {JB Group IT Forum 2016},
  year      = {2016},
  address   = {ホテル阪急インターナショナル, 大阪},
  month     = May,
  day       = {24},
  url       = {http://www.jbgroup.jp/it16/index.html},
  abstract  = {石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が何を学ぶのか考察する。},
}
石黒浩, "ロボットと未来社会", JB Group IT Forum 2016, ザ・プリンスパークタワー東京, 東京, May, 2016.
Abstract: 石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が何を学ぶのか考察する。
BibTeX:
@Inproceedings{石黒浩2016p,
  author    = {石黒浩},
  title     = {ロボットと未来社会},
  booktitle = {JB Group IT Forum 2016},
  year      = {2016},
  address   = {ザ・プリンスパークタワー東京, 東京},
  month     = May,
  day       = {20},
  url       = {http://www.jbgroup.jp/it16/index.html},
  abstract  = {石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が何を学ぶのか考察する。},
}
石黒浩, "ロボットと未来社会", JB Group IT Forum 2016, ヒルトン名古屋, 愛知, May, 2016.
Abstract: 石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が何を学ぶのか考察する。
BibTeX:
@Inproceedings{石黒浩2016o,
  author    = {石黒浩},
  title     = {ロボットと未来社会},
  booktitle = {JB Group IT Forum 2016},
  year      = {2016},
  address   = {ヒルトン名古屋, 愛知},
  month     = May,
  day       = {18},
  url       = {http://www.jbgroup.jp/it16/index.html},
  abstract  = {石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が何を学ぶのか考察する。},
}
石黒浩, "ウェアラブルとロボットの拓くミライ ~人類はほんとうにしあわせになれるのか~", 地域ICT推進協議会 総会, ホテルモントレ神戸, 兵庫, May, 2016.
Abstract: 塚本昌彦氏(神戸大学大学院工学研究科教授)と石黒浩氏(大阪大学教授(特別教授))によるトークセッション。
BibTeX:
@Inproceedings{石黒浩2016u,
  author    = {石黒浩},
  title     = {ウェアラブルとロボットの拓くミライ ~人類はほんとうにしあわせになれるのか~},
  booktitle = {地域ICT推進協議会 総会},
  year      = {2016},
  address   = {ホテルモントレ神戸, 兵庫},
  month     = May,
  day       = {13},
  url       = {http://www.copli.jp/},
  abstract  = {塚本昌彦氏(神戸大学大学院工学研究科教授)と石黒浩氏(大阪大学教授(特別教授))によるトークセッション。},
}
Hiroshi Ishiguro, "AI(Artificial Intelligence) & Humanoid robot", In Soeul Forum 2016, Seoul Shilla Hotel, Korea, May, 2016.
Abstract: In this talk, the speaker discusses AI (Artificial Intelligence) and humanoid robots and how they will affect society in the near future.
BibTeX:
@Inproceedings{Ishiguro2016b,
  author    = {Hiroshi Ishiguro},
  title     = {AI(Artificial Intelligence) \& Humanoid robot},
  booktitle = {Seoul Forum 2016},
  year      = {2016},
  address   = {Seoul Shilla Hotel, Korea},
  month     = May,
  day       = {12},
  url       = {http://www.seoulforum.kr/eng/},
  abstract  = {In this talk, the speaker discusses AI (Artificial Intelligence) and humanoid robots and how they will affect society in the near future.},
}
石黒浩, "人間型ロボットと未来社会", サービスロボット開発技術展, インテックス大阪, 大阪, May, 2016.
Abstract: パーソナルコンピュータとスマートフォンに続いて、新たな情報メディアとなるのが、パーソナルロボットである。このパーソナルロボットは、パーソナルコンピュータが情報化社会をもたらしたのと同様に、ロボット化社会をもたらす可能性がある。本講演では講演者のこれまでの研究を紹介しながら、ロボット化社会の可能性と、その社会において我々人間が学ぶことを議論する。
BibTeX:
@Inproceedings{石黒浩2016ad,
  author    = {石黒浩},
  title     = {人間型ロボットと未来社会},
  booktitle = {サービスロボット開発技術展},
  year      = {2016},
  address   = {インテックス大阪, 大阪},
  month     = May,
  day       = {26},
  url       = {http://www.srobo.jp/seminar/index.html},
  abstract  = {パーソナルコンピュータとスマートフォンに続いて、新たな情報メディアとなるのが、パーソナルロボットである。このパーソナルロボットは、パーソナルコンピュータが情報化社会をもたらしたのと同様に、ロボット化社会をもたらす可能性がある。本講演では講演者のこれまでの研究を紹介しながら、ロボット化社会の可能性と、その社会において我々人間が学ぶことを議論する。},
}
石黒浩, "人間型ロボットと未来社会", 第116回日本外科学会定期学術集会, 大阪国際会議場/リーガロイヤルホテル大阪, 大阪, April, 2016.
Abstract: 石黒浩特別研究所の研究を紹介する
BibTeX:
@Inproceedings{石黒浩2016b,
  author    = {石黒浩},
  title     = {人間型ロボットと未来社会},
  booktitle = {第116回日本外科学会定期学術集会},
  year      = {2016},
  address   = {大阪国際会議場/リーガロイヤルホテル大阪, 大阪},
  month     = Apr,
  day       = {14},
  url       = {http://www.jssoc.or.jp/jss116/index.html},
  abstract  = {石黒浩特別研究所の研究を紹介する},
}
Shuichi Nishio, "Portable android robot "Telenoid" for aged citizens: overview and results in Japan and Denmark", In 2016 MOST&JST Workshop on ICT for Accessibility and Support of Older People, Tainan, Taiwan, April, 2016.
BibTeX:
@Inproceedings{Nishio2016,
  author    = {Shuichi Nishio},
  title     = {Portable android robot "Telenoid" for aged citizens: overview and results in Japan and Denmark},
  booktitle = {2016 MOST\&JST Workshop on ICT for Accessibility and Support of Older People},
  year      = {2016},
  address   = {Tainan, Taiwan},
  month     = Apr,
  day       = {11},
}
石黒浩, "ロボットと人間の結びつき~人間とは何か、心とは何か", 第83期 経営ビジョン構想懇話会, ロイヤルパークホテル, 東京, April, 2016.
Abstract: 日常生活でかかわるロボットと人とのインタラクションの研究に世界で先駆けて取り組まれてきた石黒氏が、人とロボットとの関わり合いとは何か、人とロボットがかかわることで何が生まれ、そして変わるのかを語る。
BibTeX:
@Inproceedings{石黒浩2016d,
  author    = {石黒浩},
  title     = {ロボットと人間の結びつき~人間とは何か、心とは何か},
  booktitle = {第83期 経営ビジョン構想懇話会},
  year      = {2016},
  address   = {ロイヤルパークホテル, 東京},
  month     = Apr,
  day       = {19},
  abstract  = {日常生活でかかわるロボットと人とのインタラクションの研究に世界で先駆けて取り組まれてきた石黒氏が、人とロボットとの関わり合いとは何か、人とロボットがかかわることで何が生まれ、そして変わるのかを語る。},
}
石黒浩, "テクノロジーとエンターテイメントのスリリングな未来", 企画展「GAME ON ~ゲームってなんでおもしろい?~」特別シンポジウム, 日本科学未来館, 東京, April, 2016.
Abstract: 企画展「GAME ON ~ゲームってなんでおもしろい?~」の開催にあわせて、テクノロジーとエンターテインメントに関わる、ビジネス、コンテンツ、研究分野からスペシャルゲストを迎え、特別シンポジウムを行う。本シンポジウムでは、ゲームをきっかけに、人工知能、ロボティクス、仮想現実などをテーマについて語る。
BibTeX:
@Inproceedings{石黒浩2016aa,
  author    = {石黒浩},
  title     = {テクノロジーとエンターテイメントのスリリングな未来},
  booktitle = {企画展「GAME ON ~ゲームってなんでおもしろい?~」特別シンポジウム},
  year      = {2016},
  address   = {日本科学未来館, 東京},
  month     = Apr,
  day       = {29},
  url       = {http://www.miraikan.jst.go.jp/event/1603241519665.html},
  abstract  = {企画展「GAME ON ~ゲームってなんでおもしろい?~」の開催にあわせて、テクノロジーとエンターテインメントに関わる、ビジネス、コンテンツ、研究分野からスペシャルゲストを迎え、特別シンポジウムを行う。本シンポジウムでは、ゲームをきっかけに、人工知能、ロボティクス、仮想現実などをテーマについて語る。},
}
石黒浩, "新しいねむりに目を覚まそう‐人類進化と眠りの多様性を求めて‐", 睡眠文化シンポジウム, 京眠大学百周年時計台記念ホール, 京都, April, 2016.
Abstract: 京都大学の霊長類研究者の第一人者である山極寿一氏、文化人類学者で睡眠文化研究会理事の重田眞義氏・座馬耕一郎氏、そしてロボット研究で世界的に活躍する石黒浩氏らを迎えて、ヒトの睡眠の進化と多様性を読み解き、未来を語るシンポジウム。
BibTeX:
@Inproceedings{石黒浩2016c,
  author    = {石黒浩},
  title     = {新しいねむりに目を覚まそう‐人類進化と眠りの多様性を求めて‐},
  booktitle = {睡眠文化シンポジウム},
  year      = {2016},
  address   = {京都大学百周年時計台記念ホール, 京都},
  month     = Apr,
  day       = {10},
  url       = {http://sleepculture.net/nemuriten.html#symposium},
  abstract  = {京都大学の霊長類研究者の第一人者である山極寿一氏、文化人類学者で睡眠文化研究会理事の重田眞義氏・座馬耕一郎氏、そしてロボット研究で世界的に活躍する石黒浩氏らを迎えて、ヒトの睡眠の進化と多様性を読み解き、未来を語るシンポジウム。},
}
Hiroshi Ishiguro, "Androids and Future Life", In South by Southwest 2016 Music, Film and Interactive Festivals(SXSW), Austin Convention Center, USA, March, 2016.
Abstract: We, humans, have an innate brain function to recognize humans. Therefore, very humanlike robots, androids, can be ideal information media for human-robot/computer interactions. In the near future, the use of humanlike robots will increase. To realize a robot society, the speaker has developed various types of interactive robots and androids. Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. However, the geminoid is not the ideal medium for everybody. People enjoy talking to Telenoids. In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.
BibTeX:
@Inproceedings{Ishiguro2016a,
  author    = {Hiroshi Ishiguro},
  title     = {Androids and Future Life},
  booktitle = {South by Southwest 2016 Music, Film and Interactive Festivals(SXSW)},
  year      = {2016},
  address   = {Austin Convention Center, USA},
  month     = Mar,
  day       = {13},
  url       = {http://schedule.sxsw.com/2016/events/event_PP50105},
  abstract  = {We, humans, have an innate brain function to recognize humans. Therefore, very humanlike robots, androids, can be ideal information media for human-robot/computer interactions. In the near future, the use of humanlike robots will increase. To realize a robot society, the speaker has developed various types of interactive robots and androids. Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. However, the geminoid is not the ideal medium for everybody. People enjoy talking to Telenoids. In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.},
}
石黒浩, "ロボットが変える産業と生活", 第8回G1サミット, パネルディスカッション, 沖縄, March, 2016.
Abstract: ロボットは、人間の未来をどのように変えるのか。人とロボットが共生する社会とは。石黒浩特別研究所の研究紹介を交えながら、パネリスト数名で近未来について議論する。
BibTeX:
@Inproceedings{石黒浩2016a,
  author    = {石黒浩},
  title     = {ロボットが変える産業と生活},
  booktitle = {第8回G1サミット, パネルディスカッション},
  year      = {2016},
  address   = {沖縄},
  month     = Mar,
  day       = {20},
  url       = {https://g1summit.com/g1summit/},
  abstract  = {ロボットは、人間の未来をどのように変えるのか。人とロボットが共生する社会とは。石黒浩特別研究所の研究紹介を交えながら、パネリスト数名で近未来について議論する。},
}
石黒浩, "テクノロジーが未来をどう変えていくのか -ロボット研究の視点から-", Hewlett Packard Enterprise Day 2016 東京, ザ・プリンス パークタワー, 東京, March, 2016.
Abstract: テクノロジーが変えていく未来の姿を石黒浩ロボット研究の視点から議論する。
BibTeX:
@Inproceedings{石黒浩2016n,
  author    = {石黒浩},
  title     = {テクノロジーが未来をどう変えていくのか -ロボット研究の視点から-},
  booktitle = {Hewlett Packard Enterprise Day 2016 東京},
  year      = {2016},
  address   = {ザ・プリンス パークタワー, 東京},
  month     = Mar,
  day       = {4},
  url       = {http://h50146.www5.hp.com/events/seminars/info/hpeday2016.html},
  abstract  = {テクノロジーが変えていく未来の姿を石黒浩ロボット研究の視点から議論する。},
}
石黒浩, "人を理解するためのロボット学", 実践ソリューションフェア2016 名古屋会場, ヒルトン名古屋, 愛知, March, 2016.
Abstract: 石黒浩ロボット研究で開発した様々なロボットを紹介しながら、ロボットの研究から何が学べるか、人を理解するためにロボットはどう役に立つのか議論する。
BibTeX:
@Inproceedings{石黒浩2016m,
  author    = {石黒浩},
  title     = {人を理解するためのロボット学},
  booktitle = {実践ソリューションフェア2016 名古屋会場},
  year      = {2016},
  address   = {ヒルトン名古屋, 愛知},
  month     = Mar,
  day       = {4},
  url       = {http://www.otsuka-shokai.co.jp/event/jsf/nagoya/?02=86_jsf16_top_menu},
  abstract  = {石黒浩ロボット研究で開発した様々なロボットを紹介しながら、ロボットの研究から何が学べるか、人を理解するためにロボットはどう役に立つのか議論する。},
}
石黒浩, "脳ロボティクス", 科学技術振興機構 公開シンポジウム, 品川 THE GRAND HALL, 東京, March, 2016.
Abstract: 統括技術責任者による関連領域の概観とプログラム内の取り組みの紹介として講演する
BibTeX:
@Inproceedings{石黒浩2016l,
  author    = {石黒浩},
  title     = {脳ロボティクス},
  booktitle = {科学技術振興機構 公開シンポジウム},
  year      = {2016},
  address   = {品川 THE GRAND HALL, 東京},
  month     = Mar,
  day       = {1},
  url       = {http://www.jst.go.jp/impact/hp_yamakawa/symposium/index.html},
  abstract  = {統括技術責任者による関連領域の概観とプログラム内の取り組みの紹介として講演する},
}
石黒浩, "アンドロイド開発を通した人間理解", ひょうご夢実現プロジェクト教育フォーラム2015, 兵庫県民会館, 兵庫, February, 2016.
Abstract: 兵庫県進路選択支援機構によるフォーラムにて、高校生へのメッセージという形で石黒浩ロボット研究について紹介する。
BibTeX:
@Inproceedings{石黒浩2016i,
  author    = {石黒浩},
  title     = {アンドロイド開発を通した人間理解},
  booktitle = {ひょうご夢実現プロジェクト教育フォーラム2015},
  year      = {2016},
  address   = {兵庫県民会館, 兵庫},
  month     = Feb,
  day       = {6},
  url       = {https://www.sinro.or.jp/kyoiku_form/file/forum.pdf},
  abstract  = {兵庫県進路選択支援機構によるフォーラムにて、高校生へのメッセージという形で石黒浩ロボット研究について紹介する。},
}
石黒浩, "人を理解するためのロボット学", 実践ソリューションフェア2016, ザ・プリンス パークタワー, 東京, February, 2016.
Abstract: 石黒浩ロボット研究で開発した様々なロボットを紹介しながら、ロボットの研究から何が学べるか、人を理解するためにロボットはどう役に立つのか議論する。
BibTeX:
@Inproceedings{石黒浩2016h,
  author    = {石黒浩},
  title     = {人を理解するためのロボット学},
  booktitle = {実践ソリューションフェア2016},
  year      = {2016},
  address   = {ザ・プリンス パークタワー, 東京},
  month     = Feb,
  day       = {3},
  url       = {http://www.otsuka-shokai.co.jp/event/jsf/tokyo/?02=86_jsf16_top_menu},
  abstract  = {石黒浩ロボット研究で開発した様々なロボットを紹介しながら、ロボットの研究から何が学べるか、人を理解するためにロボットはどう役に立つのか議論する。},
}
石黒浩, "10年後と100年後と1000年後の未来", シンギュラリティサロン@東京 第7回公開講演会, ジーニアスセミナールーム, 東京, February, 2016.
Abstract: 10年、100年、1000年先の人間とロボットの関係について議論する
BibTeX:
@Inproceedings{石黒浩2016j,
  author    = {石黒浩},
  title     = {10年後と100年後と1000年後の未来},
  booktitle = {シンギュラリティサロン@東京 第7回公開講演会},
  year      = {2016},
  address   = {ジーニアスセミナールーム, 東京},
  month     = Feb,
  day       = {13},
  url       = {http://singularity.jp/news160128/},
  abstract  = {10年、100年、1000年先の人間とロボットの関係について議論する},
}
石黒浩, "ロボットと未来社会", 第314回オムロンけいはんな文化フォーラム, けいはんなプラザ, 京都, February, 2016.
Abstract: 石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が何を学ぶのか考察する。
BibTeX:
@Inproceedings{石黒浩2016k,
  author    = {石黒浩},
  title     = {ロボットと未来社会},
  booktitle = {第314回オムロンけいはんな文化フォーラム},
  year      = {2016},
  address   = {けいはんなプラザ, 京都},
  month     = Feb,
  day       = {27},
  url       = {https://www.keihanna-plaza.co.jp/event/forum/post-540.html},
  abstract  = {石黒浩ロボット研究のこれまでの研究成果を紹介しながら、ロボット化社会の可能性とその社会において人間が何を学ぶのか考察する。},
}
石黒浩, "人とロボットが共生する未来社会", 第123回全国経営者大会, 帝国ホテル, 東京, January, 2016.
Abstract: アンドロイドの研究開発の視点から、どうすれば「人」を創れるかヒューマノイド・アンドロイド・次世代ロボットによる革命とは何か講演する。
BibTeX:
@Inproceedings{石黒浩2016g,
  author    = {石黒浩},
  title     = {人とロボットが共生する未来社会},
  booktitle = {第123回全国経営者大会},
  year      = {2016},
  address   = {帝国ホテル, 東京},
  month     = Jan,
  day       = {21},
  url       = {http://www.kmcanet.com/keieisha-taikai/taikai123_0121},
  abstract  = {アンドロイドの研究開発の視点から、どうすれば「人」を創れるかヒューマノイド・アンドロイド・次世代ロボットによる革命とは何か講演する。},
}
石黒浩, "人間型ロボットとロボット社会", 戦略的情報通信研究開発セミナー2016, 機械振興会館, 東京, January, 2016.
Abstract: 「戦略的情報通信研究開発推進事業(SCOPE)」について研究開発成果の発表・講演を行う。
BibTeX:
@Inproceedings{石黒浩2016f,
  author    = {石黒浩},
  title     = {人間型ロボットとロボット社会},
  booktitle = {戦略的情報通信研究開発セミナー2016},
  year      = {2016},
  address   = {機械振興会館, 東京},
  month     = Jan,
  day       = {15},
  url       = {http://www.soumu.go.jp/soutsu/kanto/press/27/1211re1.html},
  abstract  = {「戦略的情報通信研究開発推進事業(SCOPE)」について研究開発成果の発表・講演を行う。},
}
石黒浩, "アンドロイドと未来社会", ネプコンジャパン2016, 東京ビックサイト, 東京, January, 2016.
Abstract: ネプコンジャパン45周年記念講演にて到来するロボット社会で日本の先端技術が切り開く未来像とはどういったものか、石黒浩特別研究所の研究とともに紹介する
BibTeX:
@Inproceedings{石黒浩2016e,
  author    = {石黒浩},
  title     = {アンドロイドと未来社会},
  booktitle = {ネプコンジャパン2016},
  year      = {2016},
  address   = {東京ビッグサイト, 東京},
  month     = Jan,
  day       = {15},
  url       = {http://www.nepcon.jp/},
  abstract  = {ネプコンジャパン45周年記念講演にて到来するロボット社会で日本の先端技術が切り開く未来像とはどういったものか、石黒浩特別研究所の研究とともに紹介する},
}
Dylan F. Glas, "ERICA: The ERATO Intelligent Conversational Android", In Symposium on Human-Robot Interaction, Stanford University, USA, November, 2015.
Abstract: The ERATO Ishiguro Symbiotic Human-Robot Interaction project is developing new android technologies with the eventual goal of passing the Total Turing Test. To pursue the goals of this project, we have developed a new android, Erica. I will introduce Erica's capabilities and design philosophy, and I will present some of the key objectives that we will address in the ERATO project.
BibTeX:
@Inproceedings{Glas2015,
  author    = {Dylan F. Glas},
  title     = {ERICA: The ERATO Intelligent Conversational Android},
  booktitle = {Symposium on Human-Robot Interaction},
  year      = {2015},
  address   = {Stanford University, USA},
  month     = Nov,
  abstract  = {The ERATO Ishiguro Symbiotic Human-Robot Interaction project is developing new android technologies with the eventual goal of passing the Total Turing Test. To pursue the goals of this project, we have developed a new android, Erica. I will introduce Erica's capabilities and design philosophy, and I will present some of the key objectives that we will address in the ERATO project.},
  file      = {Glas2015.pdf:pdf/Glas2015.pdf:PDF},
}
山崎竜二, "「テレノイド」ロボット:その特異な存在", In ケアとソリューション 大阪フォーラム ケアとテクノロジー, 大阪, October, 2015.
BibTeX:
@Inproceedings{山崎竜二2015,
  author    = {山崎竜二},
  title     = {「テレノイド」ロボット:その特異な存在},
  booktitle = {ケアとソリューション 大阪フォーラム ケアとテクノロジー},
  year      = {2015},
  address   = {大阪},
  month     = Oct,
  file      = {山崎竜二2015.pdf:pdf/山崎竜二2015.pdf:PDF},
}
西尾修一, "人の存在を伝達する遠隔操作型アンドロイドの未来 ~人間社会の新たな可能性を探る~", 第10回スマートウェルネス研究会, グランフロント大阪タワー ナレッジキャピタル, 大阪, October, 2015.
Abstract: 近年、介護者を支援する等のロボティクス介護が注目されていますが、今回のセミナーでは、それとは一線を画し、人の疑似存在としての“アンドロイド”にヘルスケアの視点からフォーカスします。人間としての必要最低限の見かけと動きの要素だけからなる、人間のミニマルデザインを取り入れたアンドロイドによる最先端の心のケアサービスモデルの可能性についても学びます。さらに、新たなメディアとしての可能性や新たなコミュニケーションスタイル、ビジネスモデル創出の可能性についても探ってみたいと思います。
BibTeX:
@Inproceedings{西尾修一2015a,
  author    = {西尾修一},
  title     = {人の存在を伝達する遠隔操作型アンドロイドの未来 ~人間社会の新たな可能性を探る~},
  booktitle = {第10回スマートウェルネス研究会},
  year      = {2015},
  address   = {グランフロント大阪タワー ナレッジキャピタル, 大阪},
  month     = Oct,
  abstract  = {近年、介護者を支援する等のロボティクス介護が注目されていますが、今回のセミナーでは、それとは一線を画し、人の疑似存在としての“アンドロイド”にヘルスケアの視点からフォーカスします。人間としての必要最低限の見かけと動きの要素だけからなる、人間のミニマルデザインを取り入れたアンドロイドによる最先端の心のケアサービスモデルの可能性についても学びます。さらに、新たなメディアとしての可能性や新たなコミュニケーションスタイル、ビジネスモデル創出の可能性についても探ってみたいと思います。},
}
西尾修一, "テレノイドによる認知症高齢者とのコミュニケーション", 認知症カフェスペシャル「身体コミュニケーションの可能性 -ダンスとロボット-」, 大阪, March, 2015.
BibTeX:
@Inproceedings{西尾修一2015,
  author    = {西尾修一},
  title     = {テレノイドによる認知症高齢者とのコミュニケーション},
  booktitle = {認知症カフェスペシャル「身体コミュニケーションの可能性 -ダンスとロボット-」},
  year      = {2015},
  address   = {大阪},
  month     = Mar,
  file      = {西尾修一2015a.pdf:pdf/西尾修一2015a.pdf:PDF},
}
Hiroshi Ishiguro, "Minimum design of interactive robots", In International Symposium on Pedagogical Machines CREST 国際シンポジウム-「ペダゴジカル・マシンの探求」, 東京, March, 2015.
BibTeX:
@Inproceedings{Ishiguro2015,
  author    = {Hiroshi Ishiguro},
  title     = {Minimum design of interactive robots},
  booktitle = {International Symposium on Pedagogical Machines CREST 国際シンポジウム-「ペダゴジカル・マシンの探求」},
  year      = {2015},
  address   = {東京},
  month     = Mar,
  file      = {Ishiguro2015a.pdf:pdf/Ishiguro2015a.pdf:PDF},
}
Shuichi Nishio, "Teleoperated android robots - Fundamentals, applications and future", In China International Advanced Manufacturing Conference 2014, Mianyang, China, October, 2014.
Abstract: I will introduce our various experiences with teleoperated android robots: how they are manufactured, scientific findings, applications to real-world issues, and how they will be used in our society in the future.
BibTeX:
@Inproceedings{Nishio2014a,
  author    = {Shuichi Nishio},
  title     = {Teleoperated android robots - Fundamentals, applications and future},
  booktitle = {China International Advanced Manufacturing Conference 2014},
  year      = {2014},
  address   = {Mianyang, China},
  month     = Oct,
  abstract  = {I will introduce our various experiences with teleoperated android robots: how they are manufactured, scientific findings, applications to real-world issues, and how they will be used in our society in the future.},
}
西尾修一, "遠隔操作アンドロイドを通じた他者の認識", 第16回日本感性工学会大会, 東京, September, 2014.
BibTeX:
@Inproceedings{西尾修一,
  author    = {西尾修一},
  title     = {遠隔操作アンドロイドを通じた他者の認識},
  booktitle = {第16回日本感性工学会大会},
  year      = {2014},
  address   = {東京},
  month     = SEP,
  file      = {Nishio2014a.pdf:pdf/Nishio2014a.pdf:PDF},
}
Hiroshi Ishiguro, "Android Philosophy", In Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014, IOS Press, vol. 273, Aarhus, Denmark, pp. 3, August, 2014.
BibTeX:
@Inproceedings{Ishiguro2014b,
  author    = {Hiroshi Ishiguro},
  title     = {Android Philosophy},
  booktitle = {Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014},
  year      = {2014},
  editor    = {Johanna Seibt and Raul Hakli and Marco N{\o}rskov},
  volume    = {273},
  pages     = {3},
  address   = {Aarhus, Denmark},
  month     = Aug,
  publisher = {IOS Press},
  doi       = {10.3233/978-1-61499-480-0-3},
  url       = {http://ebooks.iospress.nl/volumearticle/38527},
}
石黒浩, "「共生ヒューマンロボットインタラクション ~人と共生するロボットをホントに作る!~」", iRooBOイベント第2弾 『いまこそ、ロボットの話をしよう  ~iRooBO流ロボットビジネスの作り方、考え方』, 大阪, August, 2014.
Abstract: ATR主催、ロボット開発関連企業、IoT関連企業、ロボット及びIoT関連サービスの導入を検討されている企業等を対象としたイベント『いまこそ、ロボットの話をしよう』にて、高齢者から子供までが自然に関わることの出来る自律型ロボットとはどのようなものになるべきなのか、社会・経済はどのように変わっていくのかをテーマに講演を行う。
BibTeX:
@Inproceedings{石黒浩2014i,
  author    = {石黒浩},
  title     = {「共生ヒューマンロボットインタラクション ~人と共生するロボットをホントに作る!~」},
  booktitle = {iRooBOイベント第2弾 『いまこそ、ロボットの話をしよう  ~iRooBO流ロボットビジネスの作り方、考え方』},
  year      = {2014},
  address   = {大阪},
  month     = Aug,
  abstract  = {ATR主催、ロボット開発関連企業、IoT関連企業、ロボット及びIoT関連サービスの導入を検討されている企業等を対象としたイベント『いまこそ、ロボットの話をしよう』にて、高齢者から子供までが自然に関わることの出来る自律型ロボットとはどのようなものになるべきなのか、社会・経済はどのように変わっていくのかをテーマに講演を行う。},
}
石黒浩, "アンドロイドと生きる未来 ~技術と芸術の融合~", 国立情報学研究所市民講座 未来を紡ぐ情報学, 学術総合センター. 東京, July, 2014.
Abstract: 国立情報学研究所の研究者が「情報学」の先端を一般向けに解説するプログラムにおいて、講演を行う。国立情報学研究所の客員教授として参加。
BibTeX:
@Inproceedings{石黒浩2014j,
  author    = {石黒浩},
  title     = {アンドロイドと生きる未来 ~技術と芸術の融合~},
  booktitle = {国立情報学研究所市民講座 未来を紡ぐ情報学},
  year      = {2014},
  address   = {学術総合センター, 東京},
  month        = Jul,
  url       = {http://www.nii.ac.jp/event/shimin/},
  abstract  = {国立情報学研究所の研究者が「情報学」の先端を一般向けに解説するプログラムにおいて、講演を行う。国立情報学研究所の客員教授として参加。},
  file      = {石黒浩2014j.pdf:pdf/石黒浩2014j.pdf:PDF},
}
石黒浩, "ロボットと脳", 応用脳科学コンソーシアム2014年度キックオフミーティング, 東京, June, 2014.
BibTeX:
@Inproceedings{石黒浩2014,
  author    = {石黒浩},
  title     = {ロボットと脳},
  booktitle = {応用脳科学コンソーシアム2014年度キックオフミーティング},
  year      = {2014},
  address   = {東京},
  month     = Jun,
}
石黒浩, "ロボットと未来社会", 第31回吹田産業フェア, 大阪, May, 2014.
BibTeX:
@Inproceedings{石黒浩2014f,
  author    = {石黒浩},
  title     = {ロボットと未来社会},
  booktitle = {第31回吹田産業フェア},
  year      = {2014},
  address   = {大阪},
  month     = May,
}
Hiroshi Ishiguro, "Telenoid : A Teleoperated Android with a Minimalistic Human Design", In Robo Business Europe, Billund, Denmark, May, 2014.
BibTeX:
@Inproceedings{Ishiguro2014a,
  author    = {Hiroshi Ishiguro},
  title     = {Telenoid : A Teleoperated Android with a Minimalistic Human Design},
  booktitle = {Robo Business Europe},
  year      = {2014},
  address   = {Billund, Denmark},
  month     = May,
  day       = {26-28},
}
石黒浩, "人間型ロボットと未来社会", 電気設備学会関西支部総会, 大阪, May, 2014.
BibTeX:
@Inproceedings{石黒浩2014e,
  author    = {石黒浩},
  title     = {人間型ロボットと未来社会},
  booktitle = {電気設備学会関西支部総会},
  year      = {2014},
  address   = {大阪},
  month     = May,
}
Hiroshi Ishiguro, "The Future Life Supported by Robotic Avatars", In The Global Mobile Internet Conference Beijing, Beijing, China, May, 2014.
BibTeX:
@Inproceedings{Ishiguro2014,
  author    = {Hiroshi Ishiguro},
  title     = {The Future Life Supported by Robotic Avatars},
  booktitle = {The Global Mobile Internet Conference Beijing},
  year      = {2014},
  address   = {Beijing, China},
  month     = May,
  day       = {5-6},
  file      = {ishiguro2014a.pdf:pdf/ishiguro2014a.pdf:PDF},
}
石黒浩, "人と関わるロボットの実現", ニコニコ超会議3, 千葉, April, 2014.
BibTeX:
@Inproceedings{石黒浩2014c,
  author    = {石黒浩},
  title     = {人と関わるロボットの実現},
  booktitle = {ニコニコ超会議3},
  year      = {2014},
  address   = {千葉},
  month     = Apr,
  day       = {26-27},
}
石黒浩, "人を知るためのロボット研究", 日本大阪大学石黒浩教授学術報告会, 中国, April, 2014.
BibTeX:
@Inproceedings{石黒浩2014d,
  author    = {石黒浩},
  title     = {人を知るためのロボット研究},
  booktitle = {日本大阪大学石黒浩教授学術報告会},
  year      = {2014},
  address   = {中国},
  month     = Apr,
  day       = {15},
}
Ryuji Yamazaki, "Teleoperated Android in Elderly Care", In Patient@home seminar, Denmark, February, 2014.
Abstract: We explore the potential of teleoperated androids, which are embodied telecommunication media with humanlike appearances. By conducting pilot studies in Japan and Denmark, we investigate how Telenoid, a teleoperated android designed as a minimalistic human, affects people in the real world. As populations age, the isolation issue of senior citizens is one of the leading issues in healthcare promotion. In order to solve the isolation issue resulting in geriatric syndromes and improve seniors' well-being by enhancing social connectedness, we propose to employ Telenoid that might facilitate their communication with others. By introducing Telenoid into care facilities and seniors' homes, we found various influences on the elderly with or without dementia. Most senior participants had positive impressions of Telenoid from the very beginning, even though, ironically, their caretaker had a negative one. Especially the elderly with dementia showed strong attachment to Telenoid and created its identity imaginatively and interactively. In a long-term study, we also found that demented elderly increasingly showed prosocial behaviors to Telenoid and it encouraged them to be more communicative and open. With a focus on elderly care, this presentation will introduce our field trials and discuss the potential of interactions between the android robot and human users for further research.
BibTeX:
@Inproceedings{Yamazaki2014b,
  author    = {Ryuji Yamazaki},
  title     = {Teleoperated Android in Elderly Care},
  booktitle = {Patient@home seminar},
  year      = {2014},
  address   = {Denmark},
  month     = Feb,
  day       = {5},
  abstract  = {We explore the potential of teleoperated androids, which are embodied telecommunication media with humanlike appearances. By conducting pilot studies in Japan and Denmark, we investigate how Telenoid, a teleoperated android designed as a minimalistic human, affects people in the real world. As populations age, the isolation issue of senior citizens is one of the leading issues in healthcare promotion. In order to solve the isolation issue resulting in geriatric syndromes and improve seniors' well-being by enhancing social connectedness, we propose to employ Telenoid that might facilitate their communication with others. By introducing Telenoid into care facilities and seniors' homes, we found various influences on the elderly with or without dementia. Most senior participants had positive impressions of Telenoid from the very beginning, even though, ironically, their caretaker had a negative one. Especially the elderly with dementia showed strong attachment to Telenoid and created its identity imaginatively and interactively. In a long-term study, we also found that demented elderly increasingly showed prosocial behaviors to Telenoid and it encouraged them to be more communicative and open. With a focus on elderly care, this presentation will introduce our field trials and discuss the potential of interactions between the android robot and human users for further research.},
}
Shuichi Nishio, "The Impact of the Care‐Robot ‘Telenoid' on Elderly Persons in Japan", In International Conference : Going Beyond the Laboratory - Ethical and Societal Challenges for Robotics, Delmenhorst, Germany, February, 2014.
BibTeX:
@Inproceedings{Nishio2014,
  author    = {Shuichi Nishio},
  title     = {The Impact of the Care‐Robot ‘Telenoid' on Elderly Persons in Japan},
  booktitle = {International Conference : Going Beyond the Laboratory - Ethical and Societal Challenges for Robotics},
  year      = {2014},
  address   = {Delmenhorst, Germany},
  month     = Feb,
  day       = {13-15},
}
石黒浩, "遠隔操作型ロボットと未来社会", JUAS FUTURE ASPECT 2014 「ワクワクする未来へ これからの社会をデザインしよう ~2020年、そしてその先へ~」, 東京, January, 2014.
BibTeX:
@Inproceedings{石黒浩2014a,
  author    = {石黒浩},
  title     = {遠隔操作型ロボットと未来社会},
  booktitle = {JUAS FUTURE ASPECT 2014 「ワクワクする未来へ これからの社会をデザインしよう ~2020年、そしてその先へ~」},
  year      = {2014},
  address   = {東京},
  month     = Jan,
  day       = {30},
}
石黒浩, "人を知るためのロボット研究", 第24回日本頭頸部外科学会総会ならびに学術講演会, 香川, January, 2014.
BibTeX:
@Inproceedings{石黒浩2014b,
  author    = {石黒浩},
  title     = {人を知るためのロボット研究},
  booktitle = {第24回日本頭頸部外科学会総会ならびに学術講演会},
  year      = {2014},
  address   = {香川},
  month     = Jan,
  day       = {30},
}
石黒浩, "子どもと類人猿とロボットにおける共感と協調と「心の理論」", 日本心理学会 第78回大会, 2014.
BibTeX:
@Inproceedings{石黒浩2014g,
  author    = {石黒浩},
  title     = {子どもと類人猿とロボットにおける共感と協調と「心の理論」},
  booktitle = {日本心理学会 第78回大会},
  year      = {2014},
}
石黒浩, "感情の表現-ロボットによる感情の表現と想起-", 日本情動学会第3回大会, 京都, December, 2013.
BibTeX:
@Inproceedings{石黒浩2013h,
  author    = {石黒浩},
  title     = {感情の表現-ロボットによる感情の表現と想起-},
  booktitle = {日本情動学会第3回大会},
  year      = {2013},
  address   = {京都},
  month     = Dec,
  day       = {7},
}
住岡英信, "ホルモンと認知とストレス課題", 第2回コンフォータブルブレイン研究会, 京都, December, 2013.
BibTeX:
@Inproceedings{住岡英信2013c,
  author    = {住岡英信},
  title     = {ホルモンと認知とストレス課題},
  booktitle = {第2回コンフォータブルブレイン研究会},
  year      = {2013},
  address   = {京都},
  month     = Dec,
  file      = {住岡英信2013c.pdf:pdf/住岡英信2013c.pdf:PDF},
  funding   = {{CREST}},
}
港隆史, 中西惇也, 桑村海光, 西尾修一, 石黒浩, "ロボットメディアとの身体的相互作用による感情喚起", 信学技報(クラウドネットワークロボット研究会), no. CNR2013-23, 東京, pp. 13-18, December, 2013.
Abstract: 本研究では,ユーザの感情を喚起することでコミュニケーションを支援するメディアの実現に向けて,ロボットメディアとの相互作用における身体的状態が感情変化をもたらすメカニズムを明らかにする研究に着手している.これまでに,人型ロボットメディアを抱擁しながら対話することが,対話相手への関心や好意を高めることを確かめる実験をいくつか行ってきたので,本報告ではそれらの実験を紹介する.
BibTeX:
@Inproceedings{港隆史2013,
  author          = {港隆史 and 中西惇也 and 桑村海光 and 西尾修一 and 石黒浩},
  title           = {ロボットメディアとの身体的相互作用による感情喚起},
  booktitle       = {信学技報(クラウドネットワークロボット研究会)},
  year            = {2013},
  number          = {CNR2013-23},
  pages           = {13-18},
  address         = {東京},
  month           = {Dec},
  day             = {20},
  url             = {http://www.ieice.org/ken/program/index.php?tgs_regid=5802ef7c9533d904b64bd7870b994f877e54bc26c06845254e7fff21647f7176&tgid=IEICE-CNR&lang=},
  etitle          = {Emotional Arousal by Physical Interaction with Robotic Media},
  abstract        = {本研究では,ユーザの感情を喚起することでコミュニケーションを支援するメディアの実現に向けて,ロボットメディアとの相互作用における身体的状態が感情変化をもたらすメカニズムを明らかにする研究に着手している.これまでに,人型ロボットメディアを抱擁しながら対話することが,対話相手への関心や好意を高めることを確かめる実験をいくつか行ってきたので,本報告ではそれらの実験を紹介する.},
  eabstract       = {We are studying a mechanism of an emotional arousal owing to person's bodily state (e.g., body posture and motion) in human-robotic media interaction towards a development of robotic media to support users' communication by controlling their emotion. This paper shows several experimental results in which persons show an interest or affection in their communication partner by talking while hugging a robotic media.},
  file            = {港隆史2013.pdf:pdf/港隆史2013.pdf:PDF},
  funding         = {{CREST}},
  keywords        = {ロボットメディア; 身体的相互作用; 感情喚起; 抱擁},
}
石黒浩, "遠隔操作型ロボットとロボット社会", 組込みシステムシンポジウム2013, 東京, October, 2013.
BibTeX:
@Inproceedings{石黒浩2013c,
  author       = {石黒浩},
  title        = {遠隔操作型ロボットとロボット社会},
  booktitle    = {組込みシステムシンポジウム2013},
  year         = {2013},
  address      = {東京},
  month        = Oct,
  organization = {情報処理学会 組込みシステム研究会},
}
石黒浩, "知能ロボット技術の将来", 日本食品工業倶楽部会 大阪例会, 大阪, October, 2013.
BibTeX:
@Inproceedings{石黒浩2013f,
  author    = {石黒浩},
  title     = {知能ロボット技術の将来},
  booktitle = {日本食品工業倶楽部会 大阪例会},
  year      = {2013},
  address   = {大阪},
  month     = Oct,
  day       = {17},
}
石黒浩, "ものとヒトの関係", コンフォータブルブレイン研究会, 東京, October, 2013.
BibTeX:
@Inproceedings{石黒浩2013g,
  author    = {石黒浩},
  title     = {ものとヒトの関係},
  booktitle = {コンフォータブルブレイン研究会},
  year      = {2013},
  address   = {東京},
  month     = Oct,
  day       = {16},
}
石黒浩, "デンマークと日本における存在感対話メディアの実証的研究", 情報学による未来社会のデザイン 第2回シンポジウム, 東京, October, 2013.
BibTeX:
@Inproceedings{石黒浩2013e,
  author       = {石黒浩},
  title        = {デンマークと日本における存在感対話メディアの実証的研究},
  booktitle    = {情報学による未来社会のデザイン 第2回シンポジウム},
  year         = {2013},
  address      = {東京},
  month        = Oct,
  day          = {15},
  funding      = {CREST},
  organization = {独立行政法人科学技術振興機構, 日本学術会議},
}
住岡英信, "ホルモンの評価事例:抱き枕型通信メディア「ハグビー」によるストレス軽減", 第1回コンフォータブルブレイン研究会, 東京, October, 2013.
BibTeX:
@Inproceedings{住岡英信2013b,
  author    = {住岡英信},
  title     = {ホルモンの評価事例:抱き枕型通信メディア「ハグビー」によるストレス軽減},
  booktitle = {第1回コンフォータブルブレイン研究会},
  year      = {2013},
  address   = {東京},
  month     = Oct,
  day       = {16},
  file      = {住岡英信2013b.pdf:pdf/住岡英信2013b.pdf:PDF},
}
Hiroshi Ishiguro, "Studies on very humanlike robots", In International Conference on Instrumentation, Control, Information Technology and System Integration, Aichi, September, 2013.
Abstract: Studies on interactive robots and androids are not just in robotics but they are also closely coupled in cognitive science and neuroscience. It is a research area for investigating fundamental issues of interface and media technology. This talk introduces the series of androids developed at both Osaka University and ATR and proposes a new information medium realized based on the studies.
BibTeX:
@Inproceedings{Ishiguro2013a,
  author    = {Hiroshi Ishiguro},
  title     = {Studies on very humanlike robots},
  booktitle = {International Conference on Instrumentation, Control, Information Technology and System Integration},
  year      = {2013},
  address   = {Aichi},
  month     = Sep,
  day       = {14},
  abstract  = {Studies on interactive robots and androids are not just in robotics but they are also closely coupled in cognitive science and neuroscience. It is a research area for investigating fundamental issues of interface and media technology. This talk introduces the series of androids developed at both Osaka University and ATR and proposes a new information medium realized based on the studies.},
}
石黒浩, "ヒューマノイド・アンドロイド研究と未来社会", 第2回KECテクノフォーラム, 大阪, September, 2013.
Abstract: 工場の外で人と関わりながら、様々なサービスを提供するロボットの実用は既にアメリカを中心に始まりつつある。講演者はこの人と関わるロボットの研究開発において世界を先導してきた。本講演では、人と関わるロボットの現状の研究開発を紹介するとともに、今後我々がどのような未来社会を迎えるかを議論する。
BibTeX:
@Inproceedings{石黒浩2013d,
  author       = {石黒浩},
  title        = {ヒューマノイド・アンドロイド研究と未来社会},
  booktitle    = {第2回KECテクノフォーラム},
  year         = {2013},
  address      = {大阪},
  month        = Sep,
  day          = {17},
  abstract     = {工場の外で人と関わりながら、様々なサービスを提供するロボットの実用は既にアメリカを中心に始まりつつある。講演者はこの人と関わるロボットの研究開発において世界を先導してきた。本講演では、人と関わるロボットの現状の研究開発を紹介するとともに、今後我々がどのような未来社会を迎えるかを議論する。},
  organization = {一般社団法人KEC関西電子工業振興センター研究専門委員会},
}
Hiroshi Ishiguro, "The Future Life Supported by Robotic Avatars", In Global Future 2045 International Congress, NY, USA, June, 2013.
Abstract: Robotic avatars or tele-operated robots are already available and working in practical situations, especially in USA. The robot society has started. In our future life we are going to use various tele-operated and autonomous robots. The speaker is taking the leadership for developing tele-operated robots and androids. The tele-operated android copy of himself is well-known in the world. By means of robots and androids, he has studied the cognitive and social aspects of human-robot interaction. Thus, he has contributed to establishing this research area. In this talk, he will introduce the series of robots and androids developed at the Intelligent Robot Laboratory of the Department of Systems Innovation of Osaka University and at the Hiroshi Ishiguro Laboratory of the Advanced Telecommunications Research Institute International (ATR).
BibTeX:
@Inproceedings{Ishiguro2013,
  author    = {Hiroshi Ishiguro},
  title     = {The Future Life Supported by Robotic Avatars},
  booktitle = {Global Future 2045 International Congress},
  year      = {2013},
  address   = {NY, USA},
  month     = Jun,
  abstract  = {Robotic avatars or tele-operated robots are already available and working in practical situations, especially in USA. The robot society has started. In our future life we are going to use various tele-operated and autonomous robots. The speaker is taking the leadership for developing tele-operated robots and androids. The tele-operated android copy of himself is well-known in the world. By means of robots and androids, he has studied the cognitive and social aspects of human-robot interaction. Thus, he has contributed to establishing this research area. In this talk, he will introduce the series of robots and androids developed at the Intelligent Robot Laboratory of the Department of Systems Innovation of Osaka University and at the Hiroshi Ishiguro Laboratory of the Advanced Telecommunications Research Institute International (ATR).},
}
石黒浩, "Feel the Telenoid - 人型メディアが人間とビジネスを変える", DMNワークショップ2013, 東京, May, 2013.
BibTeX:
@Inproceedings{石黒浩2013a,
  author    = {石黒浩},
  title     = {Feel the Telenoid - 人型メディアが人間とビジネスを変える},
  booktitle = {{DMN}ワークショップ2013},
  year      = {2013},
  address   = {東京},
  month     = May,
  day       = {30},
}
山崎 竜二, "遠隔操作型ロボットを介した コミュニケーションの可能性:石川県宮竹小学校の授業を通して考える", 第30回臨床哲学研究会, 大阪, October, 2012.
BibTeX:
@Inproceedings{山崎竜二2012,
  author    = {山崎 竜二},
  title     = {遠隔操作型ロボットを介した コミュニケーションの可能性:石川県宮竹小学校の授業を通して考える},
  booktitle = {第30回臨床哲学研究会},
  year      = {2012},
  address   = {大阪},
  month     = Oct,
  day       = {21},
  file      = {山崎竜二2012.pdf:pdf/山崎竜二2012.pdf:PDF},
}
山崎 竜二, "認知症高齢者の地域住居(aging in place)と情報機器", 情報処理学会関西支部大会, 大阪, September, 2011.
Abstract: 日本の高齢化は世界に例を見ない速度で進行し、空前の長寿社会が到来している。高齢化の進展に伴って認知症を抱える人も急増し、認知症高齢者が住み慣れた地域住居で暮らす(aging in place)仕組みをどのように構築できるのかということが切迫した課題となる。認知症高齢者の独居生活にも対応できるケアシステムが必要とされており、孤立の果てに死にまで至る問題を未然に防ぐことが重要性を増している。高齢者の孤立化の問題が深刻化する現況に対して認知症ケアのシステムを構築するため、情報通信技術を活用するアプローチがどのような問題に直面し、新たな役割を果たしうるのかを検討する。
BibTeX:
@Inproceedings{山崎竜二2011a,
  author          = {山崎 竜二},
  title           = {認知症高齢者の地域住居(aging in place)と情報機器},
  booktitle       = {情報処理学会関西支部大会},
  year            = {2011},
  address         = {大阪},
  month           = Sep,
  day             = {22},
  etitle          = {Aging in Place and Assistive Technology for the Elderly with Dementia},
  abstract        = {日本の高齢化は世界に例を見ない速度で進行し、空前の長寿社会が到来している。高齢化の進展に伴って認知症を抱える人も急増し、認知症高齢者が住み慣れた地域住居で暮らす(aging in place)仕組みをどのように構築できるのかということが切迫した課題となる。認知症高齢者の独居生活にも対応できるケアシステムが必要とされており、孤立の果てに死にまで至る問題を未然に防ぐことが重要性を増している。高齢者の孤立化の問題が深刻化する現況に対して認知症ケアのシステムを構築するため、情報通信技術を活用するアプローチがどのような問題に直面し、新たな役割を果たしうるのかを検討する。},
  file            = {山崎竜二2011a.pdf:山崎竜二2011a.pdf:PDF},
}
Mari Velonaki, David C. Rye, Steve Scheding, Karl F. MacDorman, Stephen J. Cowley, Hiroshi Ishiguro, Shuichi Nishio, "Panel Discussion: Engagement, Trust and Intimacy: Are these the Essential Elements for a Successful Interaction between a Human and a Robot?", In AAAI Spring Symposium on Emotion, Personality, and Social Behavior, California, USA, pp. 141-147, March, 2008. (2008.3.26)
BibTeX:
@Inproceedings{Nishio2008b,
  author    = {Mari Velonaki and David C. Rye and Steve Scheding and Karl F. MacDorman and Stephen J. Cowley and Hiroshi Ishiguro and Shuichi Nishio},
  title     = {Panel Discussion: Engagement, Trust and Intimacy: Are these the Essential Elements for a Successful Interaction between a Human and a Robot?},
  booktitle = {{AAAI} Spring Symposium on Emotion, Personality, and Social Behavior},
  year      = {2008},
  pages     = {141-147},
  address   = {California, USA},
  month     = Mar,
  url       = {http://www.aaai.org/Library/Symposia/Spring/2008/ss08-04-022.php},
  file      = {Rye_Panel.pdf:http\://psychometrixassociates.com/Rye_Panel.pdf:PDF},
  note      = {2008.3.26},
}
論文
Nobuo Yamato, Hidenobu Sumioka, Hiroshi Ishiguro, Masahiro Shiomi, Youji Kohda, "Technology Acceptance Models from Different Viewpoints of Caregiver, Receiver, and Care Facility Administrator: Lessons from Long-Term Implementation Using Baby-Like Interactive Robot for Nursing Home Residents with Dementia", Journal of Technology in Human Services, vol. 41, pp. 296-321, December, 2023.
Abstract: The introduction of companion robots into nursing homes has positive effects on older people with dementia (PwD) but increases the physical and psychological burden on the nursing staff, such as learning how to use them, fear of breakdowns, and concern about hygiene, and the concerns of the nursing home administrator, such as increased turnover and reduced quality of care due to this. To solve this problem, it is necessary to investigate the acceptability of robots from the viewpoints of all stakeholders: PwD as receivers, nursing staff as caregivers, and nursing home administrator as a care facility administrator. However, a hypothesis about how their acceptability is structured and interrelated is still missing. This study proposes three technology acceptance models (TAMs) from the perspectives of PwD, nursing staff, and nursing home administrator. The models are conceptualized based on the qualitative and quantitative analysis of the results of our two experiments involving a baby-like interactive robot to stimulate PwD in the same nursing home (one with low acceptance of all stakeholders and the other with their high acceptance) in addition to the comparison with other companion robots. Based on the proposed models, we discuss an integrated TAM for the acceptance of companion robots in long-term care facilities. We also discuss the possibility of applying our approach, which examines the perspectives of various stakeholders on technology acceptance, to other areas such as health care and education, followed by the ethical consideration of introducing a baby-like robot and some limitations.
BibTeX:
@Article{Yamato2023,
  author   = {Nobuo Yamato and Hidenobu Sumioka and Hiroshi Ishiguro and Masahiro Shiomi and Youji Kohda},
  journal  = {Journal of Technology in Human Services},
  title    = {Technology Acceptance Models from Different Viewpoints of Caregiver, Receiver, and Care Facility Administrator: Lessons from Long-Term Implementation Using Baby-Like Interactive Robot for Nursing Home Residents with Dementia},
  year     = {2023},
  abstract = {The introduction of companion robots into nursing homes has positive effects on older people with dementia (PwD) but increases the physical and psychological burden on the nursing staff, such as learning how to use them, fear of breakdowns, and concern about hygiene, and the concerns of the nursing home administrator, such as increased turnover and reduced quality of care due to this. To solve this problem, it is necessary to investigate the acceptability of robots from the viewpoints of all stakeholders: PwD as receivers, nursing staff as caregivers, and nursing home administrator as a care facility administrator. However, a hypothesis about how their acceptability is structured and interrelated is still missing. This study proposes three technology acceptance models (TAMs) from the perspectives of PwD, nursing staff, and nursing home administrator. The models are conceptualized based on the qualitative and quantitative analysis of the results of our two experiments involving a baby-like interactive robot to stimulate PwD in the same nursing home (one with low acceptance of all stakeholders and the other with their high acceptance) in addition to the comparison with other companion robots. Based on the proposed models, we discuss an integrated TAM for the acceptance of companion robots in long-term care facilities. We also discuss the possibility of applying our approach, which examines the perspectives of various stakeholders on technology acceptance, to other areas such as health care and education, followed by the ethical consideration of introducing a baby-like robot and some limitations.},
  day      = {24},
  doi      = {10.1080/15228835.2023.2292058},
  month    = dec,
  pages    = {296-321},
  url      = {https://www.tandfonline.com/doi/full/10.1080/15228835.2023.2292058},
  volume   = {41},
  issue    = {4},
  keywords = {TAM, BPSD, robot therapy, interactive doll therapy, dementia},
}
Satomi Doi, Aya Isumi, Yui Yamaoka, Shiori Noguchi, Juri Yamazaki, Kanako Ito, Masahiro Shiomi, Hidenobu Sumioka, Takeo Fujiwara, "The effect of breathing relaxation using a huggable human-shaped device on sleep quality among people with sleep problems: A randomized controlled trial", Sleep and Breathing, pp. 1-11, July, 2023.
Abstract: 研究に参加した外来患者67名(ハグビー介入群:29名、対照群:38名)が解析対象となりました。ピッツバーグ睡眠質問票という睡眠障害の程度を評価するツール(PSQI)を使って、介入前、介入開始から2週間後、介入開始から4週間後に睡眠の問題を評価しました。統計解析の結果、対照群と比べて、介入群のPSQI合計得点が低下していることが示されました。PSQIには複数の下位項目がありますが、なかでも主観的な睡眠の質に関する得点が低下していました。つまり、ハグビーを用いた呼吸法によって、睡眠の質が顕著に改善することが明らかになりました。また、睡眠改善の効果は、介入開始から2週間後にすでに現れていることも示されました。
BibTeX:
@Article{Doi2023,
  author   = {Satomi Doi and Aya Isumi and Yui Yamaoka and Shiori Noguchi and Juri Yamazaki and Kanako Ito and Masahiro Shiomi and Hidenobu Sumioka and Takeo Fujiwara},
  journal  = {Sleep and Breathing},
  title    = {The effect of breathing relaxation using a huggable human-shaped device on sleep quality among people with sleep problems: A randomized controlled trial},
  year     = {2023},
  abstract = {研究に参加した外来患者67名(ハグビー介入群:29名、対照群:38名)が解析対象となりました。ピッツバーグ睡眠質問票という睡眠障害の程度を評価するツール(PSQI)を使って、介入前、介入開始から2週間後、介入開始から4週間後に睡眠の問題を評価しました。統計解析の結果、対照群と比べて、介入群のPSQI合計得点が低下していることが示されました。PSQIには複数の下位項目がありますが、なかでも主観的な睡眠の質に関する得点が低下していました。つまり、ハグビーを用いた呼吸法によって、睡眠の質が顕著に改善することが明らかになりました。また、睡眠改善の効果は、介入開始から2週間後にすでに現れていることも示されました。},
  day      = {10},
  doi      = {10.1007/s11325-023-02858-5},
  month    = jul,
  pages    = {1-11},
  url      = {https://link.springer.com/article/10.1007/s11325-023-02858-5},
  keywords = {Sleep quality, Breathing relaxation, Huggable human-shaped device, Hugvie, Adverse childhood experience},
}
大和信夫, 住岡英信, 石黒浩, 神田陽治, 塩見昌裕, "認知症高齢者向け赤ちゃん型対話ロボット -介護施設での長期導入の実現-", 情報処理学会論文誌 デジタルプラクティス, vol. 3, no. 4, November, 2022.
Abstract: 認知症高齢者の暴言や暴行,徘徊といった問題行動,妄想や意欲低減といった心理的な症状(BPSD)は介護者の負担にとどまらず,社会全体の経済的な負担増として大きな社会課題となっている.BPSD への対処では,非薬理学的手法が推奨される.我々は,非薬理学療法の一つである人形療法を参考に,人形にインタラクティブな機能を持たせた赤ちゃん型対話ロボットを開発してきた.このロボットは,ミニマルデザイン発想で,構造・機能ともに非常にシンプルに設計されており,利用者・運用者にとって取り扱いが容易で,低廉さの実現を目指した.これまでに短期間の実証実験を行ってきたが,今回,介護施設での長期導入実験を実施し,認知症高齢者の生活の質の向上やBPSD 対策だけでなく,介護職員,介護施設への影響調査も行った.その結果,我々が開発したミニマルデザインの赤ちゃん型対話ロボットでも対話仕様と運用の工夫により長期運用の可能性が確認できた.また,ロボットとは直接には関わっていなかった認知症高齢者や介護職員にも受動的に影響を与えるパッシブソーシャルな状況も観測された.導入実験の過程でコロナ禍となり,通常以上に多忙で負担の多い介護現場での実験となったにも関わらず,介護職員のみで継続的に運用されたということの意義は大きい.最後に,この実験を通してわかったこと,今後の課題について述べる.
BibTeX:
@Article{大和信夫2022,
  author   = {大和信夫 and 住岡英信 and 石黒浩 and 神田陽治 and 塩見昌裕},
  journal  = {情報処理学会論文誌 デジタルプラクティス},
  title    = {認知症高齢者向け赤ちゃん型対話ロボット -介護施設での長期導入の実現-},
  year     = {2022},
  abstract = {認知症高齢者の暴言や暴行,徘徊といった問題行動,妄想や意欲低減といった心理的な症状(BPSD)は介護者の負担にとどまらず,社会全体の経済的な負担増として大きな社会課題となっている.BPSD への対処では,非薬理学的手法が推奨される.我々は,非薬理学療法の一つである人形療法を参考に,人形にインタラクティブな機能を持たせた赤ちゃん型対話ロボットを開発してきた.このロボットは,ミニマルデザイン発想で,構造・機能ともに非常にシンプルに設計されており,利用者・運用者にとって取り扱いが容易で,低廉さの実現を目指した.これまでに短期間の実証実験を行ってきたが,今回,介護施設での長期導入実験を実施し,認知症高齢者の生活の質の向上やBPSD 対策だけでなく,介護職員,介護施設への影響調査も行った.その結果,我々が開発したミニマルデザインの赤ちゃん型対話ロボットでも対話仕様と運用の工夫により長期運用の可能性が確認できた.また,ロボットとは直接には関わっていなかった認知症高齢者や介護職員にも受動的に影響を与えるパッシブソーシャルな状況も観測された.導入実験の過程でコロナ禍となり,通常以上に多忙で負担の多い介護現場での実験となったにも関わらず,介護職員のみで継続的に運用されたということの意義は大きい.最後に,この実験を通してわかったこと,今後の課題について述べる.},
  day      = {15},
  month    = nov,
  number   = {4},
  url      = {https://www.ipsj.or.jp/dp/index.html},
  volume   = {3},
  keywords = {認知症,BPSD,ロボット,介護},
}
Yoshiki Ohira, Takahisa Uchida, Takashi Minato, Hiroshi Ishiguro, "A Dialogue System That Models User's Opinion Based on Information Content", Multimodal Technologies and Interaction, vol. 6, Issue 10, no. 91, pp. 1-33, October, 2022.
Abstract: When designing rule-based dialogue systems, the need for the creation of an elaborate design by the designer is a challenge. One way to reduce the cost of creating content is to generate utterances from data collected in an objective and reproducible manner. This study focuses on rule-based dialogue systems using survey data and, more specifically, on opinion dialogue in which the system models the user. In the field of opinion dialogue, there has been little study on the topic of transition methods for modeling users while maintaining their motivation to engage in dialogue. To model them, we adopted information content. Our contribution includes the design of a rule-based dialogue system that does not require an elaborate design. We also reported an appropriate topic transition method based on information content. This is confirmed by the influence of the user’s personality characteristics. The content of the questions gives the user a sense of the system’s intention to understand them. We also reported the possibility that the system’s rational intention contributes to the user’s motivation to engage in dialogue with the system.
BibTeX:
@Article{Ohira2022,
  author   = {Yoshiki Ohira and Takahisa Uchida and Takashi Minato and Hiroshi Ishiguro},
  journal  = {Multimodal Technologies and Interaction},
  title    = {A Dialogue System That Models User's Opinion Based on Information Content},
  year     = {2022},
  abstract = {When designing rule-based dialogue systems, the need for the creation of an elaborate design by the designer is a challenge. One way to reduce the cost of creating content is to generate utterances from data collected in an objective and reproducible manner. This study focuses on rule-based dialogue systems using survey data and, more specifically, on opinion dialogue in which the system models the user. In the field of opinion dialogue, there has been little study on the topic of transition methods for modeling users while maintaining their motivation to engage in dialogue. To model them, we adopted information content. Our contribution includes the design of a rule-based dialogue system that does not require an elaborate design. We also reported an appropriate topic transition method based on information content. This is confirmed by the influence of the user’s personality characteristics. The content of the questions gives the user a sense of the system’s intention to understand them. We also reported the possibility that the system’s rational intention contributes to the user’s motivation to engage in dialogue with the system.},
  day      = {13},
  doi      = {10.3390/mti6100091},
  month    = oct,
  number   = {91},
  pages    = {1-33},
  url      = {https://www.mdpi.com/2414-4088/6/10/91},
  volume   = {6, Issue 10},
  keywords = {opinion model; user modeling; information content; dialogue strategy; dialogue system},
}
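The entry above selects topic transitions by information content. The following is a minimal illustrative sketch of that general idea, not the authors' implementation: the survey answer distributions, topic names, and function names are hypothetical, and the sketch simply ranks unasked topics by the expected information content (Shannon entropy, in bits) of the user's answer.
import math

# Hypothetical survey data: for each candidate topic, the probability of each
# answer option observed in a prior survey (illustrative values only).
survey_answer_dist = {
    "remote work":     {"agree": 0.55, "neutral": 0.30, "disagree": 0.15},
    "space tourism":   {"agree": 0.10, "neutral": 0.20, "disagree": 0.70},
    "school uniforms": {"agree": 0.34, "neutral": 0.33, "disagree": 0.33},
}

def expected_information_content(dist):
    """Expected IC of the answer to a topic question: H = -sum p * log2 p."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def next_topic(asked):
    """Pick the unasked topic whose answer is expected to tell the system the
    most about the user's opinions (highest expected information content)."""
    candidates = {t: d for t, d in survey_answer_dist.items() if t not in asked}
    return max(candidates, key=lambda t: expected_information_content(candidates[t]))

if __name__ == "__main__":
    asked = []
    for _ in range(len(survey_answer_dist)):
        topic = next_topic(asked)
        ic = expected_information_content(survey_answer_dist[topic])
        print(f"ask about: {topic} (expected IC = {ic:.2f} bits)")
        asked.append(topic)
Under these assumed distributions the near-uniform topic ("school uniforms") is asked first, since its answer carries the most information about the user.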
Takashi Minato, Kurima Sakai, Takahisa Uchida, Hiroshi Ishiguro, "A study of interactive robot architecture through the practical implementation of conversational android", Frontiers in Robotics and AI, vol. 9, no. 905030, pp. 1-25, October, 2022.
Abstract: This study shows an autonomous android robot that can have a natural daily dialogue with humans. The dialogue system for daily dialogue is different from a task-oriented dialogue system in that it is not given a clear purpose or the necessary information. That is, it needs to generate an utterance in a situation where there is no clear request from humans. Therefore, to continue a dialogue with a consistent content, it is necessary to essentially change the design policy of dialogue management compared with the existing dialogue system. The purpose of our study is to constructively find out the dialogue system architecture for realizing daily dialogue through implementing an autonomous dialogue robot capable of daily natural dialogue. We defined the android’s desire necessary for daily dialogue and the dialogue management system in which the android changes its internal (mental) states in accordance with the desire and partner’s behavior and chooses a dialogue topic suitable for the current situation. The developed android could continue daily dialogue for about 10 min in the scene where the robot and partner met for the first time in the experiment. Moreover, a multimodal Turing test has shown that half of the participants had felt that the android was remotely controlled to some degree, that is, the android’s behavior was humanlike. This result suggests that the system construction method assumed in this study is an effective approach to realize daily dialogue, and the study discusses the system architecture for daily dialogue.
BibTeX:
@Article{Minato2022,
  author   = {Takashi Minato and Kurima Sakai and Takahisa Uchida and Hiroshi Ishiguro},
  journal  = {Frontiers in Robotics and AI},
  title    = {A study of interactive robot architecture through the practical implementation of conversational android},
  year     = {2022},
  abstract = {This study shows an autonomous android robot that can have a natural daily dialogue with humans. The dialogue system for daily dialogue is different from a task-oriented dialogue system in that it is not given a clear purpose or the necessary information. That is, it needs to generate an utterance in a situation where there is no clear request from humans. Therefore, to continue a dialogue with a consistent content, it is necessary to essentially change the design policy of dialogue management compared with the existing dialogue system. The purpose of our study is to constructively find out the dialogue system architecture for realizing daily dialogue through implementing an autonomous dialogue robot capable of daily natural dialogue. We defined the android’s desire necessary for daily dialogue and the dialogue management system in which the android changes its internal (mental) states in accordance with the desire and partner’s behavior and chooses a dialogue topic suitable for the current situation. The developed android could continue daily dialogue for about 10 min in the scene where the robot and partner met for the first time in the experiment. Moreover, a multimodal Turing test has shown that half of the participants had felt that the android was remotely controlled to some degree, that is, the android’s behavior was humanlike. This result suggests that the system construction method assumed in this study is an effective approach to realize daily dialogue, and the study discusses the system architecture for daily dialogue.},
  day      = {11},
  doi      = {10.3389/frobt.2022.905030},
  month    = oct,
  number   = {905030},
  pages    = {1-25},
  url      = {https://www.frontiersin.org/articles/10.3389/frobt.2022.905030/full},
  volume   = {9},
  keywords = {conversational robot, android, daily dialogue, multimodal turing test, architecture},
}
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "An Improved CycleGAN-based Emotional Speech Conversion Model by Augmenting Receptive Field with Transformer", Speech Communication, vol. 144, pp. 110-121, September, 2022.
Abstract: Emotional voice conversion (EVC) is a task that converts the spectrogram and prosody of speech to a target emotion. Recently, some researchers leverage deep learning methods to improve the performance of EVC, such as deep neural network (DNN), sequence-to-sequence model (seq2seq), long-short-term memory network (LSTM), convolutional neural network (CNN), as well as their combinations with the attention mechanism. However, their methods always suffer from some instability problems such as mispronunciations and skipped phonemes, because the model fails to capture temporal intra-relationships among a wide range of frames, which results in unnatural speech and discontinuous emotional expression. To enhance the ability to capture intra-relations among frames by augmenting the receptive field of models, in this study, we explored the power of the transformer. Specifically, we proposed a CycleGAN-based model with the transformer and investigated its ability in the EVC task. In the training procedure, we adopted curriculum learning to gradually increase the frame length so that the model can see from the short segment throughout the entire speech. The proposed method was evaluated on a Japanese emotional speech dataset and compared to widely used EVC baselines (ACVAE, CycleGAN) with objective and subjective evaluations. The results show that our proposed model is able to convert emotion with higher emotional strength, quality, and naturalness.
BibTeX:
@Article{Fu2022a,
  author   = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  journal  = {Speech Communication},
  title    = {An Improved CycleGAN-based Emotional Speech Conversion Model by Augmenting Receptive Field with Transformer},
  year     = {2022},
  abstract = {Emotional voice conversion (EVC) is a task that converts the spectrogram and prosody of speech to a target emotion. Recently, some researchers leverage deep learning methods to improve the performance of EVC, such as deep neural network (DNN), sequence-to-sequence model (seq2seq), long-short-term memory network (LSTM), convolutional neural network (CNN), as well as their combinations with the attention mechanism. However, their methods always suffer from some instability problems such as mispronunciations and skipped phonemes, because the model fails to capture temporal intra-relationships among a wide range of frames, which results in unnatural speech and discontinuous emotional expression. To enhance the ability to capture intra-relations among frames by augmenting the receptive field of models, in this study, we explored the power of the transformer. Specifically, we proposed a CycleGAN-based model with the transformer and investigated its ability in the EVC task. In the training procedure, we adopted curriculum learning to gradually increase the frame length so that the model can see from the short segment throughout the entire speech. The proposed method was evaluated on a Japanese emotional speech dataset and compared to widely used EVC baselines (ACVAE, CycleGAN) with objective and subjective evaluations. The results show that our proposed model is able to convert emotion with higher emotional strength, quality, and naturalness.},
  day      = {20},
  doi      = {10.1016/j.specom.2022.09.002},
  month    = sep,
  pages    = {110-121},
  url      = {https://www.sciencedirect.com/science/article/abs/pii/S0167639322001224?via=ihub},
  volume   = {144},
  keywords = {Emotional voice conversion, CycleGAN, Transformer, Temporal dependency},
}
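The abstract above mentions curriculum learning that gradually increases the training frame length. The sketch below illustrates only that schedule idea and is not the authors' code: the schedule parameters, the dummy mel spectrogram, and the function names are assumptions, and the actual CycleGAN/Transformer training step is left as a placeholder comment.
import numpy as np

def curriculum_segment_len(epoch, start=32, full=512, warmup_epochs=40):
    """Linearly grow the training segment length (in frames) from `start`
    to the full utterance length over `warmup_epochs` epochs."""
    frac = min(epoch / warmup_epochs, 1.0)
    return int(start + frac * (full - start))

def sample_segment(spectrogram, seg_len, rng):
    """Randomly crop a (freq, seg_len) segment; zero-pad if the utterance is shorter."""
    n_frames = spectrogram.shape[1]
    if n_frames <= seg_len:
        return np.pad(spectrogram, ((0, 0), (0, seg_len - n_frames)))
    t0 = rng.integers(0, n_frames - seg_len)
    return spectrogram[:, t0:t0 + seg_len]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    utterance = rng.standard_normal((80, 512))  # dummy 80-bin mel spectrogram
    for epoch in (0, 10, 20, 40, 60):
        seg_len = curriculum_segment_len(epoch)
        batch = sample_segment(utterance, seg_len, rng)
        print(f"epoch {epoch:3d}: segment of {seg_len} frames, shape {batch.shape}")
        # train_step(model, batch)  # hypothetical: update generators/discriminators here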
Hidenobu Sumioka, Jim Torresen, Masahiro Shiomi, Liang-Kung Chen, Atsushi Nakazawa, "Editorial: Interaction in robot-assistive elderly care", Frontiers in Robotics and AI, pp. 1-3, September, 2022.
Abstract: This Research Topic focuses on scientific and technical advances in methods, models, techniques, algorithms, and interaction design developed to understand and facilitate verbal and non-verbal interaction between older people and caregivers/artificial systems. In this collection containing seven peer-reviewed articles, the studies can be divided into two categories.
BibTeX:
@Article{Sumioka2022,
  author    = {Hidenobu Sumioka and Jim Torresen and Masahiro Shiomi and Liang-Kung Chen and Atsushi Nakazawa},
  journal   = {Frontiers in Robotics and AI},
  title     = {Editorial: Interaction in robot-assistive elderly care},
  year      = {2022},
  abstract  = {This Research Topic focuses on scientific and technical advances in methods, models, techniques, algorithms, and interaction design developed to understand and facilitate verbal and non-verbal interaction between older people and caregivers/artificial systems. In this collection containing seven peer-reviewed articles, the studies can be divided into two categories.},
  day       = {29},
  doi       = {10.3389/frobt.2022.1020103},
  month     = sep,
  pages     = {1-3},
  url       = {https://www.frontiersin.org/articles/10.3389/frobt.2022.1020103/full},
}
内田貴久, 船山智, 境くりま, 港隆史, 石黒浩, "他者視点取得の誘発による人間同士の関係構築促進:3者対話におけるロボットの対話戦略", ヒューマンインタフェース学会誌, vol. 24, no. 3, pp. 167-180, August, 2022.
Abstract: The purpose of this study is to promote relationship building between the users who meet for the first time in a three members’ dialogue: one robot and two users. It is often difficult for people who have never met each other before to talk with each other because of psychological barriers caused by mutual unfamiliarity. In this study, we developed a dialogue android that promotes relationship building between the users without speaking directly to each other. It induces the user to take the other’s perspective by asking the user to speak for the other’s opinion. The experimental results confirmed that the proposed method promotes the relationship building between them, the sense of dialogue. It also improved the impression of the android and the dialogue with it, and the impression on the dialogue between the three persons as a whole. These results suggest that the proposed method is an effective way to promote relationship building between first-time people when androids engage in three persons’ dialogue.
BibTeX:
@Article{内田貴久2022,
  author   = {内田貴久 and 船山智 and 境くりま and 港隆史 and 石黒浩},
  journal  = {ヒューマンインタフェース学会誌},
  title    = {他者視点取得の誘発による人間同士の関係構築促進:3者対話におけるロボットの対話戦略},
  year     = {2022},
  abstract = {The purpose of this study is to promote relationship building between the users who meet for the first time in a three members’ dialogue: one robot and two users. It is often difficult for people who have never met each other before to talk with each other because of psychological barriers caused by mutual unfamiliarity. In this study, we developed a dialogue android that promotes relationship building between the users without speaking directly to each other. It induces the user to take the other’s perspective by asking the user to speak for the other’s opinion. The experimental results confirmed that the proposed method promotes the relationship building between them, the sense of dialogue. It also improved the impression of the android and the dialogue with it, and the impression on the dialogue between the three persons as a whole. These results suggest that the proposed method is an effective way to promote relationship building between first-time people when androids engage in three persons’ dialogue.},
  day      = {25},
  doi      = {10.11184/his.24.3_167},
  etitle   = {Promotion of Relationship Building between Users by Inducing Perspective-Taking:A Dialogue Strategy for Robots in Three Members' Dialogue},
  month    = aug,
  number   = {3},
  pages    = {167-180},
  url      = {https://www.jstage.jst.go.jp/article/his/24/3/24_167/_article/-char/ja},
  volume   = {24},
  keywords = {perspective-taking, three members’ dialogue, dialogue strategy, android, dialogue robot},
}
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "An Adversarial Training Based Speech Emotion Classifier with Isolated Gaussian Regularization", IEEE Transaction of Affective Computing, vol. 14, no. 8, April, 2022.
Abstract: Speaker individual bias may cause emotion-related features to form clusters with irregular borders (non-Gaussian distributions), making the model be sensitive to local irregularities of pattern distributions and resulting in model over-fit of the in-domain dataset. This problem may cause a decrease in the validation scores in cross-domain (i.e. speaker-independent, channel-variant) implementation. To mitigate this problem, in this paper, we propose an adversarial training-based classifier, which is supposed to regularize the distribution of latent representations and smooth the boundaries among different categories. In the regularization phase, we mapped the representations into isolated Gaussian distributions in an unsupervised manner to improve the discriminative ability of latent representations. Moreover, we adopted multi-instance learning by dividing speech into a bag of segments to capture the most salient part for presenting an emotion. The model was evaluated on the IEMOCAP dataset and MELD data with in-corpus speakerindependent sittings. Besides, we investigated the accuracy with cross-corpus speaker-independent sittings to simulate the channelvariant. In the experiment, we compared the proposed model not only with baseline models but also with different configurations of our model. The results show that the proposed model is competitive with the baseline of both in-corpus validation and cross-corpus validation.
BibTeX:
@Article{Fu2022,
  author   = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  journal  = {IEEE Transactions on Affective Computing},
  title    = {An Adversarial Training Based Speech Emotion Classifier with Isolated Gaussian Regularization},
  year     = {2022},
  abstract = {Speaker individual bias may cause emotion-related features to form clusters with irregular borders (non-Gaussian distributions), making the model sensitive to local irregularities of pattern distributions and resulting in model over-fit of the in-domain dataset. This problem may cause a decrease in the validation scores in cross-domain (i.e. speaker-independent, channel-variant) implementation. To mitigate this problem, in this paper, we propose an adversarial training-based classifier, which is supposed to regularize the distribution of latent representations and smooth the boundaries among different categories. In the regularization phase, we mapped the representations into isolated Gaussian distributions in an unsupervised manner to improve the discriminative ability of latent representations. Moreover, we adopted multi-instance learning by dividing speech into a bag of segments to capture the most salient part for presenting an emotion. The model was evaluated on the IEMOCAP dataset and MELD data with in-corpus speaker-independent settings. Besides, we investigated the accuracy with cross-corpus speaker-independent settings to simulate the channel variant. In the experiment, we compared the proposed model not only with baseline models but also with different configurations of our model. The results show that the proposed model is competitive with the baseline of both in-corpus validation and cross-corpus validation.},
  day      = {21},
  doi      = {10.1109/TAFFC.2022.3169091},
  month    = apr,
  number   = {8},
  url      = {https://ieeexplore.ieee.org/document/9761736},
  volume   = {14},
  keywords = {Speech emotion recognition, Adversarial training, Regularization},
}
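To make the phrase "isolated Gaussian regularization" in the abstract above concrete, here is a minimal, simplified sketch, not the authors' method: the paper maps representations to isolated Gaussians in an unsupervised manner, whereas this sketch conditions on emotion labels for brevity, and the class count, circular placement of means, and all names are assumptions. Each class is assigned a well-separated Gaussian target, and latent codes are penalized by their (constant-free) negative log-likelihood under their class's Gaussian.
import numpy as np

def class_means(n_classes, dim, radius=5.0):
    """Place one isotropic Gaussian per emotion class, with means spread on a
    circle in the first two latent dimensions so the clusters stay isolated."""
    means = np.zeros((n_classes, dim))
    angles = 2 * np.pi * np.arange(n_classes) / n_classes
    means[:, 0] = radius * np.cos(angles)
    means[:, 1] = radius * np.sin(angles)
    return means

def gaussian_regularization_loss(z, labels, means, sigma=1.0):
    """Mean negative log-likelihood (up to a constant) of latent codes z under
    the Gaussian assigned to their class; minimizing it pulls each class's
    latents toward its own isolated Gaussian."""
    diff = z - means[labels]                       # (batch, dim)
    return 0.5 * np.mean(np.sum(diff ** 2, axis=1)) / (sigma ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_classes, dim = 4, 16                         # e.g. 4 emotion categories
    means = class_means(n_classes, dim)
    z = rng.standard_normal((32, dim))             # dummy encoder outputs
    labels = rng.integers(0, n_classes, size=32)
    print("regularization loss:", gaussian_regularization_loss(z, labels, means))
In a full system this term would be added to the classification and adversarial losses with a weighting coefficient.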
Takuto Akiyoshi, Junya Nakanishi, Hiroshi Ishiguro, Hidenobu Sumioka, Masahiro Shiomi, "A Robot that Encourages Self-Disclosure to Reduce Anger Mood", IEEE Robotics and Automation Letters (RA-L), vol. 6, Issue 4, pp. 7925-7932, August, 2021.
Abstract: One essential role of social robots is supporting human mental health by interaction with people. In this study, we focused on making people’s moods more positive through conversations about their problems as our first step to achieving a robot that cares about mental health. We employed the column method, a typical stress-coping technique in Japan, and designed conversational contents for a robot. We implemented conversational functions based on the column method for a social robot as well as a self-schema estimation function using conversational data, and proposed conversational strategies to support awareness of their self-schemas and automatic thoughts, which are related to mental health support. We experimentally evaluated our system’s effectiveness and found that participants who used it with our proposed conversational strategies made more self-disclosures and experienced less anger than those who did not use our proposed conversational strategies. Unfortunately, the strategies did not significantly increase the performance of the self-schema estimation function.
BibTeX:
@Article{Akiyoshi2021,
  author   = {Takuto Akiyoshi and Junya Nakanishi and Hiroshi Ishiguro and Hidenobu Sumioka and Masahiro Shiomi},
  journal  = {IEEE Robotics and Automation Letters (RA-L)},
  title    = {A Robot that Encourages Self-Disclosure to Reduce Anger Mood},
  year     = {2021},
  abstract = {One essential role of social robots is supporting human mental health by interaction with people. In this study, we focused on making people’s moods more positive through conversations about their problems as our first step to achieving a robot that cares about mental health. We employed the column method, a typical stress-coping technique in Japan, and designed conversational contents for a robot. We implemented conversational functions based on the column method for a social robot as well as a self-schema estimation function using conversational data, and proposed conversational strategies to support awareness of their self-schemas and automatic thoughts, which are related to mental health support. We experimentally evaluated our system’s effectiveness and found that participants who used it with our proposed conversational strategies made more self-disclosures and experienced less anger than those who did not use our proposed conversational strategies. Unfortunately, the strategies did not significantly increase the performance of the self-schema estimation function.},
  day      = {6},
  doi      = {10.1109/LRA.2021.3102326},
  month    = aug,
  pages    = {7925-7932},
  url      = {https://ieeexplore.ieee.org/document/9508832},
  volume   = {6, Issue 4},
  comment  = {(The contents of this paper were also selected by IROS2021 Program Committee for presentation at the Conference)},
  keywords = {Human-robot interaction, Stress coping},
}
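For readers unfamiliar with the column method cited in the abstract above, the sketch below shows a generic column-method (thought-record) conversation flow. It is a hypothetical illustration, not the authors' conversational content or robot system; the prompts, keys, and the `ask` callback are assumptions.
COLUMN_PROMPTS = [
    ("situation", "What happened? Briefly describe the situation."),
    ("mood", "How did you feel, and how strong was the feeling (0-100)?"),
    ("automatic_thought", "What went through your mind at that moment?"),
    ("evidence_for", "What facts support that thought?"),
    ("evidence_against", "What facts do not fit that thought?"),
    ("balanced_thought", "Is there another way to look at the situation?"),
    ("mood_after", "How strong is the feeling now (0-100)?"),
]

def column_method_session(ask):
    """Run one column-method session; `ask` is any function that poses a
    question (e.g. robot speech plus speech recognition) and returns the
    user's answer. Each answer is a self-disclosure the robot can reflect back."""
    return {key: ask(prompt) for key, prompt in COLUMN_PROMPTS}

if __name__ == "__main__":
    canned = iter([
        "I missed my bus to work.", "Anxious, 70.", "I always mess things up.",
        "I was late once this month.", "I am usually on time.",
        "Being late once does not mean I always fail.", "40.",
    ])
    print(column_method_session(lambda prompt: next(canned)))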
Hidenobu Sumioka, Hirokazu Kumazaki, Taro Muramatsu, Yuichiro Yoshikawa, Hiroshi Ishiguro, Haruhiro Higashida, Teruko Yuhi, Masaru Mimura, "A huggable device can reduce the stress of calling an unfamiliar person on the phone for individuals with ASD", PLOS ONE, vol. 16, no. 7, pp. 1-14, July, 2021.
Abstract: Individuals with autism spectrum disorders (ASD) are often not comfortable with calling unfamiliar people on a mobile phone. “Hugvie”, a pillow with a human-like shape, was designed to provide users with the tactile sensation of hugging a person during phone conversations to improve their positive feelings (e.g., comfort and trust) toward phone conversation partners. The primary aim of this study is to examine whether physical contact by hugging a Hugvie can reduce the stress of calling an unfamiliar person on the phone. In this study, 24 individuals with ASD participated. After a phone conversation using only a mobile phone or a mobile phone plus Hugvie, all participants completed questionnaires on their self-confidence in talking on the phone. In addition, participants provided salivary cortisol samples four times each day. Our analysis showed a significant effect of the communication medium, indicating that individuals with ASD who talked on the phone with an unfamiliar person while hugging a Hugvie had stronger self-confidence and lower stress than those who did not use Hugvie. Given the results of this study, we recommend that huggable devices be used as adjunctive tools to support individuals with ASD when they call unfamiliar people on mobile phones.
BibTeX:
@Article{Sumioka2021d,
  author   = {Hidenobu Sumioka and Hirokazu Kumazaki and Taro Muramatsu and Yuichiro Yoshikawa and Hiroshi Ishiguro and Haruhiro Higashida and Teruko Yuhi and Masaru Mimura},
  journal  = {PLOS ONE},
  title    = {A huggable device can reduce the stress of calling an unfamiliar person on the phone for individuals with ASD},
  year     = {2021},
  abstract = {Individuals with autism spectrum disorders (ASD) are often not comfortable with calling unfamiliar people on a mobile phone. “Hugvie”, a pillow with a human-like shape, was designed to provide users with the tactile sensation of hugging a person during phone conversations to improve their positive feelings (e.g., comfort and trust) toward phone conversation partners. The primary aim of this study is to examine whether physical contact by hugging a Hugvie can reduce the stress of calling an unfamiliar person on the phone. In this study, 24 individuals with ASD participated. After a phone conversation using only a mobile phone or a mobile phone plus Hugvie, all participants completed questionnaires on their self-confidence in talking on the phone. In addition, participants provided salivary cortisol samples four times each day. Our analysis showed a significant effect of the communication medium, indicating that individuals with ASD who talked on the phone with an unfamiliar person while hugging a Hugvie had stronger self-confidence and lower stress than those who did not use Hugvie. Given the results of this study, we recommend that huggable devices be used as adjunctive tools to support individuals with ASD when they call unfamiliar people on mobile phones.},
  day      = {23},
  doi      = {10.1371/journal.pone.0254675},
  month    = jul,
  number   = {7},
  pages    = {1-14},
  url      = {https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0254675},
  volume   = {16},
  keywords = {autism spectrum disorders, tactile, huggable device, self-confidence, cortisol},
}
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Yuichiro Yoshikawa, Takamasa Iio, Hiroshi Ishiguro, "Using an Android Robot to Improve Social Connectedness by Sharing Recent Experiences of Group Members in Human-Robot Conversations", IEEE Robotics and Automation Letters (RA-L), vol. 6, Issue 4, pp. 6670-6677, July, 2021.
Abstract: Social connectedness is vital for developing group cohesion and strengthening belongingness. However, with the accelerating pace of modern life, people have fewer opportunities to participate in group-building activities. Furthermore, owing to the teleworking and quarantine requirements necessitated by the Covid-19 pandemic, the social connectedness of group members may become weak. To address this issue, in this study, we used an android robot to conduct daily conversations, and as an intermediary to increase intra-group connectedness. Specifically, we constructed an android robot system for collecting and sharing recent member-related experiences. The system has a chatbot function based on BERT and a memory function with a neural-network-based dialog action analysis model. We conducted a 3-day human-robot conversation experiment to verify the effectiveness of the proposed system. The results of a questionnaire-based evaluation and empirical analysis demonstrate that the proposed system can increase the familiarity and closeness of group members. This suggests that the proposed method is useful for enhancing social connectedness. Moreover, it can improve the closeness of the user-robot relation, as well as the performance of robots in conducting conversations with people.
BibTeX:
@Article{Fu2021b,
  author   = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Yuichiro Yoshikawa and Takamasa Iio and Hiroshi Ishiguro},
  title    = {Using an Android Robot to Improve Social Connectedness by Sharing Recent Experiences of Group Members in Human-Robot Conversations},
  journal  = {IEEE Robotics and Automation Letters (RA-L)},
  year     = {2021},
  volume   = {6, Issue 4},
  pages    = {6670-6677},
  month    = jul,
  abstract = {Social connectedness is vital for developing group cohesion and strengthening belongingness. However, with the accelerating pace of modern life, people have fewer opportunities to participate in group-building activities. Furthermore, owing to the teleworking and quarantine requirements necessitated by the Covid-19 pandemic, the social connectedness of group members may become weak. To address this issue, in this study, we used an android robot to conduct daily conversations, and as an intermediary to increase intra-group connectedness. Specifically, we constructed an android robot system for collecting and sharing recent member-related experiences. The system has a chatbot function based on BERT and a memory function with a neural-network-based dialog action analysis model. We conducted a 3-day human-robot conversation experiment to verify the effectiveness of the proposed system. The results of a questionnaire-based evaluation and empirical analysis demonstrate that the proposed system can increase the familiarity and closeness of group members. This suggests that the proposed method is useful for enhancing social connectedness. Moreover, it can improve the closeness of the user-robot relation, as well as the performance of robots in conducting conversations with people.},
  day      = {7},
  url      = {https://ieeexplore.ieee.org/document/9477165},
  doi      = {10.1109/LRA.2021.3094779},
  comment  = {(The contents of this paper were also selected by IROS2021 Program Committee for presentation at the Conference)},
  keywords = {Robots, Databases, Chatbot, COVID-19, Training, Teleworking, Robot sensing system},
}
Chinenye Augustine Ajibo, Carlos Toshinori Ishi, Hiroshi Ishiguro, "Advocating Attitudinal Change Through Android Robot's Intention-Based Expressive Behaviors: Toward WHO COVID-19 Guidelines Adherence", IEEE Robotics and Automation Letters (RA-L), vol. 6, Issue 4, pp. 6521-6528, July, 2021.
Abstract: Motivated by the fact that some human emotional expressions promote affiliating functions such as signaling, social change, and support, all of which have been established as providing social benefits, we investigated how these behaviors can be extended to Human-Robot Interaction (HRI) scenarios. We explored how to furnish an android robot with socially motivated expressions geared toward eliciting adherence to COVID-19 guidelines. We analyzed how different behaviors associated with social expressions in such situations occur in Human-Human Interaction (HHI) and designed a scenario where a robot utilizes context-inspired behaviors (polite, gentle, displeased, and angry) to enforce social compliance. We then implemented these behaviors in an android robot and subjectively evaluated how effectively it expressed them and how they were perceived in terms of their appropriateness, effectiveness, and tendency to enforce social compliance to COVID-19 guidelines. We also considered how the subjects' sense of values regarding compliance awareness would affect the robot's behavior impressions. Our evaluation results indicated that participants generally preferred polite behaviors by a robot, although participants with different levels of compliance awareness manifested different trends toward appropriateness and effectiveness for social compliance enforcement through negative expressions by the robot.
BibTeX:
@Article{Ajibo2021,
  author   = {Chinenye Augustine Ajibo and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  journal  = {IEEE Robotics and Automation Letters (RA-L)},
  title    = {Advocating Attitudinal Change Through Android Robot's Intention-Based Expressive Behaviors: Toward WHO COVID-19 Guidelines Adherence},
  year     = {2021},
  abstract = {Motivated by the fact that some human emotional expressions promote affiliating functions such as signaling, social change, and support, all of which have been established as providing social benefits, we investigated how these behaviors can be extended to Human-Robot Interaction (HRI) scenarios. We explored how to furnish an android robot with socially motivated expressions geared toward eliciting adherence to COVID-19 guidelines. We analyzed how different behaviors associated with social expressions in such situations occur in Human-Human Interaction (HHI) and designed a scenario where a robot utilizes context-inspired behaviors (polite, gentle, displeased, and angry) to enforce social compliance. We then implemented these behaviors in an android robot and subjectively evaluated how effectively it expressed them and how they were perceived in terms of their appropriateness, effectiveness, and tendency to enforce social compliance to COVID-19 guidelines. We also considered how the subjects' sense of values regarding compliance awareness would affect the robot's behavior impressions. Our evaluation results indicated that participants generally preferred polite behaviors by a robot, although participants with different levels of compliance awareness manifested different trends toward appropriateness and effectiveness for social compliance enforcement through negative expressions by the robot.},
  day      = {7},
  doi      = {10.1109/LRA.2021.3094783},
  month    = jul,
  number   = {4},
  pages    = {6521-6528},
  url      = {https://ieeexplore.ieee.org/document/9476976},
  volume   = {6},
  comment  = {(The contents of this paper were also selected by IROS2021 Program Committee for presentation at the Conference)},
  keywords = {Guidelines, COVID-19, Robot sensing system, Pandemics, Task analysis, Human-robot interaction, Faces},
}
Hidenobu Sumioka, Nobuo Yamato, Masahiro Shiomi, Hiroshi Ishiguro, "A Minimal Design of a Human Infant Presence: A Case Study Toward Interactive Doll Therapy for Older Adults With Dementia", Frontiers in Robotics and AI, vol. 8, no. 633378, pp. 1-12, June, 2021.
Abstract: We introduce a minimal design approach to manufacture an infant-like robot for interactive doll therapy that provides emotional interactions for older people with dementia. Our approach stimulates their imaginations and then facilitates positive engagement with the robot by just expressing the most basic elements of humanlike features. Based on this approach, we developed HIRO, a baby-sized robot with an abstract body representation and no facial features. The recorded voice of a real human infant emitted by robots enhances the robot’s human-likeness and facilitates positive interaction between older adults and the robot. Although we did not find any significant difference between HIRO and an infant-like robot with a smiling face, a field study showed that HIRO was accepted by older adults with dementia and facilitated positive interaction by stimulating their imagination. We also discuss the importance of a minimal design approach in elderly care during post–COVID-19 world.
BibTeX:
@Article{Sumioka2021a,
  author   = {Hidenobu Sumioka and Nobuo Yamato and Masahiro Shiomi and Hiroshi Ishiguro},
  title    = {A Minimal Design of a Human Infant Presence: A Case Study Toward Interactive Doll Therapy for Older Adults With Dementia},
  journal  = {Frontiers in Robotics and AI},
  year     = {2021},
  volume   = {8},
  number   = {633378},
  pages    = {1-12},
  month    = jun,
  abstract = {We introduce a minimal design approach to manufacture an infant-like robot for interactive doll therapy that provides emotional interactions for older people with dementia. Our approach stimulates their imaginations and then facilitates positive engagement with the robot by just expressing the most basic elements of humanlike features. Based on this approach, we developed HIRO, a baby-sized robot with an abstract body representation and no facial features. The recorded voice of a real human infant emitted by robots enhances the robot’s human-likeness and facilitates positive interaction between older adults and the robot. Although we did not find any significant difference between HIRO and an infant-like robot with a smiling face, a field study showed that HIRO was accepted by older adults with dementia and facilitated positive interaction by stimulating their imagination. We also discuss the importance of a minimal design approach in elderly care during post–COVID-19 world.},
  day      = {17},
  url      = {https://www.frontiersin.org/articles/10.3389/frobt.2021.633378/full},
  doi      = {10.3389/frobt.2021.633378},
}
Hidenobu Sumioka, Masahiro Shiomi, Miwako Honda, Atsushi Nakazawa, "Technical Challenges for Smooth Interaction With Seniors With Dementia: Lessons From Humanitude™", Frontiers in Robotics and AI, vol. 8, no. 650906, pp. 1-14, June, 2021.
Abstract: Due to cognitive and socio-emotional decline and mental diseases, senior citizens, especially people with dementia (PwD), struggle to interact smoothly with their caregivers. Therefore, various care techniques have been proposed to develop good relationships with seniors. Among them, Humanitude is one promising technique that provides caregivers with useful interaction skills to improve their relationships with PwD, from four perspectives: face-to-face interaction, verbal communication, touch interaction, and helping care receivers stand up (physical interaction). Regardless of advances in elderly care techniques, since current social robots interact with seniors in the same manner as they do with younger adults, they lack several important functions. For example, Humanitude emphasizes the importance of interaction at a relatively intimate distance to facilitate communication with seniors. Unfortunately, few studies have developed an interaction model for clinical care communication. In this paper, we discuss the current challenges to develop a social robot that can smoothly interact with PwDs and overview the interaction skills used in Humanitude as well as the existing technologies.
BibTeX:
@Article{Sumioka2021,
  author   = {Hidenobu Sumioka and Masahiro Shiomi and Miwako Honda and Atsushi Nakazawa},
  journal  = {Frontiers in Robotics and AI},
  title    = {Technical Challenges for Smooth Interaction With Seniors With Dementia: Lessons From Humanitude™},
  year     = {2021},
  abstract = {Due to cognitive and socio-emotional decline and mental diseases, senior citizens, especially people with dementia (PwD), struggle to interact smoothly with their caregivers. Therefore, various care techniques have been proposed to develop good relationships with seniors. Among them, Humanitude is one promising technique that provides caregivers with useful interaction skills to improve their relationships with PwD, from four perspectives: face-to-face interaction, verbal communication, touch interaction, and helping care receivers stand up (physical interaction). Regardless of advances in elderly care techniques, since current social robots interact with seniors in the same manner as they do with younger adults, they lack several important functions. For example, Humanitude emphasizes the importance of interaction at a relatively intimate distance to facilitate communication with seniors. Unfortunately, few studies have developed an interaction model for clinical care communication. In this paper, we discuss the current challenges to develop a social robot that can smoothly interact with PwDs and overview the interaction skills used in Humanitude as well as the existing technologies.},
  day      = {2},
  doi      = {10.3389/frobt.2021.650906},
  month    = jun,
  number   = {650906},
  pages    = {1-14},
  url      = {https://www.frontiersin.org/articles/10.3389/frobt.2021.650906/full},
  volume   = {8},
  keywords = {Humanitude, dementia care, social robot, human-robot interaction, skill evaluation, dementia},
}
李歆玥, 石井カルロス寿憲, 林良子, "日本語と中国語感情音声に関する声質と音響の複合的分析 -日本語母語話者と中国語を母語とする日本語学習者による発話を対象に-", 日本音声学会 学会誌「音声研究」, vol. 25, pp. 9-22, April, 2021.
Abstract: 本研究では,日本語母語話者による日本語発話および中国語を母語とする日本語学習者による日本語発話と中国語発話における8つの感情表現(「喜び」「激しい怒り」「押し殺した怒り」「悲しみ」「驚き」「恐れ」「嫌悪」「中立」)を対象として,声質の特徴および音響的特徴の相違を検討した。収録した発話のスペクトル特徴分析を行ない,Electroglottography信号によるOqを抽出し,Oq-valued VRPの解析を行なった結果,発話者の第一言語によって感情表出様式が異なることが示された。中国人学習者が発話した「押し殺した怒り」「喜び」「激しい怒り」と「悲しみ」は日本語母語話者よりtense voiceとして表出されることが観察され,母語である中国語の感情表出様式が,学習した言語である日本語の感情表出に影響を与えた可能性を示唆する結果となった。
BibTeX:
@Article{Li2020b,
  author   = {李歆玥 and 石井カルロス寿憲 and 林良子},
  journal  = {日本音声学会 学会誌「音声研究」},
  title    = {日本語と中国語感情音声に関する声質と音響の複合的分析 -日本語母語話者と中国語を母語とする日本語学習者による発話を対象に-},
  year     = {2021},
  abstract = {本研究では,日本語母語話者による日本語発話および中国語を母語とする日本語学習者による日本語発話と中国語発話における8つの感情表現(「喜び」「激しい怒り」「押し殺した怒り」「悲しみ」「驚き」「恐れ」「嫌悪」「中立」)を対象として,声質の特徴および音響的特徴の相違を検討した。収録した発話のスペクトル特徴分析を行ない,Electroglottography信号によるOqを抽出し,Oq-valued VRPの解析を行なった結果,発話者の第一言語によって感情表出様式が異なることが示された。中国人学習者が発話した「押し殺した怒り」「喜び」「激しい怒り」と「悲しみ」は日本語母語話者よりtense voiceとして表出されることが観察され,母語である中国語の感情表出様式が,学習した言語である日本語の感情表出に影響を与えた可能性を示唆する結果となった。},
  day      = {30},
  doi      = {10.24467/onseikenkyu.25.0_9},
  etitle   = {Analyses of Voice Quality and Acoustic Features in Japanese and Chinese Emotional Speech: Japanese Native Speakers and Mandarin Chinese learners},
  month    = apr,
  pages    = {9-22},
  url      = {https://www.jstage.jst.go.jp/article/onseikenkyu/25/0/25_9/_article/-char/ja},
  volume   = {25},
  keywords = {パラ言語情報, 発声様式, 第二言語習得, paralinguistic information, phonation type, EGG, open quotient, second language acquisition},
}
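The study above extracts the open quotient (Oq) from electroglottography (EGG) signals to compare phonation across emotions and speaker groups. The sketch below shows one common way to approximate Oq, detecting glottal closing and opening instants as positive and negative peaks of the EGG derivative (DEGG); the peak thresholds, the assumed F0 ceiling, and the synthetic test signal are illustrative assumptions, not the study's pipeline.
Illustrative sketch (Python):
# Minimal sketch: per-cycle open quotient (Oq) from an EGG signal via DEGG peaks.
import numpy as np
from scipy.signal import find_peaks

def open_quotients(egg, fs):
    """Per-cycle Oq estimates from an EGG waveform sampled at fs Hz."""
    degg = np.diff(egg)
    min_dist = int(fs / 500)                      # assume F0 stays below 500 Hz
    gci, _ = find_peaks(degg, height=0.3 * degg.max(), distance=min_dist)
    goi, _ = find_peaks(-degg, height=0.3 * (-degg).max(), distance=min_dist)
    oq = []
    for c0, c1 in zip(gci[:-1], gci[1:]):         # one glottal cycle per GCI pair
        opening = goi[(goi > c0) & (goi < c1)]
        if len(opening):
            oq.append((c1 - opening[0]) / (c1 - c0))   # open phase / cycle length
    return np.array(oq)

# Synthetic 120 Hz "EGG" just to exercise the code; real EGG recordings differ.
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
egg = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 240 * t)
print(open_quotients(egg, fs).mean())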
李歆玥, 石井カルロス寿憲, 林良子, "日本語・中国語態度音声の音響分析および声質分析 -日本語母語話者および中国語を母語とする日本語学習者を対象に-", 日本音響学会誌, vol. 77, no. 2, pp. 112-119, February, 2021.
Abstract: 本研究では,日本語母語話者による日本語態度音声と,中国語を母語とする日本語学習者による日本語および中国語態度音声を分析することで,態度のペアである「友好/敵対」,「丁寧/失礼」,「本気/冗談」,「賞賛/非難」の発話が態度および発話者群によってどのように変化するのかについて検討した。態度音声にあらわれる音響特徴量(F0mean, F0range, Duration, H1-A1, H1-A3, F1F3syn)および句末音調(平叙文と疑問文)の特徴を調べた結果,母語話者と中国人学習者では異なる態度表出パタンが見られた。さらに,強調された単語について,Electroglottography信号によるOpen Quotientを抽出し分析したところ,中国人学習者が「冗談,賞賛および失礼」の態度を日本語母語話者より緊張した発声で表出しており,中国語の態度表出方法に影響されている可能性を示した。
BibTeX:
@Article{Li2021,
  author   = {李歆玥 and 石井カルロス寿憲 and 林良子},
  journal  = {日本音響学会誌},
  title    = {日本語・中国語態度音声の音響分析および声質分析 -日本語母語話者および中国語を母語とする日本語学習者を対象に-},
  year     = {2021},
  abstract = {本研究では,日本語母語話者による日本語態度音声と,中国語を母語とする日本語学習者による日本語および中国語態度音声を分析することで,態度のペアである「友好/敵対」,「丁寧/失礼」,「本気/冗談」,「賞賛/非難」の発話が態度および発話者群によってどのように変化するのかについて検討した。態度音声にあらわれる音響特徴量(F0mean, F0range, Duration, H1-A1, H1-A3, F1F3syn)および句末音調(平叙文と疑問文)の特徴を調べた結果,母語話者と中国人学習者では異なる態度表出パタンが見られた。さらに,強調された単語について,Electroglottography信号によるOpen Quotientを抽出し分析したところ,中国人学習者が「冗談,賞賛および失礼」の態度を日本語母語話者より緊張した発声で表出しており,中国語の態度表出方法に影響されている可能性を示した。},
  day      = {1},
  etitle   = {Prosodic and Voice Quality Features of Japanese and Chinese Attitudinal Speech: Japanese native speakers and Mandarin Chinese learners},
  month    = feb,
  number   = {2},
  pages    = {112-119},
  url      = {https://acoustics.jp/journal/},
  volume   = {77},
}
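Among the acoustic measures analyzed above are F0mean and F0range. The sketch below shows one plausible way to compute those two prosodic features with librosa's pYIN tracker; the placeholder file path, the pitch search range, and expressing the F0 range in semitones are assumptions for illustration, not the paper's exact procedure.
Illustrative sketch (Python):
# Minimal sketch (assumed pipeline): F0 mean and range from a speech recording.
import numpy as np
import librosa

def f0_stats(wav_path):
    y, sr = librosa.load(wav_path, sr=None)
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"), sr=sr)
    f0 = f0[voiced.astype(bool)]                  # keep voiced frames only
    f0 = f0[~np.isnan(f0)]
    semitones = 12 * np.log2(f0 / f0.mean())      # range expressed in semitones
    return {"F0mean_hz": float(f0.mean()),
            "F0range_st": float(semitones.max() - semitones.min())}

# print(f0_stats("utterance.wav"))  # the path is a placeholder, not real data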
Takahisa Uchida, Takashi Minato, Yutaka Nakamura, Yuichiro Yoshikawa, Hiroshi Ishiguro, "Female-type Android's Drive to Quickly Understand a User's Concept of Preferences Stimulates Dialogue Satisfaction: Dialogue Strategies for Modeling User's Concept of Preferences", International Journal of Social Robotics (IJSR), January, 2021.
Abstract: This research develops a conversational robot that stimulates users’ dialogue satisfaction and motivation in non-task-oriented dialogues that include opinion and/or preference exchanges. One way to improve user satisfaction and motivation is by demonstrating the robot’s ability to understand user opinions. In this paper, we explore a method that efficiently obtains the concept of user preferences: likes and dislikes. The concept is acquired by complementing a small amount of user preference data observed in dialogues. As a method for efficient collection, we propose a dialogue strategy that creates utterances with the largest expected complementation. Our experimental results with a female-type android robot suggest that the proposed strategy efficiently obtained user preferences and enhanced dialogue satisfaction. In addition, the strength of user motivation (i.e., long-term willingness to communicate with the android) is only positively correlated with the android’s willingness to understand. Our results not only show the effectiveness of our proposed strategy but also suggest a design theory for dialogue robots to stimulate dialogue motivation, although the current results are derived only from a female-type android.
BibTeX:
@Article{Uchida2021,
  author   = {Takahisa Uchida and Takashi Minato and Yutaka Nakamura and Yuichiro Yoshikawa and Hiroshi Ishiguro},
  journal  = {International Journal of Social Robotics (IJSR)},
  title    = {Female-type Android's Drive to Quickly Understand a User's Concept of Preferences Stimulates Dialogue Satisfaction: Dialogue Strategies for Modeling User's Concept of Preferences},
  year     = {2021},
  abstract = {This research develops a conversational robot that stimulates users’ dialogue satisfaction and motivation in non-task-oriented dialogues that include opinion and/or preference exchanges. One way to improve user satisfaction and motivation is by demonstrating the robot’s ability to understand user opinions. In this paper, we explore a method that efficiently obtains the concept of user preferences: likes and dislikes. The concept is acquired by complementing a small amount of user preference data observed in dialogues. As a method for efficient collection, we propose a dialogue strategy that creates utterances with the largest expected complementation. Our experimental results with a female-type android robot suggest that the proposed strategy efficiently obtained user preferences and enhanced dialogue satisfaction. In addition, the strength of user motivation (i.e., long-term willingness to communicate with the android) is only positively correlated with the android’s willingness to understand. Our results not only show the effectiveness of our proposed strategy but also suggest a design theory for dialogue robots to stimulate dialogue motivation, although the current results are derived only from a female-type android.},
  day      = {7},
  doi      = {10.1007/s12369-020-00731-z},
  month    = jan,
  url      = {https://www.springer.com/journal/12369/},
}
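The paper above proposes a dialogue strategy that asks the question whose answer is expected to complement the user's preference model the most. One way to make that idea concrete (not the authors' method) is a greedy information-gain rule over a small set of stored preference profiles, as sketched below; the items, the profiles, and the softmax posterior are toy assumptions.
Illustrative sketch (Python):
# Minimal sketch of the idea: pick the next preference question that most reduces
# uncertainty about which stored profile the current user resembles.
import numpy as np

ITEMS = ["coffee", "hiking", "horror movies", "karaoke", "cats"]
# Rows: past users' like(+1)/dislike(-1) profiles used to complement missing answers.
PROFILES = np.array([[+1, +1, -1, +1, +1],
                     [-1, +1, +1, -1, +1],
                     [+1, -1, +1, +1, -1]])

def posterior(known):
    """P(profile | answers so far), from agreement counts (softmax)."""
    score = np.zeros(len(PROFILES))
    for j, a in known.items():
        score += (PROFILES[:, j] == a)
    p = np.exp(score)
    return p / p.sum()

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def next_question(known):
    """Unknown item with the largest expected entropy reduction."""
    post = posterior(known)
    best, best_gain = None, -1.0
    for j in range(len(ITEMS)):
        if j in known:
            continue
        gain = entropy(post)
        for a in (+1, -1):
            p_a = post @ (PROFILES[:, j] == a)     # predictive prob of answer a
            if p_a > 0:
                gain -= p_a * entropy(posterior({**known, j: a}))
        if gain > best_gain:
            best, best_gain = j, gain
    return ITEMS[best]

print(next_question({0: +1}))   # the user already said they like coffee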
Bowen Wu, Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, "Modeling the Conditional Distribution of Co-speech Upper Body Gesture jointly using Conditional-GAN and Unrolled-GAN", MDPI Electronics Special Issue "Human Computer Interaction and Its Future", vol. 10, Issue 3, no. 228, January, 2021.
Abstract: Co-speech gesture is a crucial non-verbal modality for humans to express ideas. Social agents also need such capability to be more human-like and comprehensive. This work aims to model the distribution of gesture conditioned on human speech features for the generation, instead of finding an injective mapping function from speech to gesture. We propose a novel conditional GAN-based generative model to not only realize the conversion from speech to gesture but also to approximate the distribution of gesture conditioned on speech through parameterization. Objective evaluation and user studies show that the proposed model outperforms the existing deterministic model, indicating that generative models can approximate the real patterns of co-speech gestures more than the existing deterministic model. Our result suggests that it is critical to consider the nature of randomness when modeling co-speech gestures.
BibTeX:
@Article{Wu2020a,
  author   = {Bowen Wu and Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro},
  journal  = {MDPI Electronics Special Issue "Human Computer Interaction and Its Future"},
  title    = {Modeling the Conditional Distribution of Co-speech Upper Body Gesture jointly using Conditional-GAN and Unrolled-GAN},
  year     = {2021},
  abstract = {Co-speech gesture is a crucial non-verbal modality for humans to express ideas. Social agents also need such capability to be more human-like and comprehensive. This work aims to model the distribution of gesture conditioned on human speech features for the generation, instead of finding an injective mapping function from speech to gesture. We propose a novel conditional GAN-based generative model to not only realize the conversion from speech to gesture but also to approximate the distribution of gesture conditioned on speech through parameterization. Objective evaluation and user studies show that the proposed model outperforms the existing deterministic model, indicating that generative models can approximate the real patterns of co-speech gestures more than the existing deterministic model. Our result suggests that it is critical to consider the nature of randomness when modeling co-speech gestures.},
  day      = {20},
  doi      = {10.3390/electronics10030228},
  month    = jan,
  number   = {228},
  url      = {https://www.mdpi.com/2079-9292/10/3/228},
  volume   = {10, Issue 3},
  keywords = {Gesture generation; social robots; generative model; neural network; deep learning},
}
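The model above is a conditional GAN (trained with unrolled discriminator updates) that maps speech features to co-speech gestures. The sketch below shows only a basic conditional-GAN training step, with the generator conditioned on speech plus noise and the discriminator judging (speech, pose) pairs; the layer sizes and feature dimensions are placeholders, and the unrolling step from the paper is omitted.
Illustrative sketch (Python, PyTorch):
# Minimal sketch, not the paper's architecture: one conditional-GAN update.
import torch
import torch.nn as nn

SPEECH_DIM, NOISE_DIM, POSE_DIM = 64, 16, 30   # assumed feature sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPEECH_DIM + NOISE_DIM, 128), nn.ReLU(),
            nn.Linear(128, POSE_DIM))
    def forward(self, speech, z):
        return self.net(torch.cat([speech, z], dim=-1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPEECH_DIM + POSE_DIM, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1))
    def forward(self, speech, pose):
        return self.net(torch.cat([speech, pose], dim=-1))

G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

speech = torch.randn(8, SPEECH_DIM)            # stand-in for real speech features
real_pose = torch.randn(8, POSE_DIM)           # stand-in for motion-capture poses

# Discriminator step: real pairs vs. generated pairs.
fake_pose = G(speech, torch.randn(8, NOISE_DIM)).detach()
d_loss = bce(D(speech, real_pose), torch.ones(8, 1)) + \
         bce(D(speech, fake_pose), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator on conditioned samples.
g_loss = bce(D(speech, G(speech, torch.randn(8, NOISE_DIM))), torch.ones(8, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()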
Chinenye Augustine Ajibo, Carlos Toshinori Ishi, Ryusuke Mikata, Chaoran Liu, Hiroshi Ishiguro, "Analysis of Anger Motion Expression and Evaluation in Android Robot", Advanced Robotics, vol. 34, Issue 24, pp. 1581-1590, December, 2020.
Abstract: Recent studies in human–human interaction (HHI) have revealed the propensity of negative emotional expression to initiate affiliating functions which are beneficial to the expresser and also help fostering cordiality and closeness amongst interlocutors during conversation. Effort in human–robot interaction has also been devoted to furnish robots with the expression of both positive and negative emotions. However, only a few have considered body gestures in context with the dialogue act functions conveyed by the emotional utterances. This study aims on furnishing robots with humanlike negative emotional expression, specifically anger-based body gestures roused by the utterance context. In this regard, we adopted a multimodal HHI corpus for the study, and then analyzed and established predominant gestures types and dialogue acts associated with anger-based utterances in HHI. Based on the analysis results, we implemented these gesture types in an android robot, and carried out a subjective evaluation to investigate their effects on the perception of anger expression in utterances with different dialogue act functions. Results showed significant effects of the presence of gesture on the anger degree perception. Findings from this study also revealed that the functional content of anger-based utterances plays a significant role in the choice of the gesture accompanying such utterances.
BibTeX:
@Article{Ajibo2020a,
  author   = {Chinenye Augustine Ajibo and Carlos Toshinori Ishi and Ryusuke Mikata and Chaoran Liu and Hiroshi Ishiguro},
  title    = {Analysis of Anger Motion Expression and Evaluation in Android Robot},
  journal  = {Advanced Robotics},
  year     = {2020},
  volume   = {34, Issue 24},
  pages    = {1581-1590},
  month    = dec,
  abstract = {Recent studies in human–human interaction (HHI) have revealed the propensity of negative emotional expression to initiate affiliating functions which are beneficial to the expresser and also help fostering cordiality and closeness amongst interlocutors during conversation. Effort in human–robot interaction has also been devoted to furnish robots with the expression of both positive and negative emotions. However, only a few have considered body gestures in context with the dialogue act functions conveyed by the emotional utterances. This study aims on furnishing robots with humanlike negative emotional expression, specifically anger-based body gestures roused by the utterance context. In this regard, we adopted a multimodal HHI corpus for the study, and then analyzed and established predominant gestures types and dialogue acts associated with anger-based utterances in HHI. Based on the analysis results, we implemented these gesture types in an android robot, and carried out a subjective evaluation to investigate their effects on the perception of anger expression in utterances with different dialogue act functions. Results showed significant effects of the presence of gesture on the anger degree perception. Findings from this study also revealed that the functional content of anger-based utterances plays a significant role in the choice of the gesture accompanying such utterances.},
  day      = {8},
  url      = {https://www.tandfonline.com/doi/full/10.1080/01691864.2020.1855244},
  doi      = {10.1080/01691864.2020.1855244},
  keywords = {Anger emotion; gesture and speech; android robot; human–robot interaction},
}
Jiaqi Shi, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "Skeleton-Based Emotion Recognition Based on Two-Stream Self-Attention Enhanced Spatial-Temporal Graph Convolutional Network", Sensors, vol. 21, Issue 1, no. 205, pp. 1-16, December, 2020.
Abstract: Emotion recognition has drawn consistent attention from researchers recently. Although gesture modality plays an important role in expressing emotion, it is seldom considered in the field of emotion recognition. A key reason is the scarcity of labeled data containing 3D skeleton data. Existing gesture-based emotion recognition methods using deep learning are based on convolutional neural networks or recurrent neural networks, without explicitly considering the spatial connection between joints. In this work, we applied a pose estimation based method to extract 3D skeleton coordinates for IEMOCAP database. We propose a self-attention enhanced spatial temporal graph convolutional network for skeleton-based emotion recognition, in which the spatial convolutional part models the skeletal structure of body as a static graph, and the self-attention part dynamically constructs more connections between the joints and provides supplementary information. Our experiment demonstrates that the proposed model significantly outperforms other models and that the features of the extracted skeleton data improve the performance of multimodal emotion recognition.
BibTeX:
@Article{Shi2020,
  author   = {Jiaqi Shi and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  journal  = {Sensors},
  title    = {Skeleton-Based Emotion Recognition Based on Two-Stream Self-Attention Enhanced Spatial-Temporal Graph Convolutional Network},
  year     = {2020},
  abstract = {Emotion recognition has drawn consistent attention from researchers recently. Although gesture modality plays an important role in expressing emotion, it is seldom considered in the field of emotion recognition. A key reason is the scarcity of labeled data containing 3D skeleton data. Existing gesture-based emotion recognition methods using deep learning are based on convolutional neural networks or recurrent neural networks, without explicitly considering the spatial connection between joints. In this work, we applied a pose estimation based method to extract 3D skeleton coordinates for IEMOCAP database. We propose a self-attention enhanced spatial temporal graph convolutional network for skeleton-based emotion recognition, in which the spatial convolutional part models the skeletal structure of body as a static graph, and the self-attention part dynamically constructs more connections between the joints and provides supplementary information. Our experiment demonstrates that the proposed model significantly outperforms other models and that the features of the extracted skeleton data improve the performance of multimodal emotion recognition.},
  day      = {30},
  doi      = {10.3390/s21010205},
  month    = dec,
  number   = {205},
  pages    = {1-16},
  url      = {https://www.mdpi.com/1424-8220/21/1/205},
  volume   = {21, Issue 1},
  keywords = {Emotion recognition; Gesture; Skeleton; Graph convolutional networks; Self-attention},
}
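The network above combines spatial graph convolution over the skeleton with a self-attention stream that adds data-dependent connections between joints. The sketch below implements a single layer of that combination on a toy five-joint skeleton; the adjacency matrix, the dimensions, and the simple additive fusion are assumptions, not the paper's architecture.
Illustrative sketch (Python, PyTorch):
# Minimal sketch: one spatial graph-convolution layer plus a self-attention term.
import torch
import torch.nn as nn
import torch.nn.functional as F

J, C_IN, C_OUT = 5, 3, 16                        # joints, (x,y,z) input, output dims
A = torch.tensor([[0,1,0,0,0],
                  [1,0,1,1,0],
                  [0,1,0,0,0],
                  [0,1,0,0,1],
                  [0,0,0,1,0]], dtype=torch.float32)
A_hat = A + torch.eye(J)
A_hat = A_hat / A_hat.sum(dim=1, keepdim=True)   # row-normalised adjacency

class SpatialAttnGCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.theta = nn.Linear(C_IN, C_OUT)      # graph-conv transform
        self.q = nn.Linear(C_IN, C_OUT)          # self-attention projections
        self.k = nn.Linear(C_IN, C_OUT)
        self.v = nn.Linear(C_IN, C_OUT)

    def forward(self, x):                        # x: (batch, time, joints, C_IN)
        gcn = torch.einsum("ij,btjc->btic", A_hat, self.theta(x))
        attn = torch.softmax(self.q(x) @ self.k(x).transpose(-1, -2)
                             / (C_OUT ** 0.5), dim=-1)   # (batch, time, J, J)
        sa = attn @ self.v(x)                    # dynamically weighted joint mixing
        return F.relu(gcn + sa)

x = torch.randn(2, 30, J, C_IN)                  # 2 clips, 30 frames, 5 joints
print(SpatialAttnGCN()(x).shape)                 # torch.Size([2, 30, 5, 16])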
Carlos T. Ishi, Ryusuke Mikata, Hiroshi Ishiguro, "Person-directed pointing gestures and inter-personal relationship: Expression of politeness to friendliness by android robots", IEEE Robotics and Automation Letters, vol. 5, Issue 4, pp. 6081-6088, October, 2020.
Abstract: Pointing at a person is usually deemed to be impolite. However, several different forms of person-directed pointing gestures commonly appear in casual dialogue interactions. In this study, we first analyzed pointing gestures in human-human dialogue interactions and observed different trends in the use of gesture types, based on the inter-personal relationships between dialogue partners. Then we conducted multiple subjective experiments by systematically creating behaviors in an android robot to investigate the effects of different types of pointing gestures on the impressions of its behaviors. Several factors were included: pointing gesture motion types (hand shapes, such as an open palm or an extended index finger, hand orientation, and motion direction), language types (formal or colloquial), gesture speeds, and gesture hold duration. Our evaluation results indicated that impressions of polite or casual are affected by the analyzed factors, and a behavior’s appropriateness depends on the inter-personal relationship with the dialogue partner.
BibTeX:
@Article{Ishi2020b,
  author   = {Carlos T. Ishi and Ryusuke Mikata and Hiroshi Ishiguro},
  journal  = {IEEE Robotics and Automation Letters},
  title    = {Person-directed pointing gestures and inter-personal relationship: Expression of politeness to friendliness by android robots},
  year     = {2020},
  abstract = {Pointing at a person is usually deemed to be impolite. However, several different forms of person-directed pointing gestures commonly appear in casual dialogue interactions. In this study, we first analyzed pointing gestures in human-human dialogue interactions and observed different trends in the use of gesture types, based on the inter-personal relationships between dialogue partners. Then we conducted multiple subjective experiments by systematically creating behaviors in an android robot to investigate the effects of different types of pointing gestures on the impressions of its behaviors. Several factors were included: pointing gesture motion types (hand shapes, such as an open palm or an extended index finger, hand orientation, and motion direction), language types (formal or colloquial), gesture speeds, and gesture hold duration. Our evaluation results indicated that impressions of polite or casual are affected by the analyzed factors, and a behavior’s appropriateness depends on the inter-personal relationship with the dialogue partner.},
  day      = {1},
  doi      = {10.1109/LRA.2020.3011354},
  month    = oct,
  pages    = {6081-6088},
  url      = {https://ieeexplore.ieee.org/document/9146747},
  volume   = {5, Issue 4},
  comment  = {(The contents of this paper were also selected by IROS2020 Program Committee for presentation at the Conference)},
  keywords = {Pointing gestures, politeness, motion types, inter-personal relationship, android robots},
}
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "Multi-modality Emotion Recognition Model with GAT-based Multi-head Inter-modality Attention", Sensors, vol. 20, Issue 17, no. 4894, pp. 1-15, August, 2020.
Abstract: Emotion recognition has been gaining increasing attention in recent years due to its applications on artificial agents. In order to achieve a good performance on this task, numerous research have been conducted on the multi-modality emotion recognition model for leveraging the different strengths of each modality. However, there still remains a research question of what is the appropriate way to fuse the information from different modalities. In this paper, we not only proposed some strategies, such as audio sample augmentation, an emotion-oriented encoder-decoder, to improve the performance of emotion recognition, but also discussed an inter-modality decision level fusion method based on graph attention network (GAT). Compared to the baseline, our model improves the weighted average F1-score from 64.18% to 68.31% and weighted average accuracy from 65.25% to 69.88%.
BibTeX:
@Article{Fu2020,
  author   = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  journal  = {Sensors},
  title    = {Multi-modality Emotion Recognition Model with GAT-based Multi-head Inter-modality Attention},
  year     = {2020},
  abstract = {Emotion recognition has been gaining increasing attention in recent years due to its applications on artificial agents. In order to achieve a good performance on this task, numerous research have been conducted on the multi-modality emotion recognition model for leveraging the different strengths of each modality. However, there still remains a research question of what is the appropriate way to fuse the information from different modalities. In this paper, we not only proposed some strategies, such as audio sample augmentation, an emotion-oriented encoder-decoder, to improve the performance of emotion recognition, but also discussed an inter-modality decision level fusion method based on graph attention network (GAT). Compared to the baseline, our model improves the weighted average F1-score from 64.18% to 68.31% and weighted average accuracy from 65.25% to 69.88%.},
  day      = {29},
  doi      = {10.3390/s20174894},
  month    = aug,
  number   = {4894},
  pages    = {1-15},
  url      = {https://www.mdpi.com/1424-8220/20/17/4894/htm},
  volume   = {20, Issue 17},
  keywords = {emotion recognition, multi-modality, graph attention network},
}
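The fusion method above applies graph attention across modalities at the decision level. As a rough illustration (not the paper's model), the sketch below treats audio, text, and video embeddings as nodes of a fully connected graph, fuses them with one hand-rolled graph-attention layer, and classifies the pooled result; all dimensions and the three-modality setup are assumptions.
Illustrative sketch (Python, PyTorch):
# Minimal sketch of GAT-style inter-modality fusion for emotion classification.
import torch
import torch.nn as nn
import torch.nn.functional as F

D, H, N_EMO = 64, 32, 4                          # embedding dim, hidden dim, classes

class GATFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(D, H, bias=False)
        self.attn = nn.Linear(2 * H, 1, bias=False)   # a^T [Wh_i || Wh_j]
        self.cls = nn.Linear(H, N_EMO)

    def forward(self, nodes):                    # nodes: (batch, n_modalities, D)
        h = self.proj(nodes)                     # (b, m, H)
        b, m, _ = h.shape
        hi = h.unsqueeze(2).expand(b, m, m, H)   # pair every node with every other
        hj = h.unsqueeze(1).expand(b, m, m, H)
        e = F.leaky_relu(self.attn(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        alpha = torch.softmax(e, dim=-1)         # attention over neighbour modalities
        fused = torch.relu(alpha @ h)            # (b, m, H)
        return self.cls(fused.mean(dim=1))       # pool nodes, predict emotion logits

audio, text, video = (torch.randn(8, D) for _ in range(3))
logits = GATFusion()(torch.stack([audio, text, video], dim=1))
print(logits.shape)                              # torch.Size([8, 4])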
Takahisa Uchida, Hideyuki Takahashi, Midori Ban, Jiro Shimaya, Takashi Minato, Kohei Ogawa, Yuichiro Yoshikawa, Hiroshi Ishiguro, "Japanese Young Women Did not Discriminate between Robots and Humans as Listeners for Their Self-Disclosure -Pilot Study-", Multimodal Technologies and Interaction, vol. 4, Issue 3, no. 35, pp. 1-16, June, 2020.
Abstract: Disclosing personal matters to other individuals often contributes to the maintenance of our mental health and social bonding. However, in face-to-face situations, it can be difficult to prompt others to self-disclose because people often feel embarrassed disclosing personal matters to others. Although artificial agents without strong social pressure for listeners to induce self-disclosure is a promising engineering method that can be applied in daily stress management and reduce depression, gender difference is known to make a drastic difference of the attitude toward robots. We hypothesized that, as compared to men, women tend to prefer robots as a listener for their self-disclosure. The experimental results that are based on questionnaires and the actual self-disclosure behavior indicate that men preferred to self-disclose to the human listener, while women did not discriminate between robots and humans as listeners for their self-disclosure in the willingness and the amount of self-disclosure. This also suggests that the gender difference needs to be considered when robots are used as a self-disclosure listener.
BibTeX:
@Article{Uchida2020,
  author   = {Takahisa Uchida and Hideyuki Takahashi and Midori Ban and Jiro Shimaya and Takashi Minato and Kohei Ogawa and Yuichiro Yoshikawa and Hiroshi Ishiguro},
  title    = {Japanese Young Women Did not Discriminate between Robots and Humans as Listeners for Their Self-Disclosure -Pilot Study-},
  journal  = {Multimodal Technologies and Interaction},
  year     = {2020},
  volume   = {4, Issue 3},
  number   = {35},
  pages    = {1-16},
  month    = jun,
  abstract = {Disclosing personal matters to other individuals often contributes to the maintenance of our mental health and social bonding. However, in face-to-face situations, it can be difficult to prompt others to self-disclose because people often feel embarrassed disclosing personal matters to others. Although artificial agents without strong social pressure for listeners to induce self-disclosure is a promising engineering method that can be applied in daily stress management and reduce depression, gender difference is known to make a drastic difference of the attitude toward robots. We hypothesized that, as compared to men, women tend to prefer robots as a listener for their self-disclosure. The experimental results that are based on questionnaires and the actual self-disclosure behavior indicate that men preferred to self-disclose to the human listener, while women did not discriminate between robots and humans as listeners for their self-disclosure in the willingness and the amount of self-disclosure. This also suggests that the gender difference needs to be considered when robots are used as a self-disclosure listener.},
  day      = {30},
  url      = {https://www.mdpi.com/2414-4088/4/3/35},
  doi      = {10.3390/mti4030035},
  keywords = {self-disclosure; gender difference; conversational robot},
}
Soheil Keshmiri, Masahiro Shiomi, Kodai Shatani, Takashi Minato, Hiroshi Ishiguro, "Critical Examination of the Parametric Approaches to Analysis of the Non-Verbal Human Behaviour: a Case Study in Facial Pre-Touch Interaction", Applied Sciences, vol. 10, Issue 11, no. 3817, pp. 1-15, May, 2020.
Abstract: A prevailing assumption in many behavioral studies is the underlying normal distribution of the data under investigation. In this regard, although it appears plausible to presume a certain degree of similarity among individuals, this presumption does not necessarily warrant such simplifying assumptions as average or normally distributed human behavioral responses. In the present study, we examine the extent of such assumptions by considering the case of human–human touch interaction in which individuals signal their face area pre-touch distance boundaries. We then use these pre-touch distances along with their respective azimuth and elevation angles around the face area and perform three types of regression-based analyses to estimate a generalized facial pre-touch distance boundary. First, we use a Gaussian processes regression to evaluate whether assumption of normal distribution in participants’ reactions warrants a reliable estimate of this boundary. Second, we apply a support vector regression (SVR) to determine whether estimating this space by minimizing the orthogonal distance between participants’ pre-touch data and its corresponding pre-touch boundary can yield a better result. Third, we use ordinary regression to validate the utility of a non-parametric regressor with a simple regularization criterion in estimating such a pre-touch space. In addition, we compare these models with the scenarios in which a fixed boundary distance (i.e., a spherical boundary) is adopted. We show that within the context of facial pre-touch interaction, normal distribution does not capture the variability that is exhibited by human subjects during such non-verbal interaction. We also provide evidence that such interactions can be more adequately estimated by considering the individuals’ variable behavior and preferences through such estimation strategies as ordinary regression that solely relies on the distribution of their observed behavior which may not necessarily follow a parametric distribution.
BibTeX:
@Article{Keshmiri2020c,
  author   = {Soheil Keshmiri and Masahiro Shiomi and Kodai Shatani and Takashi Minato and Hiroshi Ishiguro},
  journal  = {Applied Sciences},
  title    = {Critical Examination of the Parametric Approaches to Analysis of the Non-Verbal Human Behaviour: a Case Study in Facial Pre-Touch Interaction},
  year     = {2020},
  abstract = {A prevailing assumption in many behavioral studies is the underlying normal distribution of the data under investigation. In this regard, although it appears plausible to presume a certain degree of similarity among individuals, this presumption does not necessarily warrant such simplifying assumptions as average or normally distributed human behavioral responses. In the present study, we examine the extent of such assumptions by considering the case of human–human touch interaction in which individuals signal their face area pre-touch distance boundaries. We then use these pre-touch distances along with their respective azimuth and elevation angles around the face area and perform three types of regression-based analyses to estimate a generalized facial pre-touch distance boundary. First, we use a Gaussian processes regression to evaluate whether assumption of normal distribution in participants’ reactions warrants a reliable estimate of this boundary. Second, we apply a support vector regression (SVR) to determine whether estimating this space by minimizing the orthogonal distance between participants’ pre-touch data and its corresponding pre-touch boundary can yield a better result. Third, we use ordinary regression to validate the utility of a non-parametric regressor with a simple regularization criterion in estimating such a pre-touch space. In addition, we compare these models with the scenarios in which a fixed boundary distance (i.e., a spherical boundary) is adopted. We show that within the context of facial pre-touch interaction, normal distribution does not capture the variability that is exhibited by human subjects during such non-verbal interaction. We also provide evidence that such interactions can be more adequately estimated by considering the individuals’ variable behavior and preferences through such estimation strategies as ordinary regression that solely relies on the distribution of their observed behavior which may not necessarily follow a parametric distribution.},
  day      = {30},
  doi      = {10.3390/app10113817},
  month    = may,
  number   = {3817},
  pages    = {1-15},
  url      = {https://www.mdpi.com/2076-3417/10/11/3817},
  volume   = {10, Issue 11},
  keywords = {physical interaction; physical pre-touch distance; parametric analysis; non-parametric analysis; non-verbal behavior},
}
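The study above compares Gaussian process regression, support vector regression, and ordinary regression for estimating the facial pre-touch distance boundary from approach angles. The sketch below mirrors the shape of that comparison on synthetic data only; the toy relation between azimuth and allowed distance is an assumption made up to exercise the code, not the study's measurements.
Illustrative sketch (Python, scikit-learn):
# Minimal sketch: fit pre-touch distance from (azimuth, elevation) with three regressors.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
azimuth = rng.uniform(-90, 90, 200)              # degrees around the face
elevation = rng.uniform(-45, 45, 200)
# Toy ground truth: people allow closer approach from the side than head-on.
distance = 0.25 + 0.001 * np.abs(azimuth) + 0.05 * rng.standard_normal(200)

X = np.c_[azimuth, elevation]
for name, model in [("GPR", GaussianProcessRegressor()),
                    ("SVR", SVR(kernel="rbf", C=1.0)),
                    ("OLS", LinearRegression())]:
    pred = model.fit(X, distance).predict(X)
    print(name, round(mean_absolute_error(distance, pred), 4))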
Liang-Yu Chen, Hidenobu Sumioka, Li-Ju Ke, Masahiro Shiomi, Liang-Kung Chen, "Effects of Teleoperated Humanoid Robot Application in Older Adults with Neurocognitive Disorders in Taiwan: A Report of Three Cases", Aging Medicine and Healthcare, Full Universe Integrated Marketing Limited, pp. 67-71, May, 2020.
Abstract: Rising prevalence of major neurocognitive disorders (NCDs) is associated with a great variety of care needs and care stress for caregivers and families. A holistic care pathway to empower person-centered care is recommended, and non-pharmacological strategies are prioritized to manage neuropsychiatric symptoms (NPS) of people with NCDs. However, limited formal services, shortage of manpower, and unpleasant experiences related to NPS of these patients often discourage caregivers and cause the care stress and psychological burnout. Telenoid, a teleoperated humanoid robot, is a new technology that aims to improve the quality of life and to reduce the severity of NPS for persons with major NCDs by facilitating meaningful connection and social engagement. Herein, we presented 3 cases with major NCDs in a day care center in Taiwan who experienced interaction with the Telenoid. Overall, no fear neither distressed emotional response was observed during their conversation, neither worsening of delusion or hallucination after interaction with Telenoid. The severity of NCDs seemed to affect the verbal communication and the attention during conversation with Telenoid. Other factors, such as hearing impairment, insomnia, and environmental stimuli, may also hinder the efficacy of Telenoid in long-term care settings. Further studies with proper study design are needed to evaluate the effects of Telenoid application on older adults with major NCDs.
BibTeX:
@Article{Chen2020,
  author    = {Liang-Yu Chen and Hidenobu Sumioka and Li-Ju Ke and Masahiro Shiomi and Liang-Kung Chen},
  journal   = {Aging Medicine and Healthcare},
  title     = {Effects of Teleoperated Humanoid Robot Application in Older Adults with Neurocognitive Disorders in Taiwan: A Report of Three Cases},
  year      = {2020},
  abstract  = {Rising prevalence of major neurocognitive disorders (NCDs) is associated with a great variety of care needs and care stress for caregivers and families. A holistic care pathway to empower person-centered care is recommended, and non-pharmacological strategies are prioritized to manage neuropsychiatric symptoms (NPS) of people with NCDs. However, limited formal services, shortage of manpower, and unpleasant experiences related to NPS of these patients often discourage caregivers and cause the care stress and psychological burnout. Telenoid, a teleoperated humanoid robot, is a new technology that aims to improve the quality of life and to reduce the severity of NPS for persons with major NCDs by facilitating meaningful connection and social engagement. Herein, we presented 3 cases with major NCDs in a day care center in Taiwan who experienced interaction with the Telenoid. Overall, no fear neither distressed emotional response was observed during their conversation, neither worsening of delusion or hallucination after interaction with Telenoid. The severity of NCDs seemed to affect the verbal communication and the attention during conversation with Telenoid. Other factors, such as hearing impairment, insomnia, and environmental stimuli, may also hinder the efficacy of Telenoid in long-term care settings. Further studies with proper study design are needed to evaluate the effects of Telenoid application on older adults with major NCDs.},
  day       = {27},
  doi       = {10.33879/AMH.2020.066-2001.003},
  month     = may,
  pages     = {67-71},
  url       = {https://www.agingmedhealthc.com/?p=21602},
  booktitle = {Aging Medicine and Healthcare},
  editor    = {Asian Association for Frailty and Sarcopenia and Taiwan Association for Integrated Care},
  keywords  = {Dementia, neurocognitive disorder, neuropsychiatric symptom, Telenoid, uncanny valley},
  publisher = {Full Universe Integrated Marketing Limited},
}
Junya Nakanishi, Hidenobu Sumioka, Hiroshi Ishiguro, "Mediated Hugs Modulates Impression of Hearsay Information", Advanced Robotics, vol. 34, Issue 12, pp. 781-788, May, 2020.
Abstract: Although it is perceivable that direct interpersonal touch affects recipient's impression of touch provider as well as the information relating to the provider alike, its utility in mediated interpersonal touch remains unclear to date. In this article, we report the alleviating effect of mediated interpersonal touch on social judgment. In particular, we show that mediated hug with a remote person modulates the impression of the hearsay information about an absentee. In our experiment, participants rate their impressions as well as note down their recall of information about a third person. We communicate this information through either a speaker or a huggable medium. Our results show that mediated hug reduces the negative inferences in the recalls of information about a target person. Furthermore, they suggest the potential that the mediated communication offers in moderating the spread of negative information in human community via mediated hug.
BibTeX:
@Article{Nakanishi2020,
  author   = {Junya Nakanishi and Hidenobu Sumioka and Hiroshi Ishiguro},
  journal  = {Advanced Robotics},
  title    = {Mediated Hugs Modulates Impression of Hearsay Information},
  year     = {2020},
  abstract = {Although it is perceivable that direct interpersonal touch affects recipient's impression of touch provider as well as the information relating to the provider alike, its utility in mediated interpersonal touch remains unclear to date. In this article, we report the alleviating effect of mediated interpersonal touch on social judgment. In particular, we show that mediated hug with a remote person modulates the impression of the hearsay information about an absentee. In our experiment, participants rate their impressions as well as note down their recall of information about a third person. We communicate this information through either a speaker or a huggable medium. Our results show that mediated hug reduces the negative inferences in the recalls of information about a target person. Furthermore, they suggest the potential that the mediated communication offers in moderating the spread of negative information in human community via mediated hug.},
  day      = {6},
  doi      = {10.1080/01691864.2020.1760933},
  month    = may,
  pages    = {781-788},
  url      = {https://www.tandfonline.com/doi/full/10.1080/01691864.2020.1760933},
  volume   = {34, Issue 12},
  keywords = {Interpersonal touch, mediated touch, huggable communication media, impression bias, hearsay information, stress reduction},
}
Soheil Keshmiri, Masahiro Shiomi, Hidenobu Sumioka, Takashi Minato, Hiroshi Ishiguro, "Gentle Versus Strong Touch Classification: Preliminary Results, Challenges, and Potentials", Sensors, vol. 20, Issue 11, no. 3033, pp. 1-22, May, 2020.
Abstract: Touch plays a crucial role in humans’ nonverbal social and affective communication. It then comes as no surprise to observe a considerable effort that has been placed on devising methodologies for automated touch classification. For instance, such an ability allows for the use of smart touch sensors in such real-life application domains as socially-assistive robots and embodied telecommunication. In fact, touch classification literature represents an undeniably progressive result. However, these results are limited in two important ways. First, they are mostly based on overall (i.e., average) accuracy of different classifiers. As a result, they fall short in providing an insight on performance of these approaches as per different types of touch. Second, they do not consider the same type of touch with different level of strength (e.g., gentle versus strong touch). This is certainly an important factor that deserves investigating since the intensity of a touch can utterly transform its meaning (e.g., from an affectionate gesture to a sign of punishment). The current study provides a preliminary investigation of these shortcomings by considering the accuracy of a number of classifiers for both, within- (i.e., same type of touch with differing strengths) and between-touch (i.e., different types of touch) classifications. Our results help verify the strength and shortcoming of different machine learning algorithms for touch classification. They also highlight some of the challenges whose solution concepts can pave the path for integration of touch sensors in such application domains as human–robot interaction (HRI).
BibTeX:
@Article{Keshmiri2020d,
  author   = {Soheil Keshmiri and Masahiro Shiomi and Hidenobu Sumioka and Takashi Minato and Hiroshi Ishiguro},
  journal  = {Sensors},
  title    = {Gentle Versus Strong Touch Classification: Preliminary Results, Challenges, and Potentials},
  year     = {2020},
  abstract = {Touch plays a crucial role in humans’ nonverbal social and affective communication. It then comes as no surprise to observe a considerable effort that has been placed on devising methodologies for automated touch classification. For instance, such an ability allows for the use of smart touch sensors in such real-life application domains as socially-assistive robots and embodied telecommunication. In fact, touch classification literature represents an undeniably progressive result. However, these results are limited in two important ways. First, they are mostly based on overall (i.e., average) accuracy of different classifiers. As a result, they fall short in providing an insight on performance of these approaches as per different types of touch. Second, they do not consider the same type of touch with different level of strength (e.g., gentle versus strong touch). This is certainly an important factor that deserves investigating since the intensity of a touch can utterly transform its meaning (e.g., from an affectionate gesture to a sign of punishment). The current study provides a preliminary investigation of these shortcomings by considering the accuracy of a number of classifiers for both, within- (i.e., same type of touch with differing strengths) and between-touch (i.e., different types of touch) classifications. Our results help verify the strength and shortcoming of different machine learning algorithms for touch classification. They also highlight some of the challenges whose solution concepts can pave the path for integration of touch sensors in such application domains as human–robot interaction (HRI).},
  day      = {27},
  doi      = {10.3390/s20113033},
  month    = may,
  number   = {3033},
  pages    = {1-22},
  url      = {https://www.mdpi.com/1424-8220/20/11/3033},
  volume   = {20, Issue 11},
  keywords = {physical interaction; touch classification; human–agent physical interaction},
}
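The work above evaluates several classifiers on gentle-versus-strong variants of the same touch types. The sketch below reproduces the shape of such a comparison with synthetic pressure traces and simple summary features; the touch generator, the features, and the classifier choices are assumptions for illustration, not the study's sensors or data.
Illustrative sketch (Python, scikit-learn):
# Minimal sketch: compare standard classifiers on gentle/strong variants of two touches.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def make_touch(kind, strong, n=60):
    """Toy pressure traces: 'pat' = short pulse, 'stroke' = long smooth contact."""
    length = 20 if kind == "pat" else 60
    base = np.sin(np.linspace(0, np.pi, length))
    amp = 2.0 if strong else 1.0
    return [amp * base + 0.1 * rng.standard_normal(length) for _ in range(n)]

X, y = [], []
for label, (kind, strong) in enumerate([("pat", False), ("pat", True),
                                        ("stroke", False), ("stroke", True)]):
    for trace in make_touch(kind, strong):
        X.append([trace.mean(), trace.max(), len(trace), trace.std()])
        y.append(label)
X, y = np.array(X), np.array(y)

for name, clf in [("RandomForest", RandomForestClassifier()),
                  ("SVM", SVC()), ("kNN", KNeighborsClassifier())]:
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))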
Soheil Keshmiri, Maryam Alimardani, Masahiro Shiomi, Hidenobu Sumioka, Hiroshi Ishiguro, Kazuo Hiraki, "Higher hypnotic suggestibility is associated with the lower EEG signal variability in theta, alpha, and beta frequency bands", PLOS ONE, vol. 15, no. 4, pp. 1-20, April, 2020.
Abstract: Variation of information in the firing rate of neural population, as reflected in different frequency bands of electroencephalographic (EEG) time series, provides direct evidence for change in neural responses of the brain to hypnotic suggestibility. However, realization of an effective biomarker for spiking behaviour of neural population proves to be an elusive subject matter with its impact evident in highly contrasting results in the literature. In this article, we took an information-theoretic stance on analysis of the EEG time series of the brain activity during hypnotic suggestions, thereby capturing the variability in pattern of brain neural activity in terms of its information content. For this purpose, we utilized differential entropy (DE, i.e., the average information content in a continuous time series) of theta, alpha, and beta frequency bands of fourteen-channel EEG time series recordings that pertain to the brain neural responses of twelve carefully selected high and low hypnotically suggestible individuals. Our results show that the higher hypnotic suggestibility is associated with a significantly lower variability in information content of theta, alpha, and beta frequencies. Moreover, they indicate that such a lower variability is accompanied by a significantly higher functional connectivity (FC, a measure of spatiotemporal synchronization) in the parietal and the parieto-occipital regions in the case of theta and alpha frequency bands and a non-significantly lower FC in the central region’s beta frequency band. Our results contribute to the field in two ways. First, they identify the applicability of DE as a unifying measure to reproduce the similar observations that are separately reported through adaptation of different hypnotic biomarkers in the literature. Second, they extend these previous findings that were based on neutral hypnosis (i.e., a hypnotic procedure that involves no specific suggestions other than those for becoming hypnotized) to the case of hypnotic suggestions, thereby identifying their presence as a potential signature of hypnotic experience.
BibTeX:
@Article{Keshmiri2020b,
  author   = {Soheil Keshmiri and Maryam Alimardani and Masahiro Shiomi and Hidenobu Sumioka and Hiroshi Ishiguro and Kazuo Hiraki},
  title    = {Higher hypnotic suggestibility is associated with the lower EEG signal variability in theta, alpha, and beta frequency bands},
  journal  = {PLOS ONE},
  year     = {2020},
  volume   = {15},
  number   = {4},
  pages    = {1-20},
  month    = apr,
  abstract = {Variation of information in the firing rate of neural population, as reflected in different frequency bands of electroencephalographic (EEG) time series, provides direct evidence for change in neural responses of the brain to hypnotic suggestibility. However, realization of an effective biomarker for spiking behaviour of neural population proves to be an elusive subject matter with its impact evident in highly contrasting results in the literature. In this article, we took an information-theoretic stance on analysis of the EEG time series of the brain activity during hypnotic suggestions, thereby capturing the variability in pattern of brain neural activity in terms of its information content. For this purpose, we utilized differential entropy (DE, i.e., the average information content in a continuous time series) of theta, alpha, and beta frequency bands of fourteen-channel EEG time series recordings that pertain to the brain neural responses of twelve carefully selected high and low hypnotically suggestible individuals. Our results show that the higher hypnotic suggestibility is associated with a significantly lower variability in information content of theta, alpha, and beta frequencies. Moreover, they indicate that such a lower variability is accompanied by a significantly higher functional connectivity (FC, a measure of spatiotemporal synchronization) in the parietal and the parieto-occipital regions in the case of theta and alpha frequency bands and a non-significantly lower FC in the central region’s beta frequency band. Our results contribute to the field in two ways. First, they identify the applicability of DE as a unifying measure to reproduce the similar observations that are separately reported through adaptation of different hypnotic biomarkers in the literature. Second, they extend these previous findings that were based on neutral hypnosis (i.e., a hypnotic procedure that involves no specific suggestions other than those for becoming hypnotized) to the case of hypnotic suggestions, thereby identifying their presence as a potential signature of hypnotic experience.},
  day      = {9},
  url      = {https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0230853},
  doi      = {10.1371/journal.pone.0230853},
}
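The analysis above uses the differential entropy (DE) of theta, alpha, and beta EEG bands as its information-theoretic measure. The sketch below band-filters a toy single-channel signal and computes DE per band via the Gaussian closed form 0.5*ln(2*pi*e*variance); the sampling rate, filter order, and synthetic signal are assumptions, not the study's recordings.
Illustrative sketch (Python):
# Minimal sketch: per-band differential entropy of a band-filtered EEG channel.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 128.0
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_de(x, lo, hi):
    """Differential entropy of the band-passed signal, 0.5*ln(2*pi*e*var)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    xf = filtfilt(b, a, x)
    return 0.5 * np.log(2 * np.pi * np.e * xf.var())

rng = np.random.default_rng(0)
eeg = rng.standard_normal(int(10 * fs))          # 10 s of toy single-channel "EEG"
for name, (lo, hi) in bands.items():
    print(name, round(band_de(eeg, lo, hi), 3))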
Nobuhiro Jinnai, Hidenobu Sumioka, Takashi Minato, Hiroshi Ishiguro, "Multi-modal Interaction through Anthropomorphically Designed Communication Medium to Enhance the Self-Disclosures of Personal Information", Journal of Robotics and Mechatronics, vol. 32, no. 1, pp. 76-85, February, 2020.
Abstract: Although current communication media facilitate the interaction among individuals, researchers have warned that human relationships constructed by these media tend to lack the level of intimacy acquired through face-to-face communications. In this paper, we investigate how long-term use of humanlike communication media affects the development of intimate relationships between human users. We examine changes in the relationship between individuals while having conversation with each other through humanlike communication media or mobile phones for about a month. The intimacy of their relationship was evaluated with the amount of self-disclosure of personal information. The result shows that a communication medium with humanlike appearance and soft material significantly increases the total amount of self-disclosure through the experiment, compared with typical mobile phone. The amount of self-disclosure showed cyclic variation through the experiment in humanlike communication media condition. Furthermore, we discuss a possible underlying mechanism of this effect from misattribution of a feeling caused by intimate distance with the medium to a conversation partner.
BibTeX:
@Article{Jinnai2020,
  author   = {Nobuhiro Jinnai and Hidenobu Sumioka and Takashi Minato and Hiroshi Ishiguro},
  journal  = {Journal of Robotics and Mechatronics},
  title    = {Multi-modal Interaction through Anthropomorphically Designed Communication Medium to Enhance the Self-Disclosures of Personal Information},
  year     = {2020},
  abstract = {Although current communication media facilitate the interaction among individuals, researchers have warned that human relationships constructed by these media tend to lack the level of intimacy acquired through face-to-face communications. In this paper, we investigate how long-term use of humanlike communication media affects the development of intimate relationships between human users. We examine changes in the relationship between individuals while having conversation with each other through humanlike communication media or mobile phones for about a month. The intimacy of their relationship was evaluated with the amount of self-disclosure of personal information. The result shows that a communication medium with humanlike appearance and soft material significantly increases the total amount of self-disclosure through the experiment, compared with typical mobile phone. The amount of self-disclosure showed cyclic variation through the experiment in humanlike communication media condition. Furthermore, we discuss a possible underlying mechanism of this effect from misattribution of a feeling caused by intimate distance with the medium to a conversation partner.},
  day      = {20},
  doi      = {10.20965/jrm.2020.p0076},
  month    = feb,
  number   = {1},
  pages    = {76-85},
  url      = {https://www.fujipress.jp/jrm/rb_ja/},
  volume   = {32},
  keywords = {social presence, mediated social interaction, human relationship},
}
内田貴久, 港隆史, 石黒浩, "コミュニケーションロボットは人間と同等な主観を持つべきか", 日本ロボット学会誌(RSJ), vol. 39, no. 1, pp. 34-38, January, 2020.
Abstract: In recent years, communication robots have been making their way into our daily lives. To communicate with people, it is important for a robot to have human-like capabilities, and, as recent progress in speech recognition and language understanding shows, the functions needed for communication keep improving. How human-like, then, do a robot's capabilities need to be for it to communicate with people smoothly and richly? One thing that regularly surfaces in everyday dialogue is an individual's subjective opinion. This article first introduces philosophical work on whether people can imagine (attribute) subjective experience in a robot. It then reports the authors' studies on how attributing subjective opinions to a robot affects communication with it. Finally, building on these studies, it discusses whether communication robots should have the same subjectivity as humans.
BibTeX:
@Article{内田貴久2020a,
  author   = {内田貴久 and 港隆史 and 石黒浩},
  journal  = {日本ロボット学会誌(RSJ)},
  title    = {コミュニケーションロボットは人間と同等な主観を持つべきか},
  year     = {2020},
  abstract = {近年,コミュニケーションロボットが我々の生活に浸透しつつある.人とコミュニケーションを行うために,ロボットは人間のような機能を持つことが重要である.昨今の音声認識技術や言語理解技術などの発展に見られるように,コミュニケーションに必要な機能はますます向上している.では,どこまで人間のような機能を有すれば,ロボットは人とのコミュニケーションを円滑に,そして豊かに行うことができるのであろうか.我々の日常対話で顕在化するものの一つに,個人の主観的な意見がある.本稿では,ロボットが主観的な経験を持つと想像できる(帰属する)かという問いに対して哲学的に調査・考察した研究を紹介する.次に,ロボットに対する主観的な意見を帰属することが,ロボットとのコミュニケーションにどのような影響を与えるのかについて,筆者らが行った研究を報告する.そして最後に,これらの研究をふまえ,コミュニケーションロボットは人間と同等な主観を持つべきかという問いに関して議論する.},
  day      = {15},
  doi      = {10.7210/jrsj.39.34},
  etitle   = {Should Communication Robots Have the Same Subjectivity as Humans?},
  month    = jan,
  number   = {1},
  pages    = {34-38},
  url      = {https://www.rsj.or.jp/pub/jrsj/about.html},
  volume   = {39},
  keywords = {communication robot, dialogue robot, subjectivity},
}
Soheil Keshmiri, Masahiro Shiomi, Hiroshi Ishiguro, "Emergence of the Affect from the Variation in the Whole-Brain Flow of Information", Brain Sciences, vol. 10, Issue 1, no. 8, pp. 1-32, January, 2020.
Abstract: Over the past few decades, the quest for discovering the brain substrates of the affect to understand the underlying neural basis of the human’s emotions has resulted in substantial and yet contrasting results. Whereas some point at distinct and independent brain systems for the Positive and Negative affects, others propose the presence of flexible brain regions. In this respect, there are two factors that are common among these previous studies. First, they all focused on the change in brain activation, thereby neglecting the findings that indicate that the stimuli with equivalent sensory and behavioral processing demands may not necessarily result in differential brain activation. Second, they did not take into consideration the brain regional interactivity and the findings that identify that the signals from individual cortical neurons are shared across multiple areas and thus concurrently contribute to multiple functional pathways. To address these limitations, we performed Granger causal analysis on the electroencephalography (EEG) recordings of the human subjects who watched movie clips that elicited Negative, Neutral, and Positive affects. This allowed us to look beyond the brain regional activation in isolation to investigate whether the brain regional interactivity can provide further insights for understanding the neural substrates of the affect. Our results indicated that the differential affect states emerged from subtle variation in information flow of the brain cortical regions that were in both hemispheres. They also showed that these regions that were rather common between affect states than distinct to a specific affect were characterized with both short- as well as long-range information flow. This provided evidence for the presence of simultaneous integration and differentiation in the brain functioning that leads to the emergence of different affects. These results are in line with the findings on the presence of intrinsic large-scale interacting brain networks that underlie the production of psychological events. These findings can help advance our understanding of the neural basis of the human’s emotions by identifying the signatures of differential affect in subtle variation that occurs in the whole-brain cortical flow of information.
BibTeX:
@Article{Keshmiri2020,
  author   = {Soheil Keshmiri and Masahiro Shiomi and Hiroshi Ishiguro},
  journal  = {Brain Sciences},
  title    = {Emergence of the Affect from the Variation in the Whole-Brain Flow of Information},
  year     = {2020},
  abstract = {Over the past few decades, the quest for discovering the brain substrates of the affect to understand the underlying neural basis of the human’s emotions has resulted in substantial and yet contrasting results. Whereas some point at distinct and independent brain systems for the Positive and Negative affects, others propose the presence of flexible brain regions. In this respect, there are two factors that are common among these previous studies. First, they all focused on the change in brain activation, thereby neglecting the findings that indicate that the stimuli with equivalent sensory and behavioral processing demands may not necessarily result in differential brain activation. Second, they did not take into consideration the brain regional interactivity and the findings that identify that the signals from individual cortical neurons are shared across multiple areas and thus concurrently contribute to multiple functional pathways. To address these limitations, we performed Granger causal analysis on the electroencephalography (EEG) recordings of the human subjects who watched movie clips that elicited Negative, Neutral, and Positive affects. This allowed us to look beyond the brain regional activation in isolation to investigate whether the brain regional interactivity can provide further insights for understanding the neural substrates of the affect. Our results indicated that the differential affect states emerged from subtle variation in information flow of the brain cortical regions that were in both hemispheres. They also showed that these regions that were rather common between affect states than distinct to a specific affect were characterized with both short- as well as long-range information flow. This provided evidence for the presence of simultaneous integration and differentiation in the brain functioning that leads to the emergence of different affects. These results are in line with the findings on the presence of intrinsic large-scale interacting brain networks that underlie the production of psychological events. These findings can help advance our understanding of the neural basis of the human’s emotions by identifying the signatures of differential affect in subtle variation that occurs in the whole-brain cortical flow of information.},
  day      = {1},
  doi      = {10.3390/brainsci10010008},
  month    = jan,
  number   = {8},
  pages    = {1-32},
  url      = {https://www.mdpi.com/2076-3425/10/1/8},
  volume   = {10, Issue 1},
  keywords = {Granger causality; functional connectivity; information flow; affect; brain signal variability},
}
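As a rough illustration of the pairwise Granger-causal analysis described above (a sketch on random placeholder data; the channel count, lag order, and F-test choice are assumptions, not the paper's settings):
```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def granger_p(cause, effect, maxlag=5):
    # statsmodels tests whether the 2nd column helps predict the 1st column
    data = np.column_stack([effect, cause])
    res = grangercausalitytests(data, maxlag=maxlag)
    return min(res[lag][0]["ssr_ftest"][1] for lag in res)   # smallest p over lags

eeg = np.random.randn(14, 2000)               # placeholder multi-channel recording
n = eeg.shape[0]
pvals = np.ones((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            pvals[i, j] = granger_p(eeg[i], eeg[j])   # influence of channel i on j
```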
Xiqian Zheng, Masahiro Shiomi, Takashi Minato, Hiroshi Ishiguro, "What Kinds of Robot's Touch Will Match Expressed Emotions?", IEEE Robotics and Automation Letters (RA-L), vol. 5, Issue 1, pp. 127-134, January, 2020.
Abstract: This study investigated the effects of touch characteristics that change the strength and the naturalness of the emotions perceived by people in human-robot touch interaction with an android robot that has a feminine, human-like appearance. Past studies on human-robot touch interaction focused on understanding what kinds of human touches conveyed emotion to robots, i.e., the robot's touch characteristics that can affect people's perceived emotions received less focus. In this study, we concentrated on three touch characteristics (length, type, and part) based on arousal/valence perspectives, and their effects on the perceived strength/naturalness of a commonly used emotion in human-robot interaction, i.e., happiness, and its counterpart emotion, (i.e., sadness), borrowing Ekman's definitions. Our results showed that the touch length and its type are useful to change the perceived strengths and the naturalness of the expressed emotions based on the arousal/valence perspective, although the touch part did not fit such perspective assumptions. Finally, our results suggest that a brief pat and a longer contact by the fingers are better combinations to express happy and sad emotions with our robot. Since we only used a female android, we discussed future works with a male humanoid robot and/or a robot whose appearance is less humanoid.
BibTeX:
@Article{Zheng2019a,
  author   = {Xiqian Zheng and Masahiro Shiomi and Takashi Minato and Hiroshi Ishiguro},
  journal  = {IEEE Robotics and Automation Letters (RA-L)},
  title    = {What Kinds of Robot's Touch Will Match Expressed Emotions?},
  year     = {2020},
  abstract = {This study investigated the effects of touch characteristics that change the strength and the naturalness of the emotions perceived by people in human-robot touch interaction with an android robot that has a feminine, human-like appearance. Past studies on human-robot touch interaction focused on understanding what kinds of human touches conveyed emotion to robots, i.e., the robot's touch characteristics that can affect people's perceived emotions received less focus. In this study, we concentrated on three touch characteristics (length, type, and part) based on arousal/valence perspectives, and their effects on the perceived strength/naturalness of a commonly used emotion in human-robot interaction, i.e., happiness, and its counterpart emotion, (i.e., sadness), borrowing Ekman's definitions. Our results showed that the touch length and its type are useful to change the perceived strengths and the naturalness of the expressed emotions based on the arousal/valence perspective, although the touch part did not fit such perspective assumptions. Finally, our results suggest that a brief pat and a longer contact by the fingers are better combinations to express happy and sad emotions with our robot. Since we only used a female android, we discussed future works with a male humanoid robot and/or a robot whose appearance is less humanoid.},
  doi      = {10.1109/LRA.2019.2947010},
  month    = jan,
  pages    = {127-134},
  url      = {https://ieeexplore.ieee.org/document/8865356?source=authoralert},
  volume   = {5, Issue 1},
  comment  = {(The contents of this paper were also selected by Humanoids 2019 Program Committee for presentation at the Conference)},
}
Soheil Keshmiri, Masahiro Shiomi, Hiroshi Ishiguro, "Entropy of the Multi-Channel EEG Recordings Identifies the Distributed Signatures of Negative, Neutral and Positive Affect in Whole-Brain Variability", Entropy, vol. 21, Issue 12, no. 1228, pp. 1-25, December, 2019.
Abstract: Individuals’ ability to express their subjective experiences in terms of such attributes as pleasant/unpleasant or positive/negative feelings forms a fundamental property of their affect and emotion. However, neuroscientific findings on the underlying neural substrates of the affect appear to be inconclusive with some reporting the presence of distinct and independent brain systems and others identifying flexible and distributed brain regions. A common theme among these studies is the focus on the change in brain activation. As a result, they do not take into account the findings that indicate the brain activation and its information content does not necessarily modulate and that the stimuli with equivalent sensory and behavioural processing demands may not necessarily result in differential brain activation. In this article, we take a different stance on the analysis of the differential effect of the negative, neutral and positive affect on the brain functioning in which we look into the whole-brain variability: that is the change in the brain information processing measured in multiple distributed regions. For this purpose, we compute the entropy of individuals’ multi-channel EEG recordings who watched movie clips with differing affect. Our results suggest that the whole-brain variability significantly differentiates between the negative, neutral and positive affect. They also indicate that although some brain regions contribute more to such differences, it is the whole-brain variational pattern that results in their significantly above chance level prediction. These results imply that although the underlying brain substrates for negative, neutral and positive affect exhibit quantitatively differing degrees of variability, their differences are rather subtly encoded in the whole-brain variational patterns that are distributed across its entire activity.
BibTeX:
@Article{Keshmiri2019l,
  author   = {Soheil Keshmiri and Masahiro Shiomi and Hiroshi Ishiguro},
  journal  = {Entropy},
  title    = {Entropy of the Multi-Channel EEG Recordings Identifies the Distributed Signatures of Negative, Neutral and Positive Affect in Whole-Brain Variability},
  year     = {2019},
  abstract = {Individuals’ ability to express their subjective experiences in terms of such attributes as pleasant/unpleasant or positive/negative feelings forms a fundamental property of their affect and emotion. However, neuroscientific findings on the underlying neural substrates of the affect appear to be inconclusive with some reporting the presence of distinct and independent brain systems and others identifying flexible and distributed brain regions. A common theme among these studies is the focus on the change in brain activation. As a result, they do not take into account the findings that indicate the brain activation and its information content does not necessarily modulate and that the stimuli with equivalent sensory and behavioural processing demands may not necessarily result in differential brain activation. In this article, we take a different stance on the analysis of the differential effect of the negative, neutral and positive affect on the brain functioning in which we look into the whole-brain variability: that is the change in the brain information processing measured in multiple distributed regions. For this purpose, we compute the entropy of individuals’ multi-channel EEG recordings who watched movie clips with differing affect. Our results suggest that the whole-brain variability significantly differentiates between the negative, neutral and positive affect. They also indicate that although some brain regions contribute more to such differences, it is the whole-brain variational pattern that results in their significantly above chance level prediction. These results imply that although the underlying brain substrates for negative, neutral and positive affect exhibit quantitatively differing degrees of variability, their differences are rather subtly encoded in the whole-brain variational patterns that are distributed across its entire activity.},
  day      = {16},
  doi      = {10.3390/e21121228},
  month    = dec,
  number   = {1228},
  pages    = {1-25},
  url      = {https://www.mdpi.com/1099-4300/21/12/1228/htm},
  volume   = {21, Issue 12},
  keywords = {entropy; differential entropy; affect; brain variability},
}
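A toy sketch of the "above chance level prediction" check mentioned in the abstract, using synthetic per-channel entropy features (the classifier, feature dimensions, and trial counts are illustrative assumptions, not the study's pipeline):
```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 14))     # 90 trials x 14 channel-wise entropy values (placeholder)
y = np.repeat([0, 1, 2], 30)      # negative / neutral / positive labels

# compare cross-validated accuracy against the 3-class chance level of 1/3
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"mean CV accuracy: {acc:.2f} (chance = 0.33)")
```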
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Masahiro Shiomi, Hiroshi Ishiguro, "Information Content of Prefrontal Cortex Activity Quantifies the Difficulty of Narrated Stories", Scientific Reports, vol. 9, no. 17959, November, 2019.
Abstract: The ability to realize the individuals' impressions during the verbal communication can enable social robots to play a significant role in facilitating our social interactions in such areas as child education and elderly care. However, such impressions are highly subjective and internalized and therefore cannot be easily comprehended through behavioural observations. Although brain-machine interface suggests the utility of the brain information in human-robot interaction, previous studies did not consider its potential for estimating the internal impressions during verbal communication. In this article, we introduce a novel approach to estimation of the individuals' perceived difficulty of stories using their prefrontal cortex activity. We demonstrate the robustness of our approach by showing its comparable performance in in-person, humanoid, speaker, and video-chat system. Our results contribute to the field of socially assistive robotics by taking a step toward enabling robots determine their human companions' perceived difficulty of conversations to sustain their communication by adapting to individuals' pace and interest in response to conversational nuances and complexity. They also verify the use of brain information to complement the behavioural-based study of a robotic theory of mind through critical investigation of its implications in humans' neurological responses while interacting with their synthetic companions.
BibTeX:
@Article{Keshmiri2019g,
  author   = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Masahiro Shiomi and Hiroshi Ishiguro},
  journal  = {Scientific Reports},
  title    = {Information Content of Prefrontal Cortex Activity Quantifies the Difficulty of Narrated Stories},
  year     = {2019},
  abstract = {The ability to realize the individuals' impressions during the verbal communication can enable social robots to play a significant role in facilitating our social interactions in such areas as child education and elderly care. However, such impressions are highly subjective and internalized and therefore cannot be easily comprehended through behavioural observations. Although brain-machine interface suggests the utility of the brain information in human-robot interaction, previous studies did not consider its potential for estimating the internal impressions during verbal communication. In this article, we introduce a novel approach to estimation of the individuals' perceived difficulty of stories using their prefrontal cortex activity. We demonstrate the robustness of our approach by showing its comparable performance in in-person, humanoid, speaker, and video-chat system. Our results contribute to the field of socially assistive robotics by taking a step toward enabling robots determine their human companions' perceived difficulty of conversations to sustain their communication by adapting to individuals' pace and interest in response to conversational nuances and complexity. They also verify the use of brain information to complement the behavioural-based study of a robotic theory of mind through critical investigation of its implications in humans' neurological responses while interacting with their synthetic companions.},
  day      = {29},
  doi      = {10.1038/s41598-019-54280-1},
  month    = nov,
  number   = {17959},
  url      = {https://www.nature.com/articles/s41598-019-54280-1},
  volume   = {9},
}
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Hiroshi Ishiguro, "Decoding the Perceived Difficulty of Communicated Contents by Older People: Toward Conversational Robot-Assistive Elderly Care", IEEE Robotics and Automation Letters (RA-L), vol. 4, Issue 4, pp. 3263-3269, October, 2019.
Abstract: In this study, we propose a semi-supervised learning model for decoding of the perceived difficulty of communicated content by older people. Our model is based on mapping of the older people’s prefrontal cortex (PFC) activity during their verbal communication onto fine-grained cluster spaces of a working memory (WM) task that induces loads on human’s PFC through modulation of its difficulty level. This allows for differential quantification of the observed changes in pattern of PFC activation during verbal communication with respect to the difficulty level of the WM task. We show that such a quantification establishes a reliable basis for categorization and subsequently learning of the PFC responses to more naturalistic contents such as story comprehension. Our contribution is to present evidence on effectiveness of our method for estimation of the older peoples’ perceived difficulty of the communicated contents during an online storytelling scenario.
BibTeX:
@Article{Keshmiri2019c,
  author   = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Hiroshi Ishiguro},
  journal  = {IEEE Robotics and Automation Letters (RA-L)},
  title    = {Decoding the Perceived Difficulty of Communicated Contents by Older People: Toward Conversational Robot-Assistive Elderly Care},
  year     = {2019},
  abstract = {In this study, we propose a semi-supervised learning model for decoding of the perceived difficulty of communicated content by older people. Our model is based on mapping of the older people’s prefrontal cortex (PFC) activity during their verbal communication onto fine-grained cluster spaces of a working memory (WM) task that induces loads on human’s PFC through modulation of its difficulty level. This allows for differential quantification of the observed changes in pattern of PFC activation during verbal communication with respect to the difficulty level of the WM task. We show that such a quantification establishes a reliable basis for categorization and subsequently learning of the PFC responses to more naturalistic contents such as story comprehension. Our contribution is to present evidence on effectiveness of our method for estimation of the older peoples’ perceived difficulty of the communicated contents during an online storytelling scenario.},
  doi      = {10.1109/LRA.2019.2925732},
  month    = oct,
  pages    = {3263-3269},
  url      = {https://ieeexplore.ieee.org/abstract/document/8750900},
  volume   = {4, Issue 4},
  comment  = {(The contents of this paper were also selected by IROS2019 Program Committee for presentation at the Conference)},
}
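The mapping idea used in this and the following entry can be pictured with a small sketch (entirely synthetic data; the cluster count, feature dimensionality, and majority-vote labeling are assumptions, not the authors' model):
```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
wm_features = rng.normal(size=(300, 16))       # placeholder PFC features from the WM task
wm_difficulty = rng.integers(0, 3, size=300)   # difficulty level of each WM trial

# cluster the WM-task responses, then grade each cluster by its dominant difficulty
km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(wm_features)
cluster_to_level = np.array([np.bincount(wm_difficulty[km.labels_ == c]).argmax()
                             for c in range(km.n_clusters)])

# PFC features recorded during conversation are labeled by their nearest WM cluster,
# giving an estimated "perceived difficulty" on the WM scale
conv_features = rng.normal(size=(40, 16))
estimated_level = cluster_to_level[km.predict(conv_features)]
```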
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Hiroshi Ishiguro, "Older People Prefrontal Cortex Activation Estimates Their Perceived Difficulty of a Humanoid-Mediated Conversation", IEEE Robotics and Automation Letters (RA-L), vol. 4, Issue 4, pp. 4108-4115, October, 2019.
Abstract: In this article, we extend our recent results on prediction of the older peoples’ perceived difficulty of verbal communication during a humanoid-mediated storytelling experiment to the case of a longitudinal conversation that was conducted over a four-week period and included a battery of conversational topics. For this purpose, we used our model that estimates the older people’s perceived difficulty by mapping their prefrontal cortex (PFC) activity during the verbal communication onto fine-grained cluster spaces of a working memory (WM) task that induces loads on human’s PFC through modulation of its difficulty level. This enables us to differentially quantify the observed changes in PFC activity during the conversation based on the difficulty level of the WM task. We show that such a quantification forms a reliable basis for learning the PFC activation patterns in response to conversational contents. Our results indicate the ability of our model for predicting the older peoples’ perceived difficulty of a wide range of humanoid-mediated tele-conversations, regardless of their type, topic, and duration.
BibTeX:
@Article{Keshmiri2019d,
  author   = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Hiroshi Ishiguro},
  journal  = {IEEE Robotics and Automation Letters (RA-L)},
  title    = {Older People Prefrontal Cortex Activation Estimates Their Perceived Difficulty of a Humanoid-Mediated Conversation},
  year     = {2019},
  abstract = {In this article, we extend our recent results on prediction of the older peoples’ perceived difficulty of verbal communication during a humanoid-mediated storytelling experiment to the case of a longitudinal conversation that was conducted over a four-week period and included a battery of conversational topics. For this purpose, we used our model that estimates the older people’s perceived difficulty by mapping their prefrontal cortex (PFC) activity during the verbal communication onto fine-grained cluster spaces of a working memory (WM) task that induces loads on human’s PFC through modulation of its difficulty level. This enables us to differentially quantify the observed changes in PFC activity during the conversation based on the difficulty level of the WM task. We show that such a quantification forms a reliable basis for learning the PFC activation patterns in response to conversational contents. Our results indicate the ability of our model for predicting the older peoples’ perceived difficulty of a wide range of humanoid-mediated tele-conversations, regardless of their type, topic, and duration.},
  doi      = {10.1109/LRA.2019.2930495},
  month    = oct,
  pages    = {4108-4115},
  url      = {https://ieeexplore.ieee.org/document/8769897},
  volume   = {4, Issue 4},
  comment  = {(The contents of this paper were also selected by IROS2019 Program Committee for presentation at the Conference)},
}
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Hiroshi Ishiguro, "Differential Effect of the Physical Embodiment on the Prefrontal Cortex Activity as Quantified by Its Entropy", Entropy, vol. 21, Issue 9, no. 875, pp. 1-26, September, 2019.
Abstract: Computer-mediated-communication (CMC) research suggests that unembodied media can surpass in-person communication due to their utility to bypass the nonverbal components of verbal communication such as physical presence and facial expressions. However, recent results on communicative humanoids suggest the importance of the physical embodiment of conversational partners. These contradictory findings are strengthened by the fact that almost all of these results are based on the subjective assessments of the behavioural impacts of these systems. To investigate these opposing views of the potential role of the embodiment during communication, we compare the effect of a physically embodied medium that is remotely controlled by a human operator with such unembodied media as telephones and video-chat systems on the frontal brain activity of human subjects, given the pivotal role of this region in social cognition and verbal comprehension. Our results provide evidence that communicating through a physically embodied medium affects the frontal brain activity of humans whose patterns potentially resemble those of in-person communication. These findings argue for the significance of embodiment in naturalistic scenarios of social interaction, such as storytelling and verbal comprehension, and the potential application of brain information as a promising sensory gateway in the characterization of behavioural responses in human-robot interaction.
BibTeX:
@Article{Keshmiri2019i,
  author   = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Hiroshi Ishiguro},
  title    = {Differential Effect of the Physical Embodiment on the Prefrontal Cortex Activity as Quantified by Its Entropy},
  journal  = {Entropy},
  year     = {2019},
  volume   = {21, Issue 9},
  number   = {875},
  pages    = {1-26},
  month    = sep,
  abstract = {Computer-mediated-communication (CMC) research suggests that unembodied media can surpass in-person communication due to their utility to bypass the nonverbal components of verbal communication such as physical presence and facial expressions. However, recent results on communicative humanoids suggest the importance of the physical embodiment of conversational partners. These contradictory findings are strengthened by the fact that almost all of these results are based on the subjective assessments of the behavioural impacts of these systems. To investigate these opposing views of the potential role of the embodiment during communication, we compare the effect of a physically embodied medium that is remotely controlled by a human operator with such unembodied media as telephones and video-chat systems on the frontal brain activity of human subjects, given the pivotal role of this region in social cognition and verbal comprehension. Our results provide evidence that communicating through a physically embodied medium affects the frontal brain activity of humans whose patterns potentially resemble those of in-person communication. These findings argue for the significance of embodiment in naturalistic scenarios of social interaction, such as storytelling and verbal comprehension, and the potential application of brain information as a promising sensory gateway in the characterization of behavioural responses in human-robot interaction.},
  day      = {8},
  url      = {https://www.mdpi.com/1099-4300/21/9/875},
  doi      = {10.3390/e21090875},
  keywords = {differential entropy; embodied media; tele-communication; humanoid; prefrontal cortex},
}
Soheil Keshmiri, Masahiro Shiomi, Kodai Shatani, Takashi Minato, Hiroshi Ishiguro, "Facial Pre-Touch Space Differentiates the Level of Openness Among Individuals", Scientific Reports, vol. 9, no. 11924, August, 2019.
Abstract: Social and cognitive psychology provide a rich map of our personality landscape. What appears to be unexplored is the correspondence between these findings and our behavioural responses during day-to-day life interaction. In this article, we utilize cluster analysis to show that the individuals’ facial pre-touch space can be divided into three well-defined subspaces and that within the first two immediate clusters around the face area such distance information significantly correlate with their openness in the five-factor model (FFM). In these two clusters, we also identify that the individuals’ facial pre-touch space can predict their level of openness that are further categorized into six distinct levels with a highly above chance accuracy. Our results suggest that such personality factors as openness are not only reflected in individuals’ behavioural responses but also these responses allow for a fine-grained categorization of individuals’ personality.
BibTeX:
@Article{Keshmiri2019h,
  author   = {Soheil Keshmiri and Masahiro Shiomi and Kodai Shatani and Takashi Minato and Hiroshi Ishiguro},
  title    = {Facial Pre-Touch Space Differentiates the Level of Openness Among Individuals},
  journal  = {Scientific Reports},
  year     = {2019},
  volume   = {9},
  number   = {11924},
  month    = aug,
  abstract = {Social and cognitive psychology provide a rich map of our personality landscape. What appears to be unexplored is the correspondence between these findings and our behavioural responses during day-to-day life interaction. In this article, we utilize cluster analysis to show that the individuals’ facial pre-touch space can be divided into three well-defined subspaces and that within the first two immediate clusters around the face area such distance information significantly correlate with their openness in the five-factor model (FFM). In these two clusters, we also identify that the individuals’ facial pre-touch space can predict their level of openness that are further categorized into six distinct levels with a highly above chance accuracy. Our results suggest that such personality factors as openness are not only reflected in individuals’ behavioural responses but also these responses allow for a fine-grained categorization of individuals’ personality.},
  day      = {15},
  url      = {https://www.nature.com/articles/s41598-019-48481-x},
  doi      = {10.1038/s41598-019-48481-x},
}
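A hedged sketch of the cluster-then-correlate analysis described above, on synthetic distances and personality scores (the three clusters and the Pearson test follow the abstract; all data and numbers are placeholders):
```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
distance = rng.uniform(0.05, 0.6, size=200)    # pre-touch distance per approach (m), placeholder
openness = rng.normal(size=200)                # placeholder openness score per sample

# divide the facial pre-touch space into three subspaces
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(distance.reshape(-1, 1))
order = np.argsort([distance[labels == c].mean() for c in range(3)])   # near -> far

# correlate distance with openness within the two clusters closest to the face
for rank, c in enumerate(order[:2]):
    r, p = pearsonr(distance[labels == c], openness[labels == c])
    print(f"cluster {rank}: r = {r:.2f}, p = {p:.3f}")
```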
Hidenobu Sumioka, Soheil Keshmiri, Hiroshi Ishiguro, "Information-theoretic investigation of impact of huggable communication medium on prefrontal brain activation", Advanced Robotics, vol. 33, Issue 19, pp. 1019-1029, August, 2019.
Abstract: This paper examines the effect of mediated hugs that are achieved with a huggable communication medium on the brain activities of users during conversations. We measured their brain activities with functional near-infrared spectroscopy (NIRS) and evaluated them with two information theoretic measures: permutation entropy, an indicator of relaxation, and multiscale entropy, which captures complexity in brain activation at multiple time scales. We first verify the influence of lip movements on brain activities during conversation and then compare brain activities during tele-conversation through a huggable communication medium with a mobile phone. Our analysis of NIRS signals shows that mediated hugs decrease permutation entropy and increase multiscale entropy. These results suggest that touch interaction through a mediated hug induces a relaxed state in our brain but increases complex patterns of brain activation.
BibTeX:
@Article{Sumioka2019h,
  author   = {Hidenobu Sumioka and Soheil Keshmiri and Hiroshi Ishiguro},
  journal  = {Advanced Robotics},
  title    = {Information-theoretic investigation of impact of huggable communication medium on prefrontal brain activation},
  year     = {2019},
  abstract = {This paper examines the effect of mediated hugs that are achieved with a huggable communication medium on the brain activities of users during conversations. We measured their brain activities with functional near-infrared spectroscopy (NIRS) and evaluated them with two information theoretic measures: permutation entropy, an indicator of relaxation, and multiscale entropy, which captures complexity in brain activation at multiple time scales. We first verify the influence of lip movements on brain activities during conversation and then compare brain activities during tele-conversation through a huggable communication medium with a mobile phone. Our analysis of NIRS signals shows that mediated hugs decrease permutation entropy and increase multiscale entropy. These results suggest that touch interaction through a mediated hug induces a relaxed state in our brain but increases complex patterns of brain activation.},
  day      = {12},
  doi      = {10.1080/01691864.2019.1652114},
  month    = aug,
  pages    = {1019-1029},
  url      = {https://www.tandfonline.com/doi/abs/10.1080/01691864.2019.1652114},
  volume   = {33, Issue 19},
  keywords = {Mediated hug, huggable communication, telecommunication, information theory, permutation entropy, multiscale entropy analysis},
}
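Of the two measures used above, permutation entropy is the simpler to state; a compact sketch (standard Bandt-Pompe ordinal patterns, with an illustrative order/delay and a random signal standing in for the NIRS data):
```python
import numpy as np
from math import factorial
from itertools import permutations

def permutation_entropy(x, order=3, delay=1):
    # normalised Shannon entropy of the ordinal patterns of length `order`
    patterns = list(permutations(range(order)))
    counts = np.zeros(len(patterns))
    for i in range(len(x) - (order - 1) * delay):
        window = x[i:i + order * delay:delay]
        pattern = tuple(int(k) for k in np.argsort(window))
        counts[patterns.index(pattern)] += 1
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p)) / np.log(factorial(order))   # 0 = regular, 1 = random

signal = np.random.randn(1000)    # stand-in for one NIRS channel
print(permutation_entropy(signal, order=3, delay=1))
```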
Malcolm Doering, Phoebe Liu, Dylan F. Glas, Takayuki Kanda, Dana Kulić, Hiroshi Ishiguro, "Curiosity did not kill the robot: A curiosity-based learning system for a shopkeeper robot", ACM Transactions on Human-Robot Interaction (THRI), vol. 8, Issue 3, no. 15, pp. 1-24, July, 2019.
Abstract: Learning from human interaction data is a promising approach for developing robot interaction logic, but behaviors learned only from offline data simply represent the most frequent interaction patterns in the training data, without any adaptation for individual differences. We developed a robot that incorporates both data-driven and interactive learning. Our robot first learns high-level dialog and spatial behavior patterns from offline examples of human-human interaction. Then, during live interactions, it chooses among appropriate actions according to its curiosity about the customer's expected behavior, continually updating its predictive model to learn and adapt to each individual. In a user study, we found that participants thought the curious robot was significantly more humanlike with respect to repetitiveness and diversity of behavior, more interesting, and better overall in comparison to a non-curious robot.
BibTeX:
@Article{Doering2019,
  author   = {Malcolm Doering and Phoebe Liu and Dylan F. Glas and Takayuki Kanda and Dana Kulić and Hiroshi Ishiguro},
  journal  = {ACM Transactions on Human-Robot Interaction (THRI)},
  title    = {Curiosity did not kill the robot: A curiosity-based learning system for a shopkeeper robot},
  year     = {2019},
  abstract = {Learning from human interaction data is a promising approach for developing robot interaction logic, but behaviors learned only from offline data simply represent the most frequent interaction patterns in the training data, without any adaptation for individual differences. We developed a robot that incorporates both data-driven and interactive learning. Our robot first learns high-level dialog and spatial behavior patterns from offline examples of human-human interaction. Then, during live interactions, it chooses among appropriate actions according to its curiosity about the customer's expected behavior, continually updating its predictive model to learn and adapt to each individual. In a user study, we found that participants thought the curious robot was significantly more humanlike with respect to repetitiveness and diversity of behavior, more interesting, and better overall in comparison to a non-curious robot.},
  day      = {23},
  doi      = {10.1145/3326462},
  month    = jul,
  number   = {15},
  pages    = {1-24},
  url      = {https://dl.acm.org/citation.cfm?id=3326462},
  volume   = {8, Issue 3},
}
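The "curiosity" criterion can be read as preferring the action whose outcome the robot is least able to predict; a toy sketch of that selection rule (the action names and probabilities are invented for illustration, not the paper's model):
```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# predicted probability distribution over customer reactions for each candidate action
predicted = {
    "greet":        [0.90, 0.05, 0.05],
    "recommend":    [0.40, 0.35, 0.25],
    "ask_question": [0.34, 0.33, 0.33],
}

# pick the action with the most uncertain predicted outcome, i.e. the most informative one
curious_choice = max(predicted, key=lambda a: entropy(predicted[a]))
print(curious_choice)
```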
Chaoran Liu, Carlos Ishi, Hiroshi Ishiguro, "Probabilistic nod generation model based on speech and estimated utterance categories", Advanced Robotics, vol. 33, Issue 15-16, pp. 731-741, May, 2019.
Abstract: We proposed and evaluated a probabilistic model that generates nod motions based on utterance categories estimated from the speech input. The model comprises two main blocks. In the first block, dialogue act-related categories are estimated from the input speech. Considering the correlations between dialogue acts and head motions, the utterances are classified into three categories having distinct nod distributions. Linguistic information extracted from the input speech is fed to a cluster of classifiers which are combined to estimate the utterance categories. In the second block, nod motion parameters are generated based on the categories estimated by the classifiers. The nod motion parameters are represented as probability distribution functions (PDFs) inferred from human motion data. By using speech energy features, the parameters are sampled from the PDFs belonging to the estimated categories. The effectiveness of the proposed model was evaluated using an android robot, through subjective experiments. Experiment results indicated that the motions generated by our proposed approach are considered more natural than those of a previous model using fixed nod shapes and hand-labeled utterance categories.
BibTeX:
@Article{Liu2019a,
  author   = {Chaoran Liu and Carlos Ishi and Hiroshi Ishiguro},
  title    = {Probabilistic nod generation model based on speech and estimated utterance categories},
  journal  = {Advanced Robotics},
  year     = {2019},
  volume   = {33, Issue 15-16},
  pages    = {731-741},
  month    = may,
  issn     = {0169-1864},
  abstract = {We proposed and evaluated a probabilistic model that generates nod motions based on utterance categories estimated from the speech input. The model comprises two main blocks. In the first block, dialogue act-related categories are estimated from the input speech. Considering the correlations between dialogue acts and head motions, the utterances are classified into three categories having distinct nod distributions. Linguistic information extracted from the input speech is fed to a cluster of classifiers which are combined to estimate the utterance categories. In the second block, nod motion parameters are generated based on the categories estimated by the classifiers. The nod motion parameters are represented as probability distribution functions (PDFs) inferred from human motion data. By using speech energy features, the parameters are sampled from the PDFs belonging to the estimated categories. The effectiveness of the proposed model was evaluated using an android robot, through subjective experiments. Experiment results indicated that the motions generated by our proposed approach are considered more natural than those of a previous model using fixed nod shapes and hand-labeled utterance categories.},
  day      = {4},
  url      = {https://www.tandfonline.com/doi/full/10.1080/01691864.2019.1610063},
  doi      = {10.1080/01691864.2019.1610063},
  keywords = {Nod, motion generation, SVM, humanoid robot},
}
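A minimal sketch of the generation step described above: draw nod parameters from a per-category probability distribution and modulate them with a speech-energy feature (the categories, Gaussian form, and numbers are placeholders, not the model's learned PDFs):
```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical per-category (mean, std) for nod amplitude [deg] and duration [s]
nod_pdfs = {
    "statement":   {"amp": (12.0, 3.0), "dur": (0.45, 0.10)},
    "question":    {"amp": (8.0, 2.5),  "dur": (0.35, 0.08)},
    "backchannel": {"amp": (5.0, 1.5),  "dur": (0.25, 0.05)},
}

def sample_nod(category, energy):
    p = nod_pdfs[category]
    amp = rng.normal(*p["amp"]) * (0.8 + 0.4 * energy)   # louder speech -> larger nod
    dur = max(0.1, rng.normal(*p["dur"]))                # keep duration positive
    return amp, dur

print(sample_nod("statement", energy=0.7))
```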
Takahisa Uchida, Takashi Minato, Tora Koyama, Hiroshi Ishiguro, "Who Is Responsible for a Dialogue Breakdown? An Error Recovery Strategy That Promotes Cooperative Intentions From Humans by Mutual Attribution of Responsibility in Human-Robot Dialogues", Frontiers in Robotics and AI, vol. 6, Article 29, pp. 1-11, April, 2019.
Abstract: We propose a strategy with which conversational android robots can handle dialogue breakdowns. For smooth human-robot conversations, we must not only improve a robot's dialogue capability but also elicit cooperative intentions from users for avoiding and recovering from dialogue breakdowns. A cooperative intention can be encouraged if users recognize their own responsibility for breakdowns. If the robot always blames users, however, they will quickly become less cooperative and lose their motivation to continue a discussion. This paper hypothesizes that for smooth dialogues, the robot and the users must share the responsibility based on psychological reciprocity. In other words, the robot should alternately attribute the responsibility to itself and to the users. We proposed a dialogue strategy for recovering from dialogue breakdowns based on the hypothesis and experimentally verified it with an android. The experimental result shows that the proposed method made the participants aware of their share of the responsibility of the dialogue breakdowns without reducing their motivation, even though the number of dialogue breakdowns was not statistically reduced compared with a control condition. This suggests that the proposed method effectively elicited cooperative intentions from users during dialogues.
BibTeX:
@Article{Uchida2019a,
  author   = {Takahisa Uchida and Takashi Minato and Tora Koyama and Hiroshi Ishiguro},
  title    = {Who Is Responsible for a Dialogue Breakdown? An Error Recovery Strategy That Promotes Cooperative Intentions From Humans by Mutual Attribution of Responsibility in Human-Robot Dialogues},
  journal  = {Frontiers in Robotics and AI},
  year     = {2019},
  volume   = {6, Article 29},
  pages    = {1-11},
  month    = apr,
  abstract = {We propose a strategy with which conversational android robots can handle dialogue breakdowns. For smooth human-robot conversations, we must not only improve a robot's dialogue capability but also elicit cooperative intentions from users for avoiding and recovering from dialogue breakdowns. A cooperative intention can be encouraged if users recognize their own responsibility for breakdowns. If the robot always blames users, however, they will quickly become less cooperative and lose their motivation to continue a discussion. This paper hypothesizes that for smooth dialogues, the robot and the users must share the responsibility based on psychological reciprocity. In other words, the robot should alternately attribute the responsibility to itself and to the users. We proposed a dialogue strategy for recovering from dialogue breakdowns based on the hypothesis and experimentally verified it with an android. The experimental result shows that the proposed method made the participants aware of their share of the responsibility of the dialogue breakdowns without reducing their motivation, even though the number of dialogue breakdowns was not statistically reduced compared with a control condition. This suggests that the proposed method effectively elicited cooperative intentions from users during dialogues.},
  day      = {24},
  url      = {https://www.frontiersin.org/articles/10.3389/frobt.2019.00029/full},
  doi      = {10.3389/frobt.2019.00029},
}
Chaoran Liu, Carlos Ishi, Hiroshi Ishiguro, "Auditory Scene Reproduction for Tele-operated Robot Systems", Advanced Robotics, vol. 33, Issue 7-8, pp. 415-423, April, 2019.
Abstract: In a tele-operated robot environment, reproducing auditory scenes and conveying 3D spatial information of sound sources are inevitable in order to make operators feel more realistic presence. In this paper, we propose a tele-presence robot system that enables reproduction and manipulation of auditory scenes. This tele-presence system is carried out on the basis of 3D information about where targeted human voices are speaking, and matching with the operator's head orientation. We employed multiple microphone arrays and human tracking technologies to localize and separate voices around a robot. In the operator side, separated sound sources are rendered using head-related transfer functions (HRTF) according to the sound sources' spatial positions and the operator's head orientation that is being tracked real-time. Two-party and three-party interaction experiments indicated that the proposed system has significantly higher accuracy when perceiving direction of sounds and gains higher subjective scores in sense of presence and listenability, compared to a baseline system which uses stereo binaural sounds obtained by two microphones located at the humanoid robot's ears.
BibTeX:
@Article{Liu2019,
  author   = {Chaoran Liu and Carlos Ishi and Hiroshi Ishiguro},
  title    = {Auditory Scene Reproduction for Tele-operated Robot Systems},
  journal  = {Advanced Robotics},
  year     = {2019},
  volume   = {33, Issue 7-8},
  pages    = {415-423},
  month    = apr,
  issn     = {0169-1864},
  abstract = {In a tele-operated robot environment, reproducing auditory scenes and conveying 3D spatial information of sound sources are inevitable in order to make operators feel more realistic presence. In this paper, we propose a tele-presence robot system that enables reproduction and manipulation of auditory scenes. This tele-presence system is carried out on the basis of 3D information about where targeted human voices are speaking, and matching with the operator's head orientation. We employed multiple microphone arrays and human tracking technologies to localize and separate voices around a robot. In the operator side, separated sound sources are rendered using head-related transfer functions (HRTF) according to the sound sources' spatial positions and the operator's head orientation that is being tracked real-time. Two-party and three-party interaction experiments indicated that the proposed system has significantly higher accuracy when perceiving direction of sounds and gains higher subjective scores in sense of presence and listenability, compared to a baseline system which uses stereo binaural sounds obtained by two microphones located at the humanoid robot's ears.},
  day      = {2},
  url      = {https://www.tandfonline.com/doi/full/10.1080/01691864.2019.1599729},
  doi      = {10.1080/01691864.2019.1599729},
  keywords = {Human–robot interaction, HRTF, sound source localization, beamforming},
}
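The rendering stage can be sketched as convolving each separated voice with the head-related impulse responses closest to its direction relative to the operator's head; the HRIRs below are random placeholders (a real system would load a measured HRTF set):
```python
import numpy as np
from scipy.signal import fftconvolve

fs = 16000
# placeholder left/right-ear HRIRs on a 5-degree azimuth grid
hrir_db = {az: (np.random.randn(256) * 0.01, np.random.randn(256) * 0.01)
           for az in range(0, 360, 5)}

def render(source, source_azimuth, head_yaw):
    # pick the HRIR pair nearest to the source direction relative to the head
    relative = int(round((source_azimuth - head_yaw) % 360 / 5) * 5) % 360
    hl, hr = hrir_db[relative]
    left = fftconvolve(source, hl)[:len(source)]
    right = fftconvolve(source, hr)[:len(source)]
    return np.stack([left, right], axis=1)   # binaural output for headphones

voice = np.random.randn(fs)                  # one second of a separated voice (placeholder)
binaural = render(voice, source_azimuth=30.0, head_yaw=10.0)
```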
劉超然, 石井カルロス, 石黒浩, "言語・韻律情報及び対話履歴を用いたLSTMベースのターンテイキング推定", 人工知能学会論文誌, vol. 34, no. 2, pp. C-I65_1-9, March, 2019.
Abstract: A natural conversation involves rapid exchanges of turns while talking. Taking turns at appropriate timing or intervals is a requisite feature for a dialog system as a conversation partner. We propose a Recurrent Neural Network (RNN) based model that takes the current utterance and the dialog history as its input to classify utterances into turn-taking related classes and estimates the turn-taking timing. The dialog history is represented by a sequence of speaker-specified joint embedding of lexical and prosodic contents. To this end, we trained a neural network to embed the lexical and the prosodic contents into a joint embedding space. To learn meaningful embedding spaces, the prosodic feature sequence from each single utterance is mapped into a fixed-dimensional space using RNN and combined with utterance lexical embedding. These joint embeddings are then shifted to different parts of embedding spaces according to the speakers. Finally, the speaker-specified joint embeddings are used as the input of our proposed model. We tested this model on a spontaneous conversation dataset and confirmed that it outperformed conventional models that use lexical/prosodic features and dialog history without speaker information.
BibTeX:
@Article{劉超然2019a,
  author   = {劉超然 and 石井カルロス and 石黒浩},
  title    = {言語・韻律情報及び対話履歴を用いたLSTMベースのターンテイキング推定},
  journal  = {人工知能学会論文誌},
  year     = {2019},
  volume   = {34},
  number   = {2},
  pages    = {C-I65_1-9},
  month    = mar,
  abstract = {A natural conversation involves rapid exchanges of turns while talking. Taking turns at appropriate timing or intervals is a requisite feature for a dialog system as a conversation partner. We propose a Recurrent Neural Network (RNN) based model that takes the current utterance and the dialog history as its input to classify utterances into turn-taking related classes and estimates the turn-taking timing. The dialog history is represented by a sequence of speaker-specified joint embedding of lexical and prosodic contents. To this end, we trained a neural network to embed the lexical and the prosodic contents into a joint embedding space. To learn meaningful embedding spaces, the prosodic feature sequence from each single utterance is mapped into a fixed-dimensional space using RNN and combined with utterance lexical embedding. These joint embeddings are then shifted to different parts of embedding spaces according to the speakers. Finally, the speaker-specified joint embeddings are used as the input of our proposed model. We tested this model on a spontaneous conversation dataset and confirmed that it outperformed conventional models that use lexical/prosodic features and dialog history without speaker information.},
  day      = {1},
  url      = {https://www.jstage.jst.go.jp/article/tjsai/34/2/34_C-I65/_article/-char/ja},
  doi      = {10.1527/tjsai.C-I65},
  etitle   = {LSTM-based Turn-taking Estimation Model using Lexical/Prosodic Contents and Dialog History},
}
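An architecture-level sketch of the kind of model described above (PyTorch, with assumed dimensions and three illustrative turn-taking classes; the speaker-specific joint lexical/prosodic embeddings are taken as given inputs):
```python
import torch
import torch.nn as nn

class TurnTakingLSTM(nn.Module):
    def __init__(self, embed_dim=128, hidden_dim=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, history):          # history: (batch, turns, embed_dim)
        _, (h, _) = self.lstm(history)
        return self.head(h[-1])          # logits over turn-taking classes

model = TurnTakingLSTM()
history = torch.randn(8, 10, 128)        # 8 dialogs, 10 past joint embeddings each
logits = model(history)
print(logits.shape)                      # torch.Size([8, 3])
```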
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Hiroshi Ishiguro, "Multiscale Entropy Quantifies the Differential Effect of the Medium Embodiment on Older Adults Prefrontal Cortex during the Story Comprehension: A Comparative Analysis", Entropy, vol. 21, Issue 2, pp. 1-16, February, 2019.
Abstract: Today's communication media virtually impact and transform every aspect of our daily communication and yet extent of their embodiment on our brain is unexplored. Investigation of this topic becomes more crucial, considering the rapid advances in such fields as socially assistive robotics that envision the intelligent and interactive media that provide assistance through social means. In this article, we utilize the multiscale entropy (MSE) to investigate the effect of physical embodiment on older peoples’ prefrontal cortex (PFC) activity while listening to the stories. We provide evidence that physical embodiment induces a significant increase in MSE of the older peoples’ PFC activity and that such a shift in dynamics of their PFC activation significantly reflects their perceived feeling of fatigue. Our results benefit the researchers in age-related cognitive function and rehabilitation that seek the use of these media in robot-assistive cognitive training of the older people. In addition, they offer a complementary information to the field of human-robot interaction via providing evidence that the use of MSE can enable the interactive learning algorithms to utilize the brain’s activation patterns as feedbacks for improving their level of interactivity, thereby forming a stepping stone for rich and usable human mental model.
BibTeX:
@Article{Keshmiri2019,
  author   = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Hiroshi Ishiguro},
  title    = {Multiscale Entropy Quantifies the Differential Effect of the Medium Embodiment on Older Adults Prefrontal Cortex during the Story Comprehension: A Comparative Analysis},
  journal  = {Entropy},
  year     = {2019},
  volume   = {21, Issue 2},
  pages    = {1-16},
  month    = feb,
  abstract = {Today's communication media virtually impact and transform every aspect of our daily communication and yet extent of their embodiment on our brain is unexplored. Investigation of this topic becomes more crucial, considering the rapid advances in such fields as socially assistive robotics that envision the intelligent and interactive media that provide assistance through social means. In this article, we utilize the multiscale entropy (MSE) to investigate the effect of physical embodiment on older peoples’ prefrontal cortex (PFC) activity while listening to the stories. We provide evidence that physical embodiment induces a significant increase in MSE of the older peoples’ PFC activity and that such a shift in dynamics of their PFC activation significantly reflects their perceived feeling of fatigue. Our results benefit the researchers in age-related cognitive function and rehabilitation that seek the use of these media in robot-assistive cognitive training of the older people. In addition, they offer a complementary information to the field of human-robot interaction via providing evidence that the use of MSE can enable the interactive learning algorithms to utilize the brain’s activation patterns as feedbacks for improving their level of interactivity, thereby forming a stepping stone for rich and usable human mental model.},
  day      = {19},
  url      = {https://www.mdpi.com/1099-4300/21/2/199},
  doi      = {10.3390/e21020199},
  keywords = {multiscale entropy; embodied media; tele-communication; humanoid; prefrontal cortex},
}
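Multiscale entropy as used above is sample entropy computed on progressively coarse-grained copies of the signal; a compact sketch with a random stand-in series (m, r, and the scale range are common defaults, not necessarily the paper's exact settings):
```python
import numpy as np

def coarse_grain(x, scale):
    # average non-overlapping windows of length `scale`
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    # simplified sample-entropy estimate with Chebyshev distance and tolerance r
    x = np.asarray(x, dtype=float)
    def pair_count(length):
        t = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
        return np.sum(d <= r) - len(t)      # exclude self-matches
    b, a = pair_count(m), pair_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

signal = np.random.randn(1000)               # stand-in for one PFC channel
r = 0.2 * signal.std()                       # tolerance fixed at scale 1
mse = [sample_entropy(coarse_grain(signal, s), m=2, r=r) for s in range(1, 6)]
print(mse)
```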
Malcolm Doering, Dylan F. Glas, Hiroshi Ishiguro, "Modeling Interaction Structure for Robot Imitation Learning of Human Social Behavior", IEEE Transactions on Human-Machine Systems, February, 2019.
Abstract: We present an unsupervised, learning-by-imitation technique for learning social robot interaction behaviors from noisy, human-human interaction data full of natural linguistic variation. In particular our proposed system learns the space of common actions for a given domain, important contextual features relating to the interaction structure, and a set of human-readable rules for generating appropriate behaviors. We demonstrated our technique on a travel agent scenario where the robot learns to play the role of the travel agent while communicating with human customers. In this domain, we demonstrate how modeling the interaction structure can be used to resolve the often ambiguous customer speech. We introduce a novel clustering algorithm to automatically discover the interaction structure based on action co-occurrence frequency, revealing the topics of conversation. We then train a topic state estimator to determine the topic of conversation at runtime so the robot may present information pertaining the correct topic. In a human-robot evaluation, our proposed system significantly outperformed a nearest-neighbor baseline technique in both subjective and objective evaluations. In particular, participants found that the proposed system was easier to understand, provided more information, and required less effort to interact with. Furthermore, we found that incorporation of the topic state into prediction significantly improved performance when responding to ambiguous questions.
BibTeX:
@Article{Doering2019a,
  author   = {Malcolm Doering and Dylan F. Glas and Hiroshi Ishiguro},
  journal  = {IEEE Transactions on Human-Machine Systems},
  title    = {Modeling Interaction Structure for Robot Imitation Learning of Human Social Behavior},
  year     = {2019},
  abstract = {We present an unsupervised, learning-by-imitation technique for learning social robot interaction behaviors from noisy, human-human interaction data full of natural linguistic variation. In particular our proposed system learns the space of common actions for a given domain, important contextual features relating to the interaction structure, and a set of human-readable rules for generating appropriate behaviors. We demonstrated our technique on a travel agent scenario where the robot learns to play the role of the travel agent while communicating with human customers. In this domain, we demonstrate how modeling the interaction structure can be used to resolve the often ambiguous customer speech. We introduce a novel clustering algorithm to automatically discover the interaction structure based on action co-occurrence frequency, revealing the topics of conversation. We then train a topic state estimator to determine the topic of conversation at runtime so the robot may present information pertaining the correct topic. In a human-robot evaluation, our proposed system significantly outperformed a nearest-neighbor baseline technique in both subjective and objective evaluations. In particular, participants found that the proposed system was easier to understand, provided more information, and required less effort to interact with. Furthermore, we found that incorporation of the topic state into prediction significantly improved performance when responding to ambiguous questions.},
  day      = {26},
  doi      = {10.1109/THMS.2019.2895753},
  month    = feb,
  url      = {https://ieeexplore.ieee.org/document/8653359},
}
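The clustering step described in the abstract above groups actions by how often they co-occur, so that each cluster corresponds to a topic of conversation. The sketch below is only a generic illustration of that idea (agglomerative clustering on a co-occurrence matrix); it is not the novel algorithm from the paper, and the action labels and interaction sets are hypothetical.

# Illustrative sketch only: cluster agent actions by co-occurrence frequency
# to reveal conversation topics. Generic stand-in, not the paper's algorithm;
# action labels and interaction sets are hypothetical.
from collections import Counter
from itertools import combinations
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Hypothetical interactions, each a set of observed agent actions.
interactions = [
    {"greet", "ask_destination", "give_price_info"},
    {"ask_destination", "give_flight_info", "give_price_info"},
    {"greet", "recommend_hotel", "give_hotel_price"},
    {"recommend_hotel", "give_hotel_price", "confirm_booking"},
]
actions = sorted(set().union(*interactions))
index = {a: i for i, a in enumerate(actions)}

# Count how often pairs of actions occur within the same interaction.
co = np.zeros((len(actions), len(actions)))
for inter in interactions:
    for a, b in combinations(sorted(inter), 2):
        co[index[a], index[b]] += 1
        co[index[b], index[a]] += 1

# Cluster actions with similar co-occurrence profiles into "topics".
labels = AgglomerativeClustering(n_clusters=2).fit_predict(co)
for topic in range(2):
    print("topic", topic, [a for a in actions if labels[index[a]] == topic])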
Soheil Keshmiri, Hidenobu Sumioka, Masataka Okubo, Hiroshi Ishiguro, "An Information-Theoretic Approach to Quantitative Analysis of the Correspondence Between Skin Blood Flow and Functional Near-Infrared Spectroscopy Measurement in Prefrontal Cortex Activity", Frontiers in Neuroscience, vol. 13, February, 2019.
Abstract: The effect of skin blood flow (SBF) on functional near-infrared spectroscopy (fNIRS) measurement of cortical activity proves to be an elusive subject matter, with divided stances in the neuroscientific literature on its extent. Whereas some report on its non-significant influence on fNIRS time series of cortical activity, others consider its impact misleading, even detrimental, in analysis of the brain activity as measured by fNIRS. This situation is further escalated by the fact that almost all analytical studies are based on comparison with functional Magnetic Resonance Imaging (fMRI). In this article, we pinpoint the lack of perspective in previous studies on preservation of information content of resulting fNIRS time series once the SBF is attenuated. In doing so, we propose information-theoretic criteria to quantify the necessary and sufficient conditions for SBF attenuation such that the information content of frontal brain activity in the resulting fNIRS time series is preserved. We verify these criteria through evaluation of their utility in comparative analysis of principal component (PCA) and independent component (ICA) SBF attenuation algorithms. Our contributions are 2-fold. First, we show that mere reduction of SBF influence on fNIRS time series of frontal activity is insufficient to warrant preservation of cortical activity information. Second, we empirically justify a higher fidelity of the PCA-based algorithm in preservation of the frontal activity's information content in comparison with the ICA-based approach. Our results suggest that combination of the first two principal components of the PCA-based algorithm results in the most efficient SBF attenuation while preserving maximum frontal activity information. These results contribute to the field by presenting a systematic approach to quantification of the SBF as an interfering process during fNIRS measurement, thereby drawing an informed conclusion on this debate. Furthermore, they provide evidence for a reliable choice among existing SBF attenuation algorithms and their inconclusive number of components, thereby ensuring minimum loss of cortical information during the SBF attenuation process.
BibTeX:
@Article{Keshmirie,
  author   = {Soheil Keshmiri and Hidenobu Sumioka and Masataka Okubo and Hiroshi Ishiguro},
  title    = {An Information-Theoretic Approach to Quantitative Analysis of the Correspondence Between Skin Blood Flow and Functional Near-Infrared Spectroscopy Measurement in Prefrontal Cortex Activity},
  journal  = {Frontiers in Neuroscience},
  year     = {2019},
  volume   = {13},
  month    = feb,
  abstract = {The effect of skin blood flow (SBF) on functional near-infrared spectroscopy (fNIRS) measurement of cortical activity proves to be an elusive subject matter, with divided stances in the neuroscientific literature on its extent. Whereas some report on its non-significant influence on fNIRS time series of cortical activity, others consider its impact misleading, even detrimental, in analysis of the brain activity as measured by fNIRS. This situation is further escalated by the fact that almost all analytical studies are based on comparison with functional Magnetic Resonance Imaging (fMRI). In this article, we pinpoint the lack of perspective in previous studies on preservation of information content of resulting fNIRS time series once the SBF is attenuated. In doing so, we propose information-theoretic criteria to quantify the necessary and sufficient conditions for SBF attenuation such that the information content of frontal brain activity in the resulting fNIRS time series is preserved. We verify these criteria through evaluation of their utility in comparative analysis of principal component (PCA) and independent component (ICA) SBF attenuation algorithms. Our contributions are 2-fold. First, we show that mere reduction of SBF influence on fNIRS time series of frontal activity is insufficient to warrant preservation of cortical activity information. Second, we empirically justify a higher fidelity of the PCA-based algorithm in preservation of the frontal activity's information content in comparison with the ICA-based approach. Our results suggest that combination of the first two principal components of the PCA-based algorithm results in the most efficient SBF attenuation while preserving maximum frontal activity information. These results contribute to the field by presenting a systematic approach to quantification of the SBF as an interfering process during fNIRS measurement, thereby drawing an informed conclusion on this debate. Furthermore, they provide evidence for a reliable choice among existing SBF attenuation algorithms and their inconclusive number of components, thereby ensuring minimum loss of cortical information during the SBF attenuation process.},
  day      = {15},
  url      = {https://www.frontiersin.org/articles/10.3389/fnins.2019.00079/full},
  doi      = {10.3389/fnins.2019.00079},
}
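Both SBF attenuation algorithms compared above work by removing a small number of global components from the multichannel fNIRS signal and reconstructing the remainder. The sketch below shows the generic PCA variant (discarding the first k principal components) on synthetic data; it does not implement the paper's information-theoretic preservation criteria, and all signal parameters are made up for illustration.

# Sketch of generic PCA-based skin-blood-flow attenuation: drop the first
# k principal components (assumed to capture the shared systemic signal)
# and reconstruct the multichannel data. Synthetic data only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_samples, n_channels = 1000, 16
systemic = np.sin(np.linspace(0, 20 * np.pi, n_samples))      # shared SBF-like wave
cortical = rng.normal(0, 0.3, (n_samples, n_channels))        # channel-specific activity
fnirs = cortical + systemic[:, None] * rng.uniform(0.5, 1.5, n_channels)

def attenuate_sbf(data, k=2):
    """Remove the first k principal components from (samples x channels) data."""
    pca = PCA()
    scores = pca.fit_transform(data)
    scores[:, :k] = 0.0                      # zero out the global components
    return pca.inverse_transform(scores)

cleaned = attenuate_sbf(fnirs, k=2)
print("variance before:", fnirs.var(), "after:", cleaned.var())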
Carlos T. Ishi, Takashi Minato, Hiroshi Ishiguro, "Analysis and generation of laughter motions, and evaluation in an android robot", APSIPA Transactions on Signal and Information Processing, vol. 8, no. e6, pp. 1-10, January, 2019.
Abstract: Laughter commonly occurs in daily interactions, and is not simply related to funny situations but also to expressing certain attitudes, having important social functions in communication. The background of the present work is the generation of natural motions in a humanoid robot, since miscommunication might be caused if there is a mismatch between audio and visual modalities, especially in laughter events. In the present work, we used a multimodal dialogue database, and analyzed facial, head, and body motion during laughing speech. Based on the analysis results of human behaviors during laughing speech, we proposed a motion generation method given the speech signal and the laughing speech intervals. Subjective experiments were conducted using our android robot by generating five different motion types, considering several modalities. Evaluation results showed the effectiveness of controlling different parts of the face, head, and upper body (eyelid narrowing, lip corner/cheek raising, eye blinking, head motion, and upper body motion control).
BibTeX:
@Article{Ishi2019,
  author   = {Carlos T. Ishi and Takashi Minato and Hiroshi Ishiguro},
  title    = {Analysis and generation of laughter motions, and evaluation in an android robot},
  journal  = {APSIPA Transactions on Signal and Information Processing},
  year     = {2019},
  volume   = {8},
  number   = {e6},
  pages    = {1-10},
  month    = jan,
  abstract = {Laughter commonly occurs in daily interactions, and is not simply related to funny situations but also to expressing certain attitudes, having important social functions in communication. The background of the present work is the generation of natural motions in a humanoid robot, since miscommunication might be caused if there is a mismatch between audio and visual modalities, especially in laughter events. In the present work, we used a multimodal dialogue database, and analyzed facial, head, and body motion during laughing speech. Based on the analysis results of human behaviors during laughing speech, we proposed a motion generation method given the speech signal and the laughing speech intervals. Subjective experiments were conducted using our android robot by generating five different motion types, considering several modalities. Evaluation results showed the effectiveness of controlling different parts of the face, head, and upper body (eyelid narrowing, lip corner/cheek raising, eye blinking, head motion, and upper body motion control).},
  day      = {25},
  url      = {https://www.cambridge.org/core/journals/apsipa-transactions-on-signal-and-information-processing/article/analysis-and-generation-of-laughter-motions-and-evaluation-in-an-android-robot/353D071416BDE0536FDB4E5B86696175},
  doi      = {10.1017/ATSIP.2018.32},
}
内田貴久, 港隆史, 石黒浩, "対話アンドロイドに対する主観的意見の帰属と対話意欲の関係", 人工知能学会論文誌, vol. 34, no. 1, pp. B162_1-8, January, 2019.
Abstract: The goal of this research is to construct conversational robots that can stimulate users' motivation to talk with them in non-task-oriented dialogue, where it is required to keep up the dialogue. The non-task-oriented dialogue involves exchanging subjective opinions between speakers. This paper aims at investigating how the user's dialogue motivation is influenced by the attribution of opinions to the conversational android. We examined the influence by testing various kinds of the android's opinions in a questionnaire survey. As the result, it is clarified that not only the users' interest in the android's opinions but the attribution of the subjective opinions to the android influence their motivation for dialogue. This result suggests that there is a problem when the conversational robot makes the utterances based on human-human dialogue database that includes the opinions which are hardly attributed to it. In a design of conversational robot, it is necessary to take account of whether users can attribute the subjective opinions included in the dialogue contents to the robot in order to promote their motivation of dialogue.
BibTeX:
@Article{内田貴久2019,
  author   = {内田貴久 and 港隆史 and 石黒浩},
  title    = {対話アンドロイドに対する主観的意見の帰属と対話意欲の関係},
  journal  = {人工知能学会論文誌},
  year     = {2019},
  volume   = {34},
  number   = {1},
  pages    = {B162_1-8},
  month    = jan,
  abstract = {The goal of this research is to construct conversational robots that can stimulate users' motivation to talk with them in non-task-oriented dialogue, where it is required to keep up the dialogue. The non-task-oriented dialogue involves exchanging subjective opinions between speakers. This paper aims at investigating how the user's dialogue motivation is influenced by the attribution of opinions to the conversational android. We examined the influence by testing various kinds of the android's opinions in a questionnaire survey. As the result, it is clarified that not only the users' interest in the android's opinions but the attribution of the subjective opinions to the android influence their motivation for dialogue. This result suggests that there is a problem when the conversational robot makes the utterances based on human-human dialogue database that includes the opinions which are hardly attributed to it. In a design of conversational robot, it is necessary to take account of whether users can attribute the subjective opinions included in the dialogue contents to the robot in order to promote their motivation of dialogue.},
  day      = {7},
  url      = {https://www.jstage.jst.go.jp/article/tjsai/34/1/34_B-I62/_article/-char/ja},
  doi      = {10.1527/tjsai.B-I62},
  etitle   = {The relationship between dialogue motivation and attribution of subjective opinions to conversational androids},
  keywords = {conversational robot, android, dialogue system, dialogue strategy, subjective opinion},
}
Ryuji Yamazaki, Shuichi Nishio, Hiroshi Ishiguro, Hiroko Kase, "Use of Robotic Media as Persuasive Technology and Its Ethical Implications in Care Settings", Journal of Philosophy and Ethics in Health Care and Medicine, no. 12, pp. 45-58, December, 2018.
Abstract: Communication support for older adults has become a growing need, and as assistive technology robotic media are expected to facilitate social interactions in both verbal and nonverbal ways. Focusing on dementia care, we look into two studies exploring the potential of robotic media that could promote changes in subjectivity in older adults with behavioral and psychological symptoms of dementia (BPSD). Furthermore, we investigate the conditions that might facilitate such media’s use in therapeutic improvement. Based on case studies in dementia care, this paper aims to investigate the potential and conditions that allow robotic media to mediate changes in human subjects. The case studies indicate that those with dementia become open and prosocial through robotic intervention and that by setting suitable conversational topics their reactions can be extracted efficiently. Previous studies also mentioned the requirement of considering both the positive and negative aspects of using robotic media. With social robots being developed as persuasive agents, users have difficulty controlling the information flow, and thus when personal data is dealt with ethical concerns arise. The ethical implication is that persuasive technology puts human autonomy at risk. Finally, we discuss the ethical implications and the effects on emotions and behaviors by applying persuasive robotic media in care settings.
BibTeX:
@Article{Yamazaki2018,
  author   = {Ryuji Yamazaki and Shuichi Nishio and Hiroshi Ishiguro and Hiroko Kase},
  title    = {Use of Robotic Media as Persuasive Technology and Its Ethical Implications in Care Settings},
  journal  = {Journal of Philosophy and Ethics in Health Care and Medicine},
  year     = {2018},
  number   = {12},
  pages    = {45-58},
  month    = dec,
  abstract = {Communication support for older adults has become a growing need, and as assistive technology robotic media are expected to facilitate social interactions in both verbal and nonverbal ways. Focusing on dementia care, we look into two studies exploring the potential of robotic media that could promote changes in subjectivity in older adults with behavioral and psychological symptoms of dementia (BPSD). Furthermore, we investigate the conditions that might facilitate such media’s use in therapeutic improvement. Based on case studies in dementia care, this paper aims to investigate the potential and conditions that allow robotic media to mediate changes in human subjects. The case studies indicate that those with dementia become open and prosocial through robotic intervention and that by setting suitable conversational topics their reactions can be extracted efficiently. Previous studies also mentioned the requirement of considering both the positive and negative aspects of using robotic media. With social robots being developed as persuasive agents, users have difficulty controlling the information flow, and thus when personal data is dealt with ethical concerns arise. The ethical implication is that persuasive technology puts human autonomy at risk. Finally, we discuss the ethical implications and the effects on emotions and behaviors by applying persuasive robotic media in care settings.},
  url      = {http://itetsu.jp/main/wp-content/uploads/2019/03/PEHCM12-yamazaki.pdf},
}
Rosario Sorbello, Carmelo Cali, Salvatore Tramonte, Shuichi Nishio, Hiroshi Ishiguro, Antonio Chella, "A Cognitive Model of Trust for Biological and Artificial Humanoid Robots", Procedia Computer Science, vol. 145, pp. 526-532, December, 2018.
Abstract: This paper presents a model of trust for biological and artificial humanoid robots and agents as an antecedent condition of interaction. We discuss the cognitive engines of social perception that account for the units on which agents operate and the rules they follow when they bestow trust and assess trustworthiness. We propose that this structural information is the domain of the model. The model represents it in terms of modular cognitive structures connected by a parallel architecture. Finally, we give a preliminary formalization of the model in the mathematical framework of the I/O automata for future computational and human-humanoid applications.
BibTeX:
@Article{Sorbello2018b,
  author   = {Rosario Sorbello and Carmelo Cali and Salvatore Tramonte and Shuichi Nishio and Hiroshi Ishiguro and Antonio Chella},
  title    = {A Cognitive Model of Trust for Biological and Artificial Humanoid Robots},
  journal  = {Procedia Computer Science},
  year     = {2018},
  volume   = {145},
  pages    = {526-532},
  month    = Dec,
  abstract = {This paper presents a model of trust for biological and artificial humanoid robots and agents as an antecedent condition of interaction. We discuss the cognitive engines of social perception that account for the units on which agents operate and the rules they follow when they bestow trust and assess trustworthiness. We propose that this structural information is the domain of the model. The model represents it in terms of modular cognitive structures connected by a parallel architecture. Finally, we give a preliminary formalization of the model in the mathematical framework of the I/O automata for future computational and human-humanoid applications.},
  day      = {11},
  url      = {https://www.sciencedirect.com/science/article/pii/S1877050918324050},
  doi      = {10.1016/j.procs.2018.11.117},
}
Junya Nakanishi, Hidenobu Sumioka, Hiroshi Ishiguro, "A huggable communication medium can provide a sustained listening support for students with special needs in a classroom", Computers in Human Behavior, vol. 93, pp. 106-113, October, 2018.
Abstract: Poor listening ability has been a serious problem for students with a wide range of developmental disabilities. We conducted a memory test with students with special needs in a typical listening situation and in a situation with a huggable communication medium, called Hugvie, to evaluate how well the students can listen to others at morning meetings. The results showed that listening via Hugvies improved their memory scores for information provided by teachers. In particular, the memories of distracted students with emotional troubles tended to be greatly improved. It is worth noting that the improvement was maintained for three months. In addition, the students' perception and impression of Hugvies were favorable for long-term use.
BibTeX:
@Article{Nakanishi2018a,
  author   = {Junya Nakanishi and Hidenobu Sumioka and Hiroshi Ishiguro},
  title    = {A huggable communication medium can provide a sustained listening support for students with special needs in a classroom},
  journal  = {Computers in Human Behavior},
  year     = {2018},
  volume   = {93},
  pages    = {106-113},
  month    = oct,
  abstract = {Poor listening ability has been a serious problem for students with a wide range of developmental disabilities. We conducted a memory test with students with special needs in a typical listening situation and in a situation with a huggable communication medium, called Hugvie, to evaluate how well the students can listen to others at morning meetings. The results showed that listening via Hugvies improved their memory scores for information provided by teachers. In particular, the memories of distracted students with emotional troubles tended to be greatly improved. It is worth noting that the improvement was maintained for three months. In addition, the students' perception and impression of Hugvies were favorable for long-term use.},
  day      = {3},
  url      = {https://www.journals.elsevier.com/computers-in-human-behavior},
  doi      = {10.1016/j.chb.2018.10.008},
}
Carlos Ishi, Daichi Machiyashiki, Ryusuke Mikata, Hiroshi Ishiguro, "A speech-driven hand gesture generation method and evaluation in android robots", IEEE Robotics and Automation Letters (RA-L), vol. 3, no. 4, pp. 3757-3764, July, 2018.
Abstract: Hand gestures commonly occur in daily dialogue interactions, and have important functions in communication. We first analyzed a multimodal human-human dialogue data and found relations between the occurrence of hand gestures and dialogue act categories. We also conducted clustering analysis on gesture motion data, and associated text information with the gesture motion clusters through gesture function categories. Using the analysis results, we proposed a speech-driven gesture generation method by taking text, prosody, and dialogue act information into account. We then implemented a hand motion control to an android robot, and evaluated the effectiveness of the proposed gesture generation method through subjective experiments. The gesture motions generated by the proposed method were judged to be relatively natural even under the robot hardware constraints.
BibTeX:
@Article{Ishi2018e,
  author   = {Carlos Ishi and Daichi Machiyashiki and Ryusuke Mikata and Hiroshi Ishiguro},
  title    = {A speech-driven hand gesture generation method and evaluation in android robots},
  journal  = {IEEE Robotics and Automation Letters (RA-L)},
  year     = {2018},
  volume   = {3},
  number   = {4},
  pages    = {3757-3764},
  month    = jul,
  abstract = {Hand gestures commonly occur in daily dialogue interactions, and have important functions in communication. We first analyzed a multimodal human-human dialogue data and found relations between the occurrence of hand gestures and dialogue act categories. We also conducted clustering analysis on gesture motion data, and associated text information with the gesture motion clusters through gesture function categories. Using the analysis results, we proposed a speech-driven gesture generation method by taking text, prosody, and dialogue act information into account. We then implemented a hand motion control to an android robot, and evaluated the effectiveness of the proposed gesture generation method through subjective experiments. The gesture motions generated by the proposed method were judged to be relatively natural even under the robot hardware constraints.},
  day      = {16},
  url      = {https://ieeexplore.ieee.org/document/8411101},
  doi      = {10.1109/LRA.2018.2856281},
  comment  = {(The contents of this paper were also selected by IROS2018 Program Committee for presentation at the Conference)},
  keywords = {Android robots, Emotion, Hand Gesture, Motion generation, Speech-driven},
}
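The proposed generation method maps text, prosody, and dialogue-act information to gesture-motion clusters learned from data. As a purely illustrative stand-in for that mapping, the toy selector below picks a gesture category from a dialogue act, a crude prosodic emphasis cue, and keywords; the categories, threshold, and labels are hypothetical and not taken from the paper.

# Toy illustration of speech-driven gesture selection from dialogue act,
# a prosodic emphasis cue, and keywords in the text. All categories,
# thresholds and cluster names are hypothetical.
def select_gesture(dialogue_act, f0_peak_hz, text):
    emphatic = f0_peak_hz > 250.0            # crude prosodic emphasis cue
    if dialogue_act == "question":
        return "palm_up_offer"
    if dialogue_act == "inform" and any(w in text for w in ("this", "that", "here")):
        return "deictic_point"
    if emphatic:
        return "beat_emphasis"
    return "rest_posture"

print(select_gesture("inform", 180.0, "the shop is over here"))  # -> deictic_point
print(select_gesture("statement", 300.0, "it was amazing"))      # -> beat_emphasis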
Carlos T. Ishi, Chaoran Liu, Jani Even, Norihiro Hagita, "A sound-selective hearing support system using environment sensor network", Acoustical Science and Technology, vol. 39, no. 4, pp. 287-294, July, 2018.
Abstract: We have developed a sound-selective hearing support system by making use of an environment sensor network, so that individual target and anti-target sound sources in the environment can be selected, and spatial information of the target sound sources can be reconstructed. The performance of the selective sound separation module was evaluated under different noise conditions. Results showed that signal-to-noise ratios of around 15dB could be achieved by the proposed system for a 65dB babble noise plus directional music noise condition. Subjective intelligibility tests were conducted in the same noise condition. For words with high familiarity, intelligibility rates increased from 67% to 90% for normal hearing subjects and from 50% to 70% for elderly subjects, when the proposed system was applied.
BibTeX:
@Article{Ishi2018d,
  author   = {Carlos T. Ishi and Chaoran Liu and Jani Even and Norihiro Hagita},
  title    = {A sound-selective hearing support system using environment sensor network},
  journal  = {Acoustical Science and Technology},
  year     = {2018},
  volume   = {39},
  number   = {4},
  pages    = {287-294},
  month    = Jul,
  abstract = {We have developed a sound-selective hearing support system by making use of an environment sensor network, so that individual target and anti-target sound sources in the environment can be selected, and spatial information of the target sound sources can be reconstructed. The performance of the selective sound separation module was evaluated under different noise conditions. Results showed that signal-to-noise ratios of around 15dB could be achieved by the proposed system for a 65dB babble noise plus directional music noise condition. Subjective intelligibility tests were conducted in the same noise condition. For words with high familiarity, intelligibility rates increased from 67% to 90% for normal hearing subjects and from 50% to 70% for elderly subjects, when the proposed system was applied.},
  day      = {1},
  url      = {https://www.jstage.jst.go.jp/article/ast/39/4/39_E1757/_article/-char/en},
  doi      = {10.1250/ast.39.287},
}
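The signal-to-noise ratios quoted above (around 15 dB) follow the usual definition of a power ratio in decibels. The snippet below shows how such a figure is computed from a separated target signal and the residual noise; the arrays are synthetic placeholders rather than output of the described system.

# How an SNR figure such as "around 15 dB" is computed: the ratio of
# target-signal power to residual-noise power, in decibels.
# Synthetic placeholder signals, not data from the described system.
import numpy as np

rng = np.random.default_rng(1)
target = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 16000))   # separated target
residual_noise = rng.normal(0, 0.1, 16000)                    # leftover interference

def snr_db(signal, noise):
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

print(f"SNR = {snr_db(target, residual_noise):.1f} dB")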
Masahiro Shiomi, Kodai Shatani, Takashi Minato, Hiroshi Ishiguro, "How Should a Robot React Before People's Touch?: Modeling a Pre-Touch Reaction Distance for a Robot's Face", IEEE Robotics and Automation Letters (RA-L), pp. 3773-3780, July, 2018.
Abstract: This study addresses the pre-touch reaction distance effects in human-robot touch interaction with an android named ERICA that has a feminine, human-like appearance. Past studies on human-robot interaction, which enabled social robots to react to being touched by developing several sensing systems and designing reaction behaviors, focused on after-touch situations, i.e., before-touch situations have received less attention. In this study, we conducted a data collection to investigate the minimum comfortable distance to another's touch by observing a data set of human-human touch interactions, modeled its distance relationships, and implemented the model with our robot. We experimentally investigated the effectiveness of the modeled minimum comfortable distance to being touched with participants. Our experiment results showed that they highly evaluated a robot that reacts to being touched based on the modeled minimum comfortable distance.
BibTeX:
@Article{Shiomi2018a,
  author   = {Masahiro Shiomi and Kodai Shatani and Takashi Minato and Hiroshi Ishiguro},
  journal  = {IEEE Robotics and Automation Letters (RA-L)},
  title    = {How Should a Robot React Before People's Touch?: Modeling a Pre-Touch Reaction Distance for a Robot's Face},
  year     = {2018},
  abstract = {This study addresses the pre-touch reaction distance effects in human-robot touch interaction with an android named ERICA that has a feminine, human-like appearance. Past studies on human-robot interaction, which enabled social robots to react to being touched by developing several sensing systems and designing reaction behaviors, focused on after-touch situations, i.e., before-touch situations have received less attention. In this study, we conducted a data collection to investigate the minimum comfortable distance to another's touch by observing a data set of human-human touch interactions, modeled its distance relationships, and implemented the model with our robot. We experimentally investigated the effectiveness of the modeled minimum comfortable distance to being touched with participants. Our experiment results showed that they highly evaluated a robot that reacts to being touched based on the modeled minimum comfortable distance.},
  day      = {16},
  doi      = {10.1109/LRA.2018.2856303},
  month    = jul,
  pages    = {3773-3780},
  url      = {https://ieeexplore.ieee.org/document/8411337},
  comment  = {(The contents of this paper were also selected by IROS2018 Program Committee for presentation at the Conference)},
}
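The model described above gives the robot a minimum comfortable distance to another person's approaching hand and triggers a reaction once that distance is crossed. The sketch below is only a schematic of that triggering logic; the 0.25 m threshold and the behavior names are hypothetical placeholders, not the distance model fitted in the paper.

# Schematic of a pre-touch reaction rule: trigger a reaction once the
# tracked hand comes closer to the robot's face than a modeled minimum
# comfortable distance. The threshold and behavior names are placeholders.
MIN_COMFORT_DISTANCE_M = 0.25

def pre_touch_reaction(hand_to_face_distance_m):
    if hand_to_face_distance_m < MIN_COMFORT_DISTANCE_M:
        return "lean_back_and_gaze_at_hand"   # reaction behavior
    return "idle"

for d in (0.60, 0.30, 0.20):
    print(f"distance {d:.2f} m -> {pre_touch_reaction(d)}")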
Christian Penaloza, Shuichi Nishio, "BMI control of a third arm for multitasking", Science Robotics, vol. 3, Issue20, July, 2018.
Abstract: Brain-machine interface (BMI) systems have been widely studied to allow people with motor paralysis conditions to control assistive robotic devices that replace or recover lost function but not to extend the capabilities of healthy users. We report an experiment in which healthy participants were able to extend their capabilities by using a noninvasive BMI to control a human-like robotic arm and achieve multitasking. Experimental results demonstrate that participants were able to reliably control the robotic arm with the BMI to perform a goal-oriented task while simultaneously using their own arms to do a different task. This outcome opens possibilities to explore future human body augmentation applications for healthy people that not only enhance their capability to perform a particular task but also extend their physical capabilities to perform multiple tasks simultaneously.
BibTeX:
@Article{Penaloza2018a,
  author   = {Christian Penaloza and Shuichi Nishio},
  title    = {BMI control of a third arm for multitasking},
  journal  = {Science Robotics},
  year     = {2018},
  volume   = {3},
  number   = {20},
  month    = Jul,
  abstract = {Brain-machine interface (BMI) systems have been widely studied to allow people with motor paralysis conditions to control assistive robotic devices that replace or recover lost function but not to extend the capabilities of healthy users. We report an experiment in which healthy participants were able to extend their capabilities by using a noninvasive BMI to control a human-like robotic arm and achieve multitasking. Experimental results demonstrate that participants were able to reliably control the robotic arm with the BMI to perform a goal-oriented task while simultaneously using their own arms to do a different task. This outcome opens possibilities to explore future human body augmentation applications for healthy people that not only enhance their capability to perform a particular task but also extend their physical capabilities to perform multiple tasks simultaneously.},
  day      = {25},
  url      = {http://www.geminoid.jp/misc/scirobotics.aat1228.html},
  doi      = {10.1126/scirobotics.aat1228},
}
Soheil Keshmiri, Hidenobu Sumioka, Junya Nakanishi, Hiroshi Ishiguro, "Bodily-Contact Communication Medium Induces Relaxed Mode of Brain Activity While Increasing Its Dynamical Complexity: A Pilot Study", Frontiers in Psychology, vol. 9, Article 1192, July, 2018.
Abstract: We present the results of the analysis of the effect of a bodily-contact communication medium on the brain activity of individuals during verbal communication. Our results suggest that the communicated content that is mediated through such a device induces a significant effect on the electroencephalogram (EEG) time series of human subjects. Specifically, we find a significant reduction of the overall power of the EEG signals of the individuals. This observation, which is supported by the analysis of the permutation entropy (PE) of the EEG time series of brain activity of the participants, suggests a positive effect of such a medium on stress relief and an induced sense of relaxation. Additionally, multiscale entropy (MSE) analysis of our data implies that such a medium increases the level of complexity that is exhibited by the EEG time series of our participants, thereby suggesting their sustained sense of involvement in their course of communication. These findings, which are in accord with the results reported by cognitive neuroscience research, suggest that the use of such a medium can be beneficial as a complementary step in the treatment of developmental disorders, the attentiveness of schoolchildren and early child development, as well as in scenarios where intimate physical interaction over distance is desirable (e.g., distance-parenting).
BibTeX:
@Article{Keshmiri2018b,
  author   = {Soheil Keshmiri and Hidenobu Sumioka and Junya Nakanishi and Hiroshi Ishiguro},
  title    = {Bodily-Contact Communication Medium Induces Relaxed Mode of Brain Activity While Increasing Its Dynamical Complexity: A Pilot Study},
  journal  = {Frontiers in Psychology},
  year     = {2018},
  volume   = {9},
  number   = {1192},
  month    = Jul,
  abstract = {We present the results of the analysis of the effect of a bodily-contact communication medium on the brain activity of individuals during verbal communication. Our results suggest that the communicated content that is mediated through such a device induces a significant effect on the electroencephalogram (EEG) time series of human subjects. Specifically, we find a significant reduction of the overall power of the EEG signals of the individuals. This observation, which is supported by the analysis of the permutation entropy (PE) of the EEG time series of brain activity of the participants, suggests a positive effect of such a medium on stress relief and an induced sense of relaxation. Additionally, multiscale entropy (MSE) analysis of our data implies that such a medium increases the level of complexity that is exhibited by the EEG time series of our participants, thereby suggesting their sustained sense of involvement in their course of communication. These findings, which are in accord with the results reported by cognitive neuroscience research, suggest that the use of such a medium can be beneficial as a complementary step in the treatment of developmental disorders, the attentiveness of schoolchildren and early child development, as well as in scenarios where intimate physical interaction over distance is desirable (e.g., distance-parenting).},
  day      = {9},
  url      = {https://www.frontiersin.org/articles/10.3389/fpsyg.2018.01192/full},
  doi      = {10.3389/fpsyg.2018.01192},
}
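Permutation entropy, used above together with multiscale entropy, is a standard complexity measure: embed the signal in windows of length m, map each window to the ordinal pattern of its values, and take the normalized Shannon entropy of the pattern distribution. A minimal NumPy version follows; the parameter values are illustrative and not necessarily those used in the study.

# Minimal permutation entropy: Shannon entropy of ordinal patterns of
# length m at delay tau, normalized by log(m!). Parameter values here
# are illustrative, not those used in the paper.
import math
from collections import Counter
import numpy as np

def permutation_entropy(x, m=3, tau=1):
    x = np.asarray(x)
    patterns = Counter()
    for i in range(len(x) - (m - 1) * tau):
        window = x[i:i + (m - 1) * tau + 1:tau]
        patterns[tuple(np.argsort(window))] += 1
    total = sum(patterns.values())
    probs = np.array([c / total for c in patterns.values()])
    return -np.sum(probs * np.log(probs)) / math.log(math.factorial(m))

rng = np.random.default_rng(0)
print("white noise:", permutation_entropy(rng.normal(size=2000)))             # near 1
print("sine wave  :", permutation_entropy(np.sin(np.linspace(0, 60, 2000))))  # much lower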
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Hiroshi Ishiguro, "Differential Entropy Preserves Variational Information of Near-Infrared Spectroscopy Time Series Associated with Working Memory", Frontiers in Neuroinformatics, vol. 12, June, 2018.
Abstract: Neuroscience research shows a growing interest in the application of Near-Infrared Spectroscopy (NIRS) in analysis and decoding of the brain activity of human subjects. Given the correlation that is observed between the Blood Oxygen Level Dependent (BOLD) responses that are exhibited by the time series data of functional Magnetic Resonance Imaging (fMRI) and the hemoglobin oxy/deoxy-genation that is captured by NIRS, linear models play a central role in these applications. This, in turn, results in adaptation of the feature extraction strategies that are well-suited for discretization of data that exhibit a high degree of linearity, namely, slope and the mean as well as their combination, to summarize the informational contents of the NIRS time series. In this article, we demonstrate that these features are suboptimal in capturing the variational information of NIRS data, limiting the reliability and the adequacy of the conclusions drawn from their results. Alternatively, we propose the linear estimate of differential entropy of these time series as a natural representation of such information. We provide evidence for our claim through comparative analysis of the application of these features on NIRS data pertinent to several working memory tasks as well as naturalistic conversational stimuli.
BibTeX:
@Article{Keshmiri2018a,
  author   = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Hiroshi Ishiguro},
  title    = {Differential Entropy Preserves Variational Information of Near-Infrared Spectroscopy Time Series Associated with Working Memory},
  journal  = {Frontiers in Neuroinformatics},
  year     = {2018},
  volume   = {12},
  month    = Jun,
  abstract = {Neuroscience research shows a growing interest in the application of Near-Infrared Spectroscopy (NIRS) in analysis and decoding of the brain activity of human subjects. Given the correlation that is observed between the Blood Oxygen Level Dependent (BOLD) responses that are exhibited by the time series data of functional Magnetic Resonance Imaging (fMRI) and the hemoglobin oxy/deoxy-genation that is captured by NIRS, linear models play a central role in these applications. This, in turn, results in adaptation of the feature extraction strategies that are well-suited for discretization of data that exhibit a high degree of linearity, namely, slope and the mean as well as their combination, to summarize the informational contents of the NIRS time series. In this article, we demonstrate that these features are suboptimal in capturing the variational information of NIRS data, limiting the reliability and the adequacy of the conclusions drawn from their results. Alternatively, we propose the linear estimate of differential entropy of these time series as a natural representation of such information. We provide evidence for our claim through comparative analysis of the application of these features on NIRS data pertinent to several working memory tasks as well as naturalistic conversational stimuli.},
  url      = {https://www.frontiersin.org/articles/10.3389/fninf.2018.00033/full},
  doi      = {10.3389/fninf.2018.00033},
}
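The "linear estimate of differential entropy" contrasted above with slope and mean features reduces, under a Gaussian assumption, to a function of the segment variance: h = 0.5 * ln(2 * pi * e * variance). The sketch below computes the three candidate features on a synthetic NIRS-like segment; it assumes the Gaussian estimator and is not the paper's exact pipeline.

# Three candidate features for a NIRS time-series segment: mean, slope,
# and a Gaussian (linear) estimate of differential entropy,
# h = 0.5 * ln(2 * pi * e * variance). Synthetic segment for illustration.
import numpy as np

def segment_features(x):
    t = np.arange(len(x))
    slope = np.polyfit(t, x, 1)[0]                        # linear trend
    entropy = 0.5 * np.log(2 * np.pi * np.e * np.var(x))  # differential entropy (Gaussian)
    return {"mean": float(np.mean(x)), "slope": float(slope), "diff_entropy": float(entropy)}

rng = np.random.default_rng(0)
segment = 0.02 * np.arange(200) + rng.normal(0, 1.0, 200)   # drift + variability
print(segment_features(segment))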
Hiroaki Hatano, Cheng Chao Song, Carlos T. Ishi, Makiko Matsuda, "Automatic evaluation of accentuation of Japanese read speech", Digital Resources for Learning Japanese, pp. 1-10, June, 2018.
Abstract: Japanese is a typical mora-timed language with lexical pitch-accent (Beckman 1986, Kubozono 1996, McCawley 1978). A mora is a segmental unit of sound with a relatively constant duration. Phonologically, the accent's location corresponds to the mora before the pitch drop (Haraguchi 1999), and its location is arbitrary. For learners of Japanese, such prosodic characteristics make it difficult to realize correct word accents. Incorrect pitch accents cause misunderstanding of word meaning and lead to unnatural-sounding speech in non-native Japanese speakers (Isomura 1996, Toda 2003). The acquisition of pitch accents is critical for Japanese language learners (A 2015). Although students often express a desire to learn Japanese pronunciation including accents, the practice is rare in Japanese education (Fujiwara and Negishi 2005, Tago and Isomura 2014). The main reason is that the priority of teaching pronunciation is relatively low, and many teachers lack the confidence to evaluate the accents of learners. Non-native Japanese-language teachers in their own countries have these tendencies. Much effort has stressed acoustic-based evaluations of Japanese accentuation. However, most work has focused on word-level accent evaluation. If learners of Japanese were given a chance to participate in such activities as speech contests, their scripts might contain a large variety of words. We believe that a text-independent evaluation system is required for Japanese accents. Our research is investigating a text-independent automatic evaluation method for Japanese accentuation based on acoustic features.
BibTeX:
@Article{Hatano2018,
  author   = {Hiroaki Hatano and Cheng Chao Song and Carlos T. Ishi and Makiko Matsuda},
  title    = {Automatic evaluation of accentuation of Japanese read speech},
  journal  = {Digital Resources for Learning Japanese},
  year     = {2018},
  pages    = {1-10},
  month    = jun,
  issn     = {2283-8910},
  abstract = {Japanese is a typical mora-timed language with lexical pitch-accent (Beckman 1986, Kubozono 1996, McCawley 1978). A mora is a segmental unit of sound with a relatively constant duration. Phonologically, the accent's location corresponds to the mora before the pitch drop (Haraguchi 1999), and its location is arbitrary. For learners of Japanese, such prosodic characteristics make it difficult to realize correct word accents. Incorrect pitch accents cause misunderstanding of word meaning and lead to unnatural-sounding speech in non-native Japanese speakers (Isomura 1996, Toda 2003). The acquisition of pitch accents is critical for Japanese language learners (A 2015). Although students often express a desire to learn Japanese pronunciation including accents, the practice is rare in Japanese education (Fujiwara and Negishi 2005, Tago and Isomura 2014). The main reason is that the priority of teaching pronunciation is relatively low, and many teachers lack the confidence to evaluate the accents of learners. Non-native Japanese-language teachers in their own countries have these tendencies. Much effort has stressed acoustic-based evaluations of Japanese accentuation. However, most work has focused on word-level accent evaluation. If learners of Japanese were given a chance to participate in such activities as speech contests, their scripts might contain a large variety of words. We believe that a text-independent evaluation system is required for Japanese accents. Our research is investigating a text-independent automatic evaluation method for Japanese accentuation based on acoustic features.},
  day      = {5},
  url      = {https://www.digibup.com/products/digital-resources},
}
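For a pitch-accent language such as Japanese, the accent nucleus is the mora just before the F0 fall, so a text-independent evaluator needs at least mora-level F0 statistics and a way to locate that fall. The toy function below merely locates the largest downward F0 step from per-mora mean F0 values; the numbers are hypothetical and this is not the evaluation method under development in the paper.

# Toy illustration: locate a pitch-accent nucleus as the mora preceding
# the largest downward F0 step, given per-mora mean F0 values (Hz).
# Values are hypothetical; not the paper's evaluation method.
def accent_nucleus(mora_f0_means):
    drops = [mora_f0_means[i] - mora_f0_means[i + 1]
             for i in range(len(mora_f0_means) - 1)]
    largest = max(range(len(drops)), key=lambda i: drops[i])
    return largest if drops[largest] > 0 else None   # None ~ no clear accent fall

# Example: 4-mora word with a pitch fall after the second mora.
print(accent_nucleus([180.0, 230.0, 160.0, 150.0]))  # -> 1 (second mora accented)
print(accent_nucleus([150.0, 160.0, 165.0, 170.0]))  # -> None (no clear drop)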
Abdelkader Nasreddine Belkacem, Shuichi Nishio, Takafumi Suzuki, Hiroshi Ishiguro, Masayuki Hirata, "Neuromagnetic decoding of simultaneous bilateral hand movements for multidimensional brain-machine interfaces", IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 6, pp. 1301-1310, May, 2018.
Abstract: To provide multidimensional control, we describe the first reported decoding of bilateral hand movements by using single-trial magnetoencephalography signals as a new approach to enhance a user's ability to interact with a complex environment through a multidimensional brain-machine interface. Ten healthy participants performed or imagined four types of bilateral hand movements during neuromagnetic measurements. By applying a support vector machine (SVM) method to classify the four movements regarding the sensor data obtained from the sensorimotor area, we found the mean accuracy of a two-class classification using the amplitudes of neuromagnetic fields to be particularly suitable for real-time applications, with accuracies comparable to those obtained in previous studies involving unilateral movement. The sensor data from over the sensorimotor cortex showed discriminative time-series waveforms and time-frequency maps in the bilateral hemispheres according to the four tasks. Furthermore, we used four-class classification algorithms based on the SVM method to decode all types of bilateral movements. Our results provided further proof that the slow components of neuromagnetic fields carry sufficient neural information to classify even bilateral hand movements and demonstrated the potential utility of decoding bilateral movements for engineering purposes such as multidimensional motor control.
BibTeX:
@Article{Belkacem2018d,
  author   = {Abdelkader Nasreddine Belkacem and Shuichi Nishio and Takafumi Suzuki and Hiroshi Ishiguro and Masayuki Hirata},
  title    = {Neuromagnetic decoding of simultaneous bilateral hand movements for multidimensional brain-machine interfaces},
  journal  = {IEEE Transactions on Neural Systems and Rehabilitation Engineering},
  year     = {2018},
  volume   = {26},
  number   = {6},
  pages    = {1301-1310},
  month    = May,
  abstract = {To provide multidimensional control, we describe the first reported decoding of bilateral hand movements by using single-trial magnetoencephalography signals as a new approach to enhance a user's ability to interact with a complex environment through a multidimensional brain-machine interface. Ten healthy participants performed or imagined four types of bilateral hand movements during neuromagnetic measurements. By applying a support vector machine (SVM) method to classify the four movements regarding the sensor data obtained from the sensorimotor area, we found the mean accuracy of a two-class classification using the amplitudes of neuromagnetic fields to be particularly suitable for real-time applications, with accuracies comparable to those obtained in previous studies involving unilateral movement. The sensor data from over the sensorimotor cortex showed discriminative time-series waveforms and time-frequency maps in the bilateral hemispheres according to the four tasks. Furthermore, we used four-class classification algorithms based on the SVM method to decode all types of bilateral movements. Our results provided further proof that the slow components of neuromagnetic fields carry sufficient neural information to classify even bilateral hand movements and demonstrated the potential utility of decoding bilateral movements for engineering purposes such as multidimensional motor control.},
  day      = {15},
  url      = {https://ieeexplore.ieee.org/document/8359204},
  doi      = {10.1109/TNSRE.2018.2837003},
}
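The decoding pipeline described above is, at its core, a support vector machine over sensor-level amplitude features, evaluated for two-class and four-class discrimination of movements. The minimal sketch below reproduces only that generic structure with synthetic features and scikit-learn; dimensions and accuracies are placeholders, not results from the study.

# Minimal SVM decoding sketch: classify single trials from sensor-level
# amplitude features, two-class and four-class, with cross-validation.
# Features are synthetic placeholders, not MEG data from the study.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials_per_class, n_features, n_classes = 40, 60, 4
X = np.vstack([rng.normal(c * 0.3, 1.0, (n_trials_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_trials_per_class)

two_class = y < 2
acc2 = cross_val_score(SVC(kernel="linear"), X[two_class], y[two_class], cv=5)
acc4 = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"2-class accuracy: {acc2.mean():.2f}, 4-class accuracy: {acc4.mean():.2f}")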
Jakub Złotowski, Hidenobu Sumioka, Friederike Eyssel, Shuichi Nishio, Christoph Bartneck, Hiroshi Ishiguro, "Model of Dual Anthropomorphism: The Relationship Between the Media Equation Effect and Implicit Anthropomorphism", International Journal of Social Robotics, pp. 1-14, April, 2018.
Abstract: Anthropomorphism, the attribution of humanlike characteristics to nonhuman entities, may be resulting from a dual process: first, a fast and intuitive (Type 1) process permits to quickly classify an object as humanlike and results in implicit anthropomorphism. Second, a reflective (Type 2) process may moderate the initial judgment based on conscious effort and result in explicit anthropomorphism. In this study, we manipulated both participants’ motivation for Type 2 processing and a robot’s emotionality to investigate the role of Type 1 versus Type 2 processing in forming judgments about the robot Robovie R2. We did so by having participants play the “Jeopardy!” game with the robot. Subsequently, we directly and indirectly measured anthropomorphism by administering self-report measures and a priming task, respectively. Furthermore, we measured treatment of the robot as a social actor to establish its relation with implicit and explicit anthropomorphism. The results suggested that the model of dual anthropomorphism can explain when responses are likely to reflect judgments based on Type 1 and Type 2 processes. Moreover, we showed that the social treatment of a robot, as described by the Media Equation theory, is related with implicit, but not explicit anthropomorphism.
BibTeX:
@Article{Zlotowski2018,
  author   = {Jakub Złotowski and Hidenobu Sumioka and Friederike Eyssel and Shuichi Nishio and Christoph Bartneck and Hiroshi Ishiguro},
  title    = {Model of Dual Anthropomorphism: The Relationship Between the Media Equation Effect and Implicit Anthropomorphism},
  journal  = {International Journal of Social Robotics},
  year     = {2018},
  pages    = {1-14},
  month    = Apr,
  abstract = {Anthropomorphism, the attribution of humanlike characteristics to nonhuman entities, may be resulting from a dual process: first, a fast and intuitive (Type 1) process permits to quickly classify an object as humanlike and results in implicit anthropomorphism. Second, a reflective (Type 2) process may moderate the initial judgment based on conscious effort and result in explicit anthropomorphism. In this study, we manipulated both participants’ motivation for Type 2 processing and a robot’s emotionality to investigate the role of Type 1 versus Type 2 processing in forming judgments about the robot Robovie R2. We did so by having participants play the “Jeopardy!” game with the robot. Subsequently, we directly and indirectly measured anthropomorphism by administering self-report measures and a priming task, respectively. Furthermore, we measured treatment of the robot as a social actor to establish its relation with implicit and explicit anthropomorphism. The results suggested that the model of dual anthropomorphism can explain when responses are likely to reflect judgments based on Type 1 and Type 2 processes. Moreover, we showed that the social treatment of a robot, as described by the Media Equation theory, is related with implicit, but not explicit anthropomorphism.},
  day      = {4},
  url      = {https://link.springer.com/article/10.1007/s12369-018-0476-5},
  doi      = {10.1007/s12369-018-0476-5},
}
Carlos T. Ishi, Jun Arai, "Periodicity, spectral and electroglottographic analyses of pressed voice in expressive speech", Acoustical Science and Technology, vol. 39, no. 2, pp. 101-108, March, 2018.
Abstract: Pressed voice is a type of voice quality produced by pressing/straining the vocal folds, which often appears in Japanese conversational speech when expressing paralinguistic information related to emotional or attitudinal behaviors of the speaker. With the aim of clarifying the acoustic and physiological features involved in pressed voice production, in the present work, acoustic and electroglottographic (EGG) analyses have been conducted on pressed voice segments extracted from spontaneous dialogue speech of several speakers. Periodicity analysis indicated that pressed voice is usually accompanied by creaky or harsh voices, having irregularities in periodicity, but can also be accompanied by periodic voices with fundamental frequencies in the range of modal phonation. A spectral measure H1'-A1' was proposed for characterizing pressed voice segments, which commonly have little or no harmonicity. Vocal fold vibratory pattern analysis from the EGG signals revealed that most pressed voice segments are characterized by glottal pulses with closed intervals longer than open intervals on average, regardless of periodicity.
BibTeX:
@Article{Ishi2018,
  author   = {Carlos T. Ishi and Jun Arai},
  title    = {Periodicity, spectral and electroglottographic analyses of pressed voice in expressive speech},
  journal  = {Acoustical Science and Technology},
  year     = {2018},
  volume   = {39},
  number   = {2},
  pages    = {101-108},
  month    = Mar,
  abstract = {Pressed voice is a type of voice quality produced by pressing/straining the vocal folds, which often appears in Japanese conversational speech when expressing paralinguistic information related to emotional or attitudinal behaviors of the speaker. With the aim of clarifying the acoustic and physiological features involved in pressed voice production, in the present work, acoustic and electroglottographic (EGG) analyses have been conducted on pressed voice segments extracted from spontaneous dialogue speech of several speakers. Periodicity analysis indicated that pressed voice is usually accompanied by creaky or harsh voices, having irregularities in periodicity, but can also be accompanied by periodic voices with fundamental frequencies in the range of modal phonation. A spectral measure H1'-A1' was proposed for characterizing pressed voice segments, which commonly have little or no harmonicity. Vocal fold vibratory pattern analysis from the EGG signals revealed that most pressed voice segments are characterized by glottal pulses with closed intervals longer than open intervals on average, regardless of periodicity.},
  day      = {1},
  url      = {https://www.jstage.jst.go.jp/article/ast/39/2/39_E1732/_article},
  doi      = {10.1250/ast.39.101},
  file     = {Ishi2018.pdf:pdf/Ishi2018.pdf:PDF},
}
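The H1'-A1' measure mentioned above belongs to the family of H1-A1-type spectral measures: the level of the first harmonic relative to the strongest spectral peak near the first formant. The sketch below computes a plain, uncorrected H1 - A1 from a synthetic voiced frame given F0 and F1; the primes in the paper denote corrections that are not reproduced here, and all signal parameters are illustrative.

# Plain H1 - A1 spectral measure (dB): level of the first harmonic minus
# the level of the strongest spectral peak near the first formant.
# Synthetic frame; the corrected H1'-A1' of the paper is not reproduced.
import numpy as np

def h1_minus_a1(frame, sr, f0, f1, search_hz=60.0):
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)

    def peak_db(center):
        band = (freqs > center - search_hz) & (freqs < center + search_hz)
        return 20.0 * np.log10(np.max(spec[band]) + 1e-12)

    return peak_db(f0) - peak_db(f1)

sr, f0, f1 = 16000, 120.0, 600.0
t = np.arange(int(0.04 * sr)) / sr
frame = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, 40))  # crude glottal-like source
print(f"H1 - A1 = {h1_minus_a1(frame, sr, f0, f1):.1f} dB")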
Christian Penaloza, Maryam Alimardani, Shuichi Nishio, "Android Feedback-based Training modulates Sensorimotor Rhythms during Motor Imagery", IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 3, pp. 666-674, March, 2018.
Abstract: EEG-based brain computer interface (BCI) systems have demonstrated potential to assist patients with devastating motor paralysis conditions. However, there is great interest in shifting the BCI trend towards applications aimed at healthy users. Although BCI operation depends on technological factors (i.e. the EEG pattern classification algorithm) and human factors (i.e. how well the person is able to generate good quality EEG patterns), it is the latter that is the least investigated. In order to control a Motor Imagery based BCI, the user needs to learn to modulate his/her sensorimotor brain rhythms by practicing Motor Imagery using a classical training protocol with abstract visual feedback. In this paper, we investigate a different BCI training protocol using a human-like android robot (Geminoid HI-2) to provide realistic visual feedback. The proposed training protocol addresses deficiencies of the classical approach and takes advantage of able-bodied users' capabilities. Experimental results suggest that android feedback-based BCI training improves the modulation of sensorimotor rhythms during the motor imagery task. Moreover, we discuss how the influence of the body ownership transfer illusion towards the android might have an effect on the modulation of event-related desynchronization/synchronization (ERD/ERS) activity.
BibTeX:
@Article{Penaloza2018,
  author   = {Christian Penaloza and Maryam Alimardani and Shuichi Nishio},
  title    = {Android Feedback-based Training modulates Sensorimotor Rhythms during Motor Imagery},
  journal  = {IEEE Transactions on Neural Systems and Rehabilitation Engineering},
  year     = {2018},
  volume   = {26},
  number   = {3},
  pages    = {666-674},
  month    = Mar,
  abstract = {EEG-based brain computer interface (BCI) systems have demonstrated potential to assist patients with devastating motor paralysis conditions. However, there is great interest in shifting the BCI trend towards applications aimed at healthy users. Although BCI operation depends on technological factors (i.e. the EEG pattern classification algorithm) and human factors (i.e. how well the person is able to generate good quality EEG patterns), it is the latter that is the least investigated. In order to control a Motor Imagery based BCI, the user needs to learn to modulate his/her sensorimotor brain rhythms by practicing Motor Imagery using a classical training protocol with abstract visual feedback. In this paper, we investigate a different BCI training protocol using a human-like android robot (Geminoid HI-2) to provide realistic visual feedback. The proposed training protocol addresses deficiencies of the classical approach and takes advantage of able-bodied users' capabilities. Experimental results suggest that android feedback-based BCI training improves the modulation of sensorimotor rhythms during the motor imagery task. Moreover, we discuss how the influence of the body ownership transfer illusion towards the android might have an effect on the modulation of event-related desynchronization/synchronization (ERD/ERS) activity.},
  url      = {http://ieeexplore.ieee.org/document/8255672/},
  doi      = {10.1109/TNSRE.2018.2792481},
}
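Event-related desynchronization/synchronization (ERD/ERS), mentioned at the end of the abstract above, quantifies the change in sensorimotor band power relative to a reference interval: ERD% = (P_event - P_ref) / P_ref * 100, with negative values indicating desynchronization. The sketch below applies this to a synthetic single-channel trial; the band edges and window positions are illustrative, not the settings used in the study.

# ERD/ERS sketch: band-pass the mu band (8-13 Hz), then compare power in a
# motor-imagery window against a reference window:
#   ERD% = (P_event - P_ref) / P_ref * 100   (negative values = ERD).
# Synthetic single-channel trial; windows and band edges are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250  # Hz
t = np.arange(0, 6, 1 / fs)
rng = np.random.default_rng(0)
mu = np.sin(2 * np.pi * 10 * t) * np.where(t < 3, 1.0, 0.4)   # mu rhythm attenuated after cue
eeg = mu + rng.normal(0, 0.5, t.size)

b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="band")
mu_band = filtfilt(b, a, eeg)
power = mu_band ** 2

ref = power[(t >= 0.5) & (t < 2.5)].mean()      # reference (pre-cue) window
event = power[(t >= 3.5) & (t < 5.5)].mean()    # motor-imagery window
print(f"ERD = {(event - ref) / ref * 100:.1f} %")   # strongly negative here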
Rosario Sorbello, Salvatore Tramonte, Carmelo Cali, Marcello Giardina, Shuichi Nishio, Hiroshi Ishiguro, Antonio Chella, "An android architecture for bio-inspired honest signalling in Human- Humanoid Interaction", Biologically Inspired Cognitive Architectures, vol. 23, pp. 27-34, January, 2018.
Abstract: This paper outlines an augmented robotic architecture to study the conditions of successful Human-Humanoid Interaction (HHI). The architecture is designed as a testable model generator for interaction centred on the ability to emit, display and detect honest signals. First we overview the biological theory in which the concept of honest signals has been put forward in order to assess its explanatory power. We reconstruct the application of the concept of honest signalling in accounting for interaction in strategic contexts and in laying bare the foundation for an automated social metrics. We describe the modules of the architecture, which is intended to implement the concept of honest signalling in connection with a refinement provided by delivering the sense of co-presence in a shared environment. Finally, an analysis of Honest Signals, in term of body postures, exhibited by participants during the preliminary experiment with the Geminoid Hi-1 is provided.
BibTeX:
@Article{Sorbello2018a,
  author   = {Rosario Sorbello and Salvatore Tramonte and Carmelo Cali and Marcello Giardina and Shuichi Nishio and Hiroshi Ishiguro and Antonio Chella},
  title    = {An android architecture for bio-inspired honest signalling in Human- Humanoid Interaction},
  journal  = {Biologically Inspired Cognitive Architectures},
  year     = {2018},
  volume   = {23},
  pages    = {27-34},
  month    = Jan,
  abstract = {This paper outlines an augmented robotic architecture to study the conditions of successful Human-Humanoid Interaction (HHI). The architecture is designed as a testable model generator for interaction centred on the ability to emit, display and detect honest signals. First we overview the biological theory in which the concept of honest signals has been put forward in order to assess its explanatory power. We reconstruct the application of the concept of honest signalling in accounting for interaction in strategic contexts and in laying bare the foundation for an automated social metrics. We describe the modules of the architecture, which is intended to implement the concept of honest signalling in connection with a refinement provided by delivering the sense of co-presence in a shared environment. Finally, an analysis of Honest Signals, in term of body postures, exhibited by participants during the preliminary experiment with the Geminoid Hi-1 is provided.},
  url      = {https://www.sciencedirect.com/science/article/pii/S2212683X17301032},
  doi      = {10.1016/j.bica.2017.12.001},
}
Rosario Sorbello, Salvatore Tramonte, Carmelo Cali, Marcello Giardina, Shuichi Nishio, Hiroshi Ishiguro, Antonio Chella, "Embodied responses to musical experience detected by human bio-feedback brain features in a Geminoid augmented architecture", Biologically Inspired Cognitive Architectures, vol. 23, pp. 19-26, January, 2018.
Abstract: This paper presents the conceptual framework for a study of musical experience and the associated architecture centred on Human-Humanoid Interaction (HHI). On the grounds of the theoretical and experimental literature on the biological foundation of music, the grammar of music perception and the perception and feeling of emotions in music hearing, we argue that music cognition is specific and that it is realized by a cognitive capacity for music that consists of conceptual and affective constituents. We discuss the relationship between such constituents that enables understanding, that is, extracting meaning from music at the different levels of the organization of sounds that are felt as bearers of affects and emotions. To account for the way such cognitive mechanisms are realized in music hearing and extended to movements and gestures, we bring in the construct of tensions and of music experience as a cognitive frame. Finally, we describe the principled approach to the design and the architecture of a BCI-controlled robotic system that can be employed to map and specify the constituents of the cognitive capacity for music as well as to simulate their contribution to music meaning understanding in the context of music experience by displaying it through the Geminoid robot movements.
BibTeX:
@Article{Sorbello2018,
  author   = {Rosario Sorbello and Salvatore Tramonte and Carmelo Cali and Marcello Giardina and Shuichi Nishio and Hiroshi Ishiguro and Antonio Chella},
  title    = {Embodied responses to musical experience detected by human bio-feedback brain features in a Geminoid augmented architecture},
  journal  = {Biologically Inspired Cognitive Architectures},
  year     = {2018},
  volume   = {23},
  pages    = {19-26},
  month    = Jan,
  abstract = {This paper presents the conceptual framework for a study of musical experience and the associated architecture centred on Human-Humanoid Interaction (HHI). On the grounds of the theoretical and experimental literature on the biological foundation of music, the grammar of music perception and the perception and feeling of emotions in music hearing, we argue that music cognition is specific and that it is realized by a cognitive capacity for music that consists of conceptual and affective constituents. We discuss the relationship between such constituents that enables understanding, that is, extracting meaning from music at the different levels of the organization of sounds that are felt as bearers of affects and emotions. To account for the way such cognitive mechanisms are realized in music hearing and extended to movements and gestures, we bring in the construct of tensions and of music experience as a cognitive frame. Finally, we describe the principled approach to the design and the architecture of a BCI-controlled robotic system that can be employed to map and specify the constituents of the cognitive capacity for music as well as to simulate their contribution to music meaning understanding in the context of music experience by displaying it through the Geminoid robot movements.},
  url      = {https://www.sciencedirect.com/science/article/pii/S2212683X17301044},
  doi      = {10.1016/j.bica.2018.01.001},
}
Takashi Ikeda, Masayuki Hirata, Masashi Kasaki, Maryam Alimardani, Kojiro Matsushita, Tomoyuki Yamamoto, Shuichi Nishio, Hiroshi Ishiguro, "Subthalamic nucleus detects unnatural android movement", Scientific Reports, vol. 7, no. 17851, December, 2017.
Abstract: An android, i.e., a realistic humanoid robot with human-like capabilities, may induce an uncanny feeling in human observers. The uncanny feeling about an android has two main causes: its appearance and movement. The uncanny feeling about an android increases when its appearance is almost human-like but its movement is not fully natural or comparable to human movement. Even if an android has human-like flexible joints, its slightly jerky movements cause a human observer to detect subtle unnaturalness in them. However, the neural mechanism underlying the detection of unnatural movements remains unclear. We conducted an fMRI experiment to compare the observation of an android and the observation of a human on which the android is modelled, and we found differences in the activation pattern of the brain regions that are responsible for the production of smooth and natural movement. More specifically, we found that the visual observation of the android, compared with that of the human model, caused greater activation in the subthalamic nucleus (STN). When the android's slightly jerky movements are visually observed, the STN detects their subtle unnaturalness. This finding suggests that the detection of unnatural movements is attributed to an error signal resulting from a mismatch between a visual input and an internal model for smooth movement.
BibTeX:
@Article{Ikeda2017,
  author   = {Takashi Ikeda and Masayuki Hirata and Masashi Kasaki and Maryam Alimardani and Kojiro Matsushita and Tomoyuki Yamamoto and Shuichi Nishio and Hiroshi Ishiguro},
  title    = {Subthalamic nucleus detects unnatural android movement},
  journal  = {Scientific Reports},
  year     = {2017},
  volume   = {7},
  number   = {17851},
  month    = Dec,
  abstract = {An android, i.e., a realistic humanoid robot with human-like capabilities, may induce an uncanny feeling in human observers. The uncanny feeling about an android has two main causes: its appearance and movement. The uncanny feeling about an android increases when its appearance is almost human-like but its movement is not fully natural or comparable to human movement. Even if an android has human-like flexible joints, its slightly jerky movements cause a human observer to detect subtle unnaturalness in them. However, the neural mechanism underlying the detection of unnatural movements remains unclear. We conducted an fMRI experiment to compare the observation of an android and the observation of a human on which the android is modelled, and we found differences in the activation pattern of the brain regions that are responsible for the production of smooth and natural movement. More specifically, we found that the visual observation of the android, compared with that of the human model, caused greater activation in the subthalamic nucleus (STN). When the android's slightly jerky movements are visually observed, the STN detects their subtle unnaturalness. This finding suggests that the detection of unnatural movements is attributed to an error signal resulting from a mismatch between a visual input and an internal model for smooth movement.},
  day      = {19},
  url      = {https://www.nature.com/articles/s41598-017-17849-2},
  doi      = {10.1038/s41598-017-17849-2},
}
Hideyuki Takahashi, Midori Ban, Hirotaka Osawa, Junya Nakanishi, Hidenobu Sumioka, Hiroshi Ishiguro, "Huggable communication medium maintains level of trust during conversation game", Frontiers in Psychology, vol. 8, no. 1862, pp. 1-8, October, 2017.
Abstract: The present research is based on the hypothesis that using Hugvie maintains users' level of trust toward their conversation partners in situations prone to suspicion. The level of trust felt toward other remote game players was compared between participants using Hugvie and those using a basic communication device while playing a modified version of Werewolf, a conversation-based game, designed to evaluate trust. Although there are always winners and losers in the regular version of Werewolf, the rules were modified to generate a possible scenario in which no enemy was present among the players and all players would win if they trusted each other. We examined the effect of using Hugvie while playing Werewolf on players' level of trust toward each other and our results demonstrated that in those using Hugvie, the level of trust toward other players was maintained.
BibTeX:
@Article{Takahashi2017,
  author   = {Hideyuki Takahashi and Midori Ban and Hirotaka Osawa and Junya Nakanishi and Hidenobu Sumioka and Hiroshi Ishiguro},
  title    = {Huggable communication medium maintains level of trust during conversation game},
  journal  = {Frontiers in Psychology},
  year     = {2017},
  volume   = {8},
  number   = {1862},
  pages    = {1-8},
  month    = oct,
  abstract = {The present research is based on the hypothesis that using Hugvie maintains users' level of trust toward their conversation partners in situations prone to suspicion. The level of trust felt toward other remote game players was compared between participants using Hugvie and those using a basic communication device while playing a modified version of Werewolf, a conversation-based game, designed to evaluate trust. Although there are always winners and losers in the regular version of Werewolf, the rules were modified to generate a possible scenario in which no enemy was present among the players and all players would win if they trusted each other. We examined the effect of using Hugvie while playing Werewolf on players' level of trust toward each other and our results demonstrated that in those using Hugvie, the level of trust toward other players was maintained.},
  day      = {25},
  url      = {https://www.frontiersin.org/journals/psychology#},
  doi      = {10.3389/fpsyg.2017.01862},
}
Kurima Sakai, Takashi Minato, Carlos T. Ishi, Hiroshi Ishiguro, "Novel Speech Motion Generation by Modelling Dynamics of Human Speech Production", Frontiers in Robotics and AI, vol. 4, no. 49, pp. 1-14, October, 2017.
Abstract: We have developed a method to automatically generate humanlike trunk motions based on speech (i.e., the neck and waist motions involved in speech) for a conversational android from its speech in real time. To generate humanlike movements, a mechanical limitation of the android (i.e., its limited number of joints) needs to be compensated for in order to express an emotional type of motion. By expressly presenting the synchronization of speech and motion in the android, the method enables us to compensate for its mechanical limitations. Moreover, the motion can be modulated for expressing emotions by tuning the parameters in the dynamical model. This method's model is based on a spring-damper dynamical model driven by voice features to simulate a human's trunk movement involved in speech. In contrast to the existing methods based on machine learning, our system can easily modulate the motions generated due to speech patterns because the model's parameters correspond to muscle stiffness. The experimental results show that the android motions generated by our model can be perceived as more natural and thus motivate users to talk with the android more, compared with a system that simply copies human motions. In addition, it is possible to make the model generate emotional speech motions by tuning its parameters.
BibTeX:
@Article{Sakai2017,
  author   = {Kurima Sakai and Takashi Minato and Carlos T. Ishi and Hiroshi Ishiguro},
  title    = {Novel Speech Motion Generation by Modelling Dynamics of Human Speech Production},
  journal  = {Frontiers in Robotics and AI},
  year     = {2017},
  volume   = {4},
  number   = {49},
  pages    = {1-14},
  month    = Oct,
  abstract = {We have developed a method to automatically generate humanlike trunk motions based on speech (i.e., the neck and waist motions involved in speech) for a conversational android from its speech in real time. To generate humanlike movements, a mechanical limitation of the android (i.e., its limited number of joints) needs to be compensated for in order to express an emotional type of motion. By expressly presenting the synchronization of speech and motion in the android, the method enables us to compensate for its mechanical limitations. Moreover, the motion can be modulated for expressing emotions by tuning the parameters in the dynamical model. This method's model is based on a spring-damper dynamical model driven by voice features to simulate a human's trunk movement involved in speech. In contrast to the existing methods based on machine learning, our system can easily modulate the motions generated due to speech patterns because the model's parameters correspond to muscle stiffness. The experimental results show that the android motions generated by our model can be perceived as more natural and thus motivate users to talk with the android more, compared with a system that simply copies human motions. In addition, it is possible to make the model generate emotional speech motions by tuning its parameters.},
  day      = {27},
  url      = {http://journal.frontiersin.org/article/10.3389/frobt.2017.00049/full},
  doi      = {10.3389/frobt.2017.00049},
  file     = {Sakai2017.pdf:pdf/Sakai2017.pdf:PDF},
}
森田貴美子, 住岡英信, "抱擁型コミュニケーションメディアによるヒトへの効果およびさらなる触感向上を目指した取り組み", 繊維製品消費科学, vol. 58, no. 8, pp. 664-665, August, 2017.
Abstract: より高次元でのヒト代替を目指して、人工システムである触覚インタラクションを伴うメディアの開発が進められており、それら人工システムとの触れ合いによるヒトへの心理生理面への影響についての研究も盛んである。 本稿では、ハグビーによるストレス軽減効果を紹介し、それを元に2015年に発売された新しくなったハグビーが、よりヒトに近付いたと言えることを、感覚計測技術を用いたデータで紹介する.
BibTeX:
@Article{森田貴美子2017,
  author   = {森田貴美子 and 住岡英信},
  title    = {抱擁型コミュニケーションメディアによるヒトへの効果およびさらなる触感向上を目指した取り組み},
  journal  = {繊維製品消費科学},
  year     = {2017},
  volume   = {58},
  number   = {8},
  pages    = {664-665},
  month    = Aug,
  abstract = {より高次元でのヒト代替を目指して、人工システムである触覚インタラクションを伴うメディアの開発が進められており、それら人工システムとの触れ合いによるヒトへの心理生理面への影響についての研究も盛んである。 本稿では、ハグビーによるストレス軽減効果を紹介し、それを元に2015年に発売された新しくなったハグビーが、よりヒトに近付いたと言えることを、感覚計測技術を用いたデータで紹介する.},
  file     = {森田貴美子.pdf:pdf/森田貴美子.pdf:PDF},
}
Carlos T. Ishi, Takashi Minato, Hiroshi Ishiguro, "Motion analysis in vocalized surprise expressions and motion generation in android robots", IEEE Robotics and Automation Letters (RA-L), vol. 2, no. 3, pp. 1748-1784, July, 2017.
Abstract: Surprise expressions often occur in dialogue interactions, and they are often accompanied by verbal interjectional utterances. We are dealing with the challenge of generating natural human-like motions during speech in android robots that have a highly human-like appearance. In this study, we focus on the analysis and motion generation of vocalized surprise expression. We first analyze facial, head and body motions during vocalized surprise appearing in human-human dialogue interactions. Analysis results indicate differences in the motion types for different types of surprise expression as well as different degrees of surprise expression. Consequently, we propose motion-generation methods based on the analysis results and evaluate the different modalities (eyebrows/eyelids, head and body torso) and different motion control levels for the proposed method. This work is carried out through subjective experiments. Evaluation results indicate the importance of each modality in the perception of surprise degree, naturalness, and the spontaneous vs. intentional expression of surprise.
BibTeX:
@Article{Ishi2017d,
  author   = {Carlos T. Ishi and Takashi Minato and Hiroshi Ishiguro},
  journal  = {IEEE Robotics and Automation Letters (RA-L)},
  title    = {Motion analysis in vocalized surprise expressions and motion generation in android robots},
  year     = {2017},
  abstract = {Surprise expressions often occur in dialogue interactions, and they are often accompanied by verbal interjectional utterances. We are dealing with the challenge of generating natural human-like motions during speech in android robots that have a highly human-like appearance. In this study, we focus on the analysis and motion generation of vocalized surprise expression. We first analyze facial, head and body motions during vocalized surprise appearing in human-human dialogue interactions. Analysis results indicate differences in the motion types for different types of surprise expression as well as different degrees of surprise expression. Consequently, we propose motion-generation methods based on the analysis results and evaluate the different modalities (eyebrows/eyelids, head and body torso) and different motion control levels for the proposed method. This work is carried out through subjective experiments. Evaluation results indicate the importance of each modality in the perception of surprise degree, naturalness, and the spontaneous vs. intentional expression of surprise.},
  day      = {17},
  doi      = {10.1109/LRA.2017.2700941},
  month    = Jul,
  pages    = {1748-1784},
  url      = {http://www.ieee-ras.org/publications/ra-l},
  volume   = {2},
  number   = {3},
  comment  = {(The contents of this paper were also selected by IROS2017 Program Committee for presentation at the Conference)},
  file     = {Ishi2017d.pdf:pdf/Ishi2017d.pdf:PDF},
}
波多野博顕, 石井カルロス寿憲, "日本語自然対話に現れる質問発話の句末音調", 日本音声学会 学会誌「音声研究」, vol. 21, no. 1, pp. 1-11, April, 2017.
Abstract: The aim of this paper is to clarify the relationships between the phrase final tones of questioning utterances, the pragmatic factor, and the linguistic factor. We extracted questioning utterances from 51 conversations by 16 speakers and classified them according to the degree of information request and the sentence final particles. Tones were classified into rising or non-rising tones based on their acoustic features. The results indicate the following: the relationship between the rising tone and the pragmatic factor is coordinating; the relationship between the rising tone and the linguistic factor is complementary; and the form of the particles moderately constrains the tones.
BibTeX:
@Article{波多野博顕2016b,
  author   = {波多野博顕 and 石井カルロス寿憲},
  title    = {日本語自然対話に現れる質問発話の句末音調},
  journal  = {日本音声学会 学会誌「音声研究」},
  year     = {2017},
  volume   = {21},
  number   = {1},
  pages    = {1-11},
  month    = apr,
  abstract = {The aim of this paper is to clarify the relationships between the phrase final tones of questioning utterances, the pragmatic factor, and the linguistic factor. We extracted questioning utterances from 51 conversations by 16 speakers and classified them according to the degree of information request and the sentence final particles. Tones were classified into rising or non-rising tones based on their acoustic features. The results indicate the following: the relationship between the rising tone and the pragmatic factor is coordinating; the relationship between the rising tone and the linguistic factor is complementary; and the form of the particles moderately constrains the tones.},
  url      = {https://www.jstage.jst.go.jp/},
  etitle   = {Phrase final tones of questioning utterances appearing in Japanese natural conversations},
  keywords = {phrase final tone, questioning utterance, natural conversation, quantitative analysis, sentence final particle},
}
船山智, 港隆史, 石井カルロス寿憲, 石黒浩, "操作者の笑い声に基づく遠隔操作型アンドロイドの笑い動作生成", 情報処理学会論文誌, vol. 58, no. 4, pp. 932-944, April, 2017.
Abstract: 遠隔操作型アンドロイドは強い存在感を伝達するコミュニケーションメディアであるが,動作自由度の制約により人間と同様に動くことができず,人の動作を複製する遠隔操作方法では不自然な振る舞いとなることがある.本論文ではコミュニケーション中の重要な要素である“笑い"に注目し,限られた自由度の中で動きの誇張によって自然に見える笑い動作を設計し,その有効性を検証した.また,操作者の笑い声を認識するシステムを開発し,操作者の笑い声に合わせて自動的に笑い動作を付加する遠隔操作システムの実用性を検証した.
BibTeX:
@Article{船山智2017,
  author          = {船山智 and 港隆史 and 石井カルロス寿憲 and 石黒浩},
  title           = {操作者の笑い声に基づく遠隔操作型アンドロイドの笑い動作生成},
  journal         = {情報処理学会論文誌},
  year            = {2017},
  volume          = {58},
  number          = {4},
  pages           = {932-944},
  month           = Apr,
  abstract        = {遠隔操作型アンドロイドは強い存在感を伝達するコミュニケーションメディアであるが,動作自由度の制約により人間と同様に動くことができず,人の動作を複製する遠隔操作方法では不自然な振る舞いとなることがある.本論文ではコミュニケーション中の重要な要素である“笑い"に注目し,限られた自由度の中で動きの誇張によって自然に見える笑い動作を設計し,その有効性を検証した.また,操作者の笑い声を認識するシステムを開発し,操作者の笑い声に合わせて自動的に笑い動作を付加する遠隔操作システムの実用性を検証した.},
  url             = {http://www.ipsj.or.jp/journal/},
  etitle          = {Speech Driven Laughter Generation of Teleoperated Android},
  eabstract       = {Teleoperated androids are developed as communication media that can convey a strong sense of human presence. However, an android cannot move like a human, since its degrees of freedom are limited; therefore, its behavior is not always natural. In this paper, we focus on "laughter" and propose an automatic laughter-generation system for a teleoperated android. Psychological experiments verified the effectiveness of the proposed method, and the results also suggest that the exaggeration should depend on the appearance of the android.},
  file            = {船山智2017.pdf:pdf/船山智2017.pdf:PDF},
  keywords        = {Android, Teleoperation, Laughter, Exaggeration, Laughter detection},
}
境くりま, 港隆史, 石井カルロス寿憲, 石黒浩, "わずかな感情変化を表現可能なアンドロイド動作の生成モデルの提案", 電子情報通信学会論文誌 D, vol. J100-D, no. 3, pp. 310-320, March, 2017.
Abstract: 人間はわずかな感情や態度の変化を細かな動作の変化で表現することにより,対話相手に様々な感情や態度を伝達することができる. さらにそれらが場の雰囲気を形成し,対話しやすさの促進などをもたらす. 人間に酷似したアンドロイドで人間同様に感情や態度を伝達するためには,感情の連続的な変化に対応するように動作特徴(動作の振幅や速度など)を変化させることができる動作生成手法が必要となる. 人間では感情が身体の筋系に影響を及ぼして身体動作を変化させていることを踏まえると,筋系の振る舞いをモデル化した動作生成手法において,筋系のパラメータと 感情状態を対応づけることで,上記のような動作生成手法が構築できると考えられる. 本論文では,著者らがこれまでに提案した音声駆動頭部動作生成システムのパラメータ空間と感情空間の対応を実験により明らかにした. このマッピングを用いて,感情の細かな変化を表現するように動作を変調することができる発話動作生成システムを提案する.
BibTeX:
@Article{境くりま2017,
  author   = {境くりま and 港隆史 and 石井カルロス寿憲 and 石黒浩},
  title    = {わずかな感情変化を表現可能なアンドロイド動作の生成モデルの提案},
  journal  = {電子情報通信学会論文誌 D},
  year     = {2017},
  volume   = {J100-D},
  number   = {3},
  pages    = {310-320},
  month    = Mar,
  abstract = {人間はわずかな感情や態度の変化を細かな動作の変化で表現することにより,対話相手に様々な感情や態度を伝達することができる. さらにそれらが場の雰囲気を形成し,対話しやすさの促進などをもたらす. 人間に酷似したアンドロイドで人間同様に感情や態度を伝達するためには,感情の連続的な変化に対応するように動作特徴(動作の振幅や速度など)を変化させることができる動作生成手法が必要となる. 人間では感情が身体の筋系に影響を及ぼして身体動作を変化させていることを踏まえると,筋系の振る舞いをモデル化した動作生成手法において,筋系のパラメータと 感情状態を対応づけることで,上記のような動作生成手法が構築できると考えられる. 本論文では,著者らがこれまでに提案した音声駆動頭部動作生成システムのパラメータ空間と感情空間の対応を実験により明らかにした. このマッピングを用いて,感情の細かな変化を表現するように動作を変調することができる発話動作生成システムを提案する.},
  url      = {https://search.ieice.org/bin/index.php?category=D&lang=J&num=3&vol=J100-D},
  doi      = {10.14923/transinfj.2016PDP0032},
  etitle   = {A Novel Reconstruction of Subtle Emotional Expressions in Android Motions},
  file     = {境くりま2017.pdf:pdf/境くりま2017.pdf:PDF},
}
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Hiroshi Ishiguro, "A Non-parametric Approach to the Overall Estimate of Cognitive Load Using NIRS Time Series", Frontiers in Human Neuroscience, vol. 11, no. 15, pp. 1-14, February, 2017.
Abstract: We present a non-parametric approach to prediction of the n-back (n ∈ {1, 2}) task as a proxy measure of mental workload using Near Infrared Spectroscopy (NIRS) data. In particular, we focus on measuring the mental workload through hemodynamic responses in the brain induced by these tasks, thereby realizing the potential that they can offer for their detection in real world scenarios (e.g., difficulty of a conversation). Our approach takes advantage of intrinsic linearity that is inherent in the components of the NIRS time series to adopt a one-step regression strategy. We demonstrate the correctness of our approach through its mathematical analysis. Furthermore, we study the performance of our model in an inter-subject setting in contrast with state-of-the-art techniques in the literature to show a significant improvement on prediction of these tasks (82.50 and 86.40% for female and male participants, respectively). Moreover, our empirical analysis suggests a gender difference effect on the performance of the classifiers (with male data exhibiting a higher non-linearity) along with the left-lateralized activation in both genders with higher specificity in females.
BibTeX:
@Article{Keshmiri2017b,
  author   = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Hiroshi Ishiguro},
  title    = {A Non-parametric Approach to the Overall Estimate of Cognitive Load Using NIRS Time Series},
  journal  = {Frontiers in Human Neuroscience},
  year     = {2017},
  volume   = {11},
  number   = {15},
  pages    = {1-14},
  month    = Feb,
  abstract = {We present a non-parametric approach to prediction of the n-back (n ∈ {1, 2}) task as a proxy measure of mental workload using Near Infrared Spectroscopy (NIRS) data. In particular, we focus on measuring the mental workload through hemodynamic responses in the brain induced by these tasks, thereby realizing the potential that they can offer for their detection in real world scenarios (e.g., difficulty of a conversation). Our approach takes advantage of intrinsic linearity that is inherent in the components of the NIRS time series to adopt a one-step regression strategy. We demonstrate the correctness of our approach through its mathematical analysis. Furthermore, we study the performance of our model in an inter-subject setting in contrast with state-of-the-art techniques in the literature to show a significant improvement on prediction of these tasks (82.50 and 86.40% for female and male participants, respectively). Moreover, our empirical analysis suggests a gender difference effect on the performance of the classifiers (with male data exhibiting a higher non-linearity) along with the left-lateralized activation in both genders with higher specificity in females.},
  url      = {http://journal.frontiersin.org/article/10.3389/fnhum.2017.00015/full},
  doi      = {10.3389/fnhum.2017.00015},
  file     = {Keshmiri2017b.pdf:pdf/Keshmiri2017b.pdf:PDF},
}
Phoebe Liu, Dylan F. Glas, Takayuki Kanda, Hiroshi Ishiguro, Norihiro Hagita, "A Model for Generating Socially-Appropriate Deictic Behaviors Towards People", International Journal of Social Robotics, vol. 9, no. 1, pp. 33-49, January, 2017.
Abstract: Pointing behaviors are essential in enabling social robots to communicate about a particular object, person, or space. Yet, pointing to a person can be considered rude in many cultures, and as robots collaborate with humans in increasingly diverse environments, they will need to effectively refer to people in a socially-appropriate way. We confirmed in an empirical study that although people would point precisely to an object to indicate where it is, they were reluctant to do so when pointing to another person. We propose a model for selecting utterance and pointing behaviors towards people in terms of a balance between understandability and social appropriateness. Calibrating our proposed model based on empirical human behavior, we developed a system able to autonomously select among six deictic behaviors and execute them on a humanoid robot. We evaluated the system in an experiment in a shopping mall, and the results show that the robot's deictic behavior was perceived by both the listener and the referent as more polite, more natural, and better overall when using our model, as compared with a model considering understandability alone.
BibTeX:
@Article{Liu2017a,
  author   = {Phoebe Liu and Dylan F. Glas and Takayuki Kanda and Hiroshi Ishiguro and Norihiro Hagita},
  title    = {A Model for Generating Socially-Appropriate Deictic Behaviors Towards People},
  journal  = {International Journal of Social Robotics},
  year     = {2017},
  volume   = {9},
  number   = {1},
  pages    = {33-49},
  month    = Jan,
  abstract = {Pointing behaviors are essential in enabling social robots to communicate about a particular object, person, or space. Yet, pointing to a person can be considered rude in many cultures, and as robots collaborate with humans in increasingly diverse environments, they will need to effectively refer to people in a socially-appropriate way. We confirmed in an empirical study that although people would point precisely to an object to indicate where it is, they were reluctant to do so when pointing to another person. We propose a model for selecting utterance and pointing behaviors towards people in terms of a balance between understandability and social appropriateness. Calibrating our proposed model based on empirical human behavior, we developed a system able to autonomously select among six deictic behaviors and execute them on a humanoid robot. We evaluated the system in an experiment in a shopping mall, and the results show that the robot's deictic behavior was perceived by both the listener and the referent as more polite, more natural, and better overall when using our model, as compared with a model considering understandability alone.},
  url      = {http://link.springer.com/article/10.1007%2Fs12369-016-0348-9},
  doi      = {10.1007/s12369-016-0348-9},
  file     = {Liu2017a.pdf:pdf/Liu2017a.pdf:PDF},
}
Jakub Zlotowski, Hidenobu Sumioka, Shuichi Nishio, Dylan F. Glas, Christoph Bartneck, Hiroshi Ishiguro, "Appearance of a Robot Affects the Impact of its Behaviour on Perceived Trustworthiness and Empathy", Paladyn, Journal of Behavioral Robotics, vol. 7, no. 1, pp. 55-66, December, 2016.
Abstract: An increasing number of companion robots have started reaching the public in recent years. These robots vary in their appearance and behavior. Since these two factors can have an impact on lasting human-robot relationships, it is important to understand their effect for companion robots. We have conducted an experiment that evaluated the impact of a robot's appearance and its behaviour in repeated interactions on its perceived empathy, trustworthiness and anxiety experienced by a human. The results indicate that a highly humanlike robot is perceived as less trustworthy and empathic than a more machinelike robot. Moreover, negative behaviour of a machinelike robot reduces its trustworthiness and perceived empathy more strongly than for a highly humanlike robot. In addition, we found that a robot which disapproves of what a human says can induce anxiety felt towards its communication capabilities. Our findings suggest that more machinelike robots can be more suitable as companions than highly humanlike robots. Moreover, a robot disagreeing with a human interaction partner should be able to provide feedback on its understanding of the partner's message in order to reduce her anxiety.
BibTeX:
@Article{Zlotowski2016a,
  author   = {Jakub Zlotowski and Hidenobu Sumioka and Shuichi Nishio and Dylan F. Glas and Christoph Bartneck and Hiroshi Ishiguro},
  title    = {Appearance of a Robot Affects the Impact of its Behaviour on Perceived Trustworthiness and Empathy},
  journal  = {Paladyn, Journal of Behavioral Robotics},
  year     = {2016},
  volume   = {7},
  number   = {1},
  pages    = {55-66},
  month    = Dec,
  abstract = {An increasing number of companion robots have started reaching the public in recent years. These robots vary in their appearance and behavior. Since these two factors can have an impact on lasting human-robot relationships, it is important to understand their effect for companion robots. We have conducted an experiment that evaluated the impact of a robot's appearance and its behaviour in repeated interactions on its perceived empathy, trustworthiness and anxiety experienced by a human. The results indicate that a highly humanlike robot is perceived as less trustworthy and empathic than a more machinelike robot. Moreover, negative behaviour of a machinelike robot reduces its trustworthiness and perceived empathy more strongly than for a highly humanlike robot. In addition, we found that a robot which disapproves of what a human says can induce anxiety felt towards its communication capabilities. Our findings suggest that more machinelike robots can be more suitable as companions than highly humanlike robots. Moreover, a robot disagreeing with a human interaction partner should be able to provide feedback on its understanding of the partner's message in order to reduce her anxiety.},
  url      = {https://www.degruyter.com/view/j/pjbr.2016.7.issue-1/pjbr-2016-0005/pjbr-2016-0005.xml},
  file     = {Zlotowski2016a.pdf:pdf/Zlotowski2016a.pdf:PDF},
}
Jani Even, Jonas Furrer, Yoichi Morales, Carlos T. Ishi, Norihiro Hagita, "Probabilistic 3D Mapping of Sound-Emitting Structures Based on Acoustic Ray Casting", IEEE Transactions on Robotics (T-RO), vol. 33, no. 2, pp. 333-345, December, 2016.
Abstract: This paper presents a two-step framework for creating a 3D sound map with a mobile robot. The first step creates a geometric map that describes the environment. The second step adds the acoustic information to the geometric map. The resulting sound map shows the probability of emitting sound for all the structures in the environment. This paper focuses on the second step. The method uses acoustic ray casting for accumulating in a probabilistic manner the acoustic information gathered by a mobile robot equipped with a microphone array. First, the method transforms the acoustic power received from a set of directions into likelihoods of sound presence in these directions. Then, using an estimate of the robot's pose, the acoustic ray casting procedure transfers these likelihoods to the structures in the geometric map. Finally, the probability of that structure emitting sound is modified to take into account the new likelihoods. Experimental results show that the sound maps are accurate, as it was possible to localize sound sources in 3D with an average error of 0.1 meters, and practical, as different types of environments were mapped.
BibTeX:
@Article{Even2016a,
  author   = {Jani Even and Jonas Furrer and Yoichi Morales and Carlos T. Ishi and Norihiro Hagita},
  title    = {Probabilistic 3D Mapping of Sound-Emitting Structures Based on Acoustic Ray Casting},
  journal  = {IEEE Transactions on Robotics (T-RO)},
  year     = {2016},
  volume   = {33},
  number   = {2},
  pages    = {333-345},
  month    = Dec,
  abstract = {This paper presents a two-step framework for creating a 3D sound map with a mobile robot. The first step creates a geometric map that describes the environment. The second step adds the acoustic information to the geometric map. The resulting sound map shows the probability of emitting sound for all the structures in the environment. This paper focuses on the second step. The method uses acoustic ray casting for accumulating in a probabilistic manner the acoustic information gathered by a mobile robot equipped with a microphone array. First, the method transforms the acoustic power received from a set of directions into likelihoods of sound presence in these directions. Then, using an estimate of the robot's pose, the acoustic ray casting procedure transfers these likelihoods to the structures in the geometric map. Finally, the probability of that structure emitting sound is modified to take into account the new likelihoods. Experimental results show that the sound maps are accurate, as it was possible to localize sound sources in 3D with an average error of 0.1 meters, and practical, as different types of environments were mapped.},
  url      = {http://ieeexplore.ieee.org/document/7790815/},
  doi      = {10.1109/TRO.2016.2630053},
  file     = {Even2016a.pdf:pdf/Even2016a.pdf:PDF},
}
Dylan F. Glas, Kanae Wada, Masahiro Shiomi, Takayuki Kanda, Hiroshi Ishiguro, Norihiro Hagita, "Personal Greetings: Personalizing Robot Utterances Based on Novelty of Observed Behavior", International Journal of Social Robotics, November, 2016.
Abstract: One challenge in creating conversational service robots is how to reproduce the kind of individual recognition and attention that a human can provide. We believe that interactions can be made to seem more warm and humanlike by using sensors to observe a person's behavior or appearance over time, and programming the robot to comment when a novel feature, such as a new hairstyle, is observed. To create a system capable of recognizing such novelty, we collected one month of training data from customers in a shopping mall and recorded features of people's visits, such as time of day and group size. We then trained SVM classifiers to identify each feature as novel, typical, or neither, based on the inputs of a human coder, and we trained an additional classifier to choose an appropriate topic for a personalized greeting. An utterance generator was developed to generate text for the robot to speak, based on the selected topic and sensor data. A cross-validation analysis showed that the trained classifiers could accurately reproduce human novelty judgments with 88% accuracy and topic selection with 93% accuracy. We then deployed a teleoperated robot using this system to greet customers in a shopping mall for three weeks, and we present an example interaction and results from interviews showing that customers appreciated the robot's personalized greetings and felt a sense of familiarity with the robot.
BibTeX:
@Article{Glas2016c,
  author   = {Dylan F. Glas and Kanae Wada and Masahiro Shiomi and Takayuki Kanda and Hiroshi Ishiguro and Norihiro Hagita},
  title    = {Personal Greetings: Personalizing Robot Utterances Based on Novelty of Observed Behavior},
  journal  = {International Journal of Social Robotics},
  year     = {2016},
  month    = Nov,
  abstract = {One challenge in creating conversational service robots is how to reproduce the kind of individual recognition and attention that a human can provide. We believe that interactions can be made to seem more warm and humanlike by using sensors to observe a person's behavior or appearance over time, and programming the robot to comment when a novel feature, such as a new hairstyle, is observed. To create a system capable of recognizing such novelty, we collected one month of training data from customers in a shopping mall and recorded features of people's visits, such as time of day and group size. We then trained SVM classifiers to identify each feature as novel, typical, or neither, based on the inputs of a human coder, and we trained an additional classifier to choose an appropriate topic for a personalized greeting. An utterance generator was developed to generate text for the robot to speak, based on the selected topic and sensor data. A cross-validation analysis showed that the trained classifiers could accurately reproduce human novelty judgments with 88% accuracy and topic selection with 93% accuracy. We then deployed a teleoperated robot using this system to greet customers in a shopping mall for three weeks, and we present an example interaction and results from interviews showing that customers appreciated the robot's personalized greetings and felt a sense of familiarity with the robot.},
  url      = {http://link.springer.com/article/10.1007/s12369-016-0385-4},
  doi      = {10.1007/s12369-016-0385-4},
  file     = {Glas2016c.pdf:pdf/Glas2016c.pdf:PDF},
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "The Importance of Visual Feedback Design in BCIs; from Embodiment to Motor Imagery Learning", PLOS ONE, pp. 1-17, September, 2016.
Abstract: Brain computer interfaces (BCIs) have been developed and implemented in many areas as a new communication channel between the human brain and external devices. Despite their rapid growth and broad popularity, the inaccurate performance and cost of user-training are yet the main issues that prevent their application outside the research and clinical environment. We previously introduced a BCI system for the control of a very humanlike android that could raise a sense of embodiment and agency in the operators only by imagining a movement (motor imagery) and watching the robot perform it. Also using the same setup, we further discovered that the positive bias of subjects' performance both increased their sensation of embodiment and improved their motor imagery skills in a short period. In this work, we studied the shared mechanism between the experience of embodiment and motor imagery. We compared the trend of motor imagery learning when two groups of subjects BCI-operated different-looking robots, a very humanlike android's hands and a pair of metallic grippers. Although our experiments did not show a significant change of learning between the two groups immediately during one session, the android group revealed better motor imagery skills in the follow-up session when both groups repeated the task using the non-humanlike gripper. This result shows that motor imagery skills learnt during the BCI-operation of humanlike hands are more robust to time and visual feedback changes. We discuss the role of embodiment and the mirror neuron system in such outcome and propose the application of androids for efficient BCI training.
BibTeX:
@Article{Alimardani2016a,
  author          = {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {The Importance of Visual Feedback Design in BCIs; from Embodiment to Motor Imagery Learning},
  journal         = {PLOS ONE},
  year            = {2016},
  pages           = {1-17},
  month           = Sep,
  abstract        = {Brain computer interfaces (BCIs) have been developed and implemented in many areas as a new communication channel between the human brain and external devices. Despite their rapid growth and broad popularity, the inaccurate performance and cost of user-training are yet the main issues that prevent their application outside the research and clinical environment. We previously introduced a BCI system for the control of a very humanlike android that could raise a sense of embodiment and agency in the operators only by imagining a movement (motor imagery) and watching the robot perform it. Also using the same setup, we further discovered that the positive bias of subjects' performance both increased their sensation of embodiment and improved their motor imagery skills in a short period. In this work, we studied the shared mechanism between the experience of embodiment and motor imagery. We compared the trend of motor imagery learning when two groups of subjects BCI-operated different-looking robots, a very humanlike android's hands and a pair of metallic grippers. Although our experiments did not show a significant change of learning between the two groups immediately during one session, the android group revealed better motor imagery skills in the follow-up session when both groups repeated the task using the non-humanlike gripper. This result shows that motor imagery skills learnt during the BCI-operation of humanlike hands are more robust to time and visual feedback changes. We discuss the role of embodiment and the mirror neuron system in such outcome and propose the application of androids for efficient BCI training.},
  day             = {6},
  url             = {http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0161945},
  doi             = {10.1371/journal.pone.0161945},
  file            = {Alimardani2016a.pdf:pdf/Alimardani2016a.pdf:PDF},
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "Removal of proprioception by BCI raises a stronger body ownership illusion in control of a humanlike robot", Scientific Reports, vol. 6, no. 33514, September, 2016.
Abstract: Body ownership illusions provide evidence that our sense of self is not coherent and can be extended to non-body objects. Studying these illusions gives us practical tools to understand the brain mechanisms that underlie body recognition and the experience of self. We previously introduced an illusion of body ownership transfer (BOT) for operators of a very humanlike robot. This sensation of owning the robot's body was confirmed when operators controlled the robot either by performing the desired motion with their body (motion-control) or by employing a brain-computer interface (BCI) that translated motor imagery commands to robot movement (BCI-control). The interesting observation during BCI-control was that the illusion could be induced even with a noticeable delay in the BCI system. Temporal discrepancy has always shown critical weakening effects on body ownership illusions. However, the delay-robustness of BOT during BCI-control raised a question about the interaction between the proprioceptive inputs and delayed visual feedback in agency-driven illusions. In this work, we compared the intensity of the BOT illusion for operators in two conditions; motion-control and BCI-control. Our results revealed a significantly stronger BOT illusion for the case of BCI-control. This finding highlights BCI's potential in inducing stronger agency-driven illusions by building a direct communication between the brain and controlled body, and therefore removing awareness from the subject's own body.
BibTeX:
@Article{Alimardani2016,
  author          = {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Removal of proprioception by BCI raises a stronger body ownership illusion in control of a humanlike robot},
  journal         = {Scientific Reports},
  year            = {2016},
  volume          = {6},
  number          = {33514},
  month           = Sep,
  abstract        = {Body ownership illusions provide evidence that our sense of self is not coherent and can be extended to non-body objects. Studying these illusions gives us practical tools to understand the brain mechanisms that underlie body recognition and the experience of self. We previously introduced an illusion of body ownership transfer (BOT) for operators of a very humanlike robot. This sensation of owning the robot's body was confirmed when operators controlled the robot either by performing the desired motion with their body (motion-control) or by employing a brain-computer interface (BCI) that translated motor imagery commands to robot movement (BCI-control). The interesting observation during BCI-control was that the illusion could be induced even with a noticeable delay in the BCI system. Temporal discrepancy has always shown critical weakening effects on body ownership illusions. However, the delay-robustness of BOT during BCI-control raised a question about the interaction between the proprioceptive inputs and delayed visual feedback in agency-driven illusions. In this work, we compared the intensity of the BOT illusion for operators in two conditions; motion-control and BCI-control. Our results revealed a significantly stronger BOT illusion for the case of BCI-control. This finding highlights BCI's potential in inducing stronger agency-driven illusions by building a direct communication between the brain and controlled body, and therefore removing awareness from the subject's own body.},
  url             = {http://www.nature.com/articles/srep33514},
  doi             = {10.1038/srep33514},
  file            = {Alimardani2016.pdf:pdf/Alimardani2016.pdf:PDF},
}
Phoebe Liu, Dylan F. Glas, Takayuki Kanda, Hiroshi Ishiguro, Norihiro Hagita, "Data-driven HRI: Learning social behaviors by example from human-human interaction", IEEE Transactions on Robotics, vol. 32, no. 4, pp. 988-1008, August, 2016.
Abstract: Recent studies in human-robot interaction (HRI) have investigated ways to harness the power of the crowd for the purpose of creating robot interaction logic through games and teleoperation interfaces. Sensor networks capable of observing human-human interactions in the real world provide a potentially valuable and scalable source of interaction data that can be used for designing robot behavior. To that end, we present here a fully-automated method for reproducing observed real-world social interactions with a robot. The proposed method includes techniques for characterizing the speech and locomotion observed in training interactions, using clustering to identify typical behavior elements and identifying spatial formations using established HRI proxemics models. Behavior logic is learned based on discretized actions captured from the sensor data stream, using a naive Bayesian classifier. Finally, we propose techniques for reproducing robot speech and locomotion behaviors in a robust way, despite the natural variation of human behaviors and the large amount of sensor noise present in speech recognition. We show our technique in use, training a robot to play the role of a shop clerk in a simple camera shop scenario, and we demonstrate through a comparison experiment that our techniques successfully enabled the generation of socially-appropriate speech and locomotion behavior. Notably, the performance of our technique in terms of correct behavior selection was found to be higher than the success rate of speech recognition, indicating its robustness to sensor noise.
BibTeX:
@Article{Liu2016d,
  author   = {Phoebe Liu and Dylan F. Glas and Takayuki Kanda and Hiroshi Ishiguro and Norihiro Hagita},
  title    = {Data-driven HRI: Learning social behaviors by example from human-human interaction},
  journal  = {IEEE Transactions on Robotics},
  year     = {2016},
  volume   = {32},
  number   = {4},
  pages    = {988-1008},
  month    = Aug,
  abstract = {Recent studies in human-robot interaction (HRI) have investigated ways to harness the power of the crowd for the purpose of creating robot interaction logic through games and teleoperation interfaces. Sensor networks capable of observing human-human interactions in the real world provide a potentially valuable and scalable source of interaction data that can be used for designing robot behavior. To that end, we present here a fully-automated method for reproducing observed real-world social interactions with a robot. The proposed method includes techniques for characterizing the speech and locomotion observed in training interactions, using clustering to identify typical behavior elements and identifying spatial formations using established HRI proxemics models. Behavior logic is learned based on discretized actions captured from the sensor data stream, using a naive Bayesian classifier. Finally, we propose techniques for reproducing robot speech and locomotion behaviors in a robust way, despite the natural variation of human behaviors and the large amount of sensor noise present in speech recognition. We show our technique in use, training a robot to play the role of a shop clerk in a simple camera shop scenario, and we demonstrate through a comparison experiment that our techniques successfully enabled the generation of socially-appropriate speech and locomotion behavior. Notably, the performance of our technique in terms of correct behavior selection was found to be higher than the success rate of speech recognition, indicating its robustness to sensor noise.},
  url      = {http://ieeexplore.ieee.org/document/7539621/},
  file     = {Liu2016d.pdf:pdf/Liu2016d.pdf:PDF},
}
Kaiko Kuwamura, Shuichi Nishio, Shinichi Sato, "Can We Talk through a Robot As if Face-to-Face? Long-Term Fieldwork Using Teleoperated Robot for Seniors with Alzheimer's Disease", Frontiers in Psychology, vol. 7, no. 1066, pp. 1-13, July, 2016.
Abstract: This work presents a case study on fieldwork in a group home for the elderly with dementia using a teleoperated robot called Telenoid. We compared Telenoid-mediated and face-to-face conditions with three residents with Alzheimer's disease (AD). The result indicates that two of the three residents with moderate AD showed a positive reaction to Telenoid. Both became less nervous while communicating with Telenoid from the time they were first introduced to it. Moreover, they started to use more body gestures in the face-to-face condition and more physical interactions in the Telenoid-mediated condition. In this work, we present all the results and discuss the possibilities of using Telenoid as a tool to provide opportunities for seniors to communicate over the long term.
BibTeX:
@Article{Kuwamura2016a,
  author          = {Kaiko Kuwamura and Shuichi Nishio and Shinichi Sato},
  title           = {Can We Talk through a Robot As if Face-to-Face? Long-Term Fieldwork Using Teleoperated Robot for Seniors with Alzheimer's Disease},
  journal         = {Frontiers in Psychology},
  year            = {2016},
  volume          = {7},
  number          = {1066},
  pages           = {1-13},
  month           = Jul,
  abstract        = {This work presents a case study on fieldwork in a group home for the elderly with dementia using a teleoperated robot called Telenoid. We compared Telenoid-mediated and face-to-face conditions with three residents with Alzheimer's disease (AD). The result indicates that two of the three residents with moderate AD showed a positive reaction to Telenoid. Both became less nervous while communicating with Telenoid from the time they were first introduced to it. Moreover, they started to use more body gestures in the face-to-face condition and more physical interactions in the Telenoid-mediated condition. In this work, we present all the results and discuss the possibilities of using Telenoid as a tool to provide opportunities for seniors to communicate over the long term.},
  day             = {19},
  url             = {http://journal.frontiersin.org/article/10.3389/fpsyg.2016.01066},
  doi             = {10.3389/fpsyg.2016.01066},
  file            = {Kuwamura2016a.pdf:pdf/Kuwamura2016a.pdf:PDF},
  keywords        = {Elderly care robot, Teleoperated robot, Alzheimer's disease, Elderly care facility, Gerontology},
}
Junya Nakanishi, Hidenobu Sumioka, Hiroshi Ishiguro, "Impact of Mediated Intimate Interaction on Education: A Huggable Communication Medium that Encourages Listening", Frontiers in Psychology, section Human-Media Interaction, vol. 7, no. 510, pp. 1-10, April, 2016.
Abstract: In this paper, we propose the introduction of human-like communication media as a proxy for teachers to support the listening of children in school education. Three case studies are presented on storytime fieldwork for children using our huggable communication medium called Hugvie, through which children are encouraged to concentrate on listening by intimate interaction between children and storytellers. We investigate the effect of Hugvie on children's listening and how they and their teachers react to it through observations and interviews. Our results suggest that Hugvie increased the number of children who concentrated on listening to a story and was welcomed by almost all the children and educators. We also discuss improvement and research issues to introduce huggable communication media into classrooms, potential applications, and their contributions to other education situations through improved listening.
BibTeX:
@Article{Nakanishi2016,
  author   = {Junya Nakanishi and Hidenobu Sumioka and Hiroshi Ishiguro},
  title    = {Impact of Mediated Intimate Interaction on Education: A Huggable Communication Medium that Encourages Listening},
  journal  = {Frontiers in Psychology, section Human-Media Interaction},
  year     = {2016},
  volume   = {7},
  number   = {510},
  pages    = {1-10},
  month    = Apr,
  abstract = {In this paper, we propose the introduction of human-like communication media as a proxy for teachers to support the listening of children in school education. Three case studies are presented on storytime fieldwork for children using our huggable communication medium called Hugvie, through which children are encouraged to concentrate on listening by intimate interaction between children and storytellers. We investigate the effect of Hugvie on children's listening and how they and their teachers react to it through observations and interviews. Our results suggest that Hugvie increased the number of children who concentrated on listening to a story and was welcomed by almost all the children and educators. We also discuss improvement and research issues to introduce huggable communication media into classrooms, potential applications, and their contributions to other education situations through improved listening.},
  day      = {19},
  url      = {http://journal.frontiersin.org/article/10.3389/fpsyg.2016.00510},
  doi      = {10.3389/fpsyg.2016.00510},
  file     = {Nakanishi2016.pdf:pdf/Nakanishi2016.pdf:PDF},
}
Ryuji Yamazaki, Louise Christensen, Kate Skov, Chi-Chih Chang, Malene F. Damholdt, Hidenobu Sumioka, Shuichi Nishio, Hiroshi Ishiguro, "Intimacy in Phone Conversations: Anxiety Reduction for Danish Seniors with Hugvie", Frontiers in Psychology, vol. 7, no. 537, April, 2016.
Abstract: There is a lack of physical contact in current telecommunications such as text messaging and Internet access. To challenge the limitation and re-embody telecommunication, researchers have attempted to introduce tactile stimulation to media and developed huggable devices. Previous experiments in Japan showed that a huggable communication technology, i.e., Hugvie, decreased the stress level of its female users. In the present experiment in Denmark, we aim to investigate (i) whether Hugvie can decrease stress cross-culturally, i.e., in Danish as well as Japanese participants, (ii) whether gender plays a role in this psychological effect (stress reduction), and (iii) whether there is a preference for this type of communication technology (Hugvie vs. a regular telephone). Twenty-nine healthy elderly adults participated (15 female and 14 male, M = 64.52 years, SD = 5.67) in Jutland, Denmark. The participants filled out questionnaires including the State-Trait Anxiety Inventory, NEO Five Factor Inventory (NEO-FFI), and Beck's Depression Inventory, had a 15 min conversation via phone or Hugvie and were interviewed afterward. They spoke with an unknown person of the opposite gender during the conversation; the same two conversation partners were used during the experiment and the Phone and Hugvie groups were equally balanced. There was no baseline difference between the Hugvie and Phone groups on age or anxiety or depression scores. In the Hugvie group, there was a statistically significant reduction in state anxiety after meeting Hugvie (p = 0.013). The change in state anxiety for the Hugvie group was positively correlated with openness (r = 0.532, p = 0.041) as measured by the NEO-FFI. This indicates that openness to experiences may increase the chances of having an anxiety reduction from being with Hugvie. Based on the results, we see that personality may affect the participants' engagement and benefits from Hugvie. We discuss the implications of the results and further elaborations.
BibTeX:
@Article{Yamazaki2016,
  author   = {Ryuji Yamazaki and Louise Christensen and Kate Skov and Chi-Chih Chang and Malene F. Damholdt and Hidenobu Sumioka and Shuichi Nishio and Hiroshi Ishiguro},
  title    = {Intimacy in Phone Conversations: Anxiety Reduction for Danish Seniors with Hugvie},
  journal  = {Frontiers in Psychology},
  year     = {2016},
  volume   = {7},
  number   = {537},
  month    = Apr,
  abstract = {There is a lack of physical contact in current telecommunications such as text messaging and Internet access. To challenge this limitation and re-embody telecommunication, researchers have attempted to introduce tactile stimulation to media and have developed huggable devices. Previous experiments in Japan showed that a huggable communication technology, Hugvie, decreased the stress level of its female users. In the present experiment in Denmark, we aim to investigate (i) whether Hugvie can decrease stress cross-culturally (Japanese vs. Danish participants), (ii) whether gender plays a role in this psychological effect (stress reduction), and (iii) whether there is a preference for this type of communication technology (Hugvie vs. a regular telephone). Twenty-nine healthy elderly people participated (15 female and 14 male, M = 64.52 years, SD = 5.67) in Jutland, Denmark. The participants filled out questionnaires including the State-Trait Anxiety Inventory, the NEO Five Factor Inventory (NEO-FFI), and Beck's Depression Inventory, had a 15 min conversation via phone or Hugvie, and were interviewed afterward. They spoke with an unknown person of the opposite gender during the conversation; the same two conversation partners were used throughout the experiment, and the Phone and Hugvie groups were equally balanced. There was no baseline difference between the Hugvie and Phone groups on age, anxiety, or depression scores. In the Hugvie group, there was a statistically significant reduction in state anxiety after meeting Hugvie (p = 0.013). The change in state anxiety for the Hugvie group was positively correlated with openness (r = 0.532, p = 0.041) as measured by the NEO-FFI. This indicates that openness to experience may increase the chances of having an anxiety reduction from being with Hugvie. Based on the results, we see that personality may affect the participants' engagement with and benefits from Hugvie. We discuss the implications of the results and further elaborations.},
  url      = {http://journal.frontiersin.org/researchtopic/investigating-human-nature-and-communication-through-robots-3705},
  doi      = {10.3389/fpsyg.2016.00537},
  file     = {Yamazaki2016.pdf:pdf/Yamazaki2016.pdf:PDF},
}
石井カルロス寿憲, エヴァンイアニ, 萩田紀博, "複数のマイクロホンアレイによる音源方向情報と人位置情報に基づく音声区間検出および顔の向きの推定の評価", 日本ロボット学会誌, vol. 34, no. 3, pp. 199-204, April, 2016.
Abstract: 本研究では,複数のマイクロホンアレイを用いた音源位置推定と人位置情報を組み合わせて,発話区間および発話時の顔の向きを検出するシステムを提案する.開発したシステムを研究室のミーティングスペースに設置し,単独発話および複数人による対話における評価を行った.その結果,話者がすべてのアレイに背いていない条件を除き,発話区間が90%以上の精度で検出できる結果が得られ,発話している際の顔のおおまかな向き(前後左右,標準偏差25度)も推定できる結果が得られた.複数の話者が同時に会話をしている場合も,同程度の精度が得られ,提案システムの実用性が示された.
BibTeX:
@Article{石井カルロス寿憲2016a,
  author   = {石井カルロス寿憲 and エヴァンイアニ and 萩田紀博},
  title    = {複数のマイクロホンアレイによる音源方向情報と人位置情報に基づく音声区間検出および顔の向きの推定の評価},
  journal  = {日本ロボット学会誌},
  year     = {2016},
  volume   = {34},
  number   = {3},
  pages    = {199-204},
  month    = Apr,
  abstract = {本研究では,複数のマイクロホンアレイを用いた音源位置推定と人位置情報を組み合わせて,発話区間および発話時の顔の向きを検出するシステムを提案する.開発したシステムを研究室のミーティングスペースに設置し,単独発話および複数人による対話における評価を行った.その結果,話者がすべてのアレイに背いていない条件を除き,発話区間が90%以上の精度で検出できる結果が得られ,発話している際の顔のおおまかな向き(前後左右,標準偏差25度)も推定できる結果が得られた.複数の話者が同時に会話をしている場合も,同程度の精度が得られ,提案システムの実用性が示された.},
  etitle   = {Evaluation of Speech Interval Detection and Face Orientation Estimation based on Sound Directions by Multiple Microphone Arrays and Human Positions},
  file     = {石井カルロス寿憲2016a.pdf:pdf/石井カルロス寿憲2016a.pdf:PDF},
}
境くりま, 石井カルロス寿憲, 港隆史, 石黒浩, "音声に対応する頭部動作のオンライン生成システムと遠隔操作における効果", 電子情報通信学会和文論文誌A, vol. J99-A, no. 1, pp. 14-24, January, 2016.
Abstract: ロボットアバターを用いた遠隔対話システムは,操作者の声と動作をロボットアバターで再現することで,対話相手に電話やビデオチャット以上の対話感をもたらす. しかし,対面とは勝手の異なる操作インタフェースでは,操作者の動きは対面時よりも制限されるため,ロボットアバターの効果が十分に発揮できない. そこで本論文では,制限されていない操作者の音声から頭部動作をオンラインで生成し,それをロボットアバターの頭部動作に重ね合わせる遠隔操作システムを提案した. 発話の言語情報と韻律情報を用いることにより,多種類の頭部動作を生成可能である. 被験者実験では,提案システムにより自動生成された頭部動作は不自然ではなく,生成された頭部動作を付加したロボットアバターとの対話がよい印象を与えることが示された.
BibTeX:
@Article{境くりま2016,
  author          = {境くりま and 石井カルロス寿憲 and 港隆史 and 石黒浩},
  title           = {音声に対応する頭部動作のオンライン生成システムと遠隔操作における効果},
  journal         = {電子情報通信学会和文論文誌A},
  year            = {2016},
  volume          = {J99-A},
  number          = {1},
  pages           = {14-24},
  month           = Jan,
  abstract        = {ロボットアバターを用いた遠隔対話システムは,操作者の声と動作をロボットアバターで再現することで,対話相手に電話やビデオチャット以上の対話感をもたらす. しかし,対面とは勝手の異なる操作インタフェースでは,操作者の動きは対面時よりも制限されるため,ロボットアバターの効果が十分に発揮できない. そこで本論文では,制限されていない操作者の音声から頭部動作をオンラインで生成し,それをロボットアバターの頭部動作に重ね合わせる遠隔操作システムを提案した. 発話の言語情報と韻律情報を用いることにより,多種類の頭部動作を生成可能である. 被験者実験では,提案システムにより自動生成された頭部動作は不自然ではなく,生成された頭部動作を付加したロボットアバターとの対話がよい印象を与えることが示された.},
  etitle          = {Online speech-driven head motion generating system and evaluation on a tele-operated robot},
  file            = {kurima_IEICE_2015.pdf:pdf/kurima_IEICE_2015.pdf:PDF},
}
中西惇也, 桑村海光, 港隆史, 西尾修一, 石黒浩, "人型対話メディアにおける抱擁から生まれる好意", 電子情報通信学会和文論文誌A, vol. J99-A, no. 1, pp. 36-44, January, 2016.
Abstract: 本研究は人型対話メディアを用いた身体的相互作用が,使用者の対話相手に対する感情に与える影響を検証した.身体的相互作用として抱擁に着目し,人型対話メディアの抱擁が対話者が感じる対話相手への関心や好意を向上させるという仮説を立てた.身体的相互作用を促す仕様の対話メディアを提案し,従来の対話メディアと違い,親密な人間関係を築くサポートメディアとしての可能性を示した.
BibTeX:
@Article{中西惇也2016,
  author          = {中西惇也 and 桑村海光 and 港隆史 and 西尾修一 and 石黒浩},
  title           = {人型対話メディアにおける抱擁から生まれる好意},
  journal         = {電子情報通信学会和文論文誌A},
  year            = {2016},
  volume          = {J99-A},
  number          = {1},
  pages           = {36-44},
  month           = Jan,
  abstract        = {本研究は人型対話メディアを用いた身体的相互作用が,使用者の対話相手に対する感情に与える影響を検証した.身体的相互作用として抱擁に着目し,人型対話メディアの抱擁が対話者が感じる対話相手への関心や好意を向上させるという仮説を立てた.身体的相互作用を促す仕様の対話メディアを提案し,従来の対話メディアと違い,親密な人間関係を築くサポートメディアとしての可能性を示した.},
  etitle          = {Evoking affection by hugging a human-like telecommunication medium},
  file            = {中西惇也2015.pdf:pdf/中西惇也2015.pdf:PDF},
}
中道大介, 西尾修一, "遠隔操作型コミュニケーションロボットにおける頷き動作の半自律化による操作主体感への影響", 人工知能学会論文誌, vol. 31, no. 2, 2016.
Abstract: Teleoperation enables us to act in remote locations through operated entities such as robots or virtual agents. This advantage allows us to work in places dangerous for humans or not designed for humans, such as volcano disaster sites or narrow maintenance pipes. However, teleoperation also has a weakness, namely, several gaps (operation interface, environment, appearance, and intentionality) between ourselves and the teleoperated entities at the remote site. As teleoperated robots have physical bodies different from ours, teleoperation requires special interfacing systems that are usually not very intuitive. Such a system requires a rather long period of training for one to become familiar with it. One possible solution for this issue is to implement a semi-autonomous teleoperation (SAT) facility which combines manual operation and autonomous action.
BibTeX:
@Article{中道大介2016,
  author   = {中道大介 and 西尾修一},
  title    = {遠隔操作型コミュニケーションロボットにおける頷き動作の半自律化による操作主体感への影響},
  journal  = {人工知能学会論文誌},
  year     = {2016},
  volume   = {31},
  number   = {2},
  abstract = {Teleoperation enables us to act in remote locations through operated entities such as robots or virtual agents. This advantage allows us to work in places dangerous for humans or not designed for humans, such as volcano disaster sites or narrow maintenance pipes. However, teleoperation also has a weakness, namely, several gaps (operation interface, environment, appearance, and intentionality) between ourselves and the teleoperated entities at the remote site. As teleoperated robots have physical bodies different from ours, teleoperation requires special interfacing systems that are usually not very intuitive. Such a system requires a rather long period of training for one to become familiar with it. One possible solution for this issue is to implement a semi-autonomous teleoperation (SAT) facility which combines manual operation and autonomous action.},
  url      = {https://www.jstage.jst.go.jp/article/tjsai/advpub/0/advpub_H-F81/_article/-char/ja/},
  doi      = {10.1527/tjsai.H-F81},
  etitle   = {Effect of Agency to Teleoperated Communication Robot by Semi-autonomous Nod},
  file     = {中道大介2016.pdf:pdf/中道大介2016.pdf:PDF},
}
Malene F. Damholdt, Marco Nørskov, Ryuji Yamazaki, Raul Hakli, Catharina V. Hansen, Christina Vestergaard, Johanna Seibt, "Attitudinal Change in Elderly Citizens Toward Social Robots: The Role of Personality Traits and Beliefs About Robot Functionality", Frontiers in Psychology, vol. 6, no. 1701, November, 2015.
Abstract: Attitudes toward robots influence the tendency to accept or reject robotic devices. Thus it is important to investigate whether and how attitudes toward robots can change. In this pilot study we investigate attitudinal changes in elderly citizens toward a tele-operated robot in relation to three parameters: (i) the information provided about robot functionality, (ii) the number of encounters, (iii) personality type. Fourteen elderly residents at a rehabilitation center participated. Pre-encounter attitudes toward robots, anthropomorphic thinking, and personality were assessed. Thereafter the participants interacted with a tele-operated robot (Telenoid) during their lunch (c. 30 min.) for up to 3 days. Half of the participants were informed that the robot was tele-operated (IC) whilst the other half were naïve to its functioning (UC). Post-encounter assessments of attitudes toward robots and anthropomorphic thinking were undertaken to assess change. Attitudes toward robots were assessed with a new generic 35-item questionnaire (attitudes toward social robots scale: ASOR-5), offering a differentiated conceptualization of the conditions for social interaction. There was no significant difference between the IC and UC groups in attitude change toward robots though trends were observed. Personality was correlated with some tendencies for attitude changes; Extraversion correlated with positive attitude changes to intimate-personal relatedness with the robot (r = 0.619) and to psychological relatedness (r = 0.581) whilst Neuroticism correlated negatively (r = -0.582) with mental relatedness with the robot. The results tentatively suggest that neither information about functionality nor direct repeated encounters are pivotal in changing attitudes toward robots in elderly citizens. This may reflect a cognitive congruence bias where the robot is experienced in congruence with initial attitudes, or it may support action-based explanations of cognitive dissonance reductions, given that robots, unlike computers, are not yet perceived as action targets. Specific personality traits may be indicators of attitude change relating to specific domains of social interaction. Implications and future directions are discussed.
BibTeX:
@Article{Damholdt2015,
  author   = {Malene F. Damholdt and Marco Nørskov and Ryuji Yamazaki and Raul Hakli and Catharina V. Hansen and Christina Vestergaard and Johanna Seibt},
  title    = {Attitudinal Change in Elderly Citizens Toward Social Robots: The Role of Personality Traits and Beliefs About Robot Functionality},
  journal  = {Frontiers in Psychology},
  year     = {2015},
  volume   = {6},
  number   = {1701},
  month    = Nov,
  abstract = {Attitudes toward robots influence the tendency to accept or reject robotic devices. Thus it is important to investigate whether and how attitudes toward robots can change. In this pilot study we investigate attitudinal changes in elderly citizens toward a tele-operated robot in relation to three parameters: (i) the information provided about robot functionality, (ii) the number of encounters, (iii) personality type. Fourteen elderly residents at a rehabilitation center participated. Pre-encounter attitudes toward robots, anthropomorphic thinking, and personality were assessed. Thereafter the participants interacted with a tele-operated robot (Telenoid) during their lunch (c. 30 min.) for up to 3 days. Half of the participants were informed that the robot was tele-operated (IC) whilst the other half were naïve to its functioning (UC). Post-encounter assessments of attitudes toward robots and anthropomorphic thinking were undertaken to assess change. Attitudes toward robots were assessed with a new generic 35-item questionnaire (attitudes toward social robots scale: ASOR-5), offering a differentiated conceptualization of the conditions for social interaction. There was no significant difference between the IC and UC groups in attitude change toward robots though trends were observed. Personality was correlated with some tendencies for attitude changes; Extraversion correlated with positive attitude changes to intimate-personal relatedness with the robot (r = 0.619) and to psychological relatedness (r = 0.581) whilst Neuroticism correlated negatively (r = -0.582) with mental relatedness with the robot. The results tentatively suggest that neither information about functionality nor direct repeated encounters are pivotal in changing attitudes toward robots in elderly citizens. This may reflect a cognitive congruence bias where the robot is experienced in congruence with initial attitudes, or it may support action-based explanations of cognitive dissonance reductions, given that robots, unlike computers, are not yet perceived as action targets. Specific personality traits may be indicators of attitude change relating to specific domains of social interaction. Implications and future directions are discussed.},
  url      = {http://journal.frontiersin.org/researchtopic/investigating-human-nature-and-communication-through-robots-3705},
  doi      = {10.3389/fpsyg.2015.01701},
  file     = {Damholdt2015.pdf:pdf/Damholdt2015.pdf:PDF},
}
Kaiko Kuwamura, Takashi Minato, Shuichi Nishio, Hiroshi Ishiguro, "Inconsistency of Personality Evaluation Caused by Appearance Gap in Robotic Telecommunication", Interaction Studies, vol. 16, no. 2, pp. 249-271, November, 2015.
Abstract: In this paper, we discuss the problem of the appearance of teleoperated robots that are used as telecommunication media. Teleoperated robots have a physical existence that increases the feeling of copresence, compared with recent communication media such as cellphones and video chat. However, their appearance is fixed, for example a stuffed bear or an image displayed on a monitor. Since people can determine their partner's personality merely from their appearance, a teleoperated robot's appearance which is different from the operator's might construct a personality that conflicts with the operator's original personality. We compared the appearances of three communication media (nonhuman-like appearance robot, human-like appearance robot, and video chat) and found that due to the appearance gap, the human-like appearance robot prevented confusion better than the nonhuman-like appearance robot or the video chat and also transmitted an atmosphere appropriate to the operator.
BibTeX:
@Article{Kuwamura2013a,
  author          = {Kaiko Kuwamura and Takashi Minato and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Inconsistency of Personality Evaluation Caused by Appearance Gap in Robotic Telecommunication},
  journal         = {Interaction Studies},
  year            = {2015},
  volume          = {16},
  number          = {2},
  pages           = {249-271},
  month           = Nov,
  abstract        = {In this paper, we discuss the problem of the appearance of teleoperated robots that are used as telecommunication media. Teleoperated robots have a physical existence that increases the feeling of copresence, compared with recent communication media such as cellphones and video chat. However, their appearance is fixed, for example a stuffed bear or an image displayed on a monitor. Since people can determine their partner's personality merely from their appearance, a teleoperated robot's appearance which is different from the operator's might construct a personality that conflicts with the operator's original personality. We compared the appearances of three communication media (nonhuman-like appearance robot, human-like appearance robot, and video chat) and found that due to the appearance gap, the human-like appearance robot prevented confusion better than the nonhuman-like appearance robot or the video chat and also transmitted an atmosphere appropriate to the operator.},
  file            = {Kuwamura2013a.pdf:pdf/Kuwamura2013a.pdf:PDF},
  keywords        = {teleoperated android; telecomunication; robot; appearance; personality},
}
Jakub Zlotowski, Hidenobu Sumioka, Shuichi Nishio, Dylan Glas, Christoph Bartneck, Hiroshi Ishiguro, "Persistence of the Uncanny Valley: the Influence of Repeated Interactions and a Robot's Attitude on Its Perception", Frontiers in Psychology, June, 2015.
Abstract: The uncanny valley theory proposed by Mori has been heavily investigated in recent years by researchers from various fields. However, the videos and images used in these studies did not permit any human interaction with the uncanny objects. Therefore, in the field of human-robot interaction it is still unclear whether and how an uncanny-looking robot will have an impact on an interaction. In this paper we describe an exploratory empirical study that involved repeated interactions with robots that differed in embodiment and their attitude towards a human. We found that both investigated components of uncanniness (likeability and eeriness) can be affected by an interaction with a robot. Likeability of a robot was mainly affected by its attitude, and this effect was especially prominent for a machine-like robot. On the other hand, mere repeated interaction was sufficient to reduce eeriness irrespective of a robot's embodiment. As a result, we urge other researchers to investigate Mori's theory in studies that involve actual human-robot interaction in order to fully understand the changing nature of this phenomenon.
BibTeX:
@Article{Zlotowski,
  author   = {Jakub Zlotowski and Hidenobu Sumioka and Shuichi Nishio and Dylan Glas and Christoph Bartneck and Hiroshi Ishiguro},
  title    = {Persistence of the Uncanny Valley: the Influence of Repeated Interactions and a Robot's Attitude on Its Perception},
  journal  = {Frontiers in Psychology},
  year     = {2015},
  month    = Jun,
  abstract = {The uncanny valley theory proposed by Mori has been heavily investigated in recent years by researchers from various fields. However, the videos and images used in these studies did not permit any human interaction with the uncanny objects. Therefore, in the field of human-robot interaction it is still unclear whether and how an uncanny-looking robot will have an impact on an interaction. In this paper we describe an exploratory empirical study that involved repeated interactions with robots that differed in embodiment and their attitude towards a human. We found that both investigated components of uncanniness (likeability and eeriness) can be affected by an interaction with a robot. Likeability of a robot was mainly affected by its attitude, and this effect was especially prominent for a machine-like robot. On the other hand, mere repeated interaction was sufficient to reduce eeriness irrespective of a robot's embodiment. As a result, we urge other researchers to investigate Mori's theory in studies that involve actual human-robot interaction in order to fully understand the changing nature of this phenomenon.},
  url      = {http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00883/abstract},
  doi      = {10.3389/fpsyg.2015.00883},
  file     = {Jakub2014a.pdf:pdf/Jakub2014a.pdf:PDF},
}
Martin Cooney, Shuichi Nishio, Hiroshi Ishiguro, "Importance of Touch for Conveying Affection in a Multimodal Interaction with a Small Humanoid Robot", International Journal of Humanoid Robotics, vol. 12, no. 1, pp. 1550002 (22 pages), 2015.
Abstract: To be accepted as a part of our everyday lives, companion robots will require the capability to recognize people's behavior and respond appropriately. In the current work, we investigated which characteristics of behavior could be used by a small humanoid robot to recognize when a human is seeking to convey affection. A main challenge in doing so was that human social norms are complex, comprising behavior which exhibits high spatiotemporal variance, consists of multiple channels and can express different meanings. To deal with this difficulty, we adopted a combined approach in which we analyzed free interactions and also asked participants to rate short video-clips depicting human-robot interaction. As a result, we are able to present a wide range of findings related to the current topic, including on the fundamental role (prevalence, affectionate impact, and motivations) of actions, channels, and modalities; effects of posture and a robot's behavior; expected reactions; and contributions of modalities in complementary and conflicting configurations. This article extends the existing literature by identifying some useful multimodal affectionate cues which can be leveraged by a robot during interactions; we aim to use the acquired knowledge in a small humanoid robot to provide affection during play toward improving quality of life for lonely persons.
BibTeX:
@Article{Cooney2013b,
  author          = {Martin Cooney and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Importance of Touch for Conveying Affection in a Multimodal Interaction with a Small Humanoid Robot},
  journal         = {International Journal of Humanoid Robotics},
  year            = {2015},
  volume          = {12},
  number          = {1},
  pages           = {1550002 (22 pages)},
  abstract        = {To be accepted as a part of our everyday lives, companion robots will require the capability to recognize people's behavior and respond appropriately. In the current work, we investigated which characteristics of behavior could be used by a small humanoid robot to recognize when a human is seeking to convey affection. A main challenge in doing so was that human social norms are complex, comprising behavior which exhibits high spatiotemporal variance, consists of multiple channels and can express different meanings. To deal with this difficulty, we adopted a combined approach in which we analyzed free interactions and also asked participants to rate short video-clips depicting human-robot interaction. As a result, we are able to present a wide range of findings related to the current topic, including on the fundamental role (prevalence, affectionate impact, and motivations) of actions, channels, and modalities; effects of posture and a robot's behavior; expected reactions; and contributions of modalities in complementary and conflicting configurations. This article extends the existing literature by identifying some useful multimodal affectionate cues which can be leveraged by a robot during interactions; we aim to use the acquired knowledge in a small humanoid robot to provide affection during play toward improving quality of life for lonely persons.},
  doi             = {10.1142/S0219843615500024},
  file            = {Cooney2014a.pdf:pdf/Cooney2014a.pdf:PDF},
  keywords        = {Affection; multi-modal; play; small humanoid robot, human-robot interaction},
}
Martin Cooney, Shuichi Nishio, Hiroshi Ishiguro, "Affectionate Interaction with a Small Humanoid Robot Capable of Recognizing Social Touch Behavior", ACM Transactions on Interactive Intelligent Systems, vol. 4, no. 4, pp. 32, December, 2014.
Abstract: Activity recognition, involving a capability to automatically recognize people's behavior and its underlying significance, will play a crucial role in facilitating the integration of interactive robotic artifacts into everyday human environments. In particular, social intelligence in recognizing affectionate behavior will offer value by allowing companion robots to bond meaningfully with persons involved. The current article addresses the issue of designing an affectionate haptic interaction between a person and a companion robot by a) furthering understanding of how people's attempts to communicate affection to a robot through touch can be recognized, and b) exploring how a small humanoid robot can behave in conjunction with such touches to elicit affection. We report on an experiment conducted to gain insight into how people perceive three fundamental interactive strategies in which a robot is either always highly affectionate, appropriately affectionate, or superficially unaffectionate (emphasizing positivity, contingency, and challenge respectively). Results provide insight into the structure of affectionate interaction between humans and humanoid robots—underlining the importance of an interaction design expressing sincerity, liking, stability and variation—and suggest the usefulness of novel modalities such as warmth and cold.
BibTeX:
@Article{Cooney2014c,
  author          = {Martin Cooney and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Affectionate Interaction with a Small Humanoid Robot Capable of Recognizing Social Touch Behavior},
  journal         = {{ACM} Transactions on Interactive Intelligent Systems},
  year            = {2014},
  volume          = {4},
  number          = {4},
  pages           = {32},
  month           = Dec,
  abstract        = {Activity recognition, involving a capability to automatically recognize people's behavior and its underlying significance, will play a crucial role in facilitating the integration of interactive robotic artifacts into everyday human environments. In particular, social intelligence in recognizing affectionate behavior will offer value by allowing companion robots to bond meaningfully with persons involved. The current article addresses the issue of designing an affectionate haptic interaction between a person and a companion robot by a) furthering understanding of how people's attempts to communicate affection to a robot through touch can be recognized, and b) exploring how a small humanoid robot can behave in conjunction with such touches to elicit affection. We report on an experiment conducted to gain insight into how people perceive three fundamental interactive strategies in which a robot is either always highly affectionate, appropriately affectionate, or superficially unaffectionate (emphasizing positivity, contingency, and challenge respectively). Results provide insight into the structure of affectionate interaction between humans and humanoid robots—underlining the importance of an interaction design expressing sincerity, liking, stability and variation—and suggest the usefulness of novel modalities such as warmth and cold.},
  url             = {http://dl.acm.org/citation.cfm?doid=2688469.2685395},
  doi             = {10.1145/2685395},
  file            = {Cooney2014b.pdf:pdf/Cooney2014b.pdf:PDF},
  keywords        = {human-robot interaction; activity recognition; small humanoid companion robot; affectionate touch behavior; intelligent systems},
}
Rosario Sorbello, Antonio Chella, Carmelo Cali, Marcello Giardina, Shuichi Nishio, Hiroshi Ishiguro, "Telenoid Android Robot as an Embodied Perceptual Social Regulation Medium Engaging Natural Human-Humanoid Interaction", Robotics and Autonomous Systems, vol. 62, no. 9, pp. 1329-1341, September, 2014.
Abstract: The present paper aims to validate our research on Human-Humanoid Interaction (HHI) using the minimalist humanoid robot Telenoid. We conducted the human-robot interaction test with 142 young people who had no prior interaction experience with this robot. The main goal is the analysis of the two social dimensions ("Perception" and "Believability") useful for increasing the natural behaviour between users and Telenoid. We administered our custom questionnaire to human subjects in association with a well-defined experimental setting ("ordinary and goal-guided task"). A thorough analysis of the questionnaires has been carried out, and reliability and internal consistency in correlation between the multiple items have been calculated. Our experimental results show that the perceptual behavior and believability, as implicit social competences, could improve the meaningfulness and the natural-like sense of human-humanoid interaction in everyday-life task-driven activities. Telenoid is perceived as an autonomous cooperative agent for a shared environment by human beings.
BibTeX:
@Article{Sorbello2013a,
  author   = {Rosario Sorbello and Antonio Chella and Carmelo Cali and Marcello Giardina and Shuichi Nishio and Hiroshi Ishiguro},
  title    = {Telenoid Android Robot as an Embodied Perceptual Social Regulation Medium Engaging Natural Human-Humanoid Interaction},
  journal  = {Robotics and Autonomous Systems},
  year     = {2014},
  volume   = {62},
  number   = {9},
  pages    = {1329-1341},
  month    = Sep,
  abstract = {The present paper aims to validate our research on Human-Humanoid Interaction (HHI) using the minimalist humanoid robot Telenoid. We conducted the human-robot interaction test with 142 young people who had no prior interaction experience with this robot. The main goal is the analysis of the two social dimensions ("Perception" and "Believability") useful for increasing the natural behaviour between users and Telenoid. We administered our custom questionnaire to human subjects in association with a well-defined experimental setting ("ordinary and goal-guided task"). A thorough analysis of the questionnaires has been carried out, and reliability and internal consistency in correlation between the multiple items have been calculated. Our experimental results show that the perceptual behavior and believability, as implicit social competences, could improve the meaningfulness and the natural-like sense of human-humanoid interaction in everyday-life task-driven activities. Telenoid is perceived as an autonomous cooperative agent for a shared environment by human beings.},
  url      = {http://www.sciencedirect.com/science/article/pii/S092188901400061X},
  doi      = {10.1016/j.robot.2014.03.017},
  file     = {Sorbello2013a.pdf:pdf/Sorbello2013a.pdf:PDF},
  keywords = {Telenoid; Geminoid; Social Robot; Human-Humanoid Robot Interaction},
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "Effect of biased feedback on motor imagery learning in BCI-teleoperation system", Frontiers in Systems Neuroscience, vol. 8, no. 52, April, 2014.
Abstract: Feedback design is an important issue in motor imagery BCI systems. Regardless, to date it has not been reported how feedback presentation can optimize co-adaptation between a human brain and such systems. This paper assesses the effect of realistic visual feedback on users' BCI performance and motor imagery skills. We previously developed a tele-operation system for a pair of humanlike robotic hands and showed that BCI control of such hands along with first-person perspective visual feedback of movements can arouse a sense of embodiment in the operators. In the first stage of this study, we found that the intensity of this ownership illusion was associated with feedback presentation and subjects' performance during BCI motion control. In the second stage, we probed the effect of positive and negative feedback bias on subjects' BCI performance and motor imagery skills. Although the subject specific classifier, which was set up at the beginning of experiment, detected no significant change in the subjects' online performance, evaluation of brain activity patterns revealed that subjects' self-regulation of motor imagery features improved due to a positive bias of feedback and a possible occurrence of ownership illusion. Our findings suggest that in general training protocols for BCIs, manipulation of feedback can play an important role in the optimization of subjects' motor imagery skills.
BibTeX:
@Article{Alimardani2014a,
  author          = {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Effect of biased feedback on motor imagery learning in BCI-teleoperation system},
  journal         = {Frontiers in Systems Neuroscience},
  year            = {2014},
  volume          = {8},
  number          = {52},
  month           = Apr,
  abstract        = {Feedback design is an important issue in motor imagery BCI systems. Regardless, to date it has not been reported how feedback presentation can optimize co-adaptation between a human brain and such systems. This paper assesses the effect of realistic visual feedback on users' BCI performance and motor imagery skills. We previously developed a tele-operation system for a pair of humanlike robotic hands and showed that BCI control of such hands along with first-person perspective visual feedback of movements can arouse a sense of embodiment in the operators. In the first stage of this study, we found that the intensity of this ownership illusion was associated with feedback presentation and subjects' performance during BCI motion control. In the second stage, we probed the effect of positive and negative feedback bias on subjects' BCI performance and motor imagery skills. Although the subject specific classifier, which was set up at the beginning of experiment, detected no significant change in the subjects' online performance, evaluation of brain activity patterns revealed that subjects' self-regulation of motor imagery features improved due to a positive bias of feedback and a possible occurrence of ownership illusion. Our findings suggest that in general training protocols for BCIs, manipulation of feedback can play an important role in the optimization of subjects' motor imagery skills.},
  url             = {http://journal.frontiersin.org/Journal/10.3389/fnsys.2014.00052/full},
  doi             = {10.3389/fnsys.2014.00052},
  file            = {Alimardani2014a.pdf:pdf/Alimardani2014a.pdf:PDF},
  keywords        = {body ownership illusion; BCI‐teleoperation; motor imagery learning; feedback effect; training},
}
Kaiko Kuwamura, Kurima Sakai, Takashi Minato, Shuichi Nishio, Hiroshi Ishiguro, "Hugvie: communication device for encouraging good relationship through the act of hugging", Lovotics, vol. 1, no. 1, pp. 10000104, February, 2014.
Abstract: In this paper, we introduce a communication device which encourages users to establish a good relationship with others. We designed the device so that it allows users to virtually hug the person at the remote site through the medium. In this paper, we report that when a participant talks to his communication partner during their first encounter while hugging the communication medium, he mistakenly feels as if they are establishing a good relationship and that he is being loved rather than just being liked. From this result, we discuss Active Co-Presence, a new method to enhance the co-presence of remote people through active behavior.
BibTeX:
@Article{Kuwamura2014a,
  author          = {Kaiko Kuwamura and Kurima Sakai and Takashi Minato and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Hugvie: communication device for encouraging good relationship through the act of hugging},
  journal         = {Lovotics},
  year            = {2014},
  volume          = {1},
  number          = {1},
  pages           = {10000104},
  month           = Feb,
  abstract        = {In this paper, we introduce a communication device which encourages users to establish a good relationship with others. We designed the device so that it allows users to virtually hug the person at the remote site through the medium. In this paper, we report that when a participant talks to his communication partner during their first encounter while hugging the communication medium, he mistakenly feels as if they are establishing a good relationship and that he is being loved rather than just being liked. From this result, we discuss Active Co-Presence, a new method to enhance the co-presence of remote people through active behavior.},
  url             = {http://www.omicsonline.com/open-access/hugvie_communication_device_for_encouraging_good_relationship_through_the_act_of_hugging.pdf?aid=24445},
  doi             = {10.4172/2090-9888.10000104},
  file            = {Kuwamura2014a.pdf:pdf/Kuwamura2014a.pdf:PDF},
  keywords        = {hug; co-presence; telecommunication},
}
Astrid M. von der Pütten, Nicole C. Krämer, Christian Becker-Asano, Kohei Ogawa, Shuichi Nishio, Hiroshi Ishiguro, "The Uncanny in the Wild. Analysis of Unscripted Human-Android Interaction in the Field.", International Journal of Social Robotics, vol. 6, no. 1, pp. 67-83, January, 2014.
Abstract: Against the background of the uncanny valley hypothesis we investigated how people react towards an android robot in a natural environment dependent on the behavior displayed by the robot (still vs. moving) in a quasi-experimental observational field study. We present data on unscripted interactions between humans and the android robot "Geminoid HI-1" in an Austrian public café and subsequent interviews. Data were analyzed with regard to the participants' nonverbal behavior (e.g. attention paid to the robot, proximity). We found that participants' behavior towards the android robot as well as their interview answers were influenced by the behavior the robot displayed. In addition, we found huge inter-individual differences in the participants' behavior. Implications for the uncanny valley and research on social human–robot interactions are discussed.
BibTeX:
@Article{Putten2011b,
  author          = {Astrid M. von der P\"{u}tten and Nicole C. Kr\"{a}mer and Christian Becker-Asano and Kohei Ogawa and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {The Uncanny in the Wild. Analysis of Unscripted Human-Android Interaction in the Field.},
  journal         = {International Journal of Social Robotics},
  year            = {2014},
  volume          = {6},
  number          = {1},
  pages           = {67-83},
  month           = Jan,
  abstract        = {Against the background of the uncanny valley hypothesis we investigated how people react towards an android robot in a natural environment dependent on the behavior displayed by the robot (still vs. moving) in a quasi-experimental observational field study. We present data on unscripted interactions between humans and the android robot "Geminoid HI-1" in an Austrian public café and subsequent interviews. Data were analyzed with regard to the participants' nonverbal behavior (e.g. attention paid to the robot, proximity). We found that participants' behavior towards the android robot as well as their interview answers were influenced by the behavior the robot displayed. In addition, we found huge inter-individual differences in the participants' behavior. Implications for the uncanny valley and research on social human–robot interactions are discussed.},
  url             = {http://link.springer.com/article/10.1007/s12369-013-0198-7},
  doi             = {10.1007/s12369-013-0198-7},
  file            = {Putten2011b.pdf:pdf/Putten2011b.pdf:PDF},
  keywords        = {human-robot interaction; field study; observation; multimodal evaluation of human interaction with robots; Uncanny Valley},
}
Ryuji Yamazaki, Shuichi Nishio, Hiroshi Ishiguro, Marco Nørskov, Nobu Ishiguro, Giuseppe Balistreri, "Acceptability of a Teleoperated Android by Senior Citizens in Danish Society: A Case Study on the Application of an Embodied Communication Medium to Home Care", International Journal of Social Robotics, vol. 6, no. 3, pp. 429-442, 2014.
Abstract: We explore the potential of teleoperated androids, which are embodied telecommunication media with humanlike appearances. By conducting field experiments, we investigated how Telenoid, a teleoperated android designed as a minimalistic human, affects people in the real world when it is employed to express telepresence and a sense of 'being there'. Our exploratory study focused on the social aspects of the android robot, which might facilitate communication between the elderly and Telenoid's operator. This new way of creating social relationships can be used to solve a problem in society, the social isolation of senior citizens. It is becoming a major issue even in Denmark, which is known as one of the countries with advanced welfare systems. After asking elderly people to use Telenoid at their homes, we found that the elderly with or without dementia showed positive attitudes toward Telenoid and imaginatively developed various dialogue strategies. Their positivity and strong attachment to its minimalistic human design were cross-culturally shared in Denmark and Japan. Contrary to the negative reactions by non-users in media reports, our result suggests that teleoperated androids can be accepted by the elderly as a kind of universal design medium for social inclusion.
BibTeX:
@Article{Yamazaki2013a,
  author          = {Ryuji Yamazaki and Shuichi Nishio and Hiroshi Ishiguro and Marco N{\o}rskov and Nobu Ishiguro and Giuseppe Balistreri},
  title           = {Acceptability of a Teleoperated Android by Senior Citizens in Danish Society: A Case Study on the Application of an Embodied Communication Medium to Home Care},
  journal         = {International Journal of Social Robotics},
  year            = {2014},
  volume          = {6},
  number          = {3},
  pages           = {429-442},
  abstract        = {We explore the potential of teleoperated androids, which are embodied telecommunication media with humanlike appearances. By conducting field experiments, we investigated how Telenoid, a teleoperated android designed as a minimalistic human, affects people in the real world when it is employed to express telepresence and a sense of 'being there'. Our exploratory study focused on the social aspects of the android robot, which might facilitate communication between the elderly and Telenoid's operator. This new way of creating social relationships can be used to solve a problem in society, the social isolation of senior citizens. It is becoming a major issue even in Denmark, which is known as one of the countries with advanced welfare systems. After asking elderly people to use Telenoid at their homes, we found that the elderly with or without dementia showed positive attitudes toward Telenoid and imaginatively developed various dialogue strategies. Their positivity and strong attachment to its minimalistic human design were cross-culturally shared in Denmark and Japan. Contrary to the negative reactions by non-users in media reports, our result suggests that teleoperated androids can be accepted by the elderly as a kind of universal design medium for social inclusion.},
  doi             = {10.1007/s12369-014-0247-x},
  file            = {Yamazaki2013a.pdf:pdf/Yamazaki2013a.pdf:PDF},
  keywords        = {teleoperated android; minimal design; embodied communication; social isolation; elderly care; social acceptance},
}
Hidenobu Sumioka, Shuichi Nishio, Takashi Minato, Ryuji Yamazaki, Hiroshi Ishiguro, "Minimal human design approach for sonzai-kan media: investigation of a feeling of human presence", Cognitive Computation, vol. 6, no. 4, pp. 760-774, 2014.
Abstract: Even though human-like robotic media give the feeling of being with others and positively affect our physical and mental health, scant research has addressed how much information about a person should be reproduced to enhance the feeling of a human presence. We call this feeling sonzai-kan, which is a Japanese phrase that means the feeling of a presence. We propose a minimal design approach for exploring the requirements to enhance this feeling and hypothesize that it is enhanced if information is presented from at least two different modalities. In this approach, the exploration is conducted by designing sonzai-kan media through exploratory research with the media, their evaluations, and the development of their systems. In this paper, we give an overview of our current work with Telenoid, a teleoperated android designed with our approach, to illustrate how we explore the requirements and how such media impact our quality of life. We discuss the potential advantages of our approach for forging positive social relationships and designing an autonomous agent with minimal cognitive architecture.
BibTeX:
@Article{Sumioka2013e,
  author          = {Hidenobu Sumioka and Shuichi Nishio and Takashi Minato and Ryuji Yamazaki and Hiroshi Ishiguro},
  title           = {Minimal human design approach for sonzai-kan media: investigation of a feeling of human presence},
  journal         = {Cognitive Computation},
  year            = {2014},
  volume          = {6},
  number          = {4},
  pages           = {760-774},
  abstract        = {Even though human-like robotic media give the feeling of being with others and positively affect our physical and mental health, scant research has addressed how much information about a person should be reproduced to enhance the feeling of a human presence. We call this feeling sonzai-kan, which is a Japanese phrase that means the feeling of a presence. We propose a minimal design approach for exploring the requirements to enhance this feeling and hypothesize that it is enhanced if information is presented from at least two different modalities. In this approach, the exploration is conducted by designing sonzai-kan media through exploratory research with the media, their evaluations, and the development of their systems. In this paper, we give an overview of our current work with Telenoid, a teleoperated android designed with our approach, to illustrate how we explore the requirements and how such media impact our quality of life. We discuss the potential advantages of our approach for forging positive social relationships and designing an autonomous agent with minimal cognitive architecture.},
  url             = {http://link.springer.com/article/10.1007%2Fs12559-014-9270-3},
  doi             = {10.1007/s12559-014-9270-3},
  file            = {Sumioka2014.pdf:pdf/Sumioka2014.pdf:PDF},
  keywords        = {Human–robot Interaction; Minimal design; Elderly care; Android science},
}
Kurima Sakai, Hidenobu Sumioka, Takashi Minato, Shuichi Nishio, Hiroshi Ishiguro, "Motion Design of Interactive Small Humanoid Robot with Visual Illusion", International Journal of Innovative Computing, Information and Control, vol. 9, no. 12, pp. 4725-4736, December, 2013.
Abstract: This paper presents a novel method to express motions of a small human-like robotic avatar that can be a portable communication medium: a user can talk with another person while feeling the other's presence at anytime, anywhere. The human-like robotic avatar is expected to express human-like movements; however, there are technical and cost problems in implementing actuators in the small body. The method is to induce illusory motion of the robot's extremities with blinking lights. This idea needs only Light Emitting Diodes (LEDs) and avoids the above problems. This paper presents the design of an LED blinking pattern to induce an illusory nodding motion of Elfoid, which is a hand-held tele-operated humanoid robot. A psychological experiment shows that the illusory nodding motion gives a better impression to people than a symbolic blinking pattern. This result suggests that even the illusory motion of a robotic avatar can improve tele-communications.
BibTeX:
@Article{Sakai2013,
  author          = {Kurima Sakai and Hidenobu Sumioka and Takashi Minato and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Motion Design of Interactive Small Humanoid Robot with Visual Illusion},
  journal         = {International Journal of Innovative Computing, Information and Control},
  year            = {2013},
  volume          = {9},
  number          = {12},
  pages           = {4725-4736},
  month           = Dec,
  abstract        = {This paper presents a novel method to express motions of a small human-like robotic avatar that can be a portable communication medium: a user can talk with another person while feeling the other's presence at anytime, anywhere. The human-like robotic avatar is expected to express human-like movements; however, there are technical and cost problems in implementing actuators in the small body. The method is to induce illusory motion of the robot's extremities with blinking lights. This idea needs only Light Emitting Diodes (LEDs) and avoids the above problems. This paper presents the design of an LED blinking pattern to induce an illusory nodding motion of Elfoid, which is a hand-held tele-operated humanoid robot. A psychological experiment shows that the illusory nodding motion gives a better impression to people than a symbolic blinking pattern. This result suggests that even the illusory motion of a robotic avatar can improve tele-communications.},
  url             = {http://www.ijicic.org/apchi12-275.pdf},
  file            = {Sakai2013.pdf:pdf/Sakai2013.pdf:PDF},
  keywords        = {Tele-communication; Nonverbal communication; Portable robot avatar; Visual illusion of motion},
}
Martin Cooney, Shuichi Nishio, Hiroshi Ishiguro, "Designing Robots for Well-being: Theoretical Background and Visual Scenes of Affectionate Play with a Small Humanoid Robot", Lovotics, November, 2013.
Abstract: Social well-being, referring to a subjectively perceived long-term state of happiness, life satisfaction, health, and other prosperity afforded by social interactions, is increasingly being employed to rate the success of human social systems. Although short-term changes in well-being can be difficult to measure directly, two important determinants can be assessed: perceived enjoyment and affection from relationships. The current article chronicles our work over several years toward achieving enjoyable and affectionate interactions with robots, with the aim of contributing to perception of social well-being in interacting persons. Emphasis has been placed on both describing in detail the theoretical basis underlying our work, and relating the story of each of several designs from idea to evaluation in a visual fashion. For the latter, we trace the course of designing four different robotic artifacts intended to further our understanding of how to provide enjoyment, elicit affection, and realize one specific scenario for affectionate play. As a result, by describing (a) how perceived enjoyment and affection contribute to social well-being, and (b) how a small humanoid robot can proactively engage in enjoyable and affectionate play—recognizing people's behavior and leveraging this knowledge—the current article informs the design of companion robots intended to facilitate a perception of social well-being in interacting persons during affectionate play.
BibTeX:
@Article{Cooney2013d,
  author          = {Martin Cooney and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Designing Robots for Well-being: Theoretical Background and Visual Scenes of Affectionate Play with a Small Humanoid Robot},
  journal         = {Lovotics},
  year            = {2013},
  month           = Nov,
  abstract        = {Social well-being, referring to a subjectively perceived long-term state of happiness, life satisfaction, health, and other prosperity afforded by social interactions, is increasingly being employed to rate the success of human social systems. Although short-term changes in well-being can be difficult to measure directly, two important determinants can be assessed: perceived enjoyment and affection from relationships. The current article chronicles our work over several years toward achieving enjoyable and affectionate interactions with robots, with the aim of contributing to perception of social well-being in interacting persons. Emphasis has been placed on both describing in detail the theoretical basis underlying our work, and relating the story of each of several designs from idea to evaluation in a visual fashion. For the latter, we trace the course of designing four different robotic artifacts intended to further our understanding of how to provide enjoyment, elicit affection, and realize one specific scenario for affectionate play. As a result, by describing (a) how perceived enjoyment and affection contribute to social well-being, and (b) how a small humanoid robot can proactively engage in enjoyable and affectionate play—recognizing people's behavior and leveraging this knowledge—the current article informs the design of companion robots intended to facilitate a perception of social well-being in interacting persons during affectionate play.},
  url             = {http://www.omicsonline.com/open-access/designing_robots_for_well_being_theoretical_background_and_visual.pdf?aid=24444},
  doi             = {10.4172/2090-9888.1000101},
  file            = {Cooney2013d.pdf:pdf/Cooney2013d.pdf:PDF},
  keywords        = {Human-robot interaction; well-being; enjoyment; affection; recognizing typical behavior; small humanoid robot},
}
Hidenobu Sumioka, Aya Nakae, Ryota Kanai, Hiroshi Ishiguro, "Huggable communication medium decreases cortisol levels", Scientific Reports, vol. 3, no. 3034, October, 2013.
Abstract: Interpersonal touch is a fundamental component of social interactions because it can mitigate physical and psychological distress. To reproduce the psychological and physiological effects associated with interpersonal touch, interest is growing in introducing tactile sensations to communication devices. However, it remains unknown whether physical contact with such devices can produce objectively measurable endocrine effects like real interpersonal touching can. We directly tested this possibility by examining changes in stress hormone cortisol before and after a conversation with a huggable communication device. Participants had 15-minute conversations with a remote partner that was carried out either with a huggable human-shaped device or with a mobile phone. Our experiment revealed significant reduction in the cortisol levels for those who had conversations with the huggable device. Our approach to evaluate communication media with biological markers suggests new design directions for interpersonal communication media to improve social support systems in modern highly networked societies.
BibTeX:
@Article{Sumioka2013d,
  author          = {Hidenobu Sumioka and Aya Nakae and Ryota Kanai and Hiroshi Ishiguro},
  title           = {Huggable communication medium decreases cortisol levels},
  journal         = {Scientific Reports},
  year            = {2013},
  volume          = {3},
  number          = {3034},
  month           = Oct,
  abstract        = {Interpersonal touch is a fundamental component of social interactions because it can mitigate physical and psychological distress. To reproduce the psychological and physiological effects associated with interpersonal touch, interest is growing in introducing tactile sensations to communication devices. However, it remains unknown whether physical contact with such devices can produce objectively measurable endocrine effects like real interpersonal touching can. We directly tested this possibility by examining changes in stress hormone cortisol before and after a conversation with a huggable communication device. Participants had 15-minute conversations with a remote partner that was carried out either with a huggable human-shaped device or with a mobile phone. Our experiment revealed significant reduction in the cortisol levels for those who had conversations with the huggable device. Our approach to evaluate communication media with biological markers suggests new design directions for interpersonal communication media to improve social support systems in modern highly networked societies.},
  url             = {http://www.nature.com/srep/2013/131023/srep03034/full/srep03034.html},
  doi             = {10.1038/srep03034},
  file            = {Sumioka2013d.pdf:pdf/Sumioka2013d.pdf:PDF},
}
Martin Cooney, Takayuki Kanda, Aris Alissandrakis, Hiroshi Ishiguro, "Designing Enjoyable Motion-Based Play Interactions with a Small Humanoid Robot", International Journal of Social Robotics, vol. 6, pp. 173-193, September, 2013.
Abstract: Robots designed to co-exist with humans in domestic and public environments should be capable of interacting with people in an enjoyable fashion in order to be socially accepted. In this research, we seek to set up a small humanoid robot with the capability to provide enjoyment to people who pick up the robot and play with it by hugging, shaking and moving the robot in various ways. Inertial sensors inside a robot can capture how the robot's body is moved when people perform such full-body gestures. Unclear is how a robot can recognize what people do during play, and how such knowledge can be used to provide enjoyment. People's behavior is complex, and naïve designs for a robot's behavior based only on intuitive knowledge from previous designs may lead to failed interactions. To solve these problems, we model people's behavior using typical full-body gestures observed in free interaction trials, and devise an interaction design based on avoiding typical failures observed in play sessions with a naïve version of our robot. The interaction design is completed by investigating how a robot can provide reward and itself suggest ways to play during an interaction. We then verify experimentally that our design can be used to provide enjoyment during a playful interaction. By describing the process of how a small humanoid robot can be designed to provide enjoyment, we seek to move one step closer to realizing companion robots which can be successfully integrated into human society.
BibTeX:
@Article{Cooney2013,
  author          = {Martin Cooney and Takayuki Kanda and Aris Alissandrakis and Hiroshi Ishiguro},
  title           = {Designing Enjoyable Motion-Based Play Interactions with a Small Humanoid Robot},
  journal         = {International Journal of Social Robotics},
  year            = {2013},
  volume          = {6},
  pages           = {173-193},
  month           = Sep,
  abstract        = {Robots designed to co-exist with humans in domestic and public environments should be capable of interacting with people in an enjoyable fashion in order to be socially accepted. In this research, we seek to set up a small humanoid robot with the capability to provide enjoyment to people who pick up the robot and play with it by hugging, shaking and moving the robot in various ways. Inertial sensors inside a robot can capture how the robot's body is moved when people perform such full-body gestures. Unclear is how a robot can recognize what people do during play, and how such knowledge can be used to provide enjoyment. People's behavior is complex, and na\"{i}ve designs for a robot's behavior based only on intuitive knowledge from previous designs may lead to failed interactions. To solve these problems, we model people's behavior using typical full-body gestures observed in free interaction trials, and devise an interaction design based on avoiding typical failures observed in play sessions with a na\"{i}ve version of our robot. The interaction design is completed by investigating how a robot can provide reward and itself suggest ways to play during an interaction. We then verify experimentally that our design can be used to provide enjoyment during a playful interaction. By describing the process of how a small humanoid robot can be designed to provide enjoyment, we seek to move one step closer to realizing companion robots which can be successfully integrated into human society.},
  url             = {http://link.springer.com/article/10.1007%2Fs12369-013-0212-0},
  doi             = {10.1007/s12369-013-0212-0},
  file            = {Cooney2013.pdf:pdf/Cooney2013.pdf:PDF},
  keywords        = {Interaction design for enjoyment; Playful human-robot interaction; Full-body gesture recognition; Inertial sensing; Small humanoid robot},
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "Humanlike robot hands controlled by brain activity arouse illusion of ownership in operators", Scientific Reports, vol. 3, no. 2396, August, 2013.
Abstract: Operators of a pair of robotic hands report ownership of those hands when they hold an image of a grasp motion and watch the robot perform it. We present a novel body ownership illusion that is induced by merely watching and controlling a robot's motions through a brain-machine interface. In past studies, body ownership illusions were induced by the correlation of such sensory inputs as vision, touch and proprioception. However, in the presented illusion none of the mentioned sensations are integrated except vision. Our results show that during BMI operation of robotic hands, the interaction between motor commands and visual feedback of the intended motions is adequate to incorporate the non-body limbs into one's own body. Our discussion focuses on the role of proprioceptive information in the mechanism of agency-driven illusions. We believe that our findings will contribute to the improvement of tele-presence systems in which operators incorporate BMI-operated robots into their body representations.
BibTeX:
@Article{Alimardani2013,
  author          = {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Humanlike robot hands controlled by brain activity arouse illusion of ownership in operators},
  journal         = {Scientific Reports},
  year            = {2013},
  volume          = {3},
  number          = {2396},
  month           = Aug,
  abstract        = {Operators of a pair of robotic hands report ownership for those hands when they hold image of a grasp motion and watch the robot perform it. We present a novel body ownership illusion that is induced by merely watching and controlling robot's motions through a brain machine interface. In past studies, body ownership illusions were induced by correlation of such sensory inputs as vision, touch and proprioception. However, in the presented illusion none of the mentioned sensations are integrated except vision. Our results show that during BMI-operation of robotic hands, the interaction between motor commands and visual feedback of the intended motions is adequate to incorporate the non-body limbs into one's own body. Our discussion focuses on the role of proprioceptive information in the mechanism of agency-driven illusions. We believe that our findings will contribute to improvement of tele-presence systems in which operators incorporate BMI-operated robots into their body representations.},
  day             = {9},
  url             = {http://www.nature.com/srep/2013/130809/srep02396/full/srep02396.html},
  doi             = {10.1038/srep02396},
  file            = {alimardani2013a.pdf:pdf/alimardani2013a.pdf:PDF},
}
Shuichi Nishio, Koichi Taura, Hidenobu Sumioka, Hiroshi Ishiguro, "Teleoperated Android Robot as Emotion Regulation Media", International Journal of Social Robotics, vol. 5, no. 4, pp. 563-573, July, 2013.
Abstract: In this paper, we experimentally examined whether changes in the facial expressions of teleoperated androids could affect and regulate operators' emotions, based on the facial feedback theory of emotion and the phenomenon of body ownership transfer to the robot. Twenty-six Japanese participants had conversations with an experimenter in a situation designed to make them feel anger, and during the conversation the android's facial expression changed according to a pre-programmed scheme. The results showed that facial feedback from the android did occur. Moreover, by comparing two groups of participants, one operating the robot and one not operating it, we found that this facial feedback from the android robot occurred only when participants operated the robot, and that when an operator could effectively operate the robot, his or her emotional state was strongly affected by the change in the robot's facial expression.
BibTeX:
@Article{Nishio2013a,
  author          = {Shuichi Nishio and Koichi Taura and Hidenobu Sumioka and Hiroshi Ishiguro},
  title           = {Teleoperated Android Robot as Emotion Regulation Media},
  journal         = {International Journal of Social Robotics},
  year            = {2013},
  volume          = {5},
  number          = {4},
  pages           = {563-573},
  month           = Jul,
  abstract        = {In this paper, we experimentally examined whether changes in the facial expressions of teleoperated androids could affect and regulate operators' emotion, based on the facial feedback theory of emotion and the phenomenon of body ownership transfer to the robot. Twenty-six Japanese participants had conversations with an experimenter based on a situation where participants feel anger and, during the conversation, the android's facial expression changed according to a pre-programmed scheme. The results showed that the facial feedback from the android did occur. Moreover, by comparing the two groups of participants, one with operating the robot and another without operating it, we found that this facial feedback from the android robot occur only when participants operated the robot and, when an operator could effectively operate the robot, his/her emotional states were much affected by facial expression change of the robot.},
  url             = {http://link.springer.com/article/10.1007%2Fs12369-013-0201-3},
  doi             = {10.1007/s12369-013-0201-3},
  file            = {Nishio2013a.pdf:pdf/Nishio2013a.pdf:PDF},
  keywords        = {Teleoperated android robot; Emotion regulation; Facial feedback hypothesis; Body ownership transfer},
}
石井カルロス寿憲, 劉超然, 石黒浩, 萩田紀博, "遠隔存在感ロボットのためのフォルマントによる口唇動作生成手法", 日本ロボット学会誌, vol. 31, no. 4, pp. 83-90, May, 2013.
Abstract: Generating natural motion in robots is important for improving human-robot interaction. We developed a tele-operation system where the lip motion of a remote humanoid robot is automatically controlled from the operator's voice. In the present work, we introduce an improved version of our proposed speech-driven lip motion generation method, where lip height and width degrees are estimated based on vowel formant information. The method requires the calibration of only one parameter for speaker normalization. Lip height control is evaluated in two types of humanoid robots (Geminoid-F and Telenoid-R2). Subjective evaluation indicated that the proposed audio-based method is able to generate lip motion with naturalness superior to vision-based and motion capture-based approaches. Partial lip width control was shown to improve lip motion naturalness in Geminoid-F, which also has an actuator for stretching the lip corners. Issues regarding synchronization of audio and motion streams, and online real-time processing are also discussed.
BibTeX:
@Article{石井カルロス寿憲2012,
  author          = {石井カルロス寿憲 and 劉超然 and 石黒浩 and 萩田紀博},
  title           = {遠隔存在感ロボットのためのフォルマントによる口唇動作生成手法},
  journal         = {日本ロボット学会誌},
  year            = {2013},
  volume          = {31},
  number          = {4},
  pages           = {83-90},
  month           = May,
  abstract        = {Generating natural motion in robots is important for improving human-robot interaction. We developed a tele-operation system where the lip motion of a remote humanoid robot is automatically controlled from the operator's voice. In the present work, we introduce an improved version of our proposed speech-driven lip motion generation method, where lip height and width degrees are estimated based on vowel formant information. The method requires the calibration of only one parameter for speaker normalization. Lip height control is evaluated in two types of humanoid robots (Geminoid-F and Telenoid-R2). Subjective evaluation indicated that the proposed audio-based method is able to generate lip motion with naturalness superior to vision-based and motion capture-based approaches. Partial lip width control was shown to improve lip motion naturalness in Geminoid-F, which also has an actuator for stretching the lip corners. Issues regarding synchronization of audio and motion streams, and online real-time processing are also discussed.},
  doi             = {10.7210/jrsj.31.401},
  etitle          = {Lip motion generation method based on formants for tele-presence humanoid robots},
  eabstract       = {Generating natural motion in robots is important for improving human-robot interaction. We developed a tele-operation system where the lip motion of a remote humanoid robot is automatically controlled from the operator's voice. In the present work, we introduce an improved version of our proposed speech-driven lip motion generation method, where lip height and width degrees are estimated based on vowel formant information. The method requires the calibration of only one parameter for speaker normalization. Lip height control is evaluated in two types of humanoid robots (Geminoid-F and Telenoid-R2). Subjective evaluation indicated that the proposed audio-based method is able to generate lip motion with naturalness superior to vision-based and motion capture-based approaches. Partial lip width control was shown to improve lip motion naturalness in Geminoid-F, which also has an actuator for stretching the lip corners. Issues regarding synchronization of audio and motion streams, and online real-time processing are also discussed.},
  file            = {石井カルロス寿憲2012.pdf:pdf/石井カルロス寿憲2012.pdf:PDF},
  keywords        = {Lip motion; tele-presence; humanoid robots; formant; real-time processing},
}
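For readers who want a concrete feel for the formant-driven idea summarized in the abstract above, the short Python sketch below shows one plausible way to map the first two vowel formants (F1, F2) to lip height and width commands with a single per-speaker calibration value. The function names, formant ranges, and the calibration parameter f1_max are illustrative assumptions made for this page, not the implementation published in the paper.

# Illustrative sketch only: maps vowel formants (F1, F2) to lip actuator commands.
# The formant ranges and the single calibration parameter (f1_max) are assumptions
# for demonstration, not the values used in the published method.

def normalize(value, lo, hi):
    """Clip and scale a value into the range [0, 1]."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def lip_command(f1_hz, f2_hz, f1_max=900.0):
    """Return (height, width) lip commands in [0, 1] for the current speech frame.

    f1_hz, f2_hz : first and second formant frequencies in Hz
    f1_max       : per-speaker calibration value (the only tuned parameter here)
    """
    height = normalize(f1_hz, 250.0, f1_max)   # open vowels (high F1) -> wider lip opening
    width = normalize(f2_hz, 800.0, 2500.0)    # front vowels (high F2) -> lip spreading
    return height, width

if __name__ == "__main__":
    print(lip_command(750.0, 1200.0))  # roughly /a/: large opening, moderate width
    print(lip_command(300.0, 2300.0))  # roughly /i/: small opening, spread lips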
Ryuji Yamazaki, Shuichi Nishio, Kohei Ogawa, Kohei Matsumura, Takashi Minato, Hiroshi Ishiguro, Tsutomu Fujinami, Masaru Nishikawa, "Promoting Socialization of Schoolchildren Using a Teleoperated Android: An Interaction Study", International Journal of Humanoid Robotics, vol. 10, no. 1, pp. 1350007(1-25), April, 2013.
Abstract: Our research focuses on the social aspects of teleoperated androids as new media for human relationships and explores how they can contribute and encourage people to associate with others. We introduced Telenoid, a teleoperated android with a minimalistic human design, to elementary school classrooms to see how children respond to it. We found that Telenoid encourages children to work cooperatively and facilitates communication with senior citizens with dementia. Children differentiated their roles spontaneously and cooperatively participated in group work. In another class, we applied Telenoid to remote communication between schoolchildren and assisted living residents. The children felt relaxed about continuing their conversations with the elderly and positively participated in them. The results suggest that limited functionality may facilitate cooperation among participants, and varied embodiments may promote the learning process of the association with others, even those who are unfamiliar. We propose a teleoperated android as an educational tool to promote socialization.
BibTeX:
@Article{Yamazaki2012e,
  author          = {Ryuji Yamazaki and Shuichi Nishio and Kohei Ogawa and Kohei Matsumura and Takashi Minato and Hiroshi Ishiguro and Tsutomu Fujinami and Masaru Nishikawa},
  title           = {Promoting Socialization of Schoolchildren Using a Teleoperated Android: An Interaction Study},
  journal         = {International Journal of Humanoid Robotics},
  year            = {2013},
  volume          = {10},
  number          = {1},
  pages           = {1350007(1-25)},
  month           = Apr,
  abstract        = {Our research focuses on the social aspects of teleoperated androids as new media for human relationships and explores how they can contribute and encourage people to associate with others. We introduced Telenoid, a teleoperated android with a minimalistic human design, to elementary school classrooms to see how children respond to it. We found that Telenoid encourages children to work cooperatively and facilitates communication with senior citizens with dementia. Children differentiated their roles spontaneously and cooperatively participated in group work. In another class, we applied Telenoid to remote communication between schoolchildren and assisted living residents. The children felt relaxed about continuing their conversations with the elderly and positively participated in them. The results suggest that limited functionality may facilitate cooperation among participants, and varied embodiments may promote the learning process of the association with others, even those who are unfamiliar. We propose a teleoperated android as an educational tool to promote socialization.},
  day             = {2},
  url             = {http://www.worldscientific.com/doi/abs/10.1142/S0219843613500072},
  doi             = {10.1142/S0219843613500072},
  file            = {Yamazaki2012e.pdf:pdf/Yamazaki2012e.pdf:PDF},
  keywords        = {Telecommunication; android robot; minimal design; cooperation; role differentiation; inter-generational relationship; embodied communication; teleoperation; socialization},
}
Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, Norihiro Hagita, "Generation of Nodding, Head Tilting and Gazing for Human-Robot Speech Interaction", International Journal of Humanoid Robotics, vol. 10, no. 1, pp. 1350009(1-19), April, 2013.
Abstract: Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, "Geminoid F", a typical humanoid robot with less facial degrees of freedom, "Robovie R2", and a robot with a 3-axis rotatable neck and movable lips, "Telenoid R2"). Analysis of subjective scores shows that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only or directly mapping people's original motions without gaze information. We also find that an upward motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a consequence, we verify that our generation model performs equally to directly mapping people's original motions with gaze information in terms of perceived naturalness.
BibTeX:
@Article{Liu2012a,
  author          = {Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {Generation of Nodding, Head Tilting and Gazing for Human-Robot Speech Interaction},
  journal         = {International Journal of Humanoid Robotics},
  year            = {2013},
  volume          = {10},
  number          = {1},
  pages           = {1350009(1-19)},
  month           = Apr,
  abstract        = {Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, "Geminoid F", a typical humanoid robot with less facial degrees of freedom, "Robovie R2", and a robot with a 3-axis rotatable neck and movable lips, "Telenoid R2"). Analysis of subjective scores shows that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only or directly mapping people's original motions without gaze information. We also find that an upward motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a consequence, we verify that our generation model performs equally to directly mapping people's original motions with gaze information in terms of perceived naturalness.},
  day             = {2},
  url             = {http://www.worldscientific.com/doi/abs/10.1142/S0219843613500096},
  doi             = {10.1142/S0219843613500096},
  file            = {Liu2012a.pdf:pdf/Liu2012a.pdf:PDF},
  keywords        = {Head motion; dialogue acts; gazing; motion generation},
}
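To give a concrete sense of the kind of rule-based generation described in the abstract above, the following minimal Python sketch attaches a nod or a head tilt to a phrase-final syllable based on a dialogue-act label. The labels, amplitudes, and rules are simplified assumptions for illustration only, not the rule set derived in the paper.

# Illustrative sketch only: a rule-based selector that attaches a head gesture to a
# phrase-final syllable based on its dialogue-act label. Labels, rules and amplitudes
# are simplified assumptions, not the rule set derived in the paper.

from dataclasses import dataclass

@dataclass
class HeadGesture:
    kind: str          # "nod", "tilt", or "none"
    amplitude: float   # normalized head-joint amplitude in [0, 1]

RULES = {
    "agreement":   HeadGesture("nod", 0.8),
    "affirmation": HeadGesture("nod", 0.8),
    "statement":   HeadGesture("nod", 0.4),
    "question":    HeadGesture("tilt", 0.6),
    "thinking":    HeadGesture("tilt", 0.5),
}

def select_gesture(dialogue_act, strong_phrase_boundary):
    """Pick a head gesture for the final syllable of the current phrase."""
    gesture = RULES.get(dialogue_act, HeadGesture("none", 0.0))
    # Keep weak statement nods only at strong phrase boundaries.
    if gesture.kind == "nod" and dialogue_act == "statement" and not strong_phrase_boundary:
        return HeadGesture("none", 0.0)
    return gesture

if __name__ == "__main__":
    print(select_gesture("question", strong_phrase_boundary=True))    # -> tilt
    print(select_gesture("statement", strong_phrase_boundary=False))  # -> none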
港隆史, 境くりま, 西尾修一, 石黒浩, "運動錯視を利用した携帯型遠隔操作ヒューマノイドの運動表現", ヒューマンインタフェース学会論文誌, vol. 15, no. 1, pp. 51-62, February, 2013.
Abstract: A small (cellphone-size) human-like robotic avatar in tele-communications will be a novel portable communication medium in that a user can talk with another person while feeling the other's presence at any time, anywhere. The human-like robotic avatar is expected to express human-like movements; however, there are technical and cost problems in implementing actuators in the small body. This paper proposes an idea to illusorily move the avatar's extremities with blinking lights. This idea needs only LEDs (Light Emitting Diodes) and avoids the above problems. This paper designs an LED blinking pattern to invoke a nodding motion of a hand-held tele-operated humanoid robot. A psychological experiment shows that the designed blinking pattern gives a better impression to subjects than a symbolic blinking pattern. This result suggests that even the illusory motion of a robotic avatar can improve tele-communications and that it is suitable for human-like robotic avatars with respect to a minimally human-like motion expression.
BibTeX:
@Article{港隆史2012,
  author          = {港隆史 and 境くりま and 西尾修一 and 石黒浩},
  title           = {運動錯視を利用した携帯型遠隔操作ヒューマノイドの運動表現},
  journal         = {ヒューマンインタフェース学会論文誌},
  year            = {2013},
  volume          = {15},
  number          = {1},
  pages           = {51--62},
  month           = Feb,
  abstract        = {A small (cellphone size) human-like robotic avatar in tele-communications will be a novel portable communication medium in that a user can talk with another person while feeling the other's presence at anytime, anywhere. The human-like robotic avatar is expected to express human-like movements; however, there are technical and cost problems to implement actuators in the small body. This paper proposes an idea to illusorily move the avatar's extremities with blinking lights. This idea needs only LEDs (Light Emitting Diodes) and avoids the above problems. This paper designs a LED blinking pattern to invoke a nodding motion of a hand-held tele-operated humanoid robot. A psychological experiment shows that the designed blinking pattern gives better impression to subjects than an symbolic blinking pattern. This result suggests that even the illusory motion of robotic avatar can improve tele-communications and it is suitable for human-like robotic avatars with respect to a minimally human-like motion expression.},
  etitle          = {Visual Illusory Motion Design of a Hand-held Tele-operated Humanoid Robot for Effective Communication},
  eabstract       = {A small (cellphone size) human-like robotic avatar in tele-communications will be a novel portable communication medium in that a user can talk with another person while feeling the other's presence at anytime, anywhere. The human-like robotic avatar is expected to express human-like movements; however, there are technical and cost problems to implement actuators in the small body. This paper proposes an idea to illusorily move the avatar's extremities with blinking lights. This idea needs only LEDs (Light Emitting Diodes) and avoids the above problems. This paper designs a LED blinking pattern to invoke a nodding motion of a hand-held tele-operated humanoid robot. A psychological experiment shows that the designed blinking pattern gives better impression to subjects than an symbolic blinking pattern. This result suggests that even the illusory motion of robotic avatar can improve tele-communications and it is suitable for human-like robotic avatars with respect to a minimally human-like motion expression.},
  file            = {港隆史2012.pdf:pdf/港隆史2012.pdf:PDF},
  keywords        = {Robotic communication media; Tele-operated robot; Human-like motion; Illusory motion; Minimal design},
}
劉超然, 石井カルロス寿憲, 石黒浩, 萩田紀博, "人型コミュニケーションロボットのための首傾げ生成手法の提案および評価", 人工知能学会論文誌, vol. 28, no. 2, pp. 112-121, January, 2013.
Abstract: 人とロボットの自然な対話インタラクションを実現するには,ロボットも発話に伴って自然な頭部動作を行うことが重要である.本研究では,人の対面対話における頭部動作の分析結果に基づいて,談話機能の情報を利用した首傾げ生成モデルを提案した.この生成モデルを異なった種類の人型ロボットに応用して評価実験を行った結果,提案した首傾げ生成モデルは,頷きのみを生成したモデルに比べて自然な動作を生成する結果が得られた.また,口が動かないロボットの発話時の視覚情報が乏しいという問題の対策として,発話区間中に顔を上げる動作を追加したモデルを評価した.その結果,頷きのみの生成モデルでは,ロボットの動作がより自然な印象を与える結果となったが,首傾げ生成モデルの場合は,印象評定に有意差はみられなかった.さらに,被験者とロボットが実際に対面して対話インタラクションを行った場合の評価実験も行った結果,ビデオによる実験結果と同様の評価が得られた.また,すべての実験において,提案の首傾げ生成動作手法が,話者の動きをロボットに再現したオリジナルの動作に比べて高い評定を得たが,オリジナルの動作に視線情報も追加した場合,提案手法と匹敵する自然さの評定が得られた.
BibTeX:
@Article{劉超然2012a,
  author          = {劉超然 and 石井カルロス寿憲 and 石黒浩 and 萩田紀博},
  title           = {人型コミュニケーションロボットのための首傾げ生成手法の提案および評価},
  journal         = {人工知能学会論文誌},
  year            = {2013},
  volume          = {28},
  number          = {2},
  pages           = {112--121},
  month           = Jan,
  abstract        = {人とロボットの自然な対話インタラクションを実現するには,ロボットも発話に伴って自然な頭部動作を行うことが重要である.本研究では,人の対面対話における頭部動作の分析結果に基づいて,談話機能の情報を利用した首傾げ生成モデルを提案した.この生成モデルを異なった種類の人型ロボットに応用して評価実験を行った結果,提案した首傾げ生成モデルは,頷きのみを生成したモデルに比べて自然な動作を生成する結果が得られた.また,口が動かないロボットの発話時の視覚情報が乏しいという問題の対策として,発話区間中に顔を上げる動作を追加したモデルを評価した.その結果,頷きのみの生成モデルでは,ロボットの動作がより自然な印象を与える結果となったが,首傾げ生成モデルの場合は,印象評定に有意差はみられなかった.さらに,被験者とロボットが実際に対面して対話インタラクションを行った場合の評価実験も行った結果,ビデオによる実験結果と同様の評価が得られた.また,すべての実験において,提案の首傾げ生成動作手法が,話者の動きをロボットに再現したオリジナルの動作に比べて高い評定を得たが,オリジナルの動作に視線情報も追加した場合,提案手法と匹敵する自然さの評定が得られた.},
  url             = {https://www.jstage.jst.go.jp/article/tjsai/28/2/28_112/_article/-char/ja/},
  etitle          = {Proposal and Evaluation of a Head Tilting Generation Method for Humanoid Communication Robot},
  eabstract       = {Suitable control of head motion in robots synchronized with their utterances is important for having a smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and evaluates the model using different types of humanoid robots. Analysis of subjective scores showed that the proposed model can generate head motion with increased naturalness compared to nodding only or directly mapping people's original motions (without gaze information). We also found that an upward motion of the robot's face can be used by robots which do not have movable lips in order to provide the appearance that utterance is taking place. Finally, we evaluate the proposed model in a real human-robot interaction, by conducting an experiment in which participants act as visitors to an information desk attended by robots. The effects of gazing control were also taken into account when mapping the original motion to the robot. Evaluation results indicated that the proposed model performs equally to directly mapping people's original motion with gaze information, in terms of perceived naturalness.},
  file            = {劉超然2012a.pdf:pdf/劉超然2012a.pdf:PDF},
  keywords        = {head motion; dialogue acts; motion generation; human-robot interaction},
}
Carlos T. Ishi, Hiroshi Ishiguro, Norihiro Hagita, "Analysis of relationship between head motion events and speech in dialogue conversations", Speech Communication, Special issue on Gesture and speech in interaction, pp. 233-243, 2013.
Abstract: Head motion naturally occurs in synchrony with speech and may convey paralinguistic information (such as intentions, attitudes and emotions) in dialogue communication. With the aim of verifying the relationship between head motion and several types of linguistic, paralinguistic and prosodic information conveyed by speech utterances, analyses were conducted on motion-captured data of multiple speakers during natural dialogue conversations. Although most past works tried to relate head motion to prosodic features, our analysis results first indicated that head motion was more directly related to dialogue act functions than to prosodic features. Among the head motion types, nods occurred most frequently during speech utterances, not only for expressing dialogue acts of agreement or affirmation, but also appearing at the last syllable of phrases with strong phrase boundaries. Head shakes appeared mostly in phrases expressing negation, while head tilts appeared mostly in phrases expressing thinking, and in interjections expressing unexpectedness and denial. Speaker variability analyses indicated that the occurrence of head motion differs depending on the inter-personal relationship with the interlocutor and the speaker's emotional and attitudinal state. A clear increase in the occurrence rate of nods was observed when the dialogue partners do not have a close inter-personal relationship, and in situations where the speaker talks confidently, cheerfully, with enthusiasm, or expresses interest or sympathy toward the interlocutor's talk.
BibTeX:
@Article{Ishi2013,
  author   = {Carlos T. Ishi and Hiroshi Ishiguro and Norihiro Hagita},
  title    = {Analysis of relationship between head motion events and speech in dialogue conversations},
  journal  = {Speech Communication, Special issue on Gesture and speech in interaction},
  year     = {2013},
  pages    = {233-243},
  abstract = {Head motion naturally occurs in synchrony with speech and may convey paralinguistic information (such as intentions, attitudes and emotions) in dialogue communication. With the aim of verifying the relationship between head motion and several types of linguistic, paralinguistic and prosodic information conveyed by speech utterances, analyses were conducted on motion-captured data of multiple speakers during natural dialogue conversations. Although most of past works tried to relate head motion with prosodic features, our analysis results firstly indicated that head motion was more directly related to dialogue act functions, rather than to prosodic features. Among the head motion types, nods occurred with most frequency during speech utterances, not only for expressing dialogue acts of agreement or affirmation, but also appearing at the last syllable of the phrases with strong phrase boundaries. Head shakes appeared mostly in phrases expressing negation, while head tilts appeared mostly in phrases expressing thinking, and in interjections expressing unexpectedness and denial. Speaker variability analyses indicated that the occurrence of head motion differs depending on the inter-personal relationship with the interlocutor and the speaker's emotional and attitudinal state. A clear increase in the occurrence rate of nods was observed when the dialogue partners do not have a close inter-personal relationship, and in situations where the speaker talks confidently, cheerfully, with enthusiasm, or expresses interest or sympathy to the interlocutor's talk.},
  file     = {Ishi2013.pdf:pdf/Ishi2013.pdf:PDF},
}
小川浩平, 石黒浩, "詩の朗読エージェントとしてのアンドロイドの可能性", ヒューマンインタフェース学会論文誌, vol. 14, no. 1, pp. 43-51, February, 2012.
Abstract: In recent years, research on very human-like androids has become popular. The main purposes of past android research were to investigate: (1) how people treat a very human-like android and (2) whether androids can replace existing communication media, such as the telephone or TV conference systems, as a communication medium. We found that androids have advantages over humans in specific contexts. For example, in an android drama produced in collaboration with an artist, visitors reported that the android impressed them especially when it was reading a poem. We therefore conducted an experiment to investigate the advantages of the android over humans in the context of poem reading. Participants listened to a poem read by three kinds of poem-reading agents: the android, the model of the android, and a box. The experimental results showed that entrainment with the poem was rated most highly under the android condition, indicating that the android has an advantage in communicating the meaning of the poem.
BibTeX:
@Article{小川浩平2011,
  author          = {小川浩平 and 石黒浩},
  title           = {詩の朗読エージェントとしてのアンドロイドの可能性},
  journal         = {ヒューマンインタフェース学会論文誌},
  year            = {2012},
  volume          = {14},
  number          = {1},
  pages           = {43-51},
  month           = Feb,
  abstract        = {In recent years, research on very human-like androids has become popular. The main purposes of past android research were to investigate: (1) how people treat a very human-like android and (2) whether androids can replace existing communication media, such as the telephone or TV conference systems, as a communication medium. We found that androids have advantages over humans in specific contexts. For example, in an android drama produced in collaboration with an artist, visitors reported that the android impressed them especially when it was reading a poem. We therefore conducted an experiment to investigate the advantages of the android over humans in the context of poem reading. Participants listened to a poem read by three kinds of poem-reading agents: the android, the model of the android, and a box. The experimental results showed that entrainment with the poem was rated most highly under the android condition, indicating that the android has an advantage in communicating the meaning of the poem.},
  url             = {http://www.his.gr.jp/paper/archives.cgi?c=download&pk=68},
  etitle          = {Possibilities of Androids as a Poem Reading Agent},
  file            = {小川浩平2011.pdf:小川浩平2011.pdf:PDF},
  keywords        = {アンドロイド;ロボット;Geminoid;Human-Robot Interaction},
}
Kohei Ogawa, Shuichi Nishio, Kensuke Koda, Giuseppe Balistreri, Tetsuya Watanabe, Hiroshi Ishiguro, "Exploring the Natural Reaction of Young and Aged Person with Telenoid in a Real World", Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 15, no. 5, pp. 592-597, July, 2011.
Abstract: This paper describes two field tests conducted with shopping mall visitors and with aged persons in their 70s to 90s. For both field tests, we used an android we developed called Telenoid R1, or simply Telenoid. In the field tests we interviewed participants about their impressions of the Telenoid. The results of the shopping mall test showed that almost half of the interviewees felt negative toward Telenoid until they hugged it, after which opinions became positive. Results of the other test showed that the majority of aged persons reported a positive opinion and, interestingly, all aged persons who interacted with Telenoid gave it a hug without any suggestion to do so. This suggests that older persons find Telenoid to be an acceptable medium for the elderly. Younger persons may also find Telenoid acceptable, seeing that visitors developed positive feelings toward the robot after giving it a hug. These results should prove valuable in our future work with androids.
BibTeX:
@Article{Ogawa2011,
  author          = {Kohei Ogawa and Shuichi Nishio and Kensuke Koda and Giuseppe Balistreri and Tetsuya Watanabe and Hiroshi Ishiguro},
  title           = {Exploring the Natural Reaction of Young and Aged Person with Telenoid in a Real World},
  journal         = {Journal of Advanced Computational Intelligence and Intelligent Informatics},
  year            = {2011},
  volume          = {15},
  number          = {5},
  pages           = {592--597},
  month           = Jul,
  abstract        = {This paper describes two field tests conducted with shopping mall visitors and with aged persons defined as in their 70s to 90s. For both of the field tests, we used an android we developed called Telenoid R1 or just Telenoid. In the following field tests we interviewed participants about their impressions of the Telenoid. The results of the shopping mall showed that almost half of the interviewees felt negative toward Telenoid until they hugged it, after which opinions became positive. Results of the other test showed that the majority of aged persons reported a positive opinion and, interestingly, all aged persons who interacted with Telenoid gave it a hug without any suggestion to do so. This suggests that older persons find Telenoid to be acceptable medium for the elderly. Younger persons may also find Telenoid acceptable, seeing that visitors developed positive feelings toward the robot after giving it a hug. These results should prove valuable in our future work with androids.},
  url             = {http://www.fujipress.jp/finder/xslt.php?mode=present&inputfile=JACII001500050012.xml},
  file            = {Ogawa2011.pdf:Ogawa2011.pdf:PDF},
  keywords        = {Telenoid; Geminoid; human robot interaction},
}
渡辺哲矢, 西尾修一, 小川浩平, 石黒浩, "遠隔操作によるアンドロイドへの身体感覚の転移", 電子情報通信学会論文誌, The Institute of Electronics, Information and Communication Engineers, vol. J94-D, no. 1, pp. 86-93, January, 2011.
Abstract: 遠隔操作型アンドロイドロボットを操作する際,触覚フィードバックがないにもかかわらず,ロボットの身体に触られると自分に触られたように感じることがある.類似の現象として,身体への触覚刺激に同期して身体以外への物体に触覚刺激を与えている様子を観察させると,身体感覚の転移が生ずる「Rubber Hand Illusion」が知られているが,触覚刺激を伴わない身体感覚の転移についての研究事例は少なく,特に対象物を遠隔操作する際の転移に関する報告はこれまでない.本論文ではアンドロイドの遠隔操作時に身体感覚の転移が実際に生じているのかを検証した.その結果,アンドロイドと操作者の動きが同期した場合に,触覚刺激を与えなくても,身体感覚の転移が生ずることが分かった.
BibTeX:
@Article{渡辺哲矢2011,
  author    = {渡辺哲矢 and 西尾修一 and 小川浩平 and 石黒浩},
  title     = {遠隔操作によるアンドロイドへの身体感覚の転移},
  journal   = {電子情報通信学会論文誌},
  year      = {2011},
  volume    = {{J94-D}},
  number    = {1},
  pages     = {86--93},
  month     = Jan,
  issn      = {18804535},
  abstract  = {遠隔操作型アンドロイドロボットを操作する際,触覚フィードバックがないにもかかわらず,ロボットの身体に触られると自分に触られたように感じることがある.類似の現象として,身体への触覚刺激に同期して身体以外への物体に触覚刺激を与えている様子を観察させると,身体感覚の転移が生ずる「Rubber Hand Illusion」が知られているが,触覚刺激を伴わない身体感覚の転移についての研究事例は少なく,特に対象物を遠隔操作する際の転移に関する報告はこれまでない.本論文ではアンドロイドの遠隔操作時に身体感覚の転移が実際に生じているのかを検証した.その結果,アンドロイドと操作者の動きが同期した場合に,触覚刺激を与えなくても,身体感覚の転移が生ずることが分かった.},
  url       = {http://ci.nii.ac.jp/naid/110008006550/en/},
  etitle    = {Body Ownership Transfer to Android Robot Induced by Teleoperation},
  file      = {渡辺哲矢2011.pdf:渡辺哲矢2011.pdf:PDF},
  publisher = {The Institute of Electronics, Information and Communication Engineers},
}
Shuichi Nishio, Hiroshi Ishiguro, "Attitude Change Induced by Different Appearances of Interaction Agents", International Journal of Machine Consciousness, vol. 3, no. 1, pp. 115-126, 2011.
Abstract: Human-robot interaction studies up to now have been limited to simple tasks such as route guidance or playing simple games. With the advance in robotic technologies, we are now at the stage to explore requirements for highly complicated tasks such as having human-like conversations. When robots start to play advanced roles in our lives such as in health care, attributes such as trust, reliance and persuasiveness will also be important. In this paper, we examine how the appearance of robots affects people's attitudes toward them. Past studies have shown that the appearance of robots is one of the elements that influences people's behavior. However, it is still unknown what effect appearance has when having serious conversations that require high-level activity. Participants were asked to have a discussion with tele-operated robots of various appearances such as an android with high similarity to a human or a humanoid robot that has human-like body parts. Through the discussion, the tele-operator tried to persuade the participants. We examined how appearance affects robots' persuasiveness as well as people's behavior and impression of robots. A possible contribution to machine consciousness research is also discussed.
BibTeX:
@Article{Nishio2011,
  author          = {Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Attitude Change Induced by Different Appearances of Interaction Agents},
  journal         = {International Journal of Machine Consciousness},
  year            = {2011},
  volume          = {3},
  number          = {1},
  pages           = {115--126},
  abstract        = {Human-robot interaction studies up to now have been limited to simple tasks such as route guidance or playing simple games. With the advance in robotic technologies, we are now at the stage to explore requirements for highly complicated tasks such as having human-like conversations. When robots start to play advanced roles in our lives such as in health care, attributes such as trust, reliance and persuasiveness will also be important. In this paper, we examine how the appearance of robots affects people's attitudes toward them. Past studies have shown that the appearance of robots is one of the elements that influences people's behavior. However, it is still unknown what effect appearance has when having serious conversations that require high-level activity. Participants were asked to have a discussion with tele-operated robots of various appearances such as an android with high similarity to a human or a humanoid robot that has human-like body parts. Through the discussion, the tele-operator tried to persuade the participants. We examined how appearance affects robots' persuasiveness as well as people's behavior and impression of robots. A possible contribution to machine consciousness research is also discussed.},
  url             = {http://www.worldscinet.com/ijmc/03/0301/S1793843011000637.html},
  doi             = {10.1142/S1793843011000637},
  file            = {Nishio2011.pdf:Nishio2011.pdf:PDF},
  keywords        = {Robot; appearance; interaction agents; human-robot interaction},
}
Christian Becker-Asano, Hiroshi Ishiguro, "Intercultural Differences in Decoding Facial Expressions of The Android Robot Geminoid F", Journal of Artificial Intelligence and Soft Computing Research, vol. 1, no. 3, pp. 215-231, 2011.
Abstract: As android robots become increasingly sophisticated in their technical as well as artistic design, their non-verbal expressiveness is getting closer to that of real humans. Accordingly, this paper presents results of two online surveys designed to evaluate a female android's facial display of five basic emotions. Being interested in intercultural differences, we prepared both surveys in English, German, and Japanese, and we not only found that in general our designs of the emotional expressions "fearful" and "surprised" were often confused, but also that Japanese participants more often confused "angry" with "sad" than the German and English participants did. Although facial displays of the same emotions portrayed by the model person of Geminoid F achieved higher recognition rates overall, portraying fearfulness was similarly difficult for her. Finally, from the analysis of free responses that the participants were invited to give, a number of interesting further conclusions are drawn that help to clarify how intercultural differences impact the interpretation of facial displays of an android's emotions.
BibTeX:
@Article{Becker-Asano2011,
  author          = {Christian Becker-Asano and Hiroshi Ishiguro},
  title           = {Intercultural Differences in Decoding Facial Expressions of The Android Robot Geminoid F},
  journal         = {Journal of Artificial Intelligence and Soft Computing Research},
  year            = {2011},
  volume          = {1},
  number          = {3},
  pages           = {215--231},
  abstract        = {As android robots become increasingly sophisticated in their technical as well as artistic design, their non-verbal expressiveness is getting closer to that of real humans. Accordingly, this paper presents results of two online surveys designed to evaluate a female android's facial display of five basic emotions. Being interested in intercultural differences we prepared both surveys in English, German, as well as Japanese language, and we not only found that in general our design of the emotional expressions "fearful" and "surprised" were often confused, but also that Japanese participants more often confused "angry" with "sad" than the German and English participants. Although facial displays of the same emotions portrayed by the model person of Geminoid F achieved higher recognition rates overall, portraying fearful has been similarly difficult for her. Finally, from the analysis of free responses that the participants were invited to give, a number of interesting further conclusions are drawn that help to clarify the question of how intercultural differences impact on the interpretation of facial displays of an android's emotions.},
  url             = {http://jaiscr.eu/issues.aspx},
}
垣尾政之, 宮下敬宏, 光永法明, 石黒浩, 萩田紀博, "倒立振子移動機構を持つ人型ロボットの反応動作の違いが人に与える印象の変化", 日本ロボット学会誌, vol. 28, no. 9, pp. 1110-1119, November, 2010.
Abstract: In this paper, we report the importance of the reactive behaviors of humanoid robots against human actions for smooth communication. We hypothesize that the reactive behaviors of robots play an important role in achieving human-like communication between humans and robots, since the latter need to be recognized by the former as communication partners. To evaluate this hypothesis, we conducted psychological experiments in which we presented subjects with four types of reactive behaviors resulting from pushing a wheeled inverted-pendulum-type humanoid robot. From the experiment, we found that subjects' impressions of the robot regarding extroversion and neuroticism changed depending on the robot's reactive behaviors. We also discuss the reasons for such changes in impressions by comparing the robot's reactive behaviors with those of humans.
BibTeX:
@Article{垣尾政之2010,
  author   = {垣尾政之 and 宮下敬宏 and 光永法明 and 石黒浩 and 萩田紀博},
  title    = {倒立振子移動機構を持つ人型ロボットの反応動作の違いが人に与える印象の変化},
  journal  = {日本ロボット学会誌},
  year     = {2010},
  volume   = {28},
  number   = {9},
  pages    = {1110--1119},
  month    = Nov,
  abstract = {In this paper, we report the importance of the reactive behaviors of humanoid robots against human actions for smooth communication. We hypothesize that the reactive behaviors of robots play an important role in achieving human-like communication between humans and robots since the latter need to be recognized by the former as communication partners. To evaluate this hypothesis, we conducted psychological experiments in which we presented subjects with four types of reactive behaviors resulting from pushing a wheeled inverted-pendulum-type humanoid robot. From the experiment, we found that subject's impressions to the robot regarding extroversion and neuroticism changed by the robot's reactive behaviors. We also discuss the reasons for such changes in impressions by comparing the robot's and human reactive behavior.},
  url      = {http://www.i-product.biz/rsj/Conts/Vol_28/Vol28_9j.html},
  etitle   = {How does a reactive behavior of a wheeled inverted-pendulum-type humanoid robot affect human impressions?},
  file     = {垣尾政之2010.pdf:垣尾政之2010.pdf:PDF},
  keywords = {Reactive Behavior; Wheeled Inverted Pendulum; Humanoid Robot},
}
Takayuki Kanda, Shuichi Nishio, Hiroshi Ishiguro, Norihiro Hagita, "Interactive Humanoid Robots and Androids in Children's Lives", Children, Youth and Environments, vol. 19, no. 1, pp. 12-33, 2009.
Abstract: This paper provides insight into how recent progress in robotics could affect children's lives in the not-so-distant future. We describe two studies in which robots were presented to children in the context of their daily lives. The results of the first study, which was conducted in an elementary school with a mechanical-looking humanoid robot, showed that the robot affected children's behaviors, feelings, and even their friendships. The second study is a case study in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. The results showed that children gradually adapted to conversations with the geminoid and developed an awareness of the personality or presence of the person controlling the geminoid. These studies provide clues to the process of children's adaptation to interactions with robots and particularly how they start treating robots as intelligent beings.
BibTeX:
@Article{Kanda2009,
  author          = {Takayuki Kanda and Shuichi Nishio and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {Interactive Humanoid Robots and Androids in Children's Lives},
  journal         = {Children, Youth and Environments},
  year            = {2009},
  volume          = {19},
  number          = {1},
  pages           = {12--33},
  abstract        = {This paper provides insight into how recent progress in robotics could affect children's lives in the not-so-distant future. We describe two studies in which robots were presented to children in the context of their daily lives. The results of the first study, which was conducted in an elementary school with a mechanical-looking humanoid robot, showed that the robot affected children's behaviors, feelings, and even their friendships. The second study is a case study in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. The results showed that children gradually adapted to conversations with the geminoid and developed an awareness of the personality or presence of the person controlling the geminoid. These studies provide clues to the process of children's adaptation to interactions with robots and particularly how they start treating robots as intelligent beings.},
  file            = {Kanda2009.pdf:Kanda2009.pdf:PDF;19_1_02_HumanoidRobots.pdf:http\://www.colorado.edu/journals/cye/19_1/19_1_02_HumanoidRobots.pdf:PDF},
}
坂本大介, 神田崇行, 小野哲雄, 石黒浩, 萩田紀博, "遠隔存在感メディアとしてのアンドロイド・ロボットの可能性", 情報処理学会論文誌, vol. 48, no. 12, pp. 3729-3738, December, 2007. (研究会推薦論文)
Abstract: 本研究では人間の存在感を伝達するために遠隔操作型アンドロイド・ロボットシステムを開発した.本システムでは非常に人に近い外見を持つアンドロイド・ロボットであるGeminoid HI-1を使用する.本システムを使用した実験の結果,Geminoid HI-1を通して伝わる人間の存在感はビデオ会議システムを使用した場合の人間の存在感を上回ったことが確認された.さらに,被験者はビデオ会議システムと同程度に本システムにおいて人間らしく自然な会話ができたことが確認された.本稿ではこれらのシステムと実験について述べたあと,遠隔操作型アンドロイド・ロボットシステムによる遠隔存在感の実現についての議論を行う.
BibTeX:
@Article{坂本大介2007,
  author          = {坂本大介 and 神田崇行 and 小野哲雄 and 石黒浩 and 萩田紀博},
  title           = {遠隔存在感メディアとしてのアンドロイド・ロボットの可能性},
  journal         = {情報処理学会論文誌},
  year            = {2007},
  volume          = {48},
  number          = {12},
  pages           = {3729--3738},
  month           = Dec,
  issn            = {03875806},
  abstract        = {本研究では人間の存在感を伝達するために遠隔操作型アンドロイド・ロボットシステムを開発した.本システムでは非常に人に近い外見を持つアンドロイド・ロボットであるGeminoid HI-1を使用する.本システムを使用した実験の結果,Geminoid HI-1を通して伝わる人間の存在感はビデオ会議システムを使用した場合の人間の存在感を上回ったことが確認された.さらに,被験者はビデオ会議システムと同程度に本システムにおいて人間らしく自然な会話ができたことが確認された.本稿ではこれらのシステムと実験について述べたあと,遠隔操作型アンドロイド・ロボットシステムによる遠隔存在感の実現についての議論を行う.},
  url             = {http://ci.nii.ac.jp/naid/110006531951},
  etitle          = {Android as a Telecommunication Medium with a Human-like Presence},
  eabstract       = {In this research, we realize human telepresence by developing a remote-controlled android system called Geminoid HI-1. Experimental results confirmed that participants felt stronger presence of the operator when he talked through the android than when he appeared on a video monitor in a video conference system. In addition, participants talked with the robot naturally and evaluated its human-likeness as equal to a man on a video monitor. At this paper's conclusion, we will discuss a remote-control system for telepresence that uses a human-like android robot as a new telecommunication medium.},
  file            = {坂本大介2007.pdf:坂本大介2007.pdf:PDF;lognavi?name=nels&lang=jp&type=pdf&id=ART0008517485:http\://ci.nii.ac.jp/lognavi?name=nels&lang=jp&type=pdf&id=ART0008517485:PDF},
  note            = {研究会推薦論文},
}
Hiroshi Ishiguro, Shuichi Nishio, "Building artificial humans to understand humans", Journal of Artificial Organs, vol. 10, no. 3, pp. 133-142, September, 2007.
Abstract: If we could build an android as a very humanlike robot, how would we humans distinguish a real human from an android? The answer to this question is not so easy. In human-android interaction, we cannot see the internal mechanism of the android, and thus we may simply believe that it is a human. This means that a human can be defined from two perspectives: one by organic mechanism and the other by appearance. Further, the current rapid progress in artificial organs makes this distinction confusing. The approach discussed in this article is to create artificial humans with humanlike appearances. The developed artificial humans, an android and a geminoid, can be used to improve understanding of humans through psychological and cognitive tests conducted using the artificial humans. We call this new approach to understanding humans android science.
BibTeX:
@Article{Ishiguro2007,
  author      = {Hiroshi Ishiguro and Shuichi Nishio},
  title       = {Building artificial humans to understand humans},
  journal     = {Journal of Artificial Organs},
  year        = {2007},
  volume      = {10},
  number      = {3},
  pages       = {133--142},
  month       = Sep,
  abstract    = {If we could build an android as a very humanlike robot, how would we humans distinguish a real human from an android? The answer to this question is not so easy. In human-android interaction, we cannot see the internal mechanism of the android, and thus we may simply believe that it is a human. This means that a human can be defined from two perspectives: one by organic mechanism and the other by appearance. Further, the current rapid progress in artificial organs makes this distinction confusing. The approach discussed in this article is to create artificial humans with humanlike appearances. The developed artificial humans, an android and a geminoid, can be used to improve understanding of humans through psychological and cognitive tests conducted using the artificial humans. We call this new approach to understanding humans android science.},
  url         = {http://www.springerlink.com/content/pmv076w723140244/},
  doi         = {10.1007/s10047-007-0381-4},
  file        = {Ishiguro2007.pdf:Ishiguro2007.pdf:PDF},
  institution = {{ATR} Intelligent Robotics and Communication Laboratories, Department of Adaptive Machine Systems, Osaka University, Osaka, Japan.},
  keywords    = {Behavior; Behavioral Sciences, methods; Cognitive Science, methods; Facial Expression; Female; Humans, anatomy /&/ histology/psychology; Male; Movement; Perception; Robotics, instrumentation/methods},
  medline-pst = {ppublish},
  pmid        = {17846711},
}
Shuichi Nishio, Hiroshi Ishiguro, Norihiro Hagita, "Can a Teleoperated Android Represent Personal Presence? - A Case Study with Children", Psychologia, vol. 50, no. 4, pp. 330-342, 2007.
Abstract: Our purpose is to investigate the key elements for representing personal presence, which is the sense of being with a certain individual. A case study is reported in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. Different responses to the geminoid and the original person are examined, especially concentrating on the case where the target child was the daughter of the geminoid source. Results showed that children gradually became adapted to conversation with the geminoid, but the operator's personal presence was not completely represented. Further research topics on the adaptation process to androids and on identifying the key elements of personal presence are discussed.
BibTeX:
@Article{Nishio2007,
  author          = {Shuichi Nishio and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {Can a Teleoperated Android Represent Personal Presence? - A Case Study with Children},
  journal         = {Psychologia},
  year            = {2007},
  volume          = {50},
  number          = {4},
  pages           = {330--342},
  abstract        = {Our purpose is to investigate the key elements for representing personal presence, which is the sense of being with a certain individual. A case study is reported in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. Different responses to the geminoid and the original person are examined, especially concentrating on the case where the target child was the daughter of the geminoid source. Results showed that children gradually became adapted to conversation with the geminoid, but the operator's personal presence was not completely represented. Further research topics on the adaptation process to androids and on seeking for the key elements on personal presence are discussed.},
  url             = {http://www.jstage.jst.go.jp/article/psysoc/50/4/50_330/_article},
  doi             = {10.2117/psysoc.2007.330},
  file            = {Nishio2007.pdf:Nishio2007.pdf:PDF},
}
会議発表(査読あり)
趙鵬群, 石井カルロス寿憲, 巽智子, "中国語会話インタラクションにおける話題終了部・開始部の頷きのパターンについての探索的研究", 第48回社会言語科学会研究大会, 福岡女子大学, 福岡, pp. 291-294, March, 2024.
Abstract: 本研究では,これまでの先行研究で注目されていなかった中国語会話における話題転換時の頷きのパターンを話題終了部と話題開始部に分けて明らかにしようと試みた.分析の結果,中国語会話の話題終了部における頷きの発生回数は,開始部の約 2.5 倍に及び,両者の間に顕著な差が存在することが明らかになった.これは,中国語の会話における話題の終了部での頷きが,日本語の会話と同じように,話題の終わりを確認または促進する役割を果たしている重要な話題終了ストラテジーであることを示唆している.また,中国語会話の話題転換部における頷きの役割が,話題転換の段階によって異なるという点が明らかにされている.具体的には,中国語会話の話題終了部では「間を埋める頷き」と「あいづちに対するあいづちの頷き」,開始部では「発話を促す頷き」がそれぞれ特徴的である.中国語母語話者は話題の転換を円滑にするために,様々な局面で頷きを行うことが示唆された.
BibTeX:
@InProceedings{趙鵬群2024,
  author    = {趙鵬群 and 石井カルロス寿憲 and 巽智子},
  booktitle = {第48回社会言語科学会研究大会},
  title     = {中国語会話インタラクションにおける話題終了部・開始部の頷きのパターンについての探索的研究},
  year      = {2024},
  address   = {福岡女子大学, 福岡},
  day       = {8-10},
  month     = mar,
  pages     = {291-294},
  url       = {https://www.jass.ne.jp/meeting/conference-next/},
  abstract  = {本研究では,これまでの先行研究で注目されていなかった中国語会話における話題転換時の頷きのパターンを話題終了部と話題開始部に分けて明らかにしようと試みた.分析の結果,中国語会話の話題終了部における頷きの発生回数は,開始部の約 2.5 倍に及び,両者の間に顕著な差が存在することが明らかになった.これは,中国語の会話における話題の終了部での頷きが,日本語の会話と同じように,話題の終わりを確認または促進する役割を果たしている重要な話題終了ストラテジーであることを示唆している.また,中国語会話の話題転換部における頷きの役割が,話題転換の段階によって異なるという点が明らかにされている.具体的には,中国語会話の話題終了部では「間を埋める頷き」と「あいづちに対するあいづちの頷き」,開始部では「発話を促す頷き」がそれぞれ特徴的である.中国語母語話者は話題の転換を円滑にするために,様々な局面で頷きを行うことが示唆された.},
}
秋吉拓斗, 住岡英信, 熊崎博一, 中西惇也, 大西祐美, 前田洋佐, 前田沙和, 加藤博一, 塩見昌裕, "精神科デイケアにおける思考整理を支援する対話ロボットの評価", 第42回日本社会精神医学会, 東北医科薬科大学 小松島キャンパス, 宮城県, pp. 1-3, March, 2024.
Abstract: 【目的】 精神科リハビリテーションにおいて、自分自身の思考を整理し話すことは、対人関係やコミュニケーションスキルの回復に役立つ可能性がある。また、自己理解を深め、新たな気づきを得るきっかけとなる可能性がある。しかし、医療現場での専門家のリソースには限りがあるため、患者に気軽に思考整理の機会を提供することは困難である。そこで、本研究は患者の思考整理を支援する対話ロボットシステムの実現を目的とし、本稿では開発したシステムの評価について述べる。 【方法】 精神科デイケアに通院する精神科患者38人に参加してもらい、開発したシステムを評価した。実験参加者は、思考整理のために認知行動療法のコラム法に基づいて悩みや目標等について状況、気分、思考の観点から質問するロボットと対話した。また、実験参加者はロボットとの対話前後の体調・覚醒度・気分を0点から100点で回答した。期間は令和4年4月から令和5年10月で、毎月1回から2回の合計17回実施した。 【結果】 期間内に延べ100回対話を行った。各評価項目の平均点について、体調は対話前66.6点、対話後70.2点、覚醒度は対話前64.3点、対話後70.4点、気分は対話前69.5点、対話後71.7点であった。各項目において対話前後での得点の増加傾向が示唆された。また、アンケート後の口頭アンケートでは「モヤモヤしていたが話したら悩みが消え、今を楽しもうと思った」「普段楽しかったことを思い出さないが、楽しい気持ちを思い出せた」と感想を述べた実験参加者もいた。 【結論】 本研究では、コラム法を基に患者の思考整理を支援するロボットを評価した。評価実験に参加した精神科デイケアに通院する精神科患者のロボットとの対話前後での体調・覚醒度・気分の得点変化を分析し、対話による増加傾向が示唆された。今後の展望として、ロボット利用による発話量や自己開示量の変化に関する更なる分析を行う予定である。
BibTeX:
@InProceedings{秋吉拓斗2024,
  author    = {秋吉拓斗 and 住岡英信 and 熊崎博一 and 中西惇也 and 大西祐美 and 前田洋佐 and 前田沙和 and 加藤博一 and 塩見昌裕},
  booktitle = {第42回日本社会精神医学会},
  title     = {精神科デイケアにおける思考整理を支援する対話ロボットの評価},
  year      = {2024},
  address   = {東北医科薬科大学 小松島キャンパス, 宮城県},
  day       = {14-15},
  month     = mar,
  pages     = {1-3},
  url       = {http://jssp42.umin.jp/},
  abstract  = {【目的】 精神科リハビリテーションにおいて、自分自身の思考を整理し話すことは、対人関係やコミュニケーションスキルの回復に役立つ可能性がある。また、自己理解を深め、新たな気づきを得るきっかけとなる可能性がある。しかし、医療現場での専門家のリソースには限りがあるため、患者に気軽に思考整理の機会を提供することは困難である。そこで、本研究は患者の思考整理を支援する対話ロボットシステムの実現を目的とし、本稿では開発したシステムの評価について述べる。 【方法】 精神科デイケアに通院する精神科患者38人に参加してもらい、開発したシステムを評価した。実験参加者は、思考整理のために認知行動療法のコラム法に基づいて悩みや目標等について状況、気分、思考の観点から質問するロボットと対話した。また、実験参加者はロボットとの対話前後の体調・覚醒度・気分を0点から100点で回答した。期間は令和4年4月から令和5年10月で、毎月1回から2回の合計17回実施した。 【結果】 期間内に延べ100回対話を行った。各評価項目の平均点について、体調は対話前66.6点、対話後70.2点、覚醒度は対話前64.3点、対話後70.4点、気分は対話前69.5点、対話後71.7点であった。各項目において対話前後での得点の増加傾向が示唆された。また、アンケート後の口頭アンケートでは「モヤモヤしていたが話したら悩みが消え、今を楽しもうと思った」「普段楽しかったことを思い出さないが、楽しい気持ちを思い出せた」と感想を述べた実験参加者もいた。 【結論】 本研究では、コラム法を基に患者の思考整理を支援するロボットを評価した。評価実験に参加した精神科デイケアに通院する精神科患者のロボットとの対話前後での体調・覚醒度・気分の得点変化を分析し、対話による増加傾向が示唆された。今後の展望として、ロボット利用による発話量や自己開示量の変化に関する更なる分析を行う予定である。},
}
大和信夫, 住岡英信, 石黒浩, 塩見昌裕, 神田陽治, "介護施設におけるコンパニオンロボットのTAM ―利用者、運用者及び施設管理者の各視点から考察した施設全体の技術受容モデル―", 日本MOT学会 第15回年次研究発表会, 東京工業大学, 東京(オンライン), March, 2024.
Abstract: 介護施設で使用されるコンパニオンロボットは、使用する認知症高齢者に良い影響を与える反面、介護スタッフのストレスや責任、さらには業務負担を増大させ、また高価であることから施設側が積極的に導入することが難しい。本稿では、この問題を解決するロボットを開発する過程で行われた2つの実験(受容性が低い実験と高い実験)を認知症高齢者、介護職員、施設管理者の視点から分析・考察し、TAM(技術受容モデル)を提案する。
BibTeX:
@InProceedings{大和信夫2024,
  author    = {大和信夫 and 住岡英信 and 石黒浩 and 塩見昌裕 and 神田陽治},
  booktitle = {日本MOT学会 第15回年次研究発表会},
  title     = {介護施設におけるコンパニオンロボットのTAM ―利用者、運用者及び施設管理者の各視点から考察した施設全体の技術受容モデル―},
  year      = {2024},
  address   = {東京工業大学, 東京(オンライン)},
  day       = {9},
  etitle    = {TAM for companion robots in nursing homes A facility-wide technology acceptance model from the different viewpoints of caregiver, receiver, and care facility administrator},
  month     = mar,
  url       = {http://www.js-mot.org/events/research-presentation/pre2023/conf15-240309/},
  abstract  = {介護施設で使用されるコンパニオンロボットは、使用する認知症高齢者に良い影響を与える反面、介護スタッフのストレスや責任、さらには業務負担を増大させ、また高価であることから施設側が積極的に導入することが難しい。本稿では、この問題を解決するロボットを開発する過程で行われた2つの実験(受容性が低い実験と高い実験)を認知症高齢者、介護職員、施設管理者の視点から分析・考察し、TAM(技術受容モデル)を提案する。},
}
Houjian Guo, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "QuickVC: Any-to-many Voice Conversion Using Inverse Short-time Fourier Transform for Faster Conversion", In 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2023), no. 979-8-3503-0689-7/23/, Taipei, Taiwan, December, 2023.
Abstract: With the development of automatic speech recognition and text-to-speech technology, high-quality voice conversion can be achieved by extracting source content information and target speaker information to reconstruct waveforms. However,current methods still require improvement in terms of inference speed. In this study, we propose a lightweight VITS-based voice conversion model that uses the HuBERTSoft model to extract content information features. Unlike the original VITS model, we use the inverse short-time Fourier transform to replace the most computationally expensive part. Through subjective and objective experiments on synthesized speech, the proposed model is capable of natural speech generation and it is very efficient at inference time. Experimental results show that our model can generate samples at over 5000 KHz on the 3090 GPU and over 250 KHz on the i9-10900K CPU, achieving faster speed in comparison to baseline methods using the same hardware configuration.
BibTeX:
@InProceedings{Guo2023,
  author    = {Houjian Guo and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2023)},
  title     = {QuickVC: Any-to-many Voice Conversion Using Inverse Short-time Fourier Transform for Faster Conversion},
  year      = {2023},
  address   = {Taipei, Taiwan},
  day       = {16-21},
  doi       = {10.48550/arXiv.2302.08296},
  month     = dec,
  number    = {979-8-3503-0689-7/23/},
  url       = {https://arxiv.org/abs/2302.08296},
  abstract  = {With the development of automatic speech recognition and text-to-speech technology, high-quality voice conversion can be achieved by extracting source content information and target speaker information to reconstruct waveforms. However,current methods still require improvement in terms of inference speed. In this study, we propose a lightweight VITS-based voice conversion model that uses the HuBERTSoft model to extract content information features. Unlike the original VITS model, we use the inverse short-time Fourier transform to replace the most computationally expensive part. Through subjective and objective experiments on synthesized speech, the proposed model is capable of natural speech generation and it is very efficient at inference time. Experimental results show that our model can generate samples at over 5000 KHz on the 3090 GPU and over 250 KHz on the i9-10900K CPU, achieving faster speed in comparison to baseline methods using the same hardware configuration.},
  keywords  = {Voice conversion, lightweight model, inverse short-time Fourier transform},
}
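To make the iSTFT idea in the abstract above concrete, here is a minimal PyTorch sketch of the kind of output head it implies: a small decoder predicts magnitude and phase frames and a single torch.istft call produces the waveform, instead of a stack of learned upsampling layers. All module names, shapes, and parameters are illustrative assumptions, not the authors' code.

    # Minimal sketch (illustrative only): waveform synthesis via inverse STFT.
    import math
    import torch

    n_fft, hop, frames, batch = 1024, 256, 200, 1

    # Pretend these frames came from a lightweight decoder.
    log_mag = torch.randn(batch, n_fft // 2 + 1, frames)
    phase = torch.rand(batch, n_fft // 2 + 1, frames) * 2 * math.pi

    spec = torch.polar(log_mag.exp(), phase)              # complex spectrogram
    wav = torch.istft(spec, n_fft=n_fft, hop_length=hop,
                      window=torch.hann_window(n_fft))    # (batch, samples)
    print(wav.shape)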
岸本千恵, ブオマル・ハニ・マハムード・ムハンマド, 中江文, "オフセット鎮痛と痛み回復速度の健康被験者を対象とした実験的熱刺激を用いた検討", 第45回日本疼痛学会, コラッセふくしま, 福島, December, 2023.
Abstract: 【背景】オフセット鎮痛 (OA)は、一時的に強い刺激にさらされることで、ほんの僅かに刺激を弱めただけで、痛み感覚が大幅に低下する現象である。慢性痛患者ではこのOAが働きにくく、痛みの認知速度も遅い傾向にあり、内向的で神経症的傾向を有していることが多いと言われている。我々は、健康被験者を対象にOAと痛みの認知速度、慢性痛患者と共通した性格的傾向との関連について調査した。 【方法】インフォームドコンセントに同意した18~87歳の健康被験者468名を対象に、ベース温度36℃、ピーク温度49℃の2山の熱刺激時の痛みについて、Visual Analogue Scale (VAS)を用いて連続的に評価してもらい、OAの有無と1回目のピーク温度から下降後のベース温度における痛みの回復傾向を調査した。性格的傾向はNEO-PI-Rを用い、f検定で分散を確認して2標本t検定を行った。 【結果と考察】全体の8.8%でOAがなく、OAがある人よりも神経症傾向(N)、外向性(E)、調和性(A)の下位次元である衝動性、刺激希求性、信頼が低い傾向がみられた。回復に時間がかかった人のうち、OAがない人はOAがある人よりも回復速度が有意に遅かった(p=.0056)。健康被験者であるにもかかわらず、OAがない人は痛みの認知速度、性格的傾向が慢性痛患者と同様の傾向がみられたことから、慢性痛のなりやすさに関与している可能性が考えられた。
BibTeX:
@InProceedings{岸本千恵2023a,
  author    = {岸本千恵 and ブオマル・ハニ・マハムード・ムハンマド and 中江文},
  booktitle = {第45回日本疼痛学会},
  title     = {オフセット鎮痛と痛み回復速度の健康被験者を対象とした実験的熱刺激を用いた検討},
  year      = {2023},
  address   = {コラッセふくしま, 福島},
  day       = {8-9},
  etitle    = {Offset Analgesia and Pain Recovery Rate in Healthy Subjects Using Experimental Thermal Stimulation},
  month     = dec,
  url       = {https://www.sasappa.co.jp/jasp45/},
  abstract  = {【背景】オフセット鎮痛 (OA)は、一時的に強い刺激にさらされることで、ほんの僅かに刺激を弱めただけで、痛み感覚が大幅に低下する現象である。慢性痛患者ではこのOAが働きにくく、痛みの認知速度も遅い傾向にあり、内向的で神経症的傾向を有していることが多いと言われている。我々は、健康被験者を対象にOAと痛みの認知速度、慢性痛患者と共通した性格的傾向との関連について調査した。 【方法】インフォームドコンセントに同意した18~87歳の健康被験者468名を対象に、ベース温度36℃、ピーク温度49℃の2山の熱刺激時の痛みについて、Visual Analogue Scale (VAS)を用いて連続的に評価してもらい、OAの有無と1回目のピーク温度から下降後のベース温度における痛みの回復傾向を調査した。性格的傾向はNEO-PI-Rを用い、f検定で分散を確認して2標本t検定を行った。 【結果と考察】全体の8.8%でOAがなく、OAがある人よりも神経症傾向(N)、外向性(E)、調和性(A)の下位次元である衝動性、刺激希求性、信頼が低い傾向がみられた。回復に時間がかかった人のうち、OAがない人はOAがある人よりも回復速度が有意に遅かった(p=.0056)。健康被験者であるにもかかわらず、OAがない人は痛みの認知速度、性格的傾向が慢性痛患者と同様の傾向がみられたことから、慢性痛のなりやすさに関与している可能性が考えられた。},
}
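The statistical recipe described above (checking variances with an F-test, then running a two-sample t-test) can be reproduced on made-up numbers with SciPy as follows; the data, group sizes, and threshold here are purely illustrative and are not the study's data.

    # Variance-ratio F-test followed by a two-sample t-test on synthetic scores.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    with_oa = rng.normal(50, 10, size=120)     # illustrative subscale scores, OA group
    without_oa = rng.normal(45, 12, size=12)   # no-OA group (illustrative sizes)

    # F-test for equality of variances
    f_stat = np.var(with_oa, ddof=1) / np.var(without_oa, ddof=1)
    dfn, dfd = len(with_oa) - 1, len(without_oa) - 1
    p_var = 2 * min(stats.f.cdf(f_stat, dfn, dfd), stats.f.sf(f_stat, dfn, dfd))

    # Two-sample t-test; fall back to Welch's test if the variances differ
    equal_var = p_var >= 0.05
    t_stat, p_val = stats.ttest_ind(with_oa, without_oa, equal_var=equal_var)
    print(f"F={f_stat:.2f} (p={p_var:.3f}), t={t_stat:.2f}, p={p_val:.4f}")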
Houjian Guo, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "Using joint training speaker encoder with consistency loss to achieve cross-lingual voice conversion and expressive voice conversion", In 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2023), Taipei, Taiwan, December, 2023.
Abstract: Voice conversion systems have made significant advancements in terms of naturalness and similarity in common voice conversion tasks. However, their performance in more complex tasks such as cross-lingual voice conversion and expressive voice conversion remains imperfect. In this study, we propose a novel approach that combines a joint training speaker encoder and content features extracted from the cross-lingual speech recognition model Whisper to achieve high-quality cross-lingual voice conversion. Additionally, we introduce a speaker consistency loss to the joint encoder, which improves the similarity between the converted speech and the reference speech. To further explore the capabilities of the joint speaker encoder, we use the Phonetic posteriorgram as the content feature, which enables the model to effectively reproduce both the speaker characteristics and the emotional aspects of the reference speech.
BibTeX:
@InProceedings{Guo2023a,
  author    = {Houjian Guo and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2023)},
  title     = {Using joint training speaker encoder with consistency loss to achieve cross-lingual voice conversion and expressive voice conversion},
  year      = {2023},
  address   = {Taipei, Taiwan},
  day       = {16-21},
  doi       = {10.48550/arXiv.2307.00393},
  month     = dec,
  number    = {979-8-3503-0689-7/23/},
  abstract  = {Voice conversion systems have made significant advancements in terms of naturalness and similarity in common voice conversion tasks. However, their performance in more complex tasks such as cross-lingual voice conversion and expressive voice conversion remains imperfect. In this study, we propose a novel approach that combines a joint training speaker encoder and content features extracted from the cross-lingual speech recognition model Whisper to achieve high-quality cross-lingual voice conversion. Additionally, we introduce a speaker consistency loss to the joint encoder, which improves the similarity between the converted speech and the reference speech. To further explore the capabilities of the joint speaker encoder, we use the Phonetic posteriorgram as the content feature, which enables the model to effectively reproduce both the speaker characteristics and the emotional aspects of the reference speech.},
  keywords  = {cross-lingual voice conversion, expressive voice conversion, joint speaker encoder, speaker consistency loss},
}
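The speaker consistency loss mentioned in the abstract can be pictured as pulling the speaker embedding of the converted utterance toward the embedding of the reference speech. The sketch below shows that idea with a stand-in encoder; it is not the paper's joint speaker encoder, and all names and dimensions are assumptions.

    # Illustrative speaker-consistency loss with a toy speaker encoder.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinySpeakerEncoder(nn.Module):
        def __init__(self, n_mels=80, dim=192):
            super().__init__()
            self.net = nn.GRU(n_mels, dim, batch_first=True)

        def forward(self, mel):                 # mel: (batch, time, n_mels)
            _, h = self.net(mel)
            return F.normalize(h[-1], dim=-1)   # unit-norm speaker embedding

    def speaker_consistency_loss(encoder, mel_converted, mel_reference):
        e_conv = encoder(mel_converted)
        e_ref = encoder(mel_reference)
        return 1.0 - F.cosine_similarity(e_conv, e_ref, dim=-1).mean()

    enc = TinySpeakerEncoder()
    loss = speaker_consistency_loss(enc, torch.randn(4, 120, 80), torch.randn(4, 150, 80))
    print(float(loss))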
David Achanccaray, Hidenobu Sumioka, "A Physiological Approach of Presence and VR Sickness in Simulated Teleoperated Social Tasks", In 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC), no. 979-8-3503-3702-0/23/, Maui, Hawaii, USA (online), pp. 4562-4567, October, 2023.
Abstract: The presence (or telepresence) feeling and virtual reality (VR) sickness affect the task execution in teleoperation. Most teleoperation works have assessed these concepts using objective (physiological signals) and subjective (questionnaires) measurements. However, these works did not include social tasks. To the best of our knowledge, there was no physiological approach in teleoperation of social tasks. We measured presence and VR sickness in a simulation of teleoperated social tasks by questionnaires and analyzed the correlation between their scores and multimodal biomarkers. The results showed some different correlations from the findings of non-teleoperation studies. These correlations were between presence and neural biomarkers in the frontal-central and central regions (for the beta and delta bands) and between VR sickness and brain biomarkers in the occipital region (for the alpha and beta bands) and the mean temperature. This work revealed significant correlations to support some biomarkers as predictors of the trend of presence and VR sickness in simulated teleoperated social tasks. These biomarkers might also be valid to predict the trend of telepresence and motion sickness in teleoperated social tasks in a remote environment.
BibTeX:
@InProceedings{Achanccaray2023b,
  author    = {David Achanccaray and Hidenobu Sumioka},
  booktitle = {2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC)},
  title     = {A Physiological Approach of Presence and VR Sickness in Simulated Teleoperated Social Tasks},
  year      = {2023},
  address   = {Maui, Hawaii, USA (online)},
  day       = {1-4},
  month     = oct,
  number    = {979-8-3503-3702-0/23/},
  pages     = {4562-4567},
  url       = {https://ieeesmc2023.org/},
  abstract  = {The presence (or telepresence) feeling and virtual reality (VR) sickness affect the task execution in teleoperation. Most teleoperation works have assessed these concepts using objective (physiological signals) and subjective (questionnaires) measurements. However, these works did not include social tasks. To the best of our knowledge, there was no physiological approach in teleoperation of social tasks. We measured presence and VR sickness in a simulation of teleoperated social tasks by questionnaires and analyzed the correlation between their scores and multimodal biomarkers. The results showed some different correlations from the findings of non-teleoperation studies. These correlations were between presence and neural biomarkers in the frontal-central and central regions (for the beta and delta bands) and between VR sickness and brain biomarkers in the occipital region (for the alpha and beta bands) and the mean temperature. This work revealed significant correlations to support some biomarkers as predictors of the trend of presence and VR sickness in simulated teleoperated social tasks. These biomarkers might also be valid to predict the trend of telepresence and motion sickness in teleoperated social tasks in a remote environment.},
  keywords  = {Teleoperation, Social tasks, Presence, VR sickness, Biomarkers, Virtual reality},
}
Jiaqi Shi, Chaoran Liu, Carlos Toshinori Ishi, Bowen Wu, Hiroshi Ishiguro, "Recognizing Real-World Intentions using A Multimodal Deep Learning Approach with Spatial-Temporal Graph Convolutional Networks", In The 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023), no. 978-1-6654-9190-7/23/, Detroit, Michigan, USA, pp. 3819-3826, October, 2023.
Abstract: Identifying intentions is a critical task for comprehending the actions of others, anticipating their future behavior, and making informed decisions. However, it is challenging to recognize intentions due to the uncertainty of future human activities and the complex influence factors. In this work, we explore the method of recognizing intentions alluded under human behaviors in the real world, aiming to boost intelligent systems’ ability to recognize potential intentions and understand human behaviors. We collect data containing realworld human behaviors before using a hand dispenser and a temperature scanner at the building entrance. These data are processed and labeled into intention categories. A questionnaire is conducted to survey the human ability in inferring the intentions of others. Skeleton data and image features are extracted inspired by the answer to the questionnaire. For skeleton-based intention recognition, we propose a spatial-temporal graph convolutional network that performs graph convolutions on both part-based graphs and adaptive graphs, which achieves the best performance compared with baseline models in the same task. A deep-learning-based method using multimodal features is proposed to automatically infer intentions, which is demonstrated to accurately predict intentions based on past behaviors in the experiment, significantly outperforming humans.
BibTeX:
@InProceedings{Shi2023,
  author    = {Jiaqi Shi and Chaoran Liu and Carlos Toshinori Ishi and Bowen Wu and Hiroshi Ishiguro},
  booktitle = {The 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023)},
  title     = {Recognizing Real-World Intentions using A Multimodal Deep Learning Approach with Spatial-Temporal Graph Convolutional Networks},
  year      = {2023},
  address   = {Detroit, Michigan, USA},
  day       = {1-5},
  doi       = {10.1109/IROS55552.2023.10341981},
  month     = oct,
  number    = {978-1-6654-9190-7/23/},
  pages     = {3819-3826},
  url       = {https://ieeexplore.ieee.org/document/10341981},
  abstract  = {Identifying intentions is a critical task for comprehending the actions of others, anticipating their future behavior, and making informed decisions. However, it is challenging to recognize intentions due to the uncertainty of future human activities and the complex influence factors. In this work, we explore the method of recognizing intentions alluded under human behaviors in the real world, aiming to boost intelligent systems’ ability to recognize potential intentions and understand human behaviors. We collect data containing realworld human behaviors before using a hand dispenser and a temperature scanner at the building entrance. These data are processed and labeled into intention categories. A questionnaire is conducted to survey the human ability in inferring the intentions of others. Skeleton data and image features are extracted inspired by the answer to the questionnaire. For skeleton-based intention recognition, we propose a spatial-temporal graph convolutional network that performs graph convolutions on both part-based graphs and adaptive graphs, which achieves the best performance compared with baseline models in the same task. A deep-learning-based method using multimodal features is proposed to automatically infer intentions, which is demonstrated to accurately predict intentions based on past behaviors in the experiment, significantly outperforming humans.},
}
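For readers unfamiliar with spatial-temporal graph convolutions on skeleton data, the block below sketches the basic operation (channel mixing per joint, aggregation over a normalized adjacency, then a temporal convolution). It follows the generic ST-GCN pattern only; the paper's part-based and adaptive graphs, and its multimodal fusion, are not reproduced, and all sizes are assumptions.

    # One spatial-temporal graph convolution block on skeleton data (sketch).
    import torch
    import torch.nn as nn

    class STGCNBlock(nn.Module):
        def __init__(self, in_ch, out_ch, adj, t_kernel=9):
            super().__init__()
            # Symmetrically normalized adjacency with self-loops, fixed here;
            # an "adaptive" variant would make it a learnable parameter.
            a = adj + torch.eye(adj.size(0))
            d = a.sum(dim=1).pow(-0.5)
            self.register_buffer("A", d[:, None] * a * d[None, :])
            self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)
            self.temporal = nn.Conv2d(out_ch, out_ch, kernel_size=(t_kernel, 1),
                                      padding=((t_kernel - 1) // 2, 0))
            self.relu = nn.ReLU()

        def forward(self, x):                    # x: (batch, C, T, V) joints
            x = self.spatial(x)                  # mix channels per joint
            x = torch.einsum("nctv,vw->nctw", x, self.A)  # aggregate neighbors
            return self.relu(self.temporal(x))   # convolve along time

    # Toy skeleton with 5 joints in a chain
    adj = torch.zeros(5, 5)
    for i in range(4):
        adj[i, i + 1] = adj[i + 1, i] = 1
    block = STGCNBlock(3, 16, adj)
    print(block(torch.randn(2, 3, 30, 5)).shape)   # -> (2, 16, 30, 5)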
中江文, アリザデカシュテバン・エへサン, ブオマル・ハニ・マハムード・ムハンマド, "有酸素運動タスクを介した実験的痛み受容の変化を客観的にとらえる試み", 日本ペインクリニック学会第57回学術集会, 佐賀市文化会館 SAGAアリーナ, 佐賀, July, 2023.
Abstract: 【背景と目的】痛みを伝えるのは難しい場合があり、客観的評価法が望まれる。我々は、健康被検者に実験的痛みを与え、データベース化することで、人工知能を用いたアルゴリズムで痛みを評価することに成功した。今回、健康被検者で有酸素運動により内因性オピオイドであるエンドルフィンの放出を介して痛みの緩和を促し、その評価を行った。 【方法】文書による同意を得た被検者に有酸素運動の前後に段階的な実験的熱刺激で痛みを誘発し前額部の6電極からの脳波データで、Pain Score(PS)を算出した。主観的評価として連続的VASを測定した。運動の前後に採血を行い、エンドルフィンを測定した。統計はJMP14.0 を用いて、相関解析と対応のある検定を行い、有意水準を5%とした。 【結果】評価対象となった77名のタスク前、タスク後ともに、VASとPSには有意な相関を認めた(相関係数Pre0.558 Post0.494)。46度でタスク前後でVASで有意差を認め、PSは有意傾向を呈した。エンドルフィンは有意な上昇を認めた。 【考察】脳波によって痛みの客観的評価は可能であり、有酸素運動でのエンドルフィンの放出を介した痛みの低下を評価し得ると考えられた。
BibTeX:
@InProceedings{中江文2023a,
  author    = {中江文 and アリザデカシュテバン・エへサン and ブオマル・ハニ・マハムード・ムハンマド},
  booktitle = {日本ペインクリニック学会第57回学術集会},
  title     = {有酸素運動タスクを介した実験的痛み受容の変化を客観的にとらえる試み},
  year      = {2023},
  address   = {佐賀市文化会館 SAGAアリーナ, 佐賀},
  day       = {13-15},
  month     = jul,
  url       = {https://site.convention.co.jp/pain57/},
  abstract  = {【背景と目的】痛みを伝えるのは難しい場合があり、客観的評価法が望まれる。我々は、健康被検者に実験的痛みを与え、データベース化することで、人工知能を用いたアルゴリズムで痛みを評価することに成功した。今回、健康被検者で有酸素運動により内因性オピオイドであるエンドルフィンの放出を介して痛みの緩和を促し、その評価を行った。 【方法】文書による同意を得た被検者に有酸素運動の前後に段階的な実験的熱刺激で痛みを誘発し前額部の6電極からの脳波データで、Pain Score(PS)を算出した。主観的評価として連続的VASを測定した。運動の前後に採血を行い、エンドルフィンを測定した。統計はJMP14.0 を用いて、相関解析と対応のある検定を行い、有意水準を5%とした。 【結果】評価対象となった77名のタスク前、タスク後ともに、VASとPSには有意な相関を認めた(相関係数Pre0.558 Post0.494)。46度でタスク前後でVASで有意差を認め、PSは有意傾向を呈した。エンドルフィンは有意な上昇を認めた。 【考察】脳波によって痛みの客観的評価は可能であり、有酸素運動でのエンドルフィンの放出を介した痛みの低下を評価し得ると考えられた。},
}
中江文, ブオマル・ハニ・マハムード・ムハンマド, アリザデカシュテバン・エへサン, 大西裕也, 住岡英信, 塩見昌裕, "コミュニケーションロボットとの触れ合いによる実験的痛みに対する抑制効果の検討", 日本ペインクリニック学会第57回学術集会, 佐賀市文化会館 SAGAアリーナ, 佐賀, July, 2023.
Abstract: 【背景】痛みの認知は孤独感をはじめとしたストレスにより影響を受けることが知られている。人と対話を行うコミュニケーションロボットはストレス軽減効果を示すものがあり、今回ロボットの痛み認知への影響を評価した。 【方法】文書による同意を得た被検者24名に対しTSA-II(Medoc、イスラエル)を用いて実験的熱刺激による痛みを与えた後、ロボットと20分間の対話によるコミュニケーションの時間を確保し、ロボットと触れた状況で同じ実験的熱刺激の痛みを与えた。痛みの評価はNRSとSFMPQ2で行い、ストレスの評価をSRS18用いた。さらに、痛み刺激中の脳波を取得し、人工知能を用いて痛みの度合いを表すPS(Pain Score)を算出した。統計は対応のある片側検定を用い、有意水準を5%と設定した 【結果と考察】最も痛いときのNRS、SFMPQ2の持続的な痛み、脳波をもとに計算したPSはいずれも、ロボット無条件に比べ、ロボット有条件で有意に低値であった(P<0.05)。SRS18は各スコア、合計とも実験前後で有意に低下した(p<0.05)。以上よりコミュニケーションロボットがそばにいることで、ストレスが緩和されることにより痛みの認知に良い影響を与えたと考えられた。
BibTeX:
@InProceedings{中江文2023,
  author    = {中江文 and ブオマル・ハニ・マハムード・ムハンマド and アリザデカシュテバン・エへサン and 大西裕也 and 住岡英信 and 塩見昌裕},
  booktitle = {日本ペインクリニック学会第57回学術集会},
  title     = {コミュニケーションロボットとの触れ合いによる実験的痛みに対する抑制効果の検討},
  year      = {2023},
  address   = {佐賀市文化会館 SAGAアリーナ, 佐賀},
  day       = {13-15},
  month     = jul,
  url       = {https://site.convention.co.jp/pain57/},
  abstract  = {【背景】痛みの認知は孤独感をはじめとしたストレスにより影響を受けることが知られている。人と対話を行うコミュニケーションロボットはストレス軽減効果を示すものがあり、今回ロボットの痛み認知への影響を評価した。 【方法】文書による同意を得た被検者24名に対しTSA-II(Medoc、イスラエル)を用いて実験的熱刺激による痛みを与えた後、ロボットと20分間の対話によるコミュニケーションの時間を確保し、ロボットと触れた状況で同じ実験的熱刺激の痛みを与えた。痛みの評価はNRSとSFMPQ2で行い、ストレスの評価をSRS18用いた。さらに、痛み刺激中の脳波を取得し、人工知能を用いて痛みの度合いを表すPS(Pain Score)を算出した。統計は対応のある片側検定を用い、有意水準を5%と設定した 【結果と考察】最も痛いときのNRS、SFMPQ2の持続的な痛み、脳波をもとに計算したPSはいずれも、ロボット無条件に比べ、ロボット有条件で有意に低値であった(P<0.05)。SRS18は各スコア、合計とも実験前後で有意に低下した(p<0.05)。以上よりコミュニケーションロボットがそばにいることで、ストレスが緩和されることにより痛みの認知に良い影響を与えたと考えられた。},
}
David Achanccaray, Hidenobu Sumioka, "Analysis of Physiological Response of Attention and Stress States in Teleoperation Performance of Social Tasks", In 45th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC2023), Sydney, Australia, July, 2023.
Abstract: Some studies addressed monitoring mental states by physiological responses analysis in robots’ teleoperation in traditional applications such as inspection and exploration; however, no study analyzed the physiological response during teleoperated social tasks to the best of our knowledge. We analyzed the physiological response of attention and stress mental states by computing the correlation between multimodal biomarkers and performance, pleasure-arousal scale, and workload. Physiological data were recorded during simulated teleoperated social tasks to induce mental states, such as normal, attention, and stress. The results showed that task performance and workload subscales achieved moderate correlations with some multimodal biomarkers. The correlations depended on the induced state. The cognitive workload was related to brain biomarkers of attention in the frontal and frontal-central regions. These regions were close to the frontopolar region, which is commonly reported in attentional studies. Thus, some multimodal biomarkers of attention and stress mental states could monitor or predict metrics related to the performance in teleoperation of social tasks.
BibTeX:
@InProceedings{Achanccaray2023a,
  author    = {David Achanccaray and Hidenobu Sumioka},
  booktitle = {45th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC2023)},
  title     = {Analysis of Physiological Response of Attention and Stress States in Teleoperation Performance of Social Tasks},
  year      = {2023},
  address   = {Sydney, Australia},
  day       = {24-27},
  month     = jul,
  url       = {https://embc.embs.org/2023/},
  abstract  = {Some studies addressed monitoring mental states by physiological responses analysis in robots’ teleoperation in traditional applications such as inspection and exploration; however, no study analyzed the physiological response during teleoperated social tasks to the best of our knowledge. We analyzed the physiological response of attention and stress mental states by computing the correlation between multimodal biomarkers and performance, pleasure-arousal scale, and workload. Physiological data were recorded during simulated teleoperated social tasks to induce mental states, such as normal, attention, and stress. The results showed that task performance and workload subscales achieved moderate correlations with some multimodal biomarkers. The correlations depended on the induced state. The cognitive workload was related to brain biomarkers of attention in the frontal and frontal-central regions. These regions were close to the frontopolar region, which is commonly reported in attentional studies. Thus, some multimodal biomarkers of attention and stress mental states could monitor or predict metrics related to the performance in teleoperation of social tasks.},
}
Changzeng Fu, Zhenghan Chen, Jiaqi Shi, Bowen Wu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "HAG: Hierarchical Attention with Graph Network for Dialogue Act Classification in Conversation", In 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2023), Rhodes Island, Greece, pp. 1-5, June, 2023.
Abstract: The prediction of dialogue acts (DA) labels on utterance level in conversations can be treated as a sequence labeling problem, which requires context- and speaker-aware semantic comprehension, especially for Japanese. In this study, we proposed a hierarchical attention with the graph neural network (HAG) to consider the contextual interconnections as well as the semantics carried by the sentence itself. Concretely, the model uses long-short term memory networks (LSTMs) to perform a context-aware encoding within a dialogue window. Then, we construct the context graph by aggregating the neighboring utterances. Subsequently, a speaker feature transformation is executed with a graph attention network (GAT) to calculate the interconnections, while a context-level feature selection is performed with a gated graph convolutional network (GatedGCN) to select the salient utterances that contribute to the DA classification. Finally, we merge the representations of different levels and conduct a classification with two dense layers. We evaluate the proposed model on the Japanese dialogue act dataset (JPS-DA). The experimental results show that our method outperforms the baselines.
BibTeX:
@InProceedings{Fu2023,
  author    = {Changzeng Fu and Zhenghan Chen and Jiaqi Shi and Bowen Wu and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {2023 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2023)},
  title     = {HAG: Hierarchical Attention with Graph Network for Dialogue Act Classification in Conversation},
  year      = {2023},
  address   = {Rhodes Island, Greece},
  day       = {4-9},
  doi       = {10.1109/ICASSP49357.2023.10096805},
  month     = jun,
  pages     = {1-5},
  url       = {https://ieeexplore.ieee.org/document/10096805/authors#authors},
  abstract  = {The prediction of dialogue acts (DA) labels on utterance level in conversations can be treated as a sequence labeling problem, which requires context- and speaker-aware semantic comprehension, especially for Japanese. In this study, we proposed a hierarchical attention with the graph neural network (HAG) to consider the contextual interconnections as well as the semantics carried by the sentence itself. Concretely, the model uses long-short term memory networks (LSTMs) to perform a context-aware encoding within a dialogue window. Then, we construct the context graph by aggregating the neighboring utterances. Subsequently, a speaker feature transformation is executed with a graph attention network (GAT) to calculate the interconnections, while a context-level feature selection is performed with a gated graph convolutional network (GatedGCN) to select the salient utterances that contribute to the DA classification. Finally, we merge the representations of different levels and conduct a classification with two dense layers. We evaluate the proposed model on the Japanese dialogue act dataset (JPS-DA). The experimental results show that our method outperforms the baselines.},
  keywords  = {Semantics, Oral communication, Logic gates, Signal processing, Feature extraction, Graph neural networks, Encoding},
}
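As a much-reduced illustration of the context-aware stage described above, the sketch below encodes the utterances of a dialogue window with a bidirectional LSTM and attends over them before a dense classifier. The GAT and GatedGCN graph stages of the actual HAG model are omitted, and every name and dimension is an assumption.

    # Context-aware dialogue-act classifier over a window of utterances (sketch).
    import torch
    import torch.nn as nn

    class WindowDAClassifier(nn.Module):
        def __init__(self, utt_dim=256, hidden=128, n_acts=10):
            super().__init__()
            self.context = nn.LSTM(utt_dim, hidden, batch_first=True,
                                   bidirectional=True)
            self.attn = nn.Linear(2 * hidden, 1)
            self.classify = nn.Sequential(nn.Linear(4 * hidden, hidden),
                                          nn.ReLU(), nn.Linear(hidden, n_acts))

        def forward(self, window):               # (batch, n_utts, utt_dim)
            h, _ = self.context(window)          # context-aware encodings
            w = torch.softmax(self.attn(h), dim=1)
            pooled = (w * h).sum(dim=1)          # attention over the window
            target = h[:, -1]                    # current utterance encoding
            return self.classify(torch.cat([target, pooled], dim=-1))

    model = WindowDAClassifier()
    logits = model(torch.randn(8, 5, 256))       # 8 windows of 5 utterances
    print(logits.shape)                          # -> (8, 10)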
David Achanccaray, Hidenobu Sumioka, "Physiological Analysis of Attention and Stress States in Teleoperation of Social Tasks", In 2023 IEEE International Conference on Robotics and Automation, Workshop on 'Avatar-Symbiotic Society'(ICRA2023 Workshop MW25), London, UK (online), pp. 1-2, May, 2023.
Abstract: Some studies addressed monitoring mental states by physiological responses analysis in robots’ teleoperation in traditional applications such as inspection and exploration; however, no study analyzed the physiological response during teleoperated social tasks to the best of our knowledge. We explored the physiological response of mental states during the simulated teleoperation of social tasks to determine its influence by analyzing statistical differences/correlations in/between multimodal biomarkers, performance metrics, emotional scale, workload, presence, and VR sickness symptoms among tasks to induce normal, attention, and stress mental states. Thus, this work revealed significant correlations to support some biomarkers as predictors of workload, presence, and VR sickness in simulated teleoperated social tasks.
BibTeX:
@InProceedings{Achanccaray2023,
  author    = {David Achanccaray and Hidenobu Sumioka},
  booktitle = {2023 IEEE International Conference on Robotics and Automation, Workshop on 'Avatar-Symbiotic Society'(ICRA2023 Workshop MW25)},
  title     = {Physiological Analysis of Attention and Stress States in Teleoperation of Social Tasks},
  year      = {2023},
  address   = {London, UK (online)},
  day       = {29-2},
  month     = may,
  pages     = {1-2},
  url       = {https://www.icra2023.org/welcome},
  abstract  = {Some studies addressed monitoring mental states by physiological responses analysis in robots’ teleoperation in traditional applications such as inspection and exploration; however, no study analyzed the physiological response during teleoperated social tasks to the best of our knowledge. We explored the physiological response of mental states during the simulated teleoperation of social tasks to determine its influence by analyzing statistical differences/correlations in/between multimodal biomarkers, performance metrics, emotional scale, workload, presence, and VR sickness symptoms among tasks to induce normal, attention, and stress mental states. Thus, this work revealed significant correlations to support some biomarkers as predictors of workload, presence, and VR sickness in simulated teleoperated social tasks.},
  keywords  = {Teleoperation, Social tasks, Workload, Emotions, Presence, Virtual reality sickness, Virtual reality},
}
岸本千恵, 住岡英信, 塩見昌裕, 中江文, "パルスオキシメーターを用いた運動負荷に対するモニタリング運動療法への応用の可能性", 第34回日本臨床モニター学会総会, 高知県立県民文化ホール, 高知, April, 2023.
Abstract: 【背景】IASPはIntegrative Pain Careの1年として、運動療法にも着目している。パルスオキシメーターは一般へ普及し、スマートウォッチの機能として搭載され、身近になりつつある。今回我々は運動負荷に伴う身体変化をパルスオキシメーターで取得した。 【方法】大阪大学のイントラネットで公募した被験者に対し文書によるインフォームドコンセントを行い、ボクシングのトレーニング(ミット打ちとシャドーボクシング)を3分間3セット、インターバル7分で行った。パルスオキシメーター(日本光電製OLV-4202)でSpO2と脈拍数(PR)を、トレーニング前、直後、1分後、2分後、3分後を3セット分同様に記録した。統計はJMP14.0を用い、有意水準5%とした。 【結果と考察】PRは各セッション直後に有意に上昇、SpO2は有意に低下したが、その回復過程は7分のインターバルを設けたにもかかわらず、初回セッション後に比し、2、3回目では回復が得られないことが明らかになった。運動療法は有効性が証明されているものの、臨床現場で患者に促すのは難しい。これらのモニタリング指標が、患者に対する負荷の指標となり得る簡便な方法として確立できる可能性が考えられた。
BibTeX:
@InProceedings{岸本千恵2023,
  author    = {岸本千恵 and 住岡英信 and 塩見昌裕 and 中江文},
  booktitle = {第34回日本臨床モニター学会総会},
  title     = {パルスオキシメーターを用いた運動負荷に対するモニタリング運動療法への応用の可能性},
  year      = {2023},
  address   = {高知県立県民文化ホール, 高知},
  day       = {29-30},
  month     = apr,
  url       = {http://e-g.co.jp/jacm34/index.html},
  abstract  = {【背景】IASPはIntegrative Pain Careの1年として、運動療法にも着目している。パルスオキシメーターは一般へ普及し、スマートウォッチの機能として搭載され、身近になりつつある。今回我々は運動負荷に伴う身体変化をパルスオキシメーターで取得した。 【方法】大阪大学のイントラネットで公募した被験者に対し文書によるインフォームドコンセントを行い、ボクシングのトレーニング(ミット打ちとシャドーボクシング)を3分間3セット、インターバル7分で行った。パルスオキシメーター(日本光電製OLV-4202)でSpO2と脈拍数(PR)を、トレーニング前、直後、1分後、2分後、3分後を3セット分同様に記録した。統計はJMP14.0を用い、有意水準5%とした。 【結果と考察】PRは各セッション直後に有意に上昇、SpO2は有意に低下したが、その回復過程は7分のインターバルを設けたにもかかわらず、初回セッション後に比し、2、3回目では回復が得られないことが明らかになった。運動療法は有効性が証明されているものの、臨床現場で患者に促すのは難しい。これらのモニタリング指標が、患者に対する負荷の指標となり得る簡便な方法として確立できる可能性が考えられた。},
}
Hani Mahmoud Mohammed Bu-Omer, Ehsan Alizadeh Kashtiban, 中江文, "機械学習の手法を用いたリアルタイム脳波による痛みのモニタリング", 第34回日本臨床モニター学会総会, 高知県立県民文化ホール, 高知, April, 2023.
Abstract: 痛みは主観的な体験と定義づけられ、自己申告に頼って評価されている。痛みを伝えるのは時に困難で、鎮痛薬の投与量の調節の良否は医師の経験に左右される。痛みの治療の標準化を達成するために、客観的な痛みの推定ツールの開発が望まれている。我々は痛みの見える化を目指して、脳波を用いて客観的に評価する方法の開発を行ってきた。本研究の目的は、リアルタイムで痛みを見える化できるシステムを構築することであった。このシステムを構築するために、健康被検者(年齢層18-36歳)に対し実験的熱刺激による痛みを与えその脳波をデータベース化した。参加者は、Computerized Visual Analog Scale (CoVAS) を用いて、0から100の範囲で主観的な痛みを申告し、0から100の範囲で痛みのスコアを予測する機械学習モデルの学習に用いられた。オンラインで痛みのスコアを予測するために、8秒間のリアルタイムEEGデータ・シーケンスを前処理してオーバーラップしたエポックに分割し、各エポックから痛み特有 の特徴を抽出し、事前に学習した痛み予測用人口ニューラルネットワークに特徴を与えて痛みの量を予測しそれらをリアルタイムに表示することに成功したので報告する。
BibTeX:
@InProceedings{Bu-Omer2023,
  author    = {Hani Mahmoud Mohammed Bu-Omer and Ehsan Alizadeh Kashtiban and 中江文},
  booktitle = {第34回日本臨床モニター学会総会},
  title     = {機械学習の手法を用いたリアルタイム脳波による痛みのモニタリング},
  year      = {2023},
  address   = {高知県立県民文化ホール, 高知},
  day       = {29-30},
  month     = apr,
  url       = {http://e-g.co.jp/jacm34/index.html},
  abstract  = {痛みは主観的な体験と定義づけられ、自己申告に頼って評価されている。痛みを伝えるのは時に困難で、鎮痛薬の投与量の調節の良否は医師の経験に左右される。痛みの治療の標準化を達成するために、客観的な痛みの推定ツールの開発が望まれている。我々は痛みの見える化を目指して、脳波を用いて客観的に評価する方法の開発を行ってきた。本研究の目的は、リアルタイムで痛みを見える化できるシステムを構築することであった。このシステムを構築するために、健康被検者(年齢層18-36歳)に対し実験的熱刺激による痛みを与えその脳波をデータベース化した。参加者は、Computerized Visual Analog Scale (CoVAS) を用いて、0から100の範囲で主観的な痛みを申告し、0から100の範囲で痛みのスコアを予測する機械学習モデルの学習に用いられた。オンラインで痛みのスコアを予測するために、8秒間のリアルタイムEEGデータ・シーケンスを前処理してオーバーラップしたエポックに分割し、各エポックから痛み特有 の特徴を抽出し、事前に学習した痛み予測用人口ニューラルネットワークに特徴を与えて痛みの量を予測しそれらをリアルタイムに表示することに成功したので報告する。},
}
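The real-time pipeline sketched in the abstract (an 8-second EEG buffer split into overlapping epochs, per-epoch features fed to a trained network) can be illustrated as below. The sampling rate, band edges, and window sizes are assumptions, and the trained pain-score model itself is not shown.

    # Epoching and band-power feature extraction from a short EEG buffer (sketch).
    import numpy as np
    from scipy.signal import welch

    FS = 256                                   # assumed sampling rate (Hz)
    BANDS = {"delta": (1, 4), "theta": (4, 8),
             "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

    def epoch(buffer, win_s=2.0, step_s=1.0):
        """Split (channels, samples) into overlapping (n_epochs, channels, win)."""
        win, step = int(win_s * FS), int(step_s * FS)
        starts = range(0, buffer.shape[1] - win + 1, step)
        return np.stack([buffer[:, s:s + win] for s in starts])

    def band_powers(epochs):
        freqs, psd = welch(epochs, fs=FS, nperseg=epochs.shape[-1], axis=-1)
        feats = []
        for lo, hi in BANDS.values():
            mask = (freqs >= lo) & (freqs < hi)
            feats.append(psd[..., mask].mean(axis=-1))
        return np.stack(feats, axis=-1)          # (n_epochs, channels, n_bands)

    eeg = np.random.randn(6, 8 * FS)             # 6 frontal channels, 8 s buffer
    feats = band_powers(epoch(eeg))
    print(feats.shape)                           # e.g. (7, 6, 5)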
Ehsan Alizadeh Kashtiban, Hani Mahmoud Mohammed Bu-Omer, 中江文, "脳波を用いた持続する痛みの客観的評価", 第34回日本臨床モニター学会総会, 高知県立県民文化ホール, 高知, April, 2023.
Abstract: 背景】痛みは主観的な感覚と定義され、本人の申告に基づいて治療が決定されるが時にその判断が難しい場合があり、客観的評価法の開発が望まれている。我々は実験的痛み刺激中の脳波データからのアルゴリズムを開発してきた。脳波は年齢による差やノイズなどの個人差が大きく、従来の方法では精度を上げるために本人の脳波データを用いた補正を必須としていた。今回我々は補正プロセスを経ずに精度高く予測できる方法を開発したので報告する。 【方法】文書による同意を得た被検者21名を対象とした。全ての被験者に実験的熱刺激による痛みを与え脳波を取得した。それぞれの被検者の痛みの自己申告をVASで取得した。前額部の8極の脳波から人口知能を用いたアルゴリズムを使った痛みの予測値(Pain Score: PS)を48度と42度のポイントで最大値を算出し、対応のある検定でVASとPSの温度間の違いを比較した。 【結果】痛みの主観的評価(VAS)、脳波を用いた痛みの予測値(PS)ともに、48度と42度ですべて有意差を認めた(p<0.0001)。 【結語】今回の予測法で個人データによる補正を経ずにPSは痛みの違いを区別することができた。
BibTeX:
@InProceedings{Kashitiban2023,
  author    = {Ehsan Alizadeh Kashtiban and Hani Mahmoud Mohammed Bu-Omer and 中江文},
  booktitle = {第34回日本臨床モニター学会総会},
  title     = {脳波を用いた持続する痛みの客観的評価},
  year      = {2023},
  address   = {高知県立県民文化ホール, 高知},
  day       = {29-30},
  month     = apr,
  url       = {http://e-g.co.jp/jacm34/index.html},
  abstract  = {背景】痛みは主観的な感覚と定義され、本人の申告に基づいて治療が決定されるが時にその判断が難しい場合があり、客観的評価法の開発が望まれている。我々は実験的痛み刺激中の脳波データからのアルゴリズムを開発してきた。脳波は年齢による差やノイズなどの個人差が大きく、従来の方法では精度を上げるために本人の脳波データを用いた補正を必須としていた。今回我々は補正プロセスを経ずに精度高く予測できる方法を開発したので報告する。 【方法】文書による同意を得た被検者21名を対象とした。全ての被験者に実験的熱刺激による痛みを与え脳波を取得した。それぞれの被検者の痛みの自己申告をVASで取得した。前額部の8極の脳波から人口知能を用いたアルゴリズムを使った痛みの予測値(Pain Score: PS)を48度と42度のポイントで最大値を算出し、対応のある検定でVASとPSの温度間の違いを比較した。 【結果】痛みの主観的評価(VAS)、脳波を用いた痛みの予測値(PS)ともに、48度と42度ですべて有意差を認めた(p<0.0001)。 【結語】今回の予測法で個人データによる補正を経ずにPSは痛みの違いを区別することができた。},
}
張維娟, 住岡英信, 塩見昌裕, 中江文, "運動負荷による酸素飽和度、脈拍数の変化と酸化ストレスとの関係の検討~効果的な運動療法を目指して~", 第34回日本臨床モニター学会総会, 高知県立県民文化ホール, 高知, April, 2023.
Abstract: 【背景】IASPはIntegrative Pain Careの1年として、運動療法にも着目している。今回我々は運動負荷に伴う身体変化をパルスオキシメーターで評価し、酸化ストレスとの関係を明らかにすることを目的とした。 【方法】大阪大学のイントラネットで公募した被験者に対し文書によるインフォームドコンセントを行い、ボクシングトレーニングのミット打ちを3分間3セット、インターバル7分で行った。前後で採血を行いdROMsとBAPの測定を行った。ミット打ち参加前にパルスオキシメーター(日本光電製OLV-4202)でSpO2と脈拍数を、ミット打ち直後、1分後、2分後、3分後の3セット分を同様に記録した。統計はJMP14.0を用い相関をペアワイズ法で推定し、前後の変化は対応のある検定を行い、有意水準5%とした。【結果】運動負荷後にdROMs, BAPは有意に上昇した。運動負荷後のSpO2, 脈拍数とdROMs、 BAPの間に有意な相関を認めた。 【考察】ミット打ちは有酸素運動と無酸素運動の混合型の運動であり、酸素需要が亢進する。SpO2の低下、脈拍数の上昇とその回復過程とdROMs、BAPが有意な相関を認めたことから、より負荷のかかる状況で抗酸化機能で相対的に高まることが示唆された。
BibTeX:
@InProceedings{張維娟2023,
  author    = {張維娟 and 住岡英信 and 塩見昌裕 and 中江文},
  booktitle = {第34回日本臨床モニター学会総会},
  title     = {運動負荷による酸素飽和度、脈拍数の変化と酸化ストレスとの関係の検討~効果的な運動療法を目指して~},
  year      = {2023},
  address   = {高知県立県民文化ホール, 高知},
  day       = {29-30},
  month     = apr,
  url       = {http://e-g.co.jp/jacm34/index.htm},
  abstract  = {【背景】IASPはIntegrative Pain Careの1年として、運動療法にも着目している。今回我々は運動負荷に伴う身体変化をパルスオキシメーターで評価し、酸化ストレスとの関係を明らかにすることを目的とした。
【方法】大阪大学のイントラネットで公募した被験者に対し文書によるインフォームドコンセントを行い、ボクシングトレーニングのミット打ちを3分間3セット、インターバル7分で行った。前後で採血を行いdROMsとBAPの測定を行った。ミット打ち参加前にパルスオキシメーター(日本光電製OLV-4202)でSpO2と脈拍数を、ミット打ち直後、1分後、2分後、3分後の3セット分を同様に記録した。統計はJMP14.0を用い相関をペアワイズ法で推定し、前後の変化は対応のある検定を行い、有意水準5%とした。【結果】運動負荷後にdROMs, BAPは有意に上昇した。運動負荷後のSpO2, 脈拍数とdROMs、
BAPの間に有意な相関を認めた。
【考察】ミット打ちは有酸素運動と無酸素運動の混合型の運動であり、酸素需要が亢進する。SpO2の低下、脈拍数の上昇とその回復過程とdROMs、BAPが有意な相関を認めたことから、より負荷のかかる状況で抗酸化機能で相対的に高まることが示唆された。},
}
秋吉拓斗, 住岡英信, 熊崎博一, 中西惇也, 大西祐美, 前田洋佐, 前田沙和, 加藤博一, 塩見昌裕, "思考の整理を支援する対話ロボットの精神科デイケアにおける実践的な開発", 第41回日本社会精神医学会, 神戸商工会議所会館, 兵庫 (online), pp. 117, March, 2023.
Abstract: 【目的】患者が自身の気分や考えを整理し表出する作業は、自身の状態に関する理解や新しい気づきを得るために重要である。しかし、患者に思考の整理を促す機会を気軽に提供することは、専門家の数が限られている現状では困難である。そこで、本研究は対話ロボットによる患者の思考の整理を支援するシステムの実現を目的とする。 【方法】提案システムは認知行動療法のコラム法を基に、患者の悩みや目標について、内容や気分、行動、考え等の観点でロボットが患者に質問し、患者が回答することで整理を促す。提案システムの実践的な開発のために、ありまこうげんホスピタル・精神科デイケアの利用者に、ロボットとの対話実験に参加していただき、実験後の感想や要望を収集した。期間は令和3年10月から令和4年12月で、毎月1回から2回、合計16回実施し、各実験の間に改良した。 【結果】期間内に40人に参加してもらい、延べ114回対話を行った。複数回体験した患者の中には「コミュニケーションの良い練習になるだけでなく、気分や体調が良くなった」と感想を述べた方もいた。また、実験を観察したデイケアスタッフは「複数回対話した参加者は、ロボットとの会話により気分が良くなることを理解しているように見える」と述べた。 【結論】本稿では、コラム法を基に患者の思考の整理を支援するロボットを提案した。ロボットとの対話実験に参加した精神科デイケア利用者から得た利用時の留意点や改善点を踏まえ、実践的な開発を通して提案システムを実現した。今後、開発したロボットの利用による気分状態の改善や自己開示量の促進等の効果の検証を行う。
BibTeX:
@InProceedings{秋吉拓斗2023,
  author    = {秋吉拓斗 and 住岡英信 and 熊崎博一 and 中西惇也 and 大西祐美 and 前田洋佐 and 前田沙和 and 加藤博一 and 塩見昌裕},
  booktitle = {第41回日本社会精神医学会},
  title     = {思考の整理を支援する対話ロボットの精神科デイケアにおける実践的な開発},
  year      = {2023},
  address   = {神戸商工会議所会館, 兵庫 (online)},
  day       = {16-17},
  month     = mar,
  pages     = {117},
  url       = {http://jssp41.umin.jp/index.html},
  abstract  = {【目的】患者が自身の気分や考えを整理し表出する作業は、自身の状態に関する理解や新しい気づきを得るために重要である。しかし、患者に思考の整理を促す機会を気軽に提供することは、専門家の数が限られている現状では困難である。そこで、本研究は対話ロボットによる患者の思考の整理を支援するシステムの実現を目的とする。 【方法】提案システムは認知行動療法のコラム法を基に、患者の悩みや目標について、内容や気分、行動、考え等の観点でロボットが患者に質問し、患者が回答することで整理を促す。提案システムの実践的な開発のために、ありまこうげんホスピタル・精神科デイケアの利用者に、ロボットとの対話実験に参加していただき、実験後の感想や要望を収集した。期間は令和3年10月から令和4年12月で、毎月1回から2回、合計16回実施し、各実験の間に改良した。 【結果】期間内に40人に参加してもらい、延べ114回対話を行った。複数回体験した患者の中には「コミュニケーションの良い練習になるだけでなく、気分や体調が良くなった」と感想を述べた方もいた。また、実験を観察したデイケアスタッフは「複数回対話した参加者は、ロボットとの会話により気分が良くなることを理解しているように見える」と述べた。 【結論】本稿では、コラム法を基に患者の思考の整理を支援するロボットを提案した。ロボットとの対話実験に参加した精神科デイケア利用者から得た利用時の留意点や改善点を踏まえ、実践的な開発を通して提案システムを実現した。今後、開発したロボットの利用による気分状態の改善や自己開示量の促進等の効果の検証を行う。},
}
Shi Feng, 大和信夫, 石黒浩, 塩見昌裕, 住岡英信, "ロボットの赤ちゃんらしさは人にどんな影響を与えるのか? -赤ちゃんらしい見た目と声の影響調査-", 第27回 一般社団法人情報処理学会シンポジウム インタラクション2023, no. 1B-42, 学術総合センター内 一橋記念講堂, 東京, pp. 304-306, March, 2023.
Abstract: 本研究では,赤ちゃん型対話ロボットを用いた高齢者へのメンタルサポートを目指し,人から赤ちゃんとのインタラクションで見られるような行動や楽しみを引き出すための要素を調査するために,形状に着目した予備的検討を行った.乳児の音声を発する形状の異なる5種類のロボットを用意し,非高齢被験者に各ロボットと1分間遊んでもらった.実験後,それらのロボットに対して「遊びやすさ」,「楽しさ」,「赤ちゃんらしさ」を順位付けしてもらった.その結果,「赤ちゃんらしさ」「遊びやすさ」「楽しさ」ともに赤ちゃん形状をしているロボットが丸など他の形状よりも上位に選ばれることが示された.また,人が見せる赤ちゃんに対する特徴的な行動に着目した検討など,今後の研究の方向性についても議論する.
BibTeX:
@InProceedings{Feng2023,
  author    = {Shi Feng and 大和信夫 and 石黒浩 and 塩見昌裕 and 住岡英信},
  booktitle = {第27回 一般社団法人情報処理学会シンポジウム インタラクション2023},
  title     = {ロボットの赤ちゃんらしさは人にどんな影響を与えるのか? -赤ちゃんらしい見た目と声の影響調査-},
  year      = {2023},
  address   = {学術総合センター内 一橋記念講堂, 東京},
  day       = {8-10},
  month     = mar,
  number    = {1B-42},
  pages     = {304-306},
  url       = {https://www.interaction-ipsj.org/2023/},
  abstract  = {本研究では,赤ちゃん型対話ロボットを用いた高齢者へのメンタルサポートを目指し,人から赤ちゃんとのインタラクションで見られるような行動や楽しみを引き出すための要素を調査するために,形状に着目した予備的検討を行った.乳児の音声を発する形状の異なる5種類のロボットを用意し,非高齢被験者に各ロボットと1分間遊んでもらった.実験後,それらのロボットに対して「遊びやすさ」,「楽しさ」,「赤ちゃんらしさ」を順位付けしてもらった.その結果,「赤ちゃんらしさ」「遊びやすさ」「楽しさ」ともに赤ちゃん形状をしているロボットが丸など他の形状よりも上位に選ばれることが示された.また,人が見せる赤ちゃんに対する特徴的な行動に着目した検討など,今後の研究の方向性についても議論する.},
}
住岡英信, 大和信夫, 塩見昌裕, "家族とのつながりを強める「私の分身『ひろちゃん』ワークショップ」の提案", 第27回 一般社団法人情報処理学会シンポジウム インタラクション2023, no. 2P-27, 学術総合センター内 一橋記念講堂, 東京, pp. 782-783, March, 2023.
Abstract: 本研究では,遠く離れた親族,特に祖父母との社会的つながりを強めるための取り組みとして児童が自分の分身ロボットを制作し,それを親族に送る「私の分身ひろちゃんワークショップ」を提案する.実際に学童保育施設に通う小学校低学年の生徒に対してワークショップを行った結果,保護者や参加した生徒からはポジティブな反応が見られた.ロボットを送られた親族に対する調査など,ワークショップがもたらす効果検討についての今後の計画についても述べる.
BibTeX:
@InProceedings{住岡英信2023,
  author    = {住岡英信 and 大和信夫 and 塩見昌裕},
  booktitle = {第27回 一般社団法人情報処理学会シンポジウム インタラクション2023},
  title     = {家族とのつながりを強める「私の分身『ひろちゃん』ワークショップ」の提案},
  year      = {2023},
  address   = {学術総合センター内 一橋記念講堂, 東京},
  day       = {8-10},
  month     = mar,
  number    = {2P-27},
  pages     = {782-783},
  url       = {https://www.interaction-ipsj.org/2023/},
  abstract  = {本研究では,遠く離れた親族,特に祖父母との社会的つながりを強めるための取り組みとして児童が自分の分身ロボットを制作し,それを親族に送る「私の分身ひろちゃんワークショップ」を提案する.実際に学童保育施設に通う小学校低学年の生徒に対してワークショップを行った結果,保護者や参加した生徒からはポジティブな反応が見られた.ロボットを送られた親族に対する調査など,ワークショップがもたらす効果検討についての今後の計画についても述べる.},
}
Takuto Akiyoshi, Hidenobu Sumioka, Hirokazu Kumazaki, Junya Nakanishi, Masahiro Shiomi, Hirokazu Kato, "Practical Development of a Robot to Assist Cognitive Reconstruction in Psychiatric Day Care", In the 18th Annual ACM/IEEE International Conference on Human Robot Interaction (HRI 2023), Stockholm, Sweden (online), pp. 572-575, March, 2023.
Abstract: One of the important roles of social robots is to support mental health through conversations with people. In this study, we focused on the column method to support cognitive restructuring, which is also used as one of the programs in psychiatric day care, and to help patients think flexibly and understand their own characteristics. To develop a robot that assists psychiatric day care patients in organizing their thoughts about their worries and goals through conversation, we designed the robot’s conversation content based on the column method and implemented its autonomous conversation function. This paper reports on the preliminary experiments conducted to evaluate and improve the effectiveness of this prototype system in an actual psychiatric day care setting, and on the comments from participants in the experiments and day care staff.
BibTeX:
@InProceedings{Akiyoshi2023,
  author    = {Takuto Akiyoshi and Hidenobu Sumioka and Hirokazu Kumazaki and Junya Nakanishi and Masahiro Shiomi and Hirokazu Kato},
  booktitle = {the 18th Annual ACM/IEEE International Conference on Human Robot Interaction (HRI 2023)},
  title     = {Practical Development of a Robot to Assist Cognitive Reconstruction in Psychiatric Day Care},
  year      = {2023},
  address   = {Stockholm, Sweden (online)},
  day       = {13-16},
  month     = mar,
  pages     = {572-575},
  url       = {https://humanrobotinteraction.org/2023/},
  abstract  = {One of the important roles of social robots is to support mental health through conversations with people. In this study, we focused on the column method to support cognitive restructuring, which is also used as one of the programs in psychiatric day care, and to help patients think flexibly and understand their own characteristics. To develop a robot that assists psychiatric day care patients in organizing their thoughts about their worries and goals through conversation, we designed the robot’s conversation content based on the column method and implemented its autonomous conversation function. This paper reports on the preliminary experiments conducted to evaluate and improve the effectiveness of this prototype system in an actual psychiatric day care setting, and on the comments from participants in the experiments and day care staff.},
  keywords  = {human-robot interaction, cognitive reconstruction, stress-coping, psychiatric day care},
}
Carlos Toshinori Ishi, Chaoran Liu, Takashi Minato, "An attention-based sound selective hearing support system: evaluation by subjects with age-related hearing loss", In 2023 IEEE/SICE International Symposium on System Integration (SII2023), Atlanta, USA, pp. 1-6, January, 2023.
Abstract: In order to overcome the problems of current hearing aid devices, we proposed an attention-based sound selective hearing support system, where individual target and anti-target sound sources in the environment can be selected, and the target sources in the facing direction are emphasized. New functions were implemented by accounting for system’s practicability and usability. The performance of the proposed system was evaluated under different noise conditions, by elderly subjects with different levels of hearing loss. Intelligibility tests and subjective impressions in three-party dialogue interactions indicated clear improvements by using the proposed hearing support system under noisy conditions.
BibTeX:
@InProceedings{Ishi2023,
  author    = {Carlos Toshinori Ishi and Chaoran Liu and Takashi Minato},
  booktitle = {2023 IEEE/SICE International Symposium on System Integration (SII2023)},
  title     = {An attention-based sound selective hearing support system: evaluation by subjects with age-related hearing loss},
  year      = {2023},
  address   = {Atlanta, USA},
  day       = {17-20},
  doi       = {10.1109/SII55687.2023.10039165},
  month     = jan,
  pages     = {1-6},
  url       = {https://www.sice-si.org/conf/SII2023/index.html},
  abstract  = {In order to overcome the problems of current hearing aid devices, we proposed an attention-based sound selective hearing support system, where individual target and anti-target sound sources in the environment can be selected, and the target sources in the facing direction are emphasized. New functions were implemented by accounting for system’s practicability and usability. The performance of the proposed system was evaluated under different noise conditions, by elderly subjects with different levels of hearing loss. Intelligibility tests and subjective impressions in three-party dialogue interactions indicated clear improvements by using the proposed hearing support system under noisy conditions.},
}
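The source-selection idea described above can be caricatured as per-source gains applied to already-separated signals: selected targets near the facing direction are emphasized, anti-targets are suppressed, and the rest is attenuated. The sketch below is only that caricature, with made-up gains and angles; the system's actual separation and enhancement pipeline is not reproduced.

    # Toy per-source gain mixing for a selective hearing-support idea (sketch).
    import numpy as np

    def mix_with_attention(sources, azimuths_deg, facing_deg,
                           targets, anti_targets, beam_width_deg=30.0):
        """sources: (n_sources, samples); returns the re-mixed signal."""
        out = np.zeros(sources.shape[1])
        for i, (src, az) in enumerate(zip(sources, azimuths_deg)):
            if i in anti_targets:
                gain = 0.0                              # explicitly rejected
            elif i in targets:
                # emphasize targets close to the facing direction
                off = abs((az - facing_deg + 180) % 360 - 180)
                gain = 1.0 if off <= beam_width_deg else 0.5
            else:
                gain = 0.2                              # residual background
            out += gain * src
        return out

    srcs = np.random.randn(3, 16000)                    # 3 separated sources, 1 s
    mixed = mix_with_attention(srcs, azimuths_deg=[0, 60, -90],
                               facing_deg=5, targets={0, 1}, anti_targets={2})
    print(mixed.shape)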
Chaoran Liu, Carlos Toshinori Ishi, "A Smartphone Pose Auto-calibration Method using Hash-based DOA Estimation", In The 2023 IEEE/SICE International Symposium on System Integration (SII 2023), Atlanta, USA, pp. 1-6, January, 2023.
Abstract: This paper presents a method to utilize multiple off-the-shelf smartphones to localize speakers. For DOA (direction of arrival) estimation on every single smartphone, we proposed an O(1) complexity hash table-based modified phase transform (PHAT) estimation method without scanning all possible directions to achieve lower CPU usage and longer battery life. Additionally, to increase DOA estimation accuracy, we measured two types of smartphone impulse responses and made them publicly available. In the auto-calibration process,each smartphone detects a pure tone emitted from another smartphone’s speaker. Assuming that all smartphones are on the same desktop surface, each smartphone’s 2D position and rotation are estimated using these detected DOAs and the speaker position relative to their central point. A bundle adjustment-like optimization method is employed to reduce the re-projection error in this process. After auto-calibration, we can easily integrate the DOAs found by each smartphone and estimate the speaker’s position using simple triangulation. The experimental results show that the proposed hash table-based DOA estimation method and 2D version bundle adjustment can perform auto-calibration precisely.
BibTeX:
@InProceedings{Liu2023,
  author    = {Chaoran Liu and Carlos Toshinori Ishi},
  booktitle = {The 2023 IEEE/SICE International Symposium on System Integration (SII 2023)},
  title     = {A Smartphone Pose Auto-calibration Method using Hash-based DOA Estimation},
  year      = {2023},
  address   = {Atlanta, USA},
  day       = {17-20},
  doi       = {10.1109/SII55687.2023.10039085},
  month     = jan,
  pages     = {1-6},
  url       = {https://www.sice-si.org/conf/SII2023/approved_special_session.html},
  abstract  = {This paper presents a method to utilize multiple off-the-shelf smartphones to localize speakers. For DOA (direction of arrival) estimation on every single smartphone, we proposed an O(1) complexity hash table-based modified phase transform (PHAT) estimation method without scanning all possible directions to achieve lower CPU usage and longer battery life. Additionally, to increase DOA estimation accuracy, we measured two types of smartphone impulse responses and made them publicly available. In the auto-calibration process,each smartphone detects a pure tone emitted from another smartphone’s speaker. Assuming that all smartphones are on the same desktop surface, each smartphone’s 2D position and rotation are estimated using these detected DOAs and the speaker position relative to their central point. A bundle adjustment-like optimization method is employed to reduce the re-projection error in this process. After auto-calibration, we can easily integrate the DOAs found by each smartphone and estimate the speaker’s position using simple triangulation. The experimental results show that the proposed hash table-based DOA estimation method and 2D version bundle adjustment can perform auto-calibration precisely.},
}
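For context on the PHAT weighting the abstract builds on, here is the textbook GCC-PHAT delay estimator between two channels in NumPy. It is not the paper's O(1) hash-table DOA search nor its bundle-adjustment calibration; it only shows the phase-transform cross-correlation those methods start from, on a synthetic delayed signal.

    # Classic GCC-PHAT time-delay estimation between two channels (sketch).
    import numpy as np

    def gcc_phat(sig, ref, fs, max_tau=None, interp=16):
        n = sig.shape[0] + ref.shape[0]
        SIG = np.fft.rfft(sig, n=n)
        REF = np.fft.rfft(ref, n=n)
        R = SIG * np.conj(REF)
        R /= np.abs(R) + 1e-15                      # PHAT: keep phase only
        cc = np.fft.irfft(R, n=interp * n)
        max_shift = interp * n // 2
        if max_tau is not None:
            max_shift = min(int(interp * fs * max_tau), max_shift)
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        shift = np.argmax(np.abs(cc)) - max_shift
        return shift / float(interp * fs)           # delay in seconds

    fs = 16000
    t = np.arange(fs) / fs
    ref = np.sin(2 * np.pi * 440 * t)
    sig = np.roll(ref, 20)                          # simulate a 20-sample delay
    print(gcc_phat(sig, ref, fs))                   # ~ 20 / 16000 = 1.25 ms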
中江文, 住岡英信, 中井 國博, "PainVisionのいたみ研究への応用 Aβ刺激の特性と痛みの数値化を生かした戦略", 第44回日本疼痛学会, 長良川国際会議場, 岐阜, December, 2022.
Abstract: 知覚痛覚定量分析装置Pain Visionはこれまで本人の主観により表現されていた感覚を機器上で数値化するという点で画期的な医療機器である。Visual Analogue Scale(VAS)やNumerical Rating Scale(NRS)も数値で表す点では同じであるが、VAS, NRSは本人がその数値そのものを直接申告するのに対し、PainVisionは 徐々に上昇する電気刺激に対して痛みに対応する電流値を記録することで痛みを数値化することから、VAS, NRSで時に問題となる、上限に近い数値(例えばNRS10)を申告した後、痛みが増悪した場合申告に困る(NRS12等が定義上存在しない)状況がない。さらに、Aβ線維への特異的な刺激であることから、刺激が痛みを伴いにくい特徴がある。その特性を用いた我々の代表的な研究を紹介する。 健康被検者とAutism Spectrum Disorder(ASD)患者に対し、最小感知電流値(初めて電流を感じた電流値)、痛み対応電流値(刺激を初めて痛みと感じた電流値)、痛み耐性電流値(刺激に対して初めて耐えがたいと感じた電流値)を測定し、各時点でのVASを測定し、ASD痛みに対する感受性の特徴を明らかにすることができた。 鎮静による痛みへの影響を明らかにする試みでは、痛み耐性電流値を用いて、鎮静前に測定した痛み耐性電流値の刺激を鎮静後に与えることでNRS,VASを用いた痛みの主観的評価は下がるが、痛みに対する自律神経反応を反映するPerfusion Indexの変化は不変であったことから、痛みの認知過程には高次脳機能がかかわっていることを改めて確かめることができた。 PainVision&210;が採用しているAβ線維刺激は比較的不快感が少ない刺激で、熱による不快な痛み刺激のようにAδ線維やC線維に対する刺激が限定的である一方で痛みの強さは、不快感とは独立して評価可能である。我々は、刺激の強さを合わせた状況で不快感の強い熱刺激とPainVisionの刺激を用いて刺激中の脳活動を脳波を用いて比較した。その結果、同じ強さの刺激でも不快感の強い刺激と弱い刺激では脳活動に違いがあることを明らかにすることができた。その成果から、痛みでも治療対象とすべきな不快な刺激を中心に脳活動の把握を進めていく必要があることを確認できた。
BibTeX:
@InProceedings{中江文2022b,
  author    = {中江文 and 住岡英信 and 中井 國博},
  booktitle = {第44回日本疼痛学会},
  title     = {PainVisionのいたみ研究への応用 Aβ刺激の特性と痛みの数値化を生かした戦略},
  year      = {2022},
  address   = {長良川国際会議場, 岐阜},
  day       = {2-3},
  etitle    = {Application of PainVision to the Study of Pain Strategies utilizing the characteristics of Aβ stimulation and quantification of pain},
  month     = dec,
  url       = {https://www.congre.co.jp/jasp2022/index.html},
  abstract  = {知覚痛覚定量分析装置Pain Visionはこれまで本人の主観により表現されていた感覚を機器上で数値化するという点で画期的な医療機器である。Visual Analogue Scale(VAS)やNumerical Rating Scale(NRS)も数値で表す点では同じであるが、VAS, NRSは本人がその数値そのものを直接申告するのに対し、PainVisionは 徐々に上昇する電気刺激に対して痛みに対応する電流値を記録することで痛みを数値化することから、VAS, NRSで時に問題となる、上限に近い数値(例えばNRS10)を申告した後、痛みが増悪した場合申告に困る(NRS12等が定義上存在しない)状況がない。さらに、Aβ線維への特異的な刺激であることから、刺激が痛みを伴いにくい特徴がある。その特性を用いた我々の代表的な研究を紹介する。 健康被検者とAutism Spectrum Disorder(ASD)患者に対し、最小感知電流値(初めて電流を感じた電流値)、痛み対応電流値(刺激を初めて痛みと感じた電流値)、痛み耐性電流値(刺激に対して初めて耐えがたいと感じた電流値)を測定し、各時点でのVASを測定し、ASD痛みに対する感受性の特徴を明らかにすることができた。 鎮静による痛みへの影響を明らかにする試みでは、痛み耐性電流値を用いて、鎮静前に測定した痛み耐性電流値の刺激を鎮静後に与えることでNRS,VASを用いた痛みの主観的評価は下がるが、痛みに対する自律神経反応を反映するPerfusion Indexの変化は不変であったことから、痛みの認知過程には高次脳機能がかかわっていることを改めて確かめることができた。 PainVisionÒが採用しているAβ線維刺激は比較的不快感が少ない刺激で、熱による不快な痛み刺激のようにAδ線維やC線維に対する刺激が限定的である一方で痛みの強さは、不快感とは独立して評価可能である。我々は、刺激の強さを合わせた状況で不快感の強い熱刺激とPainVisionの刺激を用いて刺激中の脳活動を脳波を用いて比較した。その結果、同じ強さの刺激でも不快感の強い刺激と弱い刺激では脳活動に違いがあることを明らかにすることができた。その成果から、痛みでも治療対象とすべきな不快な刺激を中心に脳活動の把握を進めていく必要があることを確認できた。},
}
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "C-CycleTransGAN: A Non-parallel Controllable Cross-gender Voice Conversion Model with CycleGAN and Transformer", In Asia Pacific Signal and Information Processing Association Annual Summit and Conference 2022 (APSIPA ASC 2022), no. 978-616-590-477-3, Chiang Mai, Thailand, pp. 1-7, November, 2022.
Abstract: In this study, we propose a conversion intensity controllable model for the cross-gender voice conversion (VC). In particular, we combine the CycleGAN and transformer module, and build a condition embedding network as an intensity controller. The model is firstly pre-trained with self-supervised learning on the single-gender voice reconstruction task, with the condition set to male-to-male or female-to-female. Then, we fine-tune the model on the cross-gender voice conversion task after the pretraining is completed, with the condition set to male-to-female or female-to-male. In the testing procedure, the condition is expected to be employed as a controllable parameter (scale) to adjust the conversion intensity. The proposed method was evaluated on the Voice Conversion Challenge dataset and compared to two baselines (CycleGAN, CycleTransGAN) with objective and subjective evaluations. The results show that our proposed model is able to equip the model with an additional function of cross-gender controllability without hurting the voice conversion performance.
BibTeX:
@InProceedings{Fu2022c,
  author    = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {Asia Pacific Signal and Information Processing Association Annual Summit and Conference 2022 (APSIPA ASC 2022)},
  title     = {C-CycleTransGAN: A Non-parallel Controllable Cross-gender Voice Conversion Model with CycleGAN and Transformer},
  year      = {2022},
  address   = {Chiang Mai, Thailand},
  day       = {7-10},
  doi       = {10.23919/APSIPAASC55919.2022.9979821},
  month     = nov,
  number    = {978-616-590-477-3},
  pages     = {1-7},
  url       = {https://www.apsipa2022.org/},
  abstract  = {In this study, we propose a conversion intensity controllable model for the cross-gender voice conversion (VC). In particular, we combine the CycleGAN and transformer module, and build a condition embedding network as an intensity controller. The model is firstly pre-trained with self-supervised learning on the single-gender voice reconstruction task, with the condition set to male-to-male or female-to-female. Then, we fine-tune the model on the cross-gender voice conversion task after the pretraining is completed, with the condition set to male-to-female or female-to-male. In the testing procedure, the condition is expected to be employed as a controllable parameter (scale) to adjust the conversion intensity. The proposed method was evaluated on the Voice Conversion Challenge dataset and compared to two baselines (CycleGAN, CycleTransGAN) with objective and subjective evaluations. The results show that our proposed model is able to equip the model with an additional function of cross-gender controllability without hurting the voice conversion performance.},
  keywords  = {controllable cross-gender voice conversion, cycle-consistent adversarial networks, transformer},
}
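The controllable condition embedding described above can be pictured as a learned condition vector that is scaled by a user-chosen intensity before being injected into the converter. The toy module below shows that mechanism only; the converter is a placeholder, not the CycleGAN+Transformer model of the paper, and the condition ids are hypothetical.

    # Toy condition-embedding intensity control for a converter (sketch).
    import torch
    import torch.nn as nn

    class ConditionedConverter(nn.Module):
        def __init__(self, feat_dim=80, cond_dim=16, n_conditions=4):
            super().__init__()
            self.cond_embed = nn.Embedding(n_conditions, cond_dim)  # e.g. m2m, f2f, m2f, f2m
            self.net = nn.Sequential(nn.Linear(feat_dim + cond_dim, 256),
                                     nn.ReLU(), nn.Linear(256, feat_dim))

        def forward(self, feats, condition_id, intensity=1.0):
            # feats: (batch, time, feat_dim); intensity scales the condition.
            cond = self.cond_embed(condition_id) * intensity          # (batch, cond_dim)
            cond = cond[:, None, :].expand(-1, feats.size(1), -1)
            return self.net(torch.cat([feats, cond], dim=-1))

    vc = ConditionedConverter()
    mel = torch.randn(2, 100, 80)
    cond_id = torch.tensor([2, 2])            # hypothetical "male-to-female"
    half_strength = vc(mel, cond_id, intensity=0.5)
    print(half_strength.shape)                # -> (2, 100, 80)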
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "A CONTROLLABLE CROSS-GENDER VOICE CONVERSION FOR SOCIAL ROBOT", In ACII2022 WORKSHOP ON AFFECTIVE HUMAN-ROBOT INTERACTION (AHRI), online, October, 2022.
Abstract: In this study, we propose a conversion intensity controllable model for voice conversion (VC). In particular, we combine the CycleGAN and transformer module, and build a condition embedding network as a control parameter. The model is first pre-trained with self-supervised learning on the voice reconstruction task, with the condition set to male-to-male or female-to-female. Then, we retrain the model on the cross-gender voice conversion task after the pretraining is completed, with the condition set to male-to-female or female-to-male. In the testing procedure, the condition is expected to be employed as a controllable parameter (scale). The proposed method was evaluated on the Voice Conversion Challenge dataset and compared to two baselines (CycleGAN, CycleTransGAN) with objective and subjective evaluations. The results show that our proposed model is able to convert voice with competitive performance, with the additional function of cross-gender controllability.
BibTeX:
@InProceedings{Fu2022b,
  author    = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {ACII2022 WORKSHOP ON AFFECTIVE HUMAN-ROBOT INTERACTION (AHRI)},
  title     = {A CONTROLLABLE CROSS-GENDER VOICE CONVERSION FOR SOCIAL ROBOT},
  year      = {2022},
  address   = {online},
  day       = {17},
  month     = oct,
  url       = {https://www.a-hri.me/},
  abstract  = {In this study, we propose a conversion intensity controllable model for voice conversion (VC). In particular, we combine the CycleGAN and transformer module, and build a condition embedding network as a control parameter. The model is first pre-trained with self-supervised learning on the voice reconstruction task, with the condition set to male-to-male or female-to-female. Then, we retrain the model on the cross-gender voice conversion task after the pretraining is completed, with the condition set to male-to-female or female-to-male. In the testing procedure, the condition is expected to be employed as a controllable parameter (scale). The proposed method was evaluated on the Voice Conversion Challenge dataset and compared to two baselines (CycleGAN, CycleTransGAN) with objective and subjective evaluations. The results show that our proposed model is able to convert voice with competitive performance, with the additional function of cross-gender controllability.},
  keywords  = {speech conversion, cycle-consistent adversarial networks},
}
Qi An, Akito Tanaka, Kazuto Nakashima, Hidenobu Sumioka, Masahiro Shiomi, Ryo Kurazume, "Understanding Humanitude Care for Sit-to-stand Motion by Wearable Sensors", In 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC2022), Prague, Czech Republic, pp. 1866-1871, October, 2022.
Abstract: Assisting patients with dementia is an important social issue, and currently a multi-modal care technique called Humanitude is attracting attention. In Humanitude, it is important to have the patient stand up by utilizing their own motor functions as much as possible. The Humanitude care technique encourages caregivers to increase the area of contact with patients during the sit-to-stand motion, but this care technique is not well understood by novice caregivers. Here, we developed smock-type wearable sensors to measure proximity between caregivers and care recipients while assisting sit-to-stand motion. A measurement experiment was conducted to evaluate how proximity differs when caregivers perform Humanitude care or simulate novice care. In addition, the effects of the different care techniques on the center of mass (CoM) trajectory and muscle activity of the care recipient were investigated. As a result, it was found that caregivers tend to bring their top and middle trunk closer in Humanitude care than in simulated novice care. Furthermore, the CoM trajectory and muscle activity under Humanitude care became more similar to those when the care recipient stood up independently than under novice care. These results validate the effectiveness of Humanitude care and provide important aspects for learning Humanitude techniques.
BibTeX:
@InProceedings{An2022,
  author    = {Qi An and Akito Tanaka and Kazuto Nakashima and Hidenobu Sumioka and Masahiro Shiomi and Ryo Kurazume},
  booktitle = {2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC2022)},
  title     = {Understanding Humanitude Care for Sit-to-stand Motion by Wearable Sensors},
  year      = {2022},
  address   = {Prague, Czech Republic},
  day       = {9-12},
  month     = oct,
  pages     = {1866-1871},
  url       = {https://ieeesmc2022.org/},
  abstract  = {Assisting patients with dementia is an important social issue, and currently a multi-modal care technique called Humanitude is attracting attention. In Humanitude, it is important to have the patient stand up by utilizing their own motor functions as much as possible. The Humanitude care technique encourages caregivers to increase the area of contact with patients during the sit-to-stand motion, but this care technique is not well understood by novice caregivers. Here, we developed smock-type wearable sensors to measure proximity between caregivers and care recipients while assisting sit-to-stand motion. A measurement experiment was conducted to evaluate how proximity differs when caregivers perform Humanitude care or simulate novice care. In addition, the effects of the different care techniques on the center of mass (CoM) trajectory and muscle activity of the care recipient were investigated. As a result, it was found that caregivers tend to bring their top and middle trunk closer in Humanitude care than in simulated novice care. Furthermore, the CoM trajectory and muscle activity under Humanitude care became more similar to those when the care recipient stood up independently than under novice care. These results validate the effectiveness of Humanitude care and provide important aspects for learning Humanitude techniques.},
  keywords  = {Wearable tactile sensor, Humanitude care, Sit-to-stand},
}
Bowen Wu, Jiaqi Shi, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "Controlling the Impression of Robots via GAN-based Gesture Generation", In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022), Kyoto International Conference Center, Kyoto, pp. 9288-9295, October, 2022.
Abstract: As a type of body language, gestures can largely affect the impressions of human-like robots perceived by users. Recent data-driven approaches to the generation of co-speech gestures have successfully promoted the naturalness of produced gestures. These approaches also possess greater generalizability to work under various contexts than rule-based methods. However, most have no direct control over the human impressions of robots. The main obstacle is that creating a dataset that covers various impression labels is not trivial. In this study, based on previous findings in cognitive science on robot impressions, we present a heuristic method to control them without manual labeling, and demonstrate its effectiveness on a virtual agent and partially on a humanoid robot through subjective experiments with 50 participants.
BibTeX:
@InProceedings{Wu2022,
  author    = {Bowen Wu and Jiaqi Shi and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022)},
  title     = {Controlling the Impression of Robots via GAN-based Gesture Generation},
  year      = {2022},
  address   = {Kyoto International Conference Center, Kyoto},
  day       = {23-27},
  month     = oct,
  pages     = {9288-9295},
  url       = {https://iros2022.org/},
  abstract  = {As a type of body language, gestures can largely affect the impressions of human-like robots perceived by users. Recent data-driven approaches to the generation of co-speech gestures have successfully promoted the naturalness of produced gestures. These approaches also possess greater generalizability to work under various contexts than rule-based methods. However, most have no direct control over the human impressions of robots. The main obstacle is that creating a dataset that covers various impression labels is not trivial. In this study, based on previous findings in cognitive science on robot impressions, we present a heuristic method to control them without manual labeling, and demonstrate its effectiveness on a virtual agent and partially on a humanoid robot through subjective experiments with 50 participants.},
}
Ryuichiro Higashinaka, Takashi Minato, Kurima Sakai, Tomo Funayama, Hiromitsu Nishizaki, Takuya Nagai, "Dialogue Robot Competition for Developing Android Robot with Hospitality", In 2022 IEEE 11th Global Conference on Consumer Electronics (GCCE 2022), Senri Life Science Center, Osaka, October, 2022.
Abstract: To promote the research and development of an android robot with hospitality, we organized the Dialogue Robot Competition where the task is to serve a customer in a travel destination recommendation task. The robot acts as a salesperson at a travel agency and needs to help customers choose their desired destinations. This paper describes the task setting, software distributed for the competition, evaluation procedure, and results of the preliminary and final rounds of the competition.
BibTeX:
@InProceedings{Higashinaka2022,
  author    = {Ryuichiro Higashinaka and Takashi Minato and Kurima Sakai and Tomo Funayama and Hiromitsu Nishizaki and Takuya Nagai},
  booktitle = {2022 IEEE 11th Global Conference on Consumer Electronics (GCCE 2022)},
  title     = {Dialogue Robot Competition for Developing Android Robot with Hospitality},
  year      = {2022},
  address   = {Senri Life Science Center, Osaka},
  day       = {18-21},
  doi       = {10.1109/GCCE56475.2022.10014410},
  month     = oct,
  url       = {https://www.ieee-gcce.org/2022/index.html},
  abstract  = {To promote the research and development of an android robot with hospitality, we organized the Dialogue Robot Competition where the task is to serve a customer in a travel destination recommendation task. The robot acts as a salesperson at a travel agency and needs to help customers choose their desired destinations. This paper describes the task setting, software distributed for the competition, evaluation procedure, and results of the preliminary and final rounds of the competition.},
  keywords  = {Human-robot interaction, spoken-language processing, competition},
}
Aya Nakae, Ehsan Alizadeh Kashtiban, Tetsuro Honda, Chie Kishimoto, Kunihiro Nakai, "Objective evaluation of pain from experimental pressure stimulation by EEG", In IASP 2022 World Congress on Pain, Toronto, Canada, September, 2022.
Abstract: As pain is a subjective symptom and communicating its intensity is sometimes difficult, prescribing appropriate amounts of analgesics is often challenging for doctors. To avoid the misuse of analgesics, a system for the objective evaluation of pain would contribute to standardizing pain treatment. Using pooled EEG data from healthy volunteers under experimental heat pain stimulation, the absolute amplitudes, frequency power, and frequency coherence were amplified, the features of the EEG were then extracted, and an EEG-based pain score algorithm based on a regression model was developed. The aim of this study is to objectively evaluate experimental ischemic pain at two different grades with our EEG-based pain score algorithm. The qualities of pain evoked by the KAATSU MASTER, which can control the amount of blood flow and imitate ischemic pain, were numbness, throbbing pain, shooting pain, aching pain, and electric-shock pain. Different levels of experimental pressure pain were successfully discriminated from the electroencephalogram data using machine learning techniques.
BibTeX:
@InProceedings{Nakae2022a,
  author    = {Aya Nakae and Ehsan Alizadeh Kashtiban and Tetsuro Honda and Chie Kishimoto and Kunihiro Nakai},
  booktitle = {IASP 2022 World Congress on Pain},
  title     = {Objective evaluation of pain from experimental pressure stimulation by EEG},
  year      = {2022},
  address   = {Toronto, Canada},
  day       = {19-23},
  month     = sep,
  url       = {https://iaspworldcongress2022.org/},
  abstract  = {As pain is a subjective symptom and communicating its intensity is sometimes difficult, prescribing appropriate amounts of analgesics is often challenging for doctors. To avoid the misuse of analgesics, a system for the objective evaluation of pain would contribute to standardizing pain treatment. Using pooled EEG data from healthy volunteers under experimental heat pain stimulation, the absolute amplitudes, frequency power, and frequency coherence were amplified, the features of the EEG were then extracted, and an EEG-based pain score algorithm based on a regression model was developed. The aim of this study is to objectively evaluate experimental ischemic pain at two different grades with our EEG-based pain score algorithm. The qualities of pain evoked by the KAATSU MASTER, which can control the amount of blood flow and imitate ischemic pain, were numbness, throbbing pain, shooting pain, aching pain, and electric-shock pain. Different levels of experimental pressure pain were successfully discriminated from the electroencephalogram data using machine learning techniques.},
}
Taiken Shintani, Carlos Toshinori Ishi, Hiroshi Ishiguro, "Expression of Personality by Gaze Movements of an Android Robot in Multi-Party Dialogues", In 31st IEEE International Conference on Robot & Human Interactive Communication (RO-MAN 2022), Naples, Italy, pp. 1534-1541, August, 2022.
Abstract: In this study, we describe an improved version of our proposed model to generate gaze movements (eye and head movements) of a dialogue robot in multi-party dialogue situations, and investigated how the impressions change for models created by data of speakers with different personalities. For that purpose, we used multimodal three-party dialogue data, and first analyzed the distributions of (1) the gaze target (towards dialogue partners or gaze aversion), (2) the gaze duration, and (3) the eyeball direction during gaze aversion. We then generated gaze behaviors in an android robot (Nikola) with the data of two people who were found to have distinctive personalities, and conducted subjective evaluation experiments. Results showed that a significant difference was found in the perceived personalities between the motions generated by the two models.
BibTeX:
@InProceedings{Shintani2022,
  author    = {Taiken Shintani and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {31st IEEE International Conference on Robot \& Human Interactive Communication (RO-MAN 2022)},
  title     = {Expression of Personality by Gaze Movements of an Android Robot in Multi-Party Dialogues},
  year      = {2022},
  address   = {Naples, Italy},
  day       = {29-2},
  month     = aug,
  pages     = {1534-1541},
  url       = {http://www.smile.unina.it/ro-man2022/},
  abstract  = {In this study, we describe an improved version of our proposed model to generate gaze movements (eye and head movements) of a dialogue robot in multi-party dialogue situations, and investigated how the impressions change for models created by data of speakers with different personalities. For that purpose, we used multimodal three-party dialogue data, and first analyzed the distributions of (1) the gaze target (towards dialogue partners or gaze aversion), (2) the gaze duration, and (3) the eyeball direction during gaze aversion. We then generated gaze behaviors in an android robot (Nikola) with the data of two people who were found to have distinctive personalities, and conducted subjective evaluation experiments. Results showed that a significant difference was found in the perceived personalities between the motions generated by the two models.},
}
阿部かおり, 島津研三, 冨田興一, 田港見布江, 宮前誠, 井國博, 中江文, "乳がん術後痛の後ろ向き調査~日本における乳がん術後遷延性疼痛の実態調査~", 日本ペインクリニック学会 第56回学術集会, 東京国際フォーラム, 東京 (online), July, 2022.
Abstract: 【背景】乳がん手術で術後遷延性疼痛となる割合は高く、罹患年齢が若いので長期に苦痛となってしまうが、日本において十分な調査がなされていない。術後遷延性疼痛のリスクファクターの一つとして、周術期の不十分な鎮痛が挙げられている。今回我々は、乳がん術後痛の後ろ向き調査を実施したので、その中間結果を報告する。 【方法】文書による同意を得た患者92名についてのデータを解析した。手術当日、翌日、1週間後、1か月後、3か月後、6か月後、1年後、2年後についてNumerical Rating Scale(NRS)の回答を得た。統計はピアソンの相関分析を行った。 【結果】痛みのある割合は、手術当日、翌日、1週間後、1か月後、3か月後、1年後、2年後で(単位%)84、87、85、88、79、64、51であった。相関係数は、翌日‐1週間(0.883)、1週間後‐1か月後(0.802)、1か月後‐3か月後(0.840)、3か月後‐1年後(0.848)、1年後‐2年後(0.687)であった。 【考察】術後経過で相関を認めることから、術後早期の痛みの管理で術後遷延性疼痛の発生率を抑えられる可能性があると考えられた。
BibTeX:
@InProceedings{中江文2022a,
  author    = {阿部かおり and 島津研三 and 冨田興一 and 田港見布江 and 宮前誠 and 井國博 and 中江文},
  booktitle = {日本ペインクリニック学会 第56回学術集会},
  title     = {乳がん術後痛の後ろ向き調査~日本における乳がん術後遷延性疼痛の実態調査~},
  year      = {2022},
  address   = {東京国際フォーラム, 東京 (online)},
  day       = {7-9},
  month     = jul,
  url       = {https://site2.convention.co.jp/pain56/},
  abstract  = {【背景】乳がん手術で術後遷延性疼痛となる割合は高く、罹患年齢が若いので長期に苦痛となってしまうが、日本において十分な調査がなされていない。術後遷延性疼痛のリスクファクターの一つとして、周術期の不十分な鎮痛が挙げられている。今回我々は、乳がん術後痛の後ろ向き調査を実施したので、その中間結果を報告する。 【方法】文書による同意を得た患者92名についてのデータを解析した。手術当日、翌日、1週間後、1か月後、3か月後、6か月後、1年後、2年後についてNumerical Rating Scale(NRS)の回答を得た。統計はピアソンの相関分析を行った。 【結果】痛みのある割合は、手術当日、翌日、1週間後、1か月後、3か月後、1年後、2年後で(単位%)84、87、85、88、79、64、51であった。相関係数は、翌日‐1週間(0.883)、1週間後‐1か月後(0.802)、1か月後‐3か月後(0.840)、3か月後‐1年後(0.848)、1年後‐2年後(0.687)であった。 【考察】術後経過で相関を認めることから、術後早期の痛みの管理で術後遷延性疼痛の発生率を抑えられる可能性があると考えられた。},
}
中井國博, 宮前誠, 中江文, "痛み判定補助システムPMS-1を用いた全身麻酔術後の痛みの客観的評価の探索的治験", 日本ペインクリニック学会 第56回学術集会, 東京国際フォーラム, 東京 (online), July, 2022.
Abstract: 【背景】痛みの評価は患者の申告に頼っており、客観的な評価のできる機器は存在しない。患者の痛みの表出には個人差があり、時に鎮痛薬の過少あるいは過剰投与につながる問題がある。今回我々は患者の脳波に基づいた痛みを数値化するシステムPMS-1(PaMeLa株式会社)を用いた全身麻酔手術後の患者の痛みに対する客観的評価の探索的治験を行ったのでその結果を報告する。 【方法】文書による同意を得た、全身麻酔で手術を受けた患者30名に対し、本探索的治験を行った。PMS-1は脳波計から脳波信号を取り込み解析処理しPain Score(PS)を0-100の値で算出するシステムである。脳波は前額部6電極で測定した。手術室より帰室後の鎮痛薬投薬前、投与1時間後、2時間後において、PMS-1が表示するPSとVAS、NRSを測定した。統計は対応のあるt検定と相関分析を行った。 【結果と考察】鎮痛薬の投与が行われた21名について分析した。投薬前‐投与1時間後、投薬前‐投与2時間後において、PSはVAS、NRSと同様に有意に変化した(p<0.05)。投与前‐2時間後においてPSはVAS、NRSと有意な相関を認めた。
BibTeX:
@InProceedings{中江文2022,
  author    = {中井國博 and 宮前誠 and 中江文},
  booktitle = {日本ペインクリニック学会 第56回学術集会},
  title     = {痛み判定補助システムPMS-1を用いた全身麻酔術後の痛みの客観的評価の探索的治験},
  year      = {2022},
  address   = {東京国際フォーラム, 東京 (online)},
  day       = {7-9},
  month     = jul,
  url       = {https://site2.convention.co.jp/pain56/},
  abstract  = {【背景】痛みの評価は患者の申告に頼っており、客観的な評価のできる機器は存在しない。患者の痛みの表出には個人差があり、時に鎮痛薬の過少あるいは過剰投与につながる問題がある。今回我々は患者の脳波に基づいた痛みを数値化するシステムPMS-1(PaMeLa株式会社)を用いた全身麻酔手術後の患者の痛みに対する客観的評価の探索的治験を行ったのでその結果を報告する。 【方法】文書による同意を得た、全身麻酔で手術を受けた患者30名に対し、本探索的治験を行った。PMS-1は脳波計から脳波信号を取り込み解析処理しPain Score(PS)を0-100の値で算出するシステムである。脳波は前額部6電極で測定した。手術室より帰室後の鎮痛薬投薬前、投与1時間後、2時間後において、PMS-1が表示するPSとVAS、NRSを測定した。統計は対応のあるt検定と相関分析を行った。 【結果と考察】鎮痛薬の投与が行われた21名について分析した。投薬前‐投与1時間後、投薬前‐投与2時間後において、PSはVAS、NRSと同様に有意に変化した(p<0.05)。投与前‐2時間後においてPSはVAS、NRSと有意な相関を認めた。},
}
Xinyue Li, Carlos Toshinori Ishi, Changzeng Fu, Ryoko Hayashi, "Prosodic and Voice Quality Analyses of Filled Pauses in Japanese Spontaneous Conversation by Chinese learners and Japanese Native Speakers", In Speech Prosody 2022, Lisbon, Portugal, pp. 550-554, May, 2022.
Abstract: The present study documents (1) how Japanese native speakers and L1-Chinese learners of L2 Japanese differ in the production of filled pauses during spontaneous conversations, and (2) how the vowels of filled pauses and ordinary lexical items differ in spontaneous conversation. Prosodic and voice quality measurements were extracted from vowels in filled pauses and ordinary lexical items produced by Japanese native speakers and Chinese learners of L2 Japanese. Statistical results revealed that there are significant differences in prosodic and voice quality measurements including duration, F0 mean, intensity, spectral tilt-related indices, jitter and shimmer, (1) between Japanese native speakers and Chinese learners of L2 Japanese, as well as (2) between filled pauses and ordinary lexical items. In addition, random forest analysis was conducted to examine how much the measurements contribute to the classification of filled pauses and ordinary lexical items. Results indicate that duration and intensity play the most significant role, while voice quality related features make a secondary contribution to the classification. Results also suggest that the filled pause production patterns of Chinese learners of L2 Japanese are influenced by L1 background.
BibTeX:
@InProceedings{Li2022a,
  author    = {Xinyue Li and Carlos Toshinori Ishi and Changzeng Fu and Ryoko Hayashi},
  booktitle = {Speech Prosody 2022},
  title     = {Prosodic and Voice Quality Analyses of Filled Pauses in Japanese Spontaneous Conversation by Chinese learners and Japanese Native Speakers},
  year      = {2022},
  address   = {Lisbon, Portugal},
  day       = {23-26},
  doi       = {10.21437/SpeechProsody.2022-112},
  month     = may,
  pages     = {550-554},
  url       = {http://labfon.letras.ulisboa.pt/sp2022/about.html},
  abstract  = {The present study documents (1) how Japanese native speakers and L1-Chinese learners of L2 Japanese differ in the production of filled pauses during spontaneous conversations, and (2) how the vowels of filled pauses and ordinary lexical items differ in spontaneous conversation. Prosodic and voice quality measurements were extracted from vowels in filled pauses and ordinary lexical items produced by Japanese native speakers and Chinese learners of L2 Japanese. Statistical results revealed that there are significant differences in prosodic and voice quality measurements including duration, F0 mean, intensity, spectral tilt-related indices, jitter and shimmer, (1) between Japanese native speakers and Chinese learners of L2 Japanese, as well as (2) between filled pauses and ordinary lexical items. In addition, random forest analysis was conducted to examine how much the measurements contribute to the classification of filled pauses and ordinary lexical items. Results indicate that duration and intensity play the most significant role, while voice quality related features make a secondary contribution to the classification. Results also suggest that the filled pause production patterns of Chinese learners of L2 Japanese are influenced by L1 background.},
  keywords  = {filled pauses, second language acquisition, spontaneous conversation, prosody, voice quality},
}
Ehsan Alizadeh Kashtiban, Tetsuro Honda, Chie Kishimoto, Yuya Onishi, Hidenobu Sumioka, Masahiro Shiomi, Aya Nakae, "THE EFFECT OF BEING HUGGED BY A ROBOT ON PAIN", In 12th Congress of the European Pain Federation(EFIC2022), online, April, 2022.
Abstract: As human-to-human contact is limited due to COVID-19, the role of robots is gaining attention. It has been reported that hugging can reduce people's mental stress and alleviate pain. Pain is a subjective symptom; however, it is sometimes difficult to prescribe analgesics based on subjective complaints. The development of an objective evaluation method is desired. We have developed an algorithm based on EEG data with experimental pain stimuli. The purpose of this study was to objectively evaluate the effect of hugging by a robot on pain, using a pain score (PS). The PS could allow us to objectively evaluate the effect of hugging by the robot on pain.
BibTeX:
@InProceedings{Alizadeh2022,
  author    = {Ehsan Alizadeh Kashtiban and Tetsuro Honda and Chie Kishimoto and Yuya Onishi and Hidenobu Sumioka and Masahiro Shiomi and Aya Nakae},
  booktitle = {12th Congress of the European Pain Federation(EFIC2022)},
  title     = {THE EFFECT OF BEING HUGGED BY A ROBOT ON PAIN},
  year      = {2022},
  address   = {online},
  day       = {27-30},
  month     = apr,
  url       = {https://efic-congress.org/},
  abstract  = {As human-to-human contact is limited due to COVID-19, the role of robots is gaining attention. It has been reported that hugging can reduce people's mental stress and alleviate pain. Pain is a subjective symptom; however, it is sometimes difficult to prescribe analgesics based on subjective complaints. The development of an objective evaluation method is desired. We have developed an algorithm based on EEG data with experimental pain stimuli. The purpose of this study was to objectively evaluate the effect of hugging by a robot on pain, using a pain score (PS). The PS could allow us to objectively evaluate the effect of hugging by the robot on pain.},
}
Aya Nakae, Ikan Chou, Tetsuro Honda, Chie Kishimoto, Hidenobu Sumioka, Yuya Onishi, Masahiro Shiomi, "CAN ROBOT’S HUG ALLEVIATE HUMAN PAIN?", In 12th Congress of the European Pain Federation(EFIC2022), Dublin (online), April, 2022.
Abstract: As human-to-human contact is limited due to COVID-19, the role of robots is gaining attention. It has been reported that hugging can reduce people's mental stress and alleviate pain. It has also been reported that growth hormone secretion is decreased in fibromyalgia patients and may be involved in the pain mechanism. We investigated the possibility that a robot's hug could alleviate pain, along with changes in the secretion of growth hormone (GH). The results show that a robot's hug has the potential to alleviate human pain. Its effects may be regulated via GH secretion.
BibTeX:
@InProceedings{Nakae2022,
  author    = {Aya Nakae and Ikan Chou and Tetsuro Honda and Chie Kishimoto and Hidenobu Sumioka and Yuya Onishi and Masahiro Shiomi},
  booktitle = {12th Congress of the European Pain Federation(EFIC2022)},
  title     = {CAN ROBOT’S HUG ALLEVIATE HUMAN PAIN?},
  year      = {2022},
  address   = {Dublin (online)},
  day       = {27-30},
  month     = apr,
  url       = {https://efic-congress.org/},
  abstract  = {As human-to-human contact is limited due to COVID-19, the role of robots is gaining attention. It has been reported that hugging can reduce people's mental stress and alleviate pain. It has also been reported that growth hormone secretion is decreased in fibromyalgia patients and may be involved in the pain mechanism. We investigated the possibility that a robot's hug could alleviate pain, along with changes in the secretion of growth hormone (GH). The results show that a robot's hug has the potential to alleviate human pain. Its effects may be regulated via GH secretion.},
}
住岡英信, 倉爪亮, 塩見昌裕, "マスクを用いたユマニチュード訓練用近接センシングシステムの開発", 第26回一般社団法人情報処理学会シンポジウム インタラクション2022, no. 5D09, オンライン, pp. 670-672, March, 2022.
Abstract: ユマニチュードに基づく認知症ケアにおいて、介護者と被介護者の距離関係は重要な要素の一つであり、介護者の顔を被介護者の顔に20cm程度まで近づけ、極めて近距離からアイコンタクトを確立することが求められる。通常、健常者同士のコミュニケーションでは、これほど近くまで顔を近づけることがないため、この距離間隔を把握、維持することは、専門的知識に乏しい一般の人々には困難が伴う。このため、ユマニチュードの習得には有識者による専門的なトレーニングを受ける必要があり、普及の障害の一つとなっていた。そこで本研究では、学習者が介護、被介護者間の距離感を自ら学習できる、マスクに簡単に後付けできる近接センサを開発する。顔の近接状態を自動で検出、通知することで、誰でも簡単にユマニチュードのトレーニングを行えるシステムを目指す。
BibTeX:
@InProceedings{住岡英信2022,
  author    = {住岡英信 and 倉爪亮 and 塩見昌裕},
  booktitle = {第26回一般社団法人情報処理学会シンポジウム インタラクション2022},
  title     = {マスクを用いたユマニチュード訓練用近接センシングシステムの開発},
  year      = {2022},
  address   = {オンライン},
  day       = {2},
  month     = mar,
  number    = {5D09},
  pages     = {670-672},
  url       = {https://www.interaction-ipsj.org/2022/},
  abstract  = {ユマニチュードに基づく認知症ケアにおいて、介護者と被介護者の距離関係は重要な要素の一つであり、介護者の顔を被介護者の顔に20cm程度まで近づけ、極めて近距離からアイコンタクトを確立することが求められる。通常、健常者同士のコミュニケーションでは、これほど近くまで顔を近づけることがないため、この距離間隔を把握、維持することは、専門的知識に乏しい一般の人々には困難が伴う。このため、ユマニチュードの習得には有識者による専門的なトレーニングを受ける必要があり、普及の障害の一つとなっていた。そこで本研究では、学習者が介護、被介護者間の距離感を自ら学習できる、マスクに簡単に後付けできる近接センサを開発する。顔の近接状態を自動で検出、通知することで、誰でも簡単にユマニチュードのトレーニングを行えるシステムを目指す。},
}
秋吉拓斗, 住岡英信, 熊崎博一, 中西淳也, 塩見昌裕, 加藤博一, "精神科デイケアにおける考え方の整理を支援するロボットの開発に向けた印象調査", 第26回一般社団法人情報処理学会シンポジウム インタラクション2022, no. 1D04, online, pp. 146-149, February, 2022.
Abstract: 社会的なコミュニケーションロボットの重要な役割の一つは,人との対話によって人のメンタルヘルスの支援を行うことである.本研究では,精神科デイケアにおいてプログラムの一環として取り入れられている考え方の整理を支援する「コラム法」により,患者が柔軟な考え方や自身の特性を理解することに着目した.本研究では音声対話によって考え方の整理を支援するロボットの開発に向け,コラム法に基づいたロボットの対話内容を設計し,自律的な音声対話機能を実装した.本論文では,実際の精神科デイケアにおいて本プロトタイプシステムの有効性を評価し改善するために行った予備実験について報告する.
BibTeX:
@InProceedings{秋吉拓斗2022,
  author    = {秋吉拓斗 and 住岡英信 and 熊崎博一 and 中西淳也 and 塩見昌裕 and 加藤博一},
  booktitle = {第26回一般社団法人情報処理学会シンポジウム インタラクション2022},
  title     = {精神科デイケアにおける考え方の整理を支援するロボットの開発に向けた印象調査},
  year      = {2022},
  address   = {online},
  day       = {28},
  month     = feb,
  number    = {1D04},
  pages     = {146-149},
  url       = {https://www.interaction-ipsj.org/2022/},
  abstract  = {社会的なコミュニケーションロボットの重要な役割の一つは,人との対話によって人のメンタルヘルスの支援を行うことである.本研究では,精神科デイケアにおいてプログラムの一環として取り入れられている考え方の整理を支援する「コラム法」により,患者が柔軟な考え方や自身の特性を理解することに着目した.本研究では音声対話によって考え方の整理を支援するロボットの開発に向け,コラム法に基づいたロボットの対話内容を設計し,自律的な音声対話機能を実装した.本論文では,実際の精神科デイケアにおいて本プロトタイプシステムの有効性を評価し改善するために行った予備実験について報告する.},
}
住岡英信, 安琪, 倉爪亮, 塩見昌裕, "ユマニチュードによる立ち上がり動作介助の理解に向けた接触・近接インタラクション計測システムの開発", 第26回一般社団法人情報処理学会シンポジウム インタラクション2022, no. 1D09, オンライン, pp. 168-170, February, 2022.
Abstract: ユマニチュードに基づく認知症ケアにおいて立たせる技術は重要な要素である 。被介護者を立たせる際、通常の介護では腰や腕を掴み、上に引っ張るが、ユマニチュードでは、介護者と被介護者の胸を密着させ、両者の足を結ぶ多角形内に重心を移動させて立ち上がりを介助する。この際、立ち上がるに従い胸や腹間の距離が近づくように引き上げるのがコツであり、体の密着度や距離を計測することで正しい動作かを評価できる。しかし、両者の距離が近いため、オクルージョンが発生しやすく、既存の画像による姿勢推定では、これらの情報を計測することが困難であった。そこで本研究では、ユマニチュードの技術に基づく立ち上がり介助動作訓練システムの実現を目指し、簡単に装着できる服型の近接・接触センサを開発した。これにより、立ち上がり動作を介助する際 の介護者と被介護者両者の体の密着度や距離の計測が可能となる。本提案システムは、動作の良し悪しを実時間で評価し学習者へ提示することを可能にするため、習得が難しいといわれるユマニチュードの訓練支援システムの実現につながる。
BibTeX:
@InProceedings{住岡英信2022a,
  author    = {住岡英信 and 安琪 and 倉爪亮 and 塩見昌裕},
  booktitle = {第26回一般社団法人情報処理学会シンポジウム インタラクション2022},
  title     = {ユマニチュードによる立ち上がり動作介助の理解に向けた接触・近接インタラクション計測システムの開発},
  year      = {2022},
  address   = {オンライン},
  day       = {28},
  month     = feb,
  number    = {1D09},
  pages     = {168-170},
  url       = {https://www.interaction-ipsj.org/2022/},
  abstract  = {ユマニチュードに基づく認知症ケアにおいて立たせる技術は重要な要素である 。被介護者を立たせる際、通常の介護では腰や腕を掴み、上に引っ張るが、ユマニチュードでは、介護者と被介護者の胸を密着させ、両者の足を結ぶ多角形内に重心を移動させて立ち上がりを介助する。この際、立ち上がるに従い胸や腹間の距離が近づくように引き上げるのがコツであり、体の密着度や距離を計測することで正しい動作かを評価できる。しかし、両者の距離が近いため、オクルージョンが発生しやすく、既存の画像による姿勢推定では、これらの情報を計測することが困難であった。そこで本研究では、ユマニチュードの技術に基づく立ち上がり介助動作訓練システムの実現を目指し、簡単に装着できる服型の近接・接触センサを開発した。これにより、立ち上がり動作を介助する際 の介護者と被介護者両者の体の密着度や距離の計測が可能となる。本提案システムは、動作の良し悪しを実時間で評価し学習者へ提示することを可能にするため、習得が難しいといわれるユマニチュードの訓練支援システムの実現につながる。},
}
Takashi Takuma, Koki Haruno, Kosuke Yamada, Hidenobu Sumioka, Takashi Minato, Masahiro Shiomi, "Stretchable Multi-modal Sensor using Capacitive Cloth for Soft Mobile Robot Passing through Gap", In 2021 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 2021), Sanya, China (online), pp. 1960-1967, December, 2021.
Abstract: A challenging issue for soft robots is developing soft sensors that measure such non-contact information as the distance between a robot and obstacles as well as contact information such as the stretch length under external force. Another issue is to apply the sensor to a mobile robot to measure the topography of the pathway. We adopt capacitive cloth, which contains conductive and insulation layers, and measure not only such contact information as the robot’s deformation but also such non-contact information as the distance between the cloth and objects. Because the cloth cannot stretch though it deforms, it is processed with a Kirigami structure and embedded into a silicone plate. This paper shows the cloth’s basic specifications by measuring the relationship between the capacitance and the stretch length, which corresponds to the contact information, and the one between the capacitance and distance, which corresponds to the non-contact information. The cloth is also embedded in a soft mobile robot that passes through a narrow gap while making contact with it. The pathway’s shape is estimated by observing the profile of the cloth’s capacitance using contact information. From the results of the first experiment, which measured the stretch length, we observed a strong correlation between the stretch length and the capacitance. In the second experiment on non-contact information and distance, the capacitance greatly changed when the conductive material was close to the cloth, although less conductive material did not greatly affect the capacitance. In the last experiment, in which we embedded the cloth into the soft robot, the gap’s height and the length of the pathway were detected by observing the profile of the cloth’s capacitance. These results suggest that capacitive cloth has multi-modal sensing ability, including both conventional contact and novel non-contact information.
BibTeX:
@InProceedings{Takuma2021,
  author    = {Takashi Takuma and Koki Haruno and Kosuke Yamada and Hidenobu Sumioka and Takashi Minato and Masahiro Shiomi},
  booktitle = {2021 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 2021)},
  title     = {Stretchable Multi-modal Sensor using Capacitive Cloth for Soft Mobile Robot Passing through Gap},
  year      = {2021},
  address   = {Sanya, China (online)},
  day       = {27-31},
  month     = dec,
  pages     = {1960-1967},
  url       = {https://ieee-robio.org/2021/},
  abstract  = {A challenging issue for soft robots is developing soft sensors that measure such non-contact information as the distance between a robot and obstacles as well as contact information such as the stretch length under external force. Another issue is to apply the sensor to a mobile robot to measure the topography of the pathway. We adopt capacitive cloth, which contains conductive and insulation layers, and measure not only such contact information as the robot’s deformation but also such non-contact information as the distance between the cloth and objects. Because the cloth cannot stretch though it deforms, it is processed with a Kirigami structure and embedded into a silicone plate. This paper shows the cloth’s basic specifications by measuring the relationship between the capacitance and the stretch length, which corresponds to the contact information, and the one between the capacitance and distance, which corresponds to the non-contact information. The cloth is also embedded in a soft mobile robot that passes through a narrow gap while making contact with it. The pathway’s shape is estimated by observing the profile of the cloth’s capacitance using contact information. From the results of the first experiment, which measured the stretch length, we observed a strong correlation between the stretch length and the capacitance. In the second experiment on non-contact information and distance, the capacitance greatly changed when the conductive material was close to the cloth, although less conductive material did not greatly affect the capacitance. In the last experiment, in which we embedded the cloth into the soft robot, the gap’s height and the length of the pathway were detected by observing the profile of the cloth’s capacitance. These results suggest that capacitive cloth has multi-modal sensing ability, including both conventional contact and novel non-contact information.},
}
Bowen Wu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "Probabilistic Human-like Gesture Synthesis from Speech using GRU-based WGAN", In The GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Workshop 2021 (an official workshop of ACM ICMI’21), Virtual, pp. 194-201, October, 2021.
Abstract: Gestures are crucial for increasing the human-likeness of agents and robots to achieve smoother interactions with humans. The realization of an effective system to model human gestures, which are matched with the speech utterances, is necessary to be embedded in these agents. In this work, we propose a GRU-based autoregressive generation model for gesture generation, which is trained with a CNN-based discriminator in an adversarial manner using a WGAN-based learning algorithm. The model is trained to output the rotation angles of the joints in the upper body, and implemented to animate a CG avatar. The motions synthesized by the proposed system are evaluated via an objective measure and a subjective experiment, showing that the proposed model outperforms a baseline model which is trained by a state-of-the-art GAN-based algorithm, using the same dataset. This result reveals that it is essential to develop a stable and robust learning algorithm for training gesture generation models. Our code can be found in https://github.com/wubowen416/gesture-generation.
BibTeX:
@InProceedings{Wu2021,
  author    = {Bowen Wu and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  title     = {Probabilistic Human-like Gesture Synthesis from Speech using GRU-based WGAN},
  booktitle = {The GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Workshop 2021 (an official workshop of ACM ICMI’21)},
  year      = {2021},
  pages     = {194-201},
  address   = {Virtual},
  month     = oct,
  day       = {22},
  doi       = {10.1145/3461615.3485407},
  url       = {https://dl.acm.org/doi/10.1145/3461615.3485407},
  abstract  = {Gestures are crucial for increasing the human-likeness of agents and robots to achieve smoother interactions with humans. The realization of an effective system to model human gestures, which are matched with the speech utterances, is necessary to be embedded in these agents. In this work, we propose a GRU-based autoregressive generation model for gesture generation, which is trained with a CNN-based discriminator in an adversarial manner using a WGAN-based learning algorithm. The model is trained to output the rotation angles of the joints in the upper body, and implemented to animate a CG avatar. The motions synthesized by the proposed system are evaluated via an objective measure and a subjective experiment, showing that the proposed model outperforms a baseline model which is trained by a state-of-the-art GAN-based algorithm, using the same dataset. This result reveals that it is essential to develop a stable and robust learning algorithm for training gesture generation models. Our code can be found in https://github.com/wubowen416/gesture-generation.},
}
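As a rough illustration of the training setup described in the abstract above, the sketch below pairs a GRU generator (speech features in, joint rotations out) with a 1-D CNN critic under a WGAN-style objective. All layer sizes and interfaces are assumptions for illustration, and the gradient-penalty or clipping details of the actual WGAN-based algorithm are omitted; this is not the released code linked in the abstract.
Illustrative sketch (Python):
import torch
import torch.nn as nn

class GRUGenerator(nn.Module):
    def __init__(self, speech_dim=64, hidden=256, n_joints=10):
        super().__init__()
        self.gru = nn.GRU(speech_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_joints * 3)   # 3 rotation angles per joint

    def forward(self, speech):                       # speech: (batch, T, speech_dim)
        h, _ = self.gru(speech)
        return self.out(h)                           # (batch, T, n_joints * 3)

class CNNCritic(nn.Module):
    def __init__(self, n_joints=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_joints * 3, 128, kernel_size=5, padding=2), nn.LeakyReLU(0.2),
            nn.Conv1d(128, 1, kernel_size=5, padding=2))

    def forward(self, motion):                       # motion: (batch, T, n_joints * 3)
        return self.net(motion.transpose(1, 2)).mean(dim=(1, 2))   # one score per clip

def critic_loss(critic, real_motion, fake_motion):
    # WGAN critic: maximize score(real) - score(fake), i.e. minimize the value below.
    return critic(fake_motion).mean() - critic(real_motion).mean()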
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Yuichiro Yoshikawa, Takamasa Iio, Hiroshi Ishiguro, "Using an Android Robot to Improve Social Connectedness by Sharing Recent Experiences of Group Members in Human-Robot Conversations", In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021), Prague, Czech Republic, September, 2021.
Abstract: Social connectedness is vital for developing group cohesion and strengthening belongingness. However, with the accelerating pace of modern life, people have fewer opportunities to participate in group-building activities. Furthermore, owing to the teleworking and quarantine requirements necessitated by the Covid-19 pandemic, the social connectedness of group members may become weak. To address this issue, in this study, we used an android robot to conduct daily conversations, and as an intermediary to increase intra-group connectedness. Specifically, we constructed an android robot system for collecting and sharing recent member-related experiences. The system has a chatbot function based on BERT and a memory function with a neural-network-based dialog action analysis model. We conducted a 3-day human-robot conversation experiment to verify the effectiveness of the proposed system. The results of a questionnaire-based evaluation and empirical analysis demonstrate that the proposed system can increase the familiarity and closeness of group members. This suggests that the proposed method is useful for enhancing social connectedness. Moreover, it can improve the closeness of the user-robot relation, as well as the performance of robots in conducting conversations with people.
BibTeX:
@InProceedings{Fu2021c,
  author    = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Yuichiro Yoshikawa and Takamasa Iio and Hiroshi Ishiguro},
  booktitle = {2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021)},
  title     = {Using an Android Robot to Improve Social Connectedness by Sharing Recent Experiences of Group Members in Human-Robot Conversations},
  year      = {2021},
  address   = {Prague, Czech Republic},
  day       = {27-01},
  month     = sep,
  url       = {https://www.iros2021.org/},
  abstract  = {Social connectedness is vital for developing group cohesion and strengthening belongingness. However, with the accelerating pace of modern life, people have fewer opportunities to participate in group-building activities. Furthermore, owing to the teleworking and quarantine requirements necessitated by the Covid-19 pandemic, the social connectedness of group members may become weak. To address this issue, in this study, we used an android robot to conduct daily conversations, and as an intermediary to increase intra-group connectedness. Specifically, we constructed an android robot system for collecting and sharing recent member-related experiences. The system has a chatbot function based on BERT and a memory function with a neural-network-based dialog action analysis model. We conducted a 3-day human-robot conversation experiment to verify the effectiveness of the proposed system. The results of a questionnaire-based evaluation and empirical analysis demonstrate that the proposed system can increase the familiarity and closeness of group members. This suggests that the proposed method is useful for enhancing social connectedness. Moreover, it can improve the closeness of the user-robot relation, as well as the performance of robots in conducting conversations with people.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
}
Chinenye Augustine Ajibo, Carlos Toshinori Ishi, Hiroshi Ishiguro, "Advocating Attitudinal Change Through Android Robot's Intention-Based Expressive Behaviors: Toward WHO COVID-19 Guidelines Adherence", In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021), Prague, Czech Republic, September, 2021.
Abstract: Motivated by the fact that some human emotional expression promotes affiliating functions such as signaling, social change and support which have social benefits, we investigate how these behaviors can be extended to Human-Robot Interaction (HRI) scenario. Specifically, we explored how an android robot could be furnished with socially motivated expressions geared towards eliciting adherence to COVID-19 guidelines. To this effect, we analyzed how different behaviors associated with the social expressions in this kind of situation occur in Human-Human Interaction (HHI), and designed a scenario with context-inspired behaviors (polite, gentle, displeased and angry) to enforce social compliance to a violator. We then implemented these behaviors in an android robot, and subjectively evaluated how effectively these behaviors could be expressed by the robot, and how these behaviors are perceived in terms of their appropriateness, effectiveness and tendency to enforce social compliance to WHO COVID-19 guidelines.
BibTeX:
@InProceedings{Ajibo2021a,
  author    = {Chinenye Augustine Ajibo and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021)},
  title     = {Advocating Attitudinal Change Through Android Robot's Intention-Based Expressive Behaviors: Toward WHO COVID-19 Guidelines Adherence},
  year      = {2021},
  address   = {Prague, Czech Republic},
  day       = {27-01},
  month     = sep,
  url       = {https://www.iros2021.org/},
  abstract  = {Motivated by the fact that some human emotional expression promotes affiliating functions such as signaling, social change and support which have social benefits, we investigate how these behaviors can be extended to Human-Robot Interaction (HRI) scenario. Specifically, we explored how an android robot could be furnished with socially motivated expressions geared towards eliciting adherence to COVID-19 guidelines. To this effect, we analyzed how different behaviors associated with the social expressions in this kind of situation occur in Human-Human Interaction (HHI), and designed a scenario with context-inspired behaviors (polite, gentle, displeased and angry) to enforce social compliance to a violator. We then implemented these behaviors in an android robot, and subjectively evaluated how effectively these behaviors could be expressed by the robot, and how these behaviors are perceived in terms of their appropriateness, effectiveness and tendency to enforce social compliance to WHO COVID-19 guidelines.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
}
Takuto Akiyoshi, Junya Nakanishi, Hiroshi Ishiguro, Hidenobu Sumioka, Masahiro Shiomi, "A Robot that Encourages Self-Disclosure to Reduce Anger Mood", In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021), Prague, Czech Republic, September, 2021.
Abstract: One essential role of social robots is supporting human mental health by interaction with people. In this study, we focused on making people’s moods more positive through conversations about their problems as our first step to achieving a robot that cares about mental health. We employed the column method, which is a typical stress-coping technique in Japan, and designed conversational contents for a robot. We implemented conversational functions based on the column method for a social robot as well as a self-schema estimation function using conversational data. In addition, we proposed conversational strategies to support users in noticing their self-schemas and automatic thoughts, which are related to mental health support. We experimentally evaluated our system’s effectiveness and found that participants who used our system with the proposed conversational strategies made more self-disclosures and experienced less anger compared to those who did not use the proposed conversational strategies. On the other hand, the strategies did not significantly increase the performance of the self-schema estimation function.
BibTeX:
@InProceedings{Akiyoshi2021a,
  author    = {Takuto Akiyoshi and Junya Nakanishi and Hiroshi Ishiguro and Hidenobu Sumioka and Masahiro Shiomi},
  booktitle = {2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021)},
  title     = {A Robot that Encourages Self-Disclosure to Reduce Anger Mood},
  year      = {2021},
  address   = {Prague, Czech Republic},
  day       = {27-01},
  month     = sep,
  url       = {https://www.iros2021.org/},
  abstract  = {One essential role of social robots is supporting human mental health by interaction with people. In this study, we focused on making people’s moods more positive through conversations about their problems as our first step to achieving a robot that cares about mental health. We employed the column method, which is a typical stress-coping technique in Japan, and designed conversational contents for a robot. We implemented conversational functions based on the column method for a social robot as well as a self-schema estimation function using conversational data. In addition, we proposed conversational strategies to support users in noticing their self-schemas and automatic thoughts, which are related to mental health support. We experimentally evaluated our system’s effectiveness and found that participants who used our system with the proposed conversational strategies made more self-disclosures and experienced less anger compared to those who did not use the proposed conversational strategies. On the other hand, the strategies did not significantly increase the performance of the self-schema estimation function.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
}
Hidenobu Sumioka, Kohei Nakajima, Kurima Sakai, Takashi Minato, Masahiro Shiomi, "Wearable Tactile Sensor Suit for Natural Body Dynamics Extraction: Case Study on Posture Prediction Based on Physical Reservoir Computing", In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021), Prague, Czech Republic, pp. 9481-9488, September, 2021.
Abstract: We propose a wearable tactile sensor suit, which can be regarded as tactile sensor networks, for monitoring natural body dynamics to be exploited as a computational resource for estimating the posture of a human or robot that wears it. We emulated the periodic motions of a wearer (a human and an android robot) using a novel sensor suit with a 9-channel fabric tactile sensor on the left arm. The emulation was conducted by using a linear regression (LR) model of sensor states as readout modules that predict the next wearer’s movement using the current sensor data. Our result shows that the LR performance is comparable with other recurrent neural network approaches, suggesting that a fabric tactile sensor network is capable of monitoring the natural body motions, and further, this natural body dynamics itself can be used as an effective computational resource.
BibTeX:
@InProceedings{Sumioka2021c,
  author    = {Hidenobu Sumioka and Kohei Nakajima and Kurima Sakai and Takashi Minato and Masahiro Shiomi},
  booktitle = {2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021)},
  title     = {Wearable Tactile Sensor Suit for Natural Body Dynamics Extraction: Case Study on Posture Prediction Based on Physical Reservoir Computing},
  year      = {2021},
  address   = {Prague, Czech Republic},
  day       = {27-01},
  month     = sep,
  pages     = {9481-9488},
  url       = {https://www.iros2021.org/},
  abstract  = {We propose a wearable tactile sensor suit, which can be regarded as tactile sensor networks, for monitoring natural body dynamics to be exploited as a computational resource for estimating the posture of a human or robot that wears it. We emulated the periodic motions of a wearer (a human and an android robot) using a novel sensor suit with a 9-channel fabric tactile sensor on the left arm. The emulation was conducted by using a linear regression (LR) model of sensor states as readout modules that predict the next wearer’s movement using the current sensor data. Our result shows that the LR performance is comparable with other recurrent neural network approaches, suggesting that a fabric tactile sensor network is capable of monitoring the natural body motions, and further, this natural body dynamics itself can be used as an effective computational resource.},
}
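The physical-reservoir-computing readout described in the abstract above reduces, in its simplest form, to fitting a linear map from the current tactile-sensor state to the next posture. The sketch below shows that setup on random stand-in data; the array sizes and the use of scikit-learn are assumptions for illustration, not the paper's pipeline.
Illustrative sketch (Python):
import numpy as np
from sklearn.linear_model import LinearRegression

# Stand-in data shaped like the setup in the abstract: a 9-channel tactile
# time series from the sensor suit and the wearer's joint angles.
rng = np.random.default_rng(0)
T, n_sensors, n_joints = 1000, 9, 4
sensors = rng.normal(size=(T, n_sensors))
posture = rng.normal(size=(T, n_joints))

# Linear readout: predict the next posture from the current sensor state,
# treating the suit/body dynamics themselves as the computational resource.
X, y = sensors[:-1], posture[1:]
split = int(0.8 * len(X))
readout = LinearRegression().fit(X[:split], y[:split])
print("held-out R^2:", readout.score(X[split:], y[split:]))
With real sensor data rather than noise, a high held-out score from such a simple readout is the kind of evidence the abstract points to when it argues that the natural body dynamics themselves serve as an effective computational resource.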
Nobuo Yamato, Hidenobu Sumioka, Masahiro Shiomi, Hiroshi Ishiguro, Youji Kohda, "Robotic Baby Doll with Minimal Design for Interactive Doll Therapy in Elderly Dementia Care", In 12th International Conference on Applied Human Factors and Ergonomics (AHFE 2021), Virtual Conference, pp. 417-422, July, 2021.
Abstract: We designed HIRO, a robotic baby doll, to be used in an interactive, non-pharmacological intervention that combines doll therapy with robot technology for elderly people with dementia. We took a minimal design approach; only the most basic human-like features are represented on the robotic system to encourage users to use their imagination to fill in the missing details. The robot emits baby voice recordings as the user interacts with it, giving the robot more realistic mannerisms and enhancing the interaction between user and robot. In addition, the minimal design simplifies the system configuration of the robot, making it inexpensive and intuitive for users to handle. In this paper, we discuss the benefits of the developed robot for elderly dementia patients and their caregivers.
BibTeX:
@InProceedings{Yamato2021,
  author    = {Nobuo Yamato and Hidenobu Sumioka and Masahiro Shiomi and Hiroshi Ishiguro and Youji Kohda},
  booktitle = {12th International Conference on Applied Human Factors and Ergonomics (AHFE 2021)},
  title     = {Robotic Baby Doll with Minimal Design for Interactive Doll Therapy in Elderly Dementia Care},
  year      = {2021},
  address   = {Virtual Conference},
  day       = {25-29},
  doi       = {10.1007/978-3-030-80840-2_48},
  month     = jul,
  pages     = {417-422},
  url       = {https://link.springer.com/chapter/10.1007%2F978-3-030-80840-2_48},
  abstract  = {We designed HIRO, a robotic baby doll, to be used in an interactive, non-pharmacological intervention that combines doll therapy with robot technology for elderly people with dementia. We took a minimal design approach; only the most basic human-like features are represented on the robotic system to encourage users to use their imagination to fill in the missing details. The robot emits baby voice recordings as the user interacts with it, giving the robot more realistic mannerisms and enhancing the interaction between user and robot. In addition, the minimal design simplifies the system configuration of the robot, making it inexpensive and intuitive for users to handle. In this paper, we discuss the benefits of the developed robot for elderly dementia patients and their caregivers.},
  keywords  = {Elderly care, Therapy robot, Human-robot interaction, Welfare care, Dementia},
}
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "MAEC: Multi-instance learning with an Adversarial Auto-encoder-based Classifier for Speech Emotion Recognition", In 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021), vol. SPE-24, no. 3, Toronto, Ontario, Canada, pp. 6299-6303, June, 2021.
Abstract: In this paper, we propose an adversarial auto-encoder based classifier, which can regularize the distribution of latent representation to smooth the boundaries among categories. Moreover, we adopt multi-instance learning by dividing speech into a bag of segments to capture the most salient moments for presenting an emotion. The proposed model was trained on the IEMOCAP dataset and evaluated on the in-corpus validation set (IEMOCAP) and the cross-corpus validation set (MELD). The experiment results show that our model outperforms the baseline on in-corpus validation and increases the scores on cross-corpus validation with regularization.
BibTeX:
@InProceedings{Fu2021a,
  author    = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021)},
  title     = {MAEC: Multi-instance learning with an Adversarial Auto-encoder-based Classifier for Speech Emotion Recognition},
  year      = {2021},
  address   = {Toronto, Ontario, Canada},
  day       = {6-11},
  doi       = {10.1109/ICASSP39728.2021.9413640},
  month     = jun,
  number    = {3},
  pages     = {6299-6303},
  url       = {https://2021.ieeeicassp.org/},
  volume    = {SPE-24},
  abstract  = {In this paper, we propose an adversarial auto-encoder based classifier, which can regularize the distribution of latent representation to smooth the boundaries among categories. Moreover, we adopt multi-instance learning by dividing speech into a bag of segments to capture the most salient moments for presenting an emotion. The proposed model was trained on the IEMOCAP dataset and evaluated on the in-corpus validation set (IEMOCAP) and the cross-corpus validation set (MELD). The experiment results show that our model outperforms the baseline on in-corpus validation and increases the scores on cross-corpus validation with regularization.},
  keywords  = {speech emotion recognition, multi-instance, adversarial auto-encoder},
}
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "An End-to-End Multitask Learning Model to Improve Speech Emotion Recognition", In EUSIPCO 2020 28th European Signal Processing Conference, Amsterdam, The Netherlands (Virtual), pp. 351-355, January, 2021.
Abstract: Speech Emotion Recognition (SER) has been shown to benefit from many of the recent advances in deep learning but still has some room to grow. In this paper, we propose an attention-based CNN-BLSTM model with an end-to-end (E2E) learning method. We first extract the Mel-spectrogram from the wav file instead of using hand-crafted features. Then we adopt two types of attention mechanisms to let the model focus on salient periods of speech emotions over the temporal dimension. Considering that there are many individual differences among people in expressing emotions, we incorporate speaker recognition as an auxiliary task. Moreover, since the training data set has a small sample size, we include data from another language as data augmentation. We evaluated the proposed method on the SAVEE dataset by training it with single-task, multitask, and cross-language settings. The evaluation shows that our proposed model achieves 73.62% weighted accuracy and 71.11% unweighted accuracy in the task of speech emotion recognition, outperforming the baseline by 11.13 points.
BibTeX:
@InProceedings{Fu2021,
  author    = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {EUSIPCO 2020 28th European Signal Processing Conference},
  title     = {An End-to-End Multitask Learning Model to Improve Speech Emotion Recognition},
  year      = {2021},
  address   = {Amsterdam, The Netherlands (Virtual)},
  day       = {18-22},
  doi       = {10.23919/Eusipco47968.2020.9287484},
  month     = jan,
  pages     = {351-355},
  url       = {https://eusipco2020.org/},
  abstract  = {Speech Emotion Recognition (SER) has been shown to benefit from many of the recent advances in deep learning but still has some room to grow. In this paper, we propose an attention-based CNN-BLSTM model with an end-to-end (E2E) learning method. We first extract the Mel-spectrogram from the wav file instead of using hand-crafted features. Then we adopt two types of attention mechanisms to let the model focus on salient periods of speech emotions over the temporal dimension. Considering that there are many individual differences among people in expressing emotions, we incorporate speaker recognition as an auxiliary task. Moreover, since the training data set has a small sample size, we include data from another language as data augmentation. We evaluated the proposed method on the SAVEE dataset by training it with single-task, multitask, and cross-language settings. The evaluation shows that our proposed model achieves 73.62% weighted accuracy and 71.11% unweighted accuracy in the task of speech emotion recognition, outperforming the baseline by 11.13 points.},
}
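A rough PyTorch sketch of the kind of attention-based CNN-BLSTM multitask model the entry above describes: a CNN front-end over the Mel-spectrogram, a bidirectional LSTM over time, temporal attention pooling, and two output heads (emotion plus the auxiliary speaker-recognition task). All layer sizes and the 7/4 class counts are placeholders, not the paper's actual configuration.

import torch
import torch.nn as nn

class CnnBlstmMultitask(nn.Module):
    def __init__(self, n_mels=80, hidden=128, n_emotions=7, n_speakers=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
        )
        self.blstm = nn.LSTM(64 * (n_mels // 4), hidden,
                             batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)                    # temporal attention
        self.emotion_head = nn.Linear(2 * hidden, n_emotions)
        self.speaker_head = nn.Linear(2 * hidden, n_speakers)   # auxiliary task

    def forward(self, mel):                        # mel: (B, n_mels, T)
        h = self.cnn(mel.unsqueeze(1))             # (B, 64, n_mels//4, T)
        B, C, F, T = h.shape
        h = h.permute(0, 3, 1, 2).reshape(B, T, C * F)
        h, _ = self.blstm(h)                       # (B, T, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)     # focus on salient frames
        pooled = (w * h).sum(dim=1)                # (B, 2*hidden)
        return self.emotion_head(pooled), self.speaker_head(pooled)

emo, spk = CnnBlstmMultitask()(torch.randn(2, 80, 300))
print(emo.shape, spk.shape)   # torch.Size([2, 7]) torch.Size([2, 4])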
李歆玥, 石井カルロス寿憲, 林良子, "中国語を母語とする日本語学習者による態度音声の音声分析:F0曲線と声質に焦点をあてて", 日本音声学会第35回全国大会, no. A3, オンライン, pp. 65-70, September, 2021.
Abstract: 外国人日本語学習者は日本語で表出したパラ言語情報は母語話者に伝わりにくいことが指摘されている。このため,日本語学習者は鈍感,乱暴などのマイナス評価を受けたり,コミュニケーション自体を避けられたりすることが多い。なぜ日本語学習者によるパラ言語情報が母語話者に伝わりにくいのか,どのように改善できるのかについて理解するためには,日本語学習者による発話におけるパラ言語表現の使用実態をつかむ必要がある。そこで本研究では,日本語母語話者8名による日本語態度音声と,中国語を母語とする日本語学習者8名(全員N1合格者)による日本語および中国語態度音声を分析することで,態度のペアである「友好/敵対」,「丁寧/失礼」,「本気/冗談」,「賞賛/非難」の発話が態度および発話者群によってどのように変化するのかについて検討した。  態度音声(平叙文と疑問文)において,時系列に離散化したF0曲線および声質特徴量(H1-H2, jitter, shimmer, HNR)を調べた。その結果,日本語母語話者と中国人学習者では異なる態度表出パタンが多数見られた。F0曲線について,中国人学習者による発話にはF0が激しく変化することが特徴的であり, F0上昇と下降のタイミングも日本語母語話者と大幅に異なる。特に,中国人学習者が「本気/冗談」と「賞賛/非難」の態度を母語である中国語のF0曲線に近い形で表出しており,中国語の態度表出方法に影響されている可能性を示した。さらに,声質分析を行なった結果,中国人学習者による「友好」「丁寧」発話のH1-H2とHNRは日本語母語話者より顕著に低く,jitterは日本語母語話者より顕著に高かった。この結果は中国人学習者による「友好」「丁寧」発話は日本語母語話者と異なり,声帯が緊張した発声に近く,非周期性が強いことが示唆された。以上のことから,外国人日本語学習者は母語話者と異なる音声を用いる可能性があるため,態度によるF0曲線の変化と発声練習の指導が重要である。
BibTeX:
@InProceedings{Li2021_3,
  author    = {李歆玥 and 石井カルロス寿憲 and 林良子},
  booktitle = {日本音声学会第35回全国大会},
  title     = {中国語を母語とする日本語学習者による態度音声の音声分析:F0曲線と声質に焦点をあてて},
  year      = {2021},
  address   = {オンライン},
  day       = {25-26},
  number    = {A3},
  pages     = {65-70},
  month     = sep,
  url       = {http://www.psj.gr.jp/jpn/},
  abstract  = {外国人日本語学習者は日本語で表出したパラ言語情報は母語話者に伝わりにくいことが指摘されている。このため,日本語学習者は鈍感,乱暴などのマイナス評価を受けたり,コミュニケーション自体を避けられたりすることが多い。なぜ日本語学習者によるパラ言語情報が母語話者に伝わりにくいのか,どのように改善できるのかについて理解するためには,日本語学習者による発話におけるパラ言語表現の使用実態をつかむ必要がある。そこで本研究では,日本語母語話者8名による日本語態度音声と,中国語を母語とする日本語学習者8名(全員N1合格者)による日本語および中国語態度音声を分析することで,態度のペアである「友好/敵対」,「丁寧/失礼」,「本気/冗談」,「賞賛/非難」の発話が態度および発話者群によってどのように変化するのかについて検討した。  態度音声(平叙文と疑問文)において,時系列に離散化したF0曲線および声質特徴量(H1-H2, jitter, shimmer, HNR)を調べた。その結果,日本語母語話者と中国人学習者では異なる態度表出パタンが多数見られた。F0曲線について,中国人学習者による発話にはF0が激しく変化することが特徴的であり, F0上昇と下降のタイミングも日本語母語話者と大幅に異なる。特に,中国人学習者が「本気/冗談」と「賞賛/非難」の態度を母語である中国語のF0曲線に近い形で表出しており,中国語の態度表出方法に影響されている可能性を示した。さらに,声質分析を行なった結果,中国人学習者による「友好」「丁寧」発話のH1-H2とHNRは日本語母語話者より顕著に低く,jitterは日本語母語話者より顕著に高かった。この結果は中国人学習者による「友好」「丁寧」発話は日本語母語話者と異なり,声帯が緊張した発声に近く,非周期性が強いことが示唆された。以上のことから,外国人日本語学習者は母語話者と異なる音声を用いる可能性があるため,態度によるF0曲線の変化と発声練習の指導が重要である。},
}
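An illustrative Python sketch of one step in the analysis above: extracting an F0 contour from an utterance and discretizing it to a fixed number of points so contours of different durations can be compared across speaker groups. The file name and the 20-point resolution are arbitrary assumptions; the voice quality measures (H1-H2, jitter, shimmer, HNR) would typically be computed with Praat and are not reproduced here.

import numpy as np
import librosa

def discretized_f0(wav_path, n_points=20):
    y, sr = librosa.load(wav_path, sr=16000)
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"), sr=sr)
    f0 = f0[voiced]                      # keep voiced frames only
    if f0.size == 0:
        return np.full(n_points, np.nan)
    # Resample the contour to n_points equally spaced positions (in semitones).
    semitones = 12 * np.log2(f0 / 55.0)  # 55 Hz reference
    positions = np.linspace(0, len(semitones) - 1, n_points)
    return np.interp(positions, np.arange(len(semitones)), semitones)

# contour = discretized_f0("attitude_utterance.wav")   # hypothetical file name
# print(contour.shape)                                 # (20,)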
Jiaqi Shi, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "3D Skeletal Movement enhanced Emotion Recognition Network", In Asia-Pacific Signal and Information Processing Association Annual Summit and Conference 2020 (APSIPA ASC 2020), Virtual Conference, pp. 1060-1066, December, 2020.
Abstract: Automatic emotion recognition has become an important trend in the field of human-computer natural interaction and artificial intelligence. Although gesture is one of the most important components of nonverbal communication, which has a considerable impact on emotion recognition, motion modalities are rarely considered in the study of affective computing. An important reason is the lack of large open emotion databases containing skeletal movement data. In this paper, we extract 3D skeleton information from video, and apply the method to IEMOCAP database to add a new modality. We propose an attention based convolutional neural network which takes the extracted data as input to predict the speaker's emotion state. We also combine our model with models using other modalities to provide complementary information in the emotion classification task. The combined model utilizes audio signals, text information and skeletal data simultaneously. The performance of the model significantly outperforms the bimodal model, proving the effectiveness of the method.
BibTeX:
@InProceedings{Shi2020d,
  author    = {Jiaqi Shi and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  booktitle = {Asia-Pacific Signal and Information Processing Association Annual Summit and Conference 2020 (APSIPA ASC 2020)},
  title     = {3D Skeletal Movement enhanced Emotion Recognition Network},
  year      = {2020},
  address   = {Virtual Conference},
  day       = {7-10},
  month     = dec,
  pages     = {1060-1066},
  url       = {http://www.apsipa2020.org/},
  abstract  = {Automatic emotion recognition has become an important trend in the field of human-computer natural interaction and artificial intelligence. Although gesture is one of the most important components of nonverbal communication, which has a considerable impact on emotion recognition, motion modalities are rarely considered in the study of affective computing. An important reason is the lack of large open emotion databases containing skeletal movement data. In this paper, we extract 3D skeleton information from video, and apply the method to IEMOCAP database to add a new modality. We propose an attention based convolutional neural network which takes the extracted data as input to predict the speaker's emotion state. We also combine our model with models using other modalities to provide complementary information in the emotion classification task. The combined model utilizes audio signals, text information and skeletal data simultaneously. The performance of the model significantly outperforms the bimodal model, proving the effectiveness of the method.},
}
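A simplified PyTorch sketch of an attention-based convolutional network over 3D skeletal movement, in the spirit of the entry above: joint coordinates are flattened per frame, temporal convolutions extract movement features, and attention pooling summarizes the sequence for emotion classification. The joint count (25), channel sizes and 4 emotion classes are assumptions for illustration.

import torch
import torch.nn as nn

class SkeletonEmotionNet(nn.Module):
    def __init__(self, n_joints=25, n_classes=4):
        super().__init__()
        self.temporal_cnn = nn.Sequential(
            nn.Conv1d(n_joints * 3, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.attn = nn.Conv1d(128, 1, kernel_size=1)   # per-frame salience
        self.out = nn.Linear(128, n_classes)

    def forward(self, skel):                           # skel: (B, T, n_joints, 3)
        B, T, J, _ = skel.shape
        x = skel.reshape(B, T, J * 3).transpose(1, 2)  # (B, J*3, T)
        h = self.temporal_cnn(x)                       # (B, 128, T)
        w = torch.softmax(self.attn(h), dim=-1)        # (B, 1, T)
        pooled = (h * w).sum(dim=-1)                   # (B, 128)
        return self.out(pooled)

logits = SkeletonEmotionNet()(torch.randn(2, 150, 25, 3))
print(logits.shape)   # torch.Size([2, 4])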
Carlos T. Ishi, Ryusuke Mikata, Hiroshi Ishiguro, "Person-directed pointing gestures and inter-personal relationship: Expression of politeness to friendliness by android robots", In International Conference on Intelligent Robots and Systems (IROS) 2020, Las Vegas, USA (Virtual), October, 2020.
Abstract: Pointing gestures directed to a person are usually taken as an impolite manner. However, such person-directed pointing gestures commonly appear in casual dialogue interactions in several different forms. In this study, we first analyzed pointing gestures appearing in human-human dialogue interactions, and observed different trends in the use of different gesture types, according to the inter-personal relationship between the dialogue partners. Then, we conducted multiple subjective experiments by systematically creating behaviors in an android robot, in order to investigate the effects of different types of pointing gestures on the impression of the robot’s attitudes. Several factors were taken into account: sentence type (formal or colloquial), pointing gesture motion type (hand shape, such as open palm or index finger, hand orientation and motion direction), gesture speed and gesture hold duration. Evaluation results indicated that the impression of careful/polite or careless/casual is affected by all analyzed factors, and the appropriateness of a behavior depends on the inter-personal relationship to the dialogue partner.
BibTeX:
@InProceedings{Ishi2020c,
  author    = {Carlos T. Ishi and Ryusuke Mikata and Hiroshi Ishiguro},
  booktitle = {International Conference on Intelligent Robots and Systems (IROS) 2020},
  title     = {Person-directed pointing gestures and inter-personal relationship: Expression of politeness to friendliness by android robots},
  year      = {2020},
  address   = {Las Vegas, USA (Virtual)},
  day       = {25-29},
  month     = oct,
  url       = {http://www.iros2020.org/},
  abstract  = {Pointing gestures directed to a person are usually taken as an impolite manner. However, such person-directed pointing gestures commonly appear in casual dialogue interactions in several different forms. In this study, we first analyzed pointing gestures appearing in human-human dialogue interactions, and observed different trends in the use of different gesture types, according to the inter-personal relationship between the dialogue partners. Then, we conducted multiple subjective experiments by systematically creating behaviors in an android robot, in order to investigate the effects of different types of pointing gestures on the impression of the robot’s attitudes. Several factors were taken into account: sentence type (formal or colloquial), pointing gesture motion type (hand shape, such as open palm or index finger, hand orientation and motion direction), gesture speed and gesture hold duration. Evaluation results indicated that the impression of careful/polite or careless/casual is affected by all analyzed factors, and the appropriateness of a behavior depends on the inter-personal relationship to the dialogue partner.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
}
Changzeng Fu, Jiaqi Shi, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro, "AAEC: An Adversarial Autoencoder-based Classifier for Audio Emotion Recognition", In MuSe 2020-The Multimodal Sentiment in Real-life Media Challenge (Conference: ACM Multimedia Conference 2020), Seattle, United States, pp. 45-51, October, 2020.
Abstract: In recent years, automatic emotion recognition has attracted the attention of researchers because of its great effects and wide implementations in supporting humans’ activities. Given that the data about emotions is difficult to collect and organize into a large database like the dataset of text or images, the true distribution would be difficult to be completely covered by the training set, which affects the model’s robustness and generalization in subsequent applications. In this paper, we proposed a model, Adversarial Autoencoder-based Classifier (AAEC), that can not only augment the data within real data distribution but also reasonably extend the boundary of the current data distribution to a possible space. Such an extended space would be better to fit the distribution of training and testing sets. In addition to comparing with baseline models, we modified our proposed model into different configurations and conducted a comprehensive self-comparison with audio modality. The results of our experiment show that our proposed model outperforms the baselines.
BibTeX:
@Inproceedings{Fu2020a,
  author    = {Changzeng Fu and Jiaqi Shi and Chaoran Liu and Carlos Toshinori Ishi and Hiroshi Ishiguro},
  title     = {AAEC: An Adversarial Autoencoder-based Classifier for Audio Emotion Recognition},
  booktitle = {MuSe 2020-The Multimodal Sentiment in Real-life Media Challenge (Conference: ACM Multimedia Conference 2020)},
  year      = {2020},
  pages     = {45-51},
  address   = {Seattle, United States},
  month     = oct,
  day       = {12-16},
  doi       = {10.1145/3423327.3423669},
  url       = {https://dl.acm.org/doi/10.1145/3423327.3423669},
  abstract  = {In recent years, automatic emotion recognition has attracted the attention of researchers because of its great effects and wide implementations in supporting humans’ activities. Given that the data about emotions is difficult to collect and organize into a large database like the dataset of text or images, the true distribution would be difficult to be completely covered by the training set, which affects the model’s robustness and generalization in subsequent applications. In this paper, we proposed a model, Adversarial Autoencoder-based Classifier (AAEC), that can not only augment the data within real data distribution but also reasonably extend the boundary of the current data distribution to a possible space. Such an extended space would be better to fit the distribution of training and testing sets. In addition to comparing with baseline models, we modified our proposed model into different configurations and conducted a comprehensive self-comparison with audio modality. The results of our experiment show that our proposed model outperforms the baselines.},
  keywords  = {audio modality, neural networks, adversarial auto-encoder, emotion recognition},
}
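A minimal PyTorch sketch of the adversarial autoencoder idea behind AAEC: an encoder/decoder pair reconstructs the audio features while a discriminator pushes the latent codes toward a Gaussian prior, and a classifier head reads emotions from the regularized latent space. The dimensions, the 4-class output and the single-batch "training step" below are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, latent, n_classes = 120, 32, 4
enc = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, latent))
dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, feat_dim))
disc = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, 1))
clf = nn.Linear(latent, n_classes)

x = torch.randn(16, feat_dim)            # a batch of audio feature vectors
y = torch.randint(0, n_classes, (16,))   # emotion labels

z = enc(x)
# 1) Reconstruction + classification losses on the encoded latent codes.
loss_rec = F.mse_loss(dec(z), x)
loss_cls = F.cross_entropy(clf(z), y)
# 2) Discriminator: tell Gaussian prior samples (real) from encoder codes (fake).
z_prior = torch.randn_like(z)
loss_disc = (F.binary_cross_entropy_with_logits(disc(z_prior), torch.ones(16, 1))
             + F.binary_cross_entropy_with_logits(disc(z.detach()), torch.zeros(16, 1)))
# 3) Generator (encoder) tries to make its codes look like prior samples,
#    which smooths/regularizes the latent distribution.
loss_gen = F.binary_cross_entropy_with_logits(disc(z), torch.ones(16, 1))
print(loss_rec.item(), loss_cls.item(), loss_disc.item(), loss_gen.item())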
Hidenobu Sumioka, Masahiro Shiomi, Nobuo Yamato, Hiroshi Ishiguro, "Acceptance of a minimal design of a human infant for facilitating affective interaction with older adults: A case study toward interactive doll therapy", In The 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN2020), no. WeP1P.19, Virtual Conference, pp. 775-780, August, 2020.
Abstract: We introduce a minimal design approach to achieve a robot for interactive doll therapy. Our approach aims for positive interactions with older adults with dementia by just expressing the most basic elements of human-like features and relying on the user’s imagination to supplement the missing information. Based on this approach, we developed HIRO, a baby-sized robot with abstract body representation and without facial expressions. The recorded voice of a real human infant emitted by the robot enhances its human-like features and facilitates emotional interaction between older people and the robot. A field study showed that HIRO was accepted by older adults with dementia and facilitated positive interaction by stimulating their imagination.
BibTeX:
@InProceedings{Sumioka2020,
  author    = {Hidenobu Sumioka and Masahiro Shiomi and Nobuo Yamato and Hiroshi Ishiguro},
  booktitle = {The 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN2020)},
  title     = {Acceptance of a minimal design of a human infant for facilitating affective interaction with older adults: A case study toward interactive doll therapy},
  year      = {2020},
  address   = {Virtual Conference},
  day       = {31-4},
  month     = aug,
  number    = {WeP1P.19},
  pages     = {775-780},
  url       = {https://ras.papercept.net/conferences/conferences/ROMAN20/program/ROMAN20_ContentListWeb_3.html},
  abstract  = {We introduce a minimal design approach to achieve a robot for interactive doll therapy. Our approach aims for positive interactions with older adults with dementia by just expressing the most basic elements of human-like features and relying on the user’s imagination to supplement the missing information. Based on this approach, we developed HIRO, a baby-sized robot with abstract body representation and without facial expressions. The recorded voice of a real human infant emitted by the robot enhances its human-like features and facilitates emotional interaction between older people and the robot. A field study showed that HIRO was accepted by older adults with dementia and facilitated positive interaction by stimulating their imagination.},
}
Carlos T. Ishi, Ryusuke Mikata, Hiroshi Ishiguro, "Analysis of the factors involved in person-directed pointing gestures in dialogue speech", In Speech Prosody 2020, Tokyo, Japan, pp. 309-313, May, 2020.
Abstract: Pointing gestures directed to a person are usually taken as an impolite manner. However, such person-directed pointing gestures commonly appear in casual dialogue interactions. In this study, we extracted pointing gestures appearing in a three-party spontaneous dialogue database, and analyzed several factors including gesture type (hand shape, orientation, motion direction), dialogue acts, inter-personal relationship and attitudes. Analysis results indicate that more than half of the observed pointing gestures use the index finger towards the interlocutor, but are not particularly perceived as impolite. Pointing with the index finger moving in the forward direction was found to be predominant towards interlocutors with close relationship, while pointing with the open palm was found to be more frequent towards first-met person or older person. The majority of the pointing gestures were found to be used along with utterances whose contents are related or directed to the pointed person, while part were accompanied with attitudinal expressions such as yielding the turn, attention drawing, sympathizing, and joking/bantering.
BibTeX:
@InProceedings{Ishi2020a,
  author    = {Carlos T. Ishi and Ryusuke Mikata and Hiroshi Ishiguro},
  booktitle = {Speech Prosody 2020},
  title     = {Analysis of the factors involved in person-directed pointing gestures in dialogue speech},
  year      = {2020},
  address   = {Tokyo, Japan},
  day       = {25-28},
  doi       = {10.21437/SpeechProsody.2020-63},
  month     = may,
  pages     = {309-313},
  url       = {https://sp2020.jpn.org/},
  abstract  = {Pointing gestures directed to a person are usually taken as an impolite manner. However, such person-directed pointing gestures commonly appear in casual dialogue interactions. In this study, we extracted pointing gestures appearing in a three-party spontaneous dialogue database, and analyzed several factors including gesture type (hand shape, orientation, motion direction), dialogue acts, inter-personal relationship and attitudes. Analysis results indicate that more than half of the observed pointing gestures use the index finger towards the interlocutor, but are not particularly perceived as impolite. Pointing with the index finger moving in the forward direction was found to be predominant towards interlocutors with close relationship, while pointing with the open palm was found to be more frequent towards first-met person or older person. The majority of the pointing gestures were found to be used along with utterances whose contents are related or directed to the pointed person, while part were accompanied with attitudinal expressions such as yielding the turn, attention drawing, sympathizing, and joking/bantering.},
}
Xinyue Li, Carlos Toshinori Ishi, Ryoko Hayashi, "Prosodic and Voice Quality Feature of Japanese Speech Conveying Attitudes: Mandarin Chinese Learners and Japanese Native Speakers", In Speech Prosody 2020, The University of Tokyo, Tokyo, pp. 41-45, May, 2020.
Abstract: To clarify the cross-linguistic differences in attitudinal speech and how L2 learners express attitudinal speech, in the present study Japanese speech representing four classes of attitudes was recorded: friendly/hostile, polite/rude, serious/joking and praising/blaming, elicited from Japanese native speakers and Mandarin Chinese learners of L2 Japanese. Accounting for language transfer, Mandarin Chinese speech was also recorded. Acoustic analyses including F0, duration and voice quality features revealed different patterns of utterances by Japanese native speakers and Mandarin Chinese learners. Analysis of sentence-final tones also differentiates native speakers from L2 learners in the production of attitudinal speech. Furthermore, as for the word carrying sentential stress, open quotient-valued voice range profiles based on Electroglottography signals suggest that the attitudinal expressions of Mandarin Chinese learners are affected by their mother tongue.
BibTeX:
@InProceedings{Li2020a,
  author    = {Xinyue Li and Carlos Toshinori Ishi and Ryoko Hayashi},
  booktitle = {Speech Prosody 2020},
  title     = {Prosodic and Voice Quality Feature of Japanese Speech Conveying Attitudes: Mandarin Chinese Learners and Japanese Native Speakers},
  year      = {2020},
  address   = {The University of Tokyo, Tokyo},
  day       = {24-28},
  doi       = {10.21437/speechProsody.2020-9},
  month     = may,
  pages     = {41-45},
  url       = {https://sp2020.jpn.org/},
  abstract  = {To clarify the cross-linguistic differences in attitudinal speech and how L2 learners express attitudinal speech, in the present study Japanese speech representing four classes of attitudes was recorded: friendly/hostile, polite/rude, serious/joking and praising/blaming, elicited from Japanese native speakers and Mandarin Chinese learners of L2 Japanese. Accounting for language transfer, Mandarin Chinese speech was also recorded. Acoustic analyses including F0, duration and voice quality features revealed different patterns of utterances by Japanese native speakers and Mandarin Chinese learners. Analysis of sentence-final tones also differentiates native speakers from L2 learners in the production of attitudinal speech. Furthermore, as for the word carrying sentential stress, open quotient-valued voice range profiles based on Electroglottography signals suggest that the attitudinal expressions of Mandarin Chinese learners are affected by their mother tongue.},
}
Chinenye Augustine Ajibo, Ryusuke Mikata, Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, "Generation and Evaluation of Audio-Visual Anger Emotional Expression for Android Robot", In The 15th Annual ACM/IEEE International Conference on Human Robot Interaction (HRI2020), Cambridge, UK, pp. 96-98, March, 2020.
Abstract: Recent studies in human-human interaction (HHI) have revealed the propensity of negative emotional expression to initiate affiliating functions that are beneficial to the expresser and also help to foster cordiality and closeness amongst interlocutors. However, efforts in human-robot interaction (HRI) have not attempted to investigate the consequences of expression of negative emotion by robots on HRI. Thus, the background of this study as a first step is to furnish humanoid robots with natural audio-visual anger expression for HRI. Based on the analysis results from a multimodal HHI corpus, we implemented different types of gestures related to anger expressions for humanoid robots and carried out a subjective evaluation of the generated anger expressions. Findings from this study revealed that the semantic context and functional content of anger-based utterances play a significant role in the choice of gesture to accompany such utterances. Our current result shows that the "Pointing" gesture is judged more appropriate for utterances containing "you" and for anger-based "questioning" utterances, while "both arms spread" and "both arm swing" gestures were evaluated as more appropriate for "declarative" and "disagreement" utterances, respectively.
BibTeX:
@InProceedings{Ajibo2020,
  author    = {Chinenye Augustine Ajibo and Ryusuke Mikata and Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro},
  booktitle = {The 15th Annual ACM/IEEE International Conference on Human Robot Interaction (HRI2020)},
  title     = {Generation and Evaluation of Audio-Visual Anger Emotional Expression for Android Robot},
  year      = {2020},
  address   = {Cambridge, UK},
  day       = {23-26},
  doi       = {10.1145/3371382.3378282},
  month     = mar,
  pages     = {96-98},
  url       = {https://humanrobotinteraction.org/2020/},
  abstract  = {Recent studies in human-human interaction (HHI) have revealed the propensity of negative emotional expression to initiate affiliating functions that are beneficial to the expresser and also help to foster cordiality and closeness amongst interlocutors. However, efforts in human-robot interaction (HRI) have not attempted to investigate the consequences of expression of negative emotion by robots on HRI. Thus, the background of this study as a first step is to furnish humanoid robots with natural audio-visual anger expression for HRI. Based on the analysis results from a multimodal HHI corpus, we implemented different types of gestures related to anger expressions for humanoid robots and carried out a subjective evaluation of the generated anger expressions. Findings from this study revealed that the semantic context and functional content of anger-based utterances play a significant role in the choice of gesture to accompany such utterances. Our current result shows that the "Pointing" gesture is judged more appropriate for utterances containing "you" and for anger-based "questioning" utterances, while "both arms spread" and "both arm swing" gestures were evaluated as more appropriate for "declarative" and "disagreement" utterances, respectively.},
}
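A toy rule-based selector that mirrors the evaluation findings reported in the entry above: "pointing" for utterances containing "you" or phrased as questions, "both arms spread" for declaratives, and "both arm swing" for disagreement. The utterance-type detection below is a naive keyword/punctuation heuristic, not the authors' annotation scheme.

def select_anger_gesture(utterance: str, dialogue_act: str) -> str:
    text = utterance.lower()
    if "you" in text.split() or dialogue_act == "question" or text.endswith("?"):
        return "pointing"
    if dialogue_act == "disagreement":
        return "both_arm_swing"
    return "both_arms_spread"   # default for declarative anger utterances

print(select_anger_gesture("Why did you do that?", "question"))     # pointing
print(select_anger_gesture("I totally disagree.", "disagreement"))  # both_arm_swing
print(select_anger_gesture("This is unacceptable.", "declarative")) # both_arms_spread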
住岡英信, 港隆史, 塩見昌裕, "ソーシャルタッチのためのセンサースーツの開発とその応用", インタラクション2020 第24回一般社団法人情報処理学会シンポジウム, 学術総合センター内一橋講堂, 東京, pp. 327-329, March, 2020.
Abstract: 本研究では,社会生活において重要な要素である他者との触れ合い(ソーシャルタッチ)に着目し,それを理解するロボットを実現するための柔らかい身体をもつロボット用センサースーツを開発した.ソーシャルタッチは相手から触れられた際の状態だけでなく,触れる相手の状態によっても影響を受ける. そのため,圧力センサだけでなく,近接センサとしても機能し,近接距離を計測することができる布型の静電容量方式センサを新たに開発し,それを80 個備えたスーツを開発した.これにより,他のセンサ情報を用いることなく,相手との触れ合いの計測だけでなく,相手との近接距離の計測も可能となり,例えば,初めての相手がロボットに触れようとすれば避け,親しい相手では接触を許して抱擁といった触れ合いを行うといった接触前から接触後にかけてのインタラクションが実現できる.また,布型であるため,人間が着用することも可能であり,新たなインタラクションメディアの開発にも利用が期待される.
BibTeX:
@InProceedings{住岡2020,
  author    = {住岡英信 and 港隆史 and 塩見昌裕},
  booktitle = {インタラクション2020 第24回一般社団法人情報処理学会シンポジウム},
  title     = {ソーシャルタッチのためのセンサースーツの開発とその応用},
  year      = {2020},
  address   = {学術総合センター内一橋講堂, 東京},
  month     = mar,
  pages     = {327-329},
  url       = {https://www.interaction-ipsj.org/2020/},
  abstract  = {本研究では,社会生活において重要な要素である他者との触れ合い(ソーシャルタッチ)に着目し,それを理解するロボットを実現するための柔らかい身体をもつロボット用センサースーツを開発した.ソーシャルタッチは相手から触れられた際の状態だけでなく,触れる相手の状態によっても影響を受ける. そのため,圧力センサだけでなく,近接センサとしても機能し,近接距離を計測することができる布型の静電容量方式センサを新たに開発し,それを80 個備えたスーツを開発した.これにより,他のセンサ情報を用いることなく,相手との触れ合いの計測だけでなく,相手との近接距離の計測も可能となり,例えば,初めての相手がロボットに触れようとすれば避け,親しい相手では接触を許して抱擁といった触れ合いを行うといった接触前から接触後にかけてのインタラクションが実現できる.また,布型であるため,人間が着用することも可能であり,新たなインタラクションメディアの開発にも利用が期待される.},
}
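A hypothetical sketch of how the 80-channel capacitive suit described above could be polled to distinguish "idle", "pre-touch" (someone approaching) and "touch" per channel. The read_capacitance() stub, the channel count as a constant, and the two thresholds are placeholders; real calibration would be device-specific.

import random
from typing import List

N_CHANNELS = 80
PRETOUCH_THRESHOLD = 0.2   # proximity response (hand near the fabric)
TOUCH_THRESHOLD = 0.6      # contact/pressure response

def read_capacitance() -> List[float]:
    """Stub for the sensor read-out; returns one normalized value per channel."""
    return [random.random() for _ in range(N_CHANNELS)]

def classify_channels(values: List[float]) -> List[str]:
    states = []
    for v in values:
        if v >= TOUCH_THRESHOLD:
            states.append("touch")
        elif v >= PRETOUCH_THRESHOLD:
            states.append("pre-touch")
        else:
            states.append("idle")
    return states

states = classify_channels(read_capacitance())
print(states.count("touch"), "channels touched,",
      states.count("pre-touch"), "channels in pre-touch range")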
Masahiro Shiomi, Hidenobu Sumioka, Kurima Sakai, Tomo Funayama, Takashi Minato, "SOTO: An Android Platform with a Masculine Appearance for Social Touch Interaction", In The 15th Annual ACM/IEEE International Conference on Human Robot Interaction (HRI2020), Cambridge, UK, pp. 447-449, March, 2020.
Abstract: In this paper, we report an android platform with a masculine appearance. In the human-human interaction research field, several studies have reported the effects of gender in the social touch context. However, in the human-robot interaction research field, studies of gender effects have mainly focused on human gender, i.e., a robot’s perceived gender has received less attention. The purpose of developing the android is to investigate gender effects in social touch in the context of human-robot interaction, compared to existing android platforms with feminine appearances. For this purpose, we prepared a nonexistent face design in order to avoid appearance effects, as well as fabric-based capacitance-type upper-body touch sensors.
BibTeX:
@InProceedings{Shiomi2020,
  author    = {Masahiro Shiomi and Hidenobu Sumioka and Kurima Sakai and Tomo Funayama and Takashi Minato},
  booktitle = {The 15th Annual ACM/IEEE International Conference on Human Robot Interaction (HRI2020)},
  title     = {SOTO: An Android Platform with a Masculine Appearance for Social Touch Interaction},
  year      = {2020},
  address   = {Cambridge, UK},
  day       = {23-26},
  doi       = {10.1145/3371382.3378283},
  month     = mar,
  pages     = {447-449},
  url       = {https://humanrobotinteraction.org/2020/},
  abstract  = {In this paper, we report an android platform with a masculine appearance. In the human-human interaction research field, several studies have reported the effects of gender in the social touch context. However, in the human-robot interaction research field, studies of gender effects have mainly focused on human gender, i.e., a robot’s perceived gender has received less attention. The purpose of developing the android is to investigate gender effects in social touch in the context of human-robot interaction, compared to existing android platforms with feminine appearances. For this purpose, we prepared a nonexistent face design in order to avoid appearance effects, as well as fabric-based capacitance-type upper-body touch sensors.},
}
Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Yuichiro Yoshikawa, Hiroshi Ishiguro, "SeMemNN: A Semantic Matrix-Based Memory Neural Network for Text Classification", In 14th IEEE International Conference on Semantic Computing (ICSC 2020), San Diego, California, USA, pp. 123-127, February, 2020.
Abstract: Text categorization is the task of assigning labels to documents written in a natural language, and it has numerous real-world applications including sentiment analysis as well as traditional topic assignment tasks. In this paper, we propose five different configurations of a semantic matrix-based memory neural network trained in an end-to-end manner and evaluate the proposed method on AG News and Sogou News. The best configuration of our proposed method outperforms VDCNN on the text classification task and learns semantics faster. Moreover, we also evaluate our model on small-scale data. The results show that our proposed method still achieves better results than VDCNN.
BibTeX:
@InProceedings{Fu2019_1,
  author    = {Changzeng Fu and Chaoran Liu and Carlos Toshinori Ishi and Yuichiro Yoshikawa and Hiroshi Ishiguro},
  booktitle = {14th IEEE International Conference on Semantic Computing (ICSC 2020)},
  title     = {SeMemNN: A Semantic Matrix-Based Memory Neural Network for Text Classification},
  year      = {2020},
  address   = {San Diego, California, USA},
  day       = {3-5},
  doi       = {10.1109/ICIS.2020.00024},
  month     = feb,
  pages     = {123-127},
  url       = {https://www.ieee-icsc.org/},
  abstract  = {Text categorization is the task of assigning labels to documents written in a natural language, and it has numerous real-world applications including sentiment analysis as well as traditional topic assignment tasks. In this paper, we propose five different configurations of a semantic matrix-based memory neural network trained in an end-to-end manner and evaluate the proposed method on AG News and Sogou News. The best configuration of our proposed method outperforms VDCNN on the text classification task and learns semantics faster. Moreover, we also evaluate our model on small-scale data. The results show that our proposed method still achieves better results than VDCNN.},
}
Carlos T. Ishi, Akira Utsumi, Isamu Nagasawa, "Analysis of sound activities and voice activity detection using in-car microphone arrays", In 2020 IEEE/SICE International Symposium on System Integration (SII2020), Honolulu, Hawaii, USA, pp. 640-645, January, 2020.
Abstract: In this study, we evaluate the collaboration of multiple microphone arrays installed in the interior of a car, with the aim of robustly identifying the driver’s voice activities embedded in car environment noises. We first conducted preliminary analysis on the identified sound activities from the sound direction estimations by different microphone arrays arranged under the physical constraints of the car interior. Driving audio data was collected under several car environment conditions, including engine noise, road noise, air conditioner, winker sounds, radio sounds, driver’s voice, passenger voices, and external noises from other cars. The driver’s voice activity intervals could be identified with 97% detection rate by combining two microphone arrays, one around the “eyesight” camera system cover and the other around the driver’s sun visor.
BibTeX:
@InProceedings{Ishi2020,
  author    = {Carlos T. Ishi and Akira Utsumi and Isamu Nagasawa},
  booktitle = {2020 IEEE/SICE International Symposium on System Integration (SII2020)},
  title     = {Analysis of sound activities and voice activity detection using in-car microphone arrays},
  year      = {2020},
  address   = {Honolulu, Hawaii, USA},
  day       = {12-15},
  month     = jan,
  pages     = {640-645},
  url       = {https://sice-si.org/conf/SII2020/index.html},
  abstract  = {In this study, we evaluate the collaboration of multiple microphone arrays installed in the interior of a car, with the aim of robustly identifying the driver’s voice activities embedded in car environment noises. We first conducted preliminary analysis on the identified sound activities from the sound direction estimations by different microphone arrays arranged under the physical constraints of the car interior. Driving audio data was collected under several car environment conditions, including engine noise, road noise, air conditioner, winker sounds, radio sounds, driver’s voice, passenger voices, and external noises from other cars. The driver’s voice activity intervals could be identified with 97% detection rate by combining two microphone arrays, one around the “eyesight” camera system cover and the other around the driver’s sun visor.},
}
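An illustrative sketch of the array-combination logic suggested by the entry above: each microphone array reports a sound direction estimate per frame, and a frame is attributed to the driver only when both arrays point toward the driver's seat. The angle windows and the per-frame input format are assumptions for illustration, not the system's actual calibration.

from typing import Sequence, Tuple

# Hypothetical azimuth windows (degrees) for the driver seat as seen from
# the camera-cover array and the sun-visor array.
DRIVER_WINDOW_ARRAY_A = (-30.0, 10.0)
DRIVER_WINDOW_ARRAY_B = (150.0, 200.0)

def in_window(angle: float, window: Tuple[float, float]) -> bool:
    return window[0] <= angle <= window[1]

def driver_voice_frames(doa_a: Sequence[float], doa_b: Sequence[float]) -> list:
    """Return per-frame flags: True where both arrays point at the driver."""
    return [in_window(a, DRIVER_WINDOW_ARRAY_A) and in_window(b, DRIVER_WINDOW_ARRAY_B)
            for a, b in zip(doa_a, doa_b)]

flags = driver_voice_frames([-12.0, 45.0, 0.0], [170.0, 90.0, 160.0])
print(flags)   # [True, False, True]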
Hidenobu Sumioka, Takashi Minato, Masahiro Shiomi, "Development of a sensor suit for touch and pre-touch perception toward close human-robot touch interaction", In RoboTac 2019: New Advances in Tactile Sensation, Perception, and Learning in Robotics: Emerging Materials and Technologies for Manipulation, a workshop at The 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2019), Macau, China, November, 2019.
Abstract: In this paper, we propose that recognition of social touch from a human should be considered as both pre-touch interaction and post-touch interaction. To build a social robot that facilitates both interactions, we aim to develop a touch sensor system that enables a robot to detect situations before being touched by a human as well as ones after being touched. In the rest of the paper, we first summarize a design concept of a sensor system for social touch. Next, as a first step, we develop a sensor suit that detects situations before being touched by a human, using fabric-based proximity sensors. Then, we report a preliminary experiment to evaluate the developed sensor as a proximity sensor for touch interaction. Finally, we discuss future studies.
BibTeX:
@InProceedings{Sumioka2019e,
  author    = {Hidenobu Sumioka and Takashi Minato and Masahiro Shiomi},
  booktitle = {RoboTac 2019: New Advances in Tactile Sensation, Perception, and Learning in Robotics: Emerging Materials and Technologies for Manipulation, a workshop at The 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2019)},
  title     = {Development of a sensor suit for touch and pre-touch perception toward close human-robot touch interaction},
  year      = {2019},
  address   = {Macau, China},
  day       = {4-8},
  month     = nov,
  url       = {https://www.iros2019.org/about https://www.iros2019.org/workshops-and-tutorials https://robotac19.aau.at/},
  abstract  = {In this paper, we propose that recognition of social touch from a human should be considered as both pre-touch interaction and post-touch interaction. To build a social robot that facilitates both interactions, we aim to develop a touch sensor system that enables a robot to detect situations before being touched by a human as well as ones after being touched. In the rest of the paper, we first summarize a design concept of a sensor system for social touch. Next, as a first step, we develop a sensor suit that detects situations before being touched by a human, using fabric-based proximity sensors. Then, we report a preliminary experiment to evaluate the developed sensor as a proximity sensor for touch interaction. Finally, we discuss future studies.},
}
Xiqian Zheng, Masahiro Shiomi, Takashi Minato, Hiroshi Ishiguro, "Preliminary Investigation about Relationship between Perceived Intimacy and Touch Characteristics", In The 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), Macau, China, pp. 3409, November, 2019.
Abstract: This study investigated the effects of touch characteristics that change the perceived intimacy of people in human-robot touch interaction with an android robot that has a feminine, human-like appearance. In this study, we investigate the effects of two kinds of touch characteristics (length and touch-part), and the results showed that the touch-part is useful for changing the perceived intimacy, although the length did not show significant effects.
BibTeX:
@InProceedings{Zheng2019b,
  author    = {Xiqian Zheng and Masahiro Shiomi and Takashi Minato and Hiroshi Ishiguro},
  booktitle = {The 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019)},
  title     = {Preliminary Investigation about Relationship between Perceived Intimacy and Touch Characteristics},
  year      = {2019},
  address   = {Macau, China},
  day       = {4-8},
  month     = nov,
  pages     = {3409},
  url       = {https://www.iros2019.org/},
  abstract  = {This study investigated the effects of touch characteristics that change the perceived intimacy of people in human-robot touch interaction with an android robot that has a feminine, human-like appearance. In this study, we investigate the effects of two kinds of touch characteristics (length and touch-part), and the results showed that the touch-part is useful for changing the perceived intimacy, although the length did not show significant effects.},
}
Soheil Keshmiri, Hidenobu Sumioka, Takashi Minato, Masahiro Shiomi, Hiroshi Ishiguro, "Exploring the Causal Modeling of Human-Robot Touch Interaction", In The Eleventh International Conference on Social Robotics (ICSR2019), Madrid, Spain, pp. 235-244, November, 2019.
Abstract: Interpersonal touch plays a pivotal role in individuals’ emotional and physical well-being which, despite its psychological and therapeutic effects, has been mostly neglected in such fields of research as socially-assistive robotics. On the other hand, the growing emergence of such interactive social robots in our daily lives inevitably entails such interactions as touch and hug between robots and humans. Therefore, derivation of robust models for such physical interactions to enable robots to perform them in a naturalistic fashion is highly desirable. In this study, we investigated whether it was possible to realize distinct patterns of different touch interactions that were general representations of their respective types. For this purpose, we adapted three touch interaction paradigms and asked human subjects to perform them on a mannequin that was equipped with a touch sensor on its torso. We then applied Wiener-Granger causality on the time series of activated channels of this touch sensor that were common (per touch paradigm) among all participants. The analyses of these touch time series suggested that different types of touch can be quantified in terms of causal association between sequential steps that form the variation information among their patterns. These results hinted at the potential utility of such generalized touch patterns for devising social robots with robust causal models of naturalistic touch behaviour for their human-robot touch interactions.
BibTeX:
@InProceedings{keshmiri2019f,
  author    = {Soheil Keshmiri and Hidenobu Sumioka and Takashi Minato and Masahiro Shiomi and Hiroshi Ishiguro},
  booktitle = {The Eleventh International Conference on Social Robotics (ICSR2019)},
  title     = {Exploring the Causal Modeling of Human-Robot Touch Interaction},
  year      = {2019},
  address   = {Madrid, Spain},
  day       = {26-29},
  doi       = {https://doi.org/10.1007/978-3-030-35888-4_22},
  month     = nov,
  pages     = {235-244},
  url       = {https://link.springer.com/chapter/10.1007%2F978-3-030-35888-4_22},
  abstract  = {Interpersonal touch plays a pivotal role in individuals’ emotional and physical well-being which, despite its psychological and therapeutic effects, has been mostly neglected in such fields of research as socially-assistive robotics. On the other hand, the growing emergence of such interactive social robots in our daily lives inevitably entails such interactions as touch and hug between robots and humans. Therefore, derivation of robust models for such physical interactions to enable robots to perform them in a naturalistic fashion is highly desirable. In this study, we investigated whether it was possible to realize distinct patterns of different touch interactions that were general representations of their respective types. For this purpose, we adapted three touch interaction paradigms and asked human subjects to perform them on a mannequin that was equipped with a touch sensor on its torso. We then applied Wiener-Granger causality on the time series of activated channels of this touch sensor that were common (per touch paradigm) among all participants. The analyses of these touch time series suggested that different types of touch can be quantified in terms of causal association between sequential steps that form the variation information among their patterns. These results hinted at the potential utility of such generalized touch patterns for devising social robots with robust causal models of naturalistic touch behaviour for their human-robot touch interactions.},
}
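A small sketch of the Wiener-Granger causality analysis mentioned in the entry above, using statsmodels on two touch-sensor channel time series. The synthetic series below (channel B lagging channel A) stand in for the activated sensor channels; maxlag=3 is an arbitrary choice.

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 300
channel_a = rng.normal(size=n)
channel_b = np.roll(channel_a, 2) + 0.3 * rng.normal(size=n)  # follows A by 2 steps

# Column order matters: the test asks whether the 2nd column helps predict the 1st.
data = np.column_stack([channel_b, channel_a])
results = grangercausalitytests(data, maxlag=3)

for lag, (tests, _) in results.items():
    f_stat, p_value = tests["ssr_ftest"][0], tests["ssr_ftest"][1]
    print(f"lag {lag}: F = {f_stat:.2f}, p = {p_value:.4f}")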
Jan Magyar, Masahiko Kobayashi, Shuichi Nishio, Peter Sincak, Hiroshi Ishiguro, "Autonomous Robotic Dialogue System with Reinforcement Learning for Elderlies with Dementia", In 2019 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Bari, Italy, pp. 1-6, October, 2019.
Abstract: To learn the patterns of their responses, we used reinforcement learning to adapt to each elderly person individually. Moreover, the robot, which does not depend on speech recognition, estimates the elderly person’s state from nonverbal information. We experimented with three elderly people with dementia in a care home.
BibTeX:
@Inproceedings{Magya2019,
  author    = {Jan Magyar and Masahiko Kobayashi and Shuichi Nishio and Peter Sincak and Hiroshi Ishiguro},
  title     = {Autonomous Robotic Dialogue System with Reinforcement Learning for Elderlies with Dementia},
  booktitle = {2019 IEEE International Conference on Systems, Man, and Cybernetics (SMC)},
  year      = {2019},
  pages     = {1-6},
  address   = {Bari, Italy},
  month     = oct,
  day       = {6-9},
  url       = {http://smc2019.org/index.html},
  abstract  = {To learn the patterns of their responses, we used reinforcement learning to adapt to each elderly person individually. Moreover, the robot, which does not depend on speech recognition, estimates the elderly person’s state from nonverbal information. We experimented with three elderly people with dementia in a care home.},
}
Carlos Ishi, Ryusuke Mikata, Takashi Minato, Hiroshi Ishiguro, "Online processing for speech-driven gesture motion generation in android robots", In The 2019 IEEE-RAS International Conference on Humanoid Robots, Toronto, Canada, pp. 508-514, October, 2019.
Abstract: Hand gestures commonly occur in daily dialogue interactions, and have important functions in communication. In this study, we proposed and implemented online processing for speech-driven gesture motion generation in an android robot dialogue system. Issues of motion overlaps and speech interruptions by the dialogue partner were taken into account. We then conducted two experiments to evaluate the effects of occasional dis-synchrony between the generated motions and speech, and the effects of holding duration control after speech interruptions. Evaluation results indicated that beat gestures are more critical in terms of speech-motion synchrony, and should not be delayed by more than 400 ms relative to the speech utterances. Evaluation of the second experiment indicated that gesture holding durations of around 0.5 to 2 seconds after an interruption look natural, while longer durations may give the impression of displeasure by the robot.
BibTeX:
@InProceedings{Ishi2019c,
  author    = {Carlos Ishi and Ryusuke Mikata and Takashi Minato and Hiroshi Ishiguro},
  booktitle = {The 2019 IEEE-RAS International Conference on Humanoid Robots},
  title     = {Online processing for speech-driven gesture motion generation in android robots},
  year      = {2019},
  address   = {Toronto, Canada},
  day       = {15-17},
  month     = oct,
  pages     = {508-514},
  url       = {http://humanoids2019.loria.fr/},
  abstract  = {Hand gestures commonly occur in daily dialogue interactions, and have important functions in communication. In this study, we proposed and implemented online processing for speech-driven gesture motion generation in an android robot dialogue system. Issues of motion overlaps and speech interruptions by the dialogue partner were taken into account. We then conducted two experiments to evaluate the effects of occasional dis-synchrony between the generated motions and speech, and the effects of holding duration control after speech interruptions. Evaluation results indicated that beat gestures are more critical in terms of speech-motion synchrony, and should not be delayed by more than 400 ms relative to the speech utterances. Evaluation of the second experiment indicated that gesture holding durations of around 0.5 to 2 seconds after an interruption look natural, while longer durations may give the impression of displeasure by the robot.},
}
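A toy scheduler illustrating the two timing rules evaluated in the entry above: a beat gesture is skipped if it can no longer start within 400 ms of its associated speech, and after a speech interruption the current gesture is held for a bounded duration (here 1.0 s, inside the 0.5-2 s range found natural) before returning to a neutral pose. Times are in seconds; the gesture/event model is hypothetical.

MAX_BEAT_DELAY = 0.4        # beat gestures lose sync beyond ~400 ms
HOLD_AFTER_INTERRUPT = 1.0  # within the 0.5-2 s range reported as natural

def schedule_gesture(gesture_type: str, planned_time: float, now: float):
    """Return (action, start_time) for a gesture that was planned for planned_time."""
    delay = now - planned_time
    if gesture_type == "beat" and delay > MAX_BEAT_DELAY:
        return ("skip", None)          # too late to stay in sync with speech
    return ("play", now)

def on_interruption(now: float):
    """When the partner interrupts, hold the pose, then relax to neutral."""
    return [("hold_pose", now), ("neutral_pose", now + HOLD_AFTER_INTERRUPT)]

print(schedule_gesture("beat", planned_time=10.0, now=10.6))  # ('skip', None)
print(schedule_gesture("beat", planned_time=10.0, now=10.2))  # ('play', 10.2)
print(on_interruption(now=12.0))  # [('hold_pose', 12.0), ('neutral_pose', 13.0)]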
Ryusuke Mikata, Carlos T. Ishi, Takashi Minato, Hiroshi Ishiguro, "Analysis of factors influencing the impression of speaker individuality in android robots", In The 28th IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN2019), Le Meridien, Windsor Place, New Delhi, India, pp. 1224-1229, October, 2019.
Abstract: Humans use not only verbal information but also non-verbal information in daily communication. Among the non-verbal information, we have proposed methods for automatically generating hand gestures in android robots, with the purpose of generating natural human-like motion. In this study, we investigate the effects of hand gesture models trained/designed for different speakers on the impression of the individuality through android robots. We consider that it is possible to express individuality in the robot, by creating hand motions that are unique to that individual. Three factors were taken into account: the appearance of the robot, the voice, and the hand motion. Subjective evaluation experiments were conducted by comparing motions generated in two android robots, two speaker voices, and two motion types, to evaluate how each modality affects the impression of the speaker individuality. Evaluation results indicated that all these three factors affect the impression of speaker individuality, while different trends were found depending on whether or not the android is a copy of an existing person.
BibTeX:
@InProceedings{Mikata2019,
  author    = {Ryusuke Mikata and Carlos T. Ishi and Takashi Minato and Hiroshi Ishiguro},
  booktitle = {The 28th IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN2019)},
  title     = {Analysis of factors influencing the impression of speaker individuality in android robots},
  year      = {2019},
  address   = {Le Meridien, Windsor Place, New Delhi, India},
  day       = {14-18},
  month     = oct,
  pages     = {1224-1229},
  url       = {https://ro-man2019.org/},
  abstract  = {Humans use not only verbal information but also non-verbal information in daily communication. Among the non-verbal information, we have proposed methods for automatically generating hand gestures in android robots, with the purpose of generating natural human-like motion. In this study, we investigate the effects of hand gesture models trained/designed for different speakers on the impression of the individuality through android robots. We consider that it is possible to express individuality in the robot, by creating hand motions that are unique to that individual. Three factors were taken into account: the appearance of the robot, the voice, and the hand motion. Subjective evaluation experiments were conducted by comparing motions generated in two android robots, two speaker voices, and two motion types, to evaluate how each modality affects the impression of the speaker individuality. Evaluation results indicated that all these three factors affect the impression of speaker individuality, while different trends were found depending on whether or not the android is a copy of an existing person.},
}
Carlos Ishi, Takayuki Kanda, "Prosodic and voice quality analyses of loud speech: differences of hot anger and far-directed speech", In Speech, Music and Mind 2019 (SMM 2019) Detecting and Influencing Mental States with Audio Satellite Workshop of Interspeech 2019, Vienna, Austria, pp. 1-5, September, 2019.
Abstract: Loud speech may appear in different attitudinal situations, so that in human-robot speech interactions, the robot should be able to understand such situations. In this study, we analyzed the differences in acoustic-prosodic and voice quality features of loud speech in two situations: hot anger (aggressive/frenzy speech) and far-directed speech (i.e., speech addressed to a person in a far distance). Analysis results indicated that both speaking styles are accompanied by louder power and higher pitch, while differences were observed in the intonation: far-directed voices tend to have large power and high pitch over the whole utterance, while angry speech has more pitch movements in a larger pitch range. Regarding voice quality, both styles tend to be tenser (higher vocal effort), but angry speech tends to be more pressed, with local appearance of harsh voices (with irregularities in the vocal fold vibrations).
BibTeX:
@InProceedings{Ishi2019b,
  author    = {Carlos Ishi and Takayuki Kanda},
  booktitle = {Speech, Music and Mind 2019 (SMM 2019) Detecting and Influencing Mental States with Audio Satellite Workshop of Interspeech 2019},
  title     = {Prosodic and voice quality analyses of loud speech: differences of hot anger and far-directed speech},
  year      = {2019},
  address   = {Vienna, Austria},
  day       = {14},
  doi       = {10.21437/SMM.2019-1},
  month     = sep,
  pages     = {1-5},
  url       = {http://smm19.ifs.tuwien.ac.at/},
  abstract  = {Loud speech may appear in different attitudinal situations, so that in human-robot speech interactions, the robot should be able to understand such situations. In this study, we analyzed the differences in acoustic-prosodic and voice quality features of loud speech in two situations: hot anger (aggressive/frenzy speech) and far-directed speech (i.e., speech addressed to a person in a far distance). Analysis results indicated that both speaking styles are accompanied by louder power and higher pitch, while differences were observed in the intonation: far-directed voices tend to have large power and high pitch over the whole utterance, while angry speech has more pitch movements in a larger pitch range. Regarding voice quality, both styles tend to be tenser (higher vocal effort), but angry speech tends to be more pressed, with local appearance of harsh voices (with irregularities in the vocal fold vibrations).},
  keywords  = {loud speech, hot anger, prosody, voice quality, paralinguistics},
}
Chaoran Liu, Carlos Ishi, Hiroshi Ishiguro, "A Neural Turn-taking Model without RNN", In the 20th Annual Conference of the International Speech Communication Association INTERSPEECH 2019 (Interspeech 2019), Graz, Austria, pp. 4150-4154, September, 2019.
Abstract: Sequential data such as speech and dialog are usually modeled by Recurrent Neural Networks (RNN) and derivatives since information can travel through time with this kind of architecture. However, there are disadvantages coming with the use of RNNs, such as the limited depth of neural networks and the GPU-unfriendly training process. Estimating the timing of turn-taking is an important feature of a dialog system. Such a task requires knowledge about the past dialog context and has been modeled using RNNs in several studies. In this paper, we propose a non-RNN model for the timing estimation of turn-taking in dialogs. The proposed model takes lexical and acoustic features as its input to predict the end of a turn. We conduct experiments on four types of Japanese conversation datasets. The experimental results show that with proper neural network designs, the long-term information in a dialog can propagate without a recurrent structure, and the proposed model can outperform canonical RNN-based architectures on the task of turn-taking estimation.
BibTeX:
@InProceedings{Liu2019b,
  author    = {Chaoran Liu and Carlos Ishi and Hiroshi Ishiguro},
  booktitle = {the 20th Annual Conference of the International Speech Communication Association INTERSPEECH 2019 (Interspeech 2019)},
  title     = {A Neural Turn-taking Model without RNN},
  year      = {2019},
  address   = {Graz, Austria},
  day       = {15-19},
  doi       = {10.21437/Interspeech.2019-2270},
  month     = sep,
  pages     = {4150-4154},
  url       = {https://www.interspeech2019.org/},
  abstract  = {Sequential data such as speech and dialog are usually modeled by Recurrent Neural Networks (RNN) and derivatives since information can travel through time with this kind of architecture. However, there are disadvantages coming with the use of RNNs, such as the limited depth of neural networks and the GPU-unfriendly training process. Estimating the timing of turn-taking is an important feature of a dialog system. Such a task requires knowledge about the past dialog context and has been modeled using RNNs in several studies. In this paper, we propose a non-RNN model for the timing estimation of turn-taking in dialogs. The proposed model takes lexical and acoustic features as its input to predict the end of a turn. We conduct experiments on four types of Japanese conversation datasets. The experimental results show that with proper neural network designs, the long-term information in a dialog can propagate without a recurrent structure, and the proposed model can outperform canonical RNN-based architectures on the task of turn-taking estimation.},
  keywords  = {turn-taking, deep learning, capsule network, CNN, Dilated ConvNet},
}
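A compact PyTorch sketch of the non-recurrent idea in the entry above: a stack of causal, dilated 1-D convolutions over frame-level lexical+acoustic features gives each frame a wide past context without any RNN, and a sigmoid head predicts the probability that the current turn is ending. The feature size, depth and dilation factors are illustrative, not the paper's configuration.

import torch
import torch.nn as nn

class DilatedTurnTaking(nn.Module):
    def __init__(self, feat_dim=40, channels=64, n_layers=4):
        super().__init__()
        layers, in_ch = [], feat_dim
        for i in range(n_layers):
            dilation = 2 ** i                     # 1, 2, 4, 8 -> wide receptive field
            layers += [nn.ConstantPad1d((2 * dilation, 0), 0.0),  # causal left padding
                       nn.Conv1d(in_ch, channels, kernel_size=3, dilation=dilation),
                       nn.ReLU()]
            in_ch = channels
        self.convs = nn.Sequential(*layers)
        self.head = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, feats):                     # feats: (B, feat_dim, T)
        h = self.convs(feats)                     # (B, channels, T)
        return torch.sigmoid(self.head(h)).squeeze(1)   # (B, T) turn-end prob.

probs = DilatedTurnTaking()(torch.randn(2, 40, 500))
print(probs.shape)   # torch.Size([2, 500])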
Carlos Ishi, Takayuki Kanda, "Prosodic and voice quality analyses of offensive speech", In International Congress of Phonetic Sciences (ICPhS 2019), Melbourne, Australia, pp. 2174-2178, August, 2019.
Abstract: In this study, differences in acoustic-prosodic features are analyzed in low-moral or offensive speech. The same contents were spoken by multiple speakers with different speaking styles, including reading out, aggressive speech, extremely aggressive (frenzy), and joking styles. Acoustic-prosodic analyses indicated that different speakers use different speaking styles for expressing offensive speech. Clear changes in voice quality, such as tense and harsh voices, were observed for high levels of expressivity of aggressiveness and threat.
BibTeX:
@Inproceedings{Ishi2019a,
  author    = {Carlos Ishi and Takayuki Kanda},
  title     = {Prosodic and voice quality analyses of offensive speech},
  booktitle = {International Congress of Phonetic Sciences (ICPhS 2019)},
  year      = {2019},
  pages     = {2174-2178},
  address   = {Melbourne, Australia},
  month     = Aug,
  day       = {5-9},
  url       = {https://www.icphs2019.org/},
  abstract  = {In this study, differences in acoustic-prosodic features are analyzed in low-moral or offensive speech. The same contents were spoken by multiple speakers with different speaking styles, including reading out, aggressive speech, extremely aggressive (frenzy), and joking styles. Acoustic-prosodic analyses indicated that different speakers use different speaking styles for expressing offensive speech. Clear changes in voice quality, such as tense and harsh voices, were observed for high levels of expressivity of aggressiveness and threat.},
  keywords  = {offensive speech, prosody, voice quality, acoustic features, speaking style},
}
Xinyue Li, Aaron Lee Albin, Carlos Toshinori Ishi, Ryoko Hayashi, "Japanese Emotional Speech Produced by Chinese Learners and Japanese Native Speakers: Differences in Perception and Voice Quality", In International Congress of Phonetic Sciences (ICPhS 2019), Melbourne, Australia, pp. 2183-2187, August, 2019.
Abstract: The present study leverages L2 learner data to contribute to the debate whether the perception and production of emotions is universal vs. language-specific. Japanese native speakers and Chinese learners of L2 Japanese were recorded producing single-word Japanese utterances with seven emotions. A different set of listeners representing the same two groups were then asked to identify the emotion produced in each token. Results suggest that identification accuracy was highest within groups (i.e., for learner+learner and for native+native). Furthermore, more confusions were observed in Japanese native speech, e.g., with 'angry' vs. 'disgusted' confused for Japanese native, but not Chinese learner, productions. Analyses of the electroglottography signal suggest these perceptual results stem from crosslinguistic differences in the productions themselves (e.g., Chinese learners using a tenser glottal configuration to distinguish 'angry' from 'disgusted'). Taken together, these results support the hypothesis that the encoding and recognition of emotions does indeed depend on L1 background.
BibTeX:
@InProceedings{Li2019,
  author    = {Xinyue Li and Aaron Lee Albin and Carlos Toshinori Ishi and Ryoko Hayashi},
  booktitle = {International Congress of Phonetic Sciences (ICPhS 2019)},
  title     = {Japanese Emotional Speech Produced by Chinese Learners and Japanese Native Speakers: Differences in Perception and Voice Quality},
  year      = {2019},
  address   = {Melbourne, Australia},
  day       = {5-9},
  month     = aug,
  pages     = {2183-2187},
  url       = {https://www.icphs2019.org/},
  abstract  = {The present study leverages L2 learner data to contribute to the debate whether the perception and production of emotions is universal vs. language-specific. Japanese native speakers and Chinese learners of L2 Japanese were recorded producing single-word Japanese utterances with seven emotions. A different set of listeners representing the same two groups were then asked to identify the emotion produced in each token. Results suggest that identification accuracy was highest within groups (i.e., for learner+learner and for native+native). Furthermore, more confusions were observed in Japanese native speech, e.g., with 'angry' vs. 'disgusted' confused for Japanese native, but not Chinese learner, productions. Analyses of the electroglottography signal suggest these perceptual results stem from crosslinguistic differences in the productions themselves (e.g., Chinese learners using a tenser glottal configuration to distinguish 'angry' from 'disgusted'). Taken together, these results support the hypothesis that the encoding and recognition of emotions does indeed depend on L1 background.},
}
Christian Penaloza, David Hernandez-Carmona, "Decoding Visual Representations of Objects from Brain Data during Object-Grasping Task with a BMI-controlled Robotic Arm", In 4th International Brain Technology Conference, BrainTech 2019, Tel Aviv, Israel, March, 2019.
Abstract: Brain-machine interface systems (BMI) have allowed the control of prosthetics and robotic arms using brainwaves alone to do simple tasks such as grasping an object, but the low throughput information of brain-data decoding does not allow the robotic arm to achieve multiple grasp configurations. On the other hand, computer vision researchers have mostly solved the problem of robot arm configuration for object-grasping given visual object recognition. It is then natural to think that if we could decode from brain data the image of the object that the user intends to grasp, then the robotic arm could automatically decide the type of grasping to execute. For this reason, in this paper we propose a method to decode visual representations of the objects from brain data towards improving robot arm grasp configurations. More specifically, we recorded EEG data during an object-grasping experiment in which the participant had to control a robotic arm using a BMI to grasp an object. We also recorded images of the object and developed a multimodal representation of the encoded brain data and object image. Given this representation, the objective was to reconstruct the image given that only half of the image (the brain data encoding) was provided. To achieve this goal, we developed a deep stacked convolutional autoencoder that learned a noise-free joint manifold of brain data encoding and object image. After training, the autoencoder was able to reconstruct the missing part of the object image given that only brain data encoding was provided. Performance analysis was conducted using a convolutional neural network (CNN) trained with the original object images. The performance recognition using the reconstructed images was 76.55%.
BibTeX:
@Inproceedings{Penaloza2019,
  author    = {Christian Penaloza and David Hernandez-Carmona},
  title     = {Decoding Visual Representations of Objects from Brain Data during Object-Grasping Task with a BMI-controlled Robotic Arm},
  booktitle = {4th International Brain Technology Conference, BrainTech 2019},
  year      = {2019},
  address   = {Tel Aviv, Israel},
  month     = Mar,
  day       = {4-5},
  url       = {https://braintech.kenes.com/registration/},
  abstract  = {Brain-machine interface systems (BMI) have allowed the control of prosthetics and robotic arms using brainwaves alone to do simple tasks such as grasping an object, but the low throughput information of brain-data decoding does not allow the robotic arm to achieve multiple grasp configurations. On the other hand, computer vision researchers have mostly solved the problem of robot arm configuration for object-grasping given visual object recognition. It is then natural to think that if we could decode from brain data the image of the object that the user intends to grasp, then the robotic arm could automatically decide the type of grasping to execute. For this reason, in this paper we propose a method to decode visual representations of the objects from brain data towards improving robot arm grasp configurations. More specifically, we recorded EEG data during an object-grasping experiment in which the participant had to control a robotic arm using a BMI to grasp an object. We also recorded images of the object and developed a multimodal representation of the encoded brain data and object image. Given this representation, the objective was to reconstruct the image given that only half of the image (the brain data encoding) was provided. To achieve this goal, we developed a deep stacked convolutional autoencoder that learned a noise-free joint manifold of brain data encoding and object image. After training, the autoencoder was able to reconstruct the missing part of the object image given that only brain data encoding was provided. Performance analysis was conducted using a convolutional neural network (CNN) trained with the original object images. The performance recognition using the reconstructed images was 76.55%.},
}
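The masked-reconstruction idea described in the abstract above can be illustrated with a minimal sketch. The code below is not the authors' implementation: the input layout (a 64x128 joint image whose left half is the object photo and whose right half is the EEG encoding rendered as an image), the layer sizes, and the training step are assumptions chosen only to show, in PyTorch, how a stacked convolutional autoencoder can learn to restore the image half from the brain-data half alone.
import torch
import torch.nn as nn

class JointConvAE(nn.Module):
    """Small stacked convolutional autoencoder over a 1-channel 64x128 'joint' image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x128 -> 32x64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x64 -> 16x32
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, joint_batch, optimizer, loss_fn=nn.MSELoss()):
    """joint_batch: (B, 1, 64, 128); left half = object image, right half = EEG encoding."""
    corrupted = joint_batch.clone()
    corrupted[:, :, :, :64] = 0.0       # hide the object-image half, keep the brain half
    optimizer.zero_grad()
    recon = model(corrupted)
    loss = loss_fn(recon, joint_batch)  # learn to restore the full joint image
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage (assumed shapes): model = JointConvAE()
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = train_step(model, torch.rand(8, 1, 64, 128), optimizer)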
Xiqian Zheng, Dylan Glass, Takashi Minato, Hiroshi Ishiguro, "Four memory categories to support socially-appropriate conversations in long-term HRI", In Workshop of Personalization in long-term human-robot interaction at the international conference on Human-Robot Interaction 2019 (HRI2019 Workshop), Daegu, South Korea, March, 2019.
Abstract: In long-term human-robot interaction (HRI), memory is necessary for robots to use information that is collected from past encounters to generate personalized interaction. Although memory has been widely employed as a core component in cognitive systems, these systems do not provide direct solutions to utilize memorized information in generating socially-appropriate conversations. From a design perspective, many studies have employed the use of memory in social interactions. However, only a few works so far have addressed the issue of how to utilize memorized information to design long-term HRI. This work proposes a category of four types of memory information aiming to allow a robot to directly use memorized information to modify conversation content in long-term HRI. An adaptive memory system was developed and briefly introduced to facilitate the usage of the memory information. In addition, the concept of ways to use these four types of memory in long-term interactions is provided. To demonstrate, a personal assistant robot application and a user study using it are also included. The user study shows that a robot using the proposed memory information can help users perceive a positive relationship with the robot.
BibTeX:
@InProceedings{Zheng2019_1,
  author    = {Xiqian Zheng and Dylan Glass and Takashi Minato and Hiroshi Ishiguro},
  booktitle = {Workshop of Personalization in long-term human-robot interaction at the international conference on Human-Robot Interaction 2019 (HRI2019 Workshop)},
  title     = {Four memory categories to support socially-appropriate conversations in long-term HRI},
  year      = {2019},
  address   = {Daegu, South Korea},
  day       = {11-14},
  month     = mar,
  url       = {http://humanrobotinteraction.org/2019/ https://longtermpersonalizationhri.github.io},
  abstract  = {In long-term human-robot interaction (HRI), memory is necessary for robots to use information that is collected from past encounters to generate personalized interaction. Although memory has been widely employed as a core component in cognitive systems, these systems do not provide direct solutions to utilize memorized information in generating socially-appropriate conversations. From a design perspective, many studies have employed the use of memory in social interactions. However, only a few works so far have addressed the issue of how to utilize memorized information to design long-term HRI. This work proposes a category of four types of memory information aiming to allow a robot to directly use memorized information to modify conversation content in long-term HRI. An adaptive memory system was developed and briefly introduced to facilitate the usage of the memory information. In addition, the concept of ways to use these four types of memory in long-term interactions is provided. To demonstrate, a personal assistant robot application and a user study using it are also included. The user study shows that a robot using the proposed memory information can help users perceive a positive relationship with the robot.},
}
Hidenobu Sumioka, Soheil Keshmiri, Junya Nakanishi, "Potential impact of Listening Support for Individuals with Developmental Disorders through A Huggable Communication Medium", In the 6th annual International Conference on Human-Agent Interaction (HAI2018), Southampton, UK, December, 2018.
Abstract: The 6th annual International Conference on Human-Agent Interaction aims to be the premier interdisciplinary venue for discussing and disseminating state-of-the-art research and results that reach across conventional interaction boundaries from people to a wide range of intelligent systems, including physical robots, software agents and digitally-mediated human-human communication. HAI focusses on technical as well as social aspects.
BibTeX:
@Inproceedings{Sumioka2018b,
  author    = {Hidenobu Sumioka and Soheil Keshmiri and Junya Nakanishi},
  title     = {Potential impact of Listening Support for Individuals with Developmental Disorders through A Huggable Communication Medium},
  booktitle = {the 6th annual International Conference on Human-Agent Interaction (HAI2018)},
  year      = {2018},
  address   = {Southampton, UK},
  month     = Dec,
  day       = {15-18},
  url       = {http://hai-conference.net/hai2018/},
  abstract  = {The 6th annual International Conference on Human-Agent Interaction aims to be the premier interdisciplinary venue for discussing and disseminating state-of-the-art research and results that reach across conventional interaction boundaries from people to a wide range of intelligent systems, including physical robots, software agents and digitally-mediated human-human communication. HAI focusses on technical as well as social aspects.},
}
Junya Nakanishi, Hidenobu Sumioka, Hiroshi Ishiguro, "Virtual Hug Induces Modulated Impression on Hearsay Information", In 6th International Conference on Human-Agent Interaction, Southampton, UK, pp. 199-204, December, 2018.
Abstract: In this article, we report the alleviating effect of virtual interpersonal touch on social judgment. In particular, we show that virtual hug with a remote person modulates the impression of the hearsay information about an absentee. In our experiment, participants rate their impressions as well as note down their recall of information about a third person. We communicate this information through either a speaker or a huggable medium. Our results show that virtual hug reduces the negative inferences in the recalls of information about a target person. Furthermore, they suggest the potential that the mediated communication offers in moderating the spread of negative information in human community via virtual hug.
BibTeX:
@Inproceedings{Nakanishi2018,
  author    = {Junya Nakanishi and Hidenobu Sumioka and Hiroshi Ishiguro},
  title     = {Virtual Hug Induces Modulated Impression on Hearsay Information},
  booktitle = {6th International Conference on Human-Agent Interaction},
  year      = {2018},
  pages     = {199-204},
  address   = {Southampton, UK},
  month     = Dec,
  day       = {15-18},
  abstract  = {In this article, we report the alleviating effect of virtual interpersonal touch on social judgment. In particular, we show that virtual hug with a remote person modulates the impression of the hearsay information about an absentee. In our experiment, participants rate their impressions as well as note down their recall of information about a third person. We communicate this information through either a speaker or a huggable medium. Our results show that virtual hug reduces the negative inferences in the recalls of information about a target person. Furthermore, they suggest the potential that the mediated communication offers in moderating the spread of negative information in human community via virtual hug.},
}
Christian Penaloza, David Hernandez-Carmona, Shuichi Nishio, "Towards Intelligent Brain-Controlled Body Augmentation Robotic Limbs", In The 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC2018), Seagaia Convention Center, Miyazaki, October, 2018.
Abstract: Supernumerary Robotic Limbs (SRL) are body augmentation robotic devices that will extend the physical capabilities of humans in an unprecedented way. Researchers have explored the possibility to control SRLs in diverse ways - from manual operation through a joystick to myoelectric signals from muscle impulses - but the ultimate goal is to be able to control them with the brain. Brain-machine interface systems (BMI) have allowed the control of prosthetics and robotic devices using brainwaves alone, but the low number of brain-based commands that can be decoded does not allow an SRL to achieve a high number of actions. For this reason, in this paper, we present an intelligent brain-controlled SRL that has context-aware capabilities in order to complement BMI-based commands and increase the number of actions that it can perform with the same BMI-based command. The proposed system consists of a human-like robotic limb that can be activated (i.e. grasp action) with a non-invasive EEG-based BMI when the human operator imagines the action. Since there are different ways that the SRL can perform the action (i.e. different grasping configurations) depending on the context (i.e. type of the object), we provided vision capabilities to the SRL so it can recognize the context and optimize its behavior in order to match the user intention. The proposed hybrid BMI-SRL system opens up the possibilities to explore more complex and realistic human augmentation applications.
BibTeX:
@Inproceedings{Penaloza2018b,
  author    = {Christian Penaloza and David Hernandez-Carmona and Shuichi Nishio},
  title     = {Towards Intelligent Brain-Controlled Body Augmentation Robotic Limbs},
  booktitle = {The 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC2018)},
  year      = {2018},
  address   = {Seagaia Convention Center, Miyazaki},
  month     = Oct,
  day       = {7-10},
  doi       = {10.1109/SMC.2018.00180},
  url       = {http://www.smc2018.org/},
  abstract  = {Supernumerary Robotic Limbs (SRL) are body augmentation robotic devices that will extend the physical capabilities of humans in an unprecedented way. Researchers have explored the possibility to control SRLs in diverse ways - from manual operation through a joystick to myoelectric signals from muscle impulses - but the ultimate goal is to be able to control them with the brain. Brain-machine interface systems (BMI) have allowed the control of prosthetics and robotic devices using brainwaves alone, but the low number of brain-based commands that can be decoded does not allow an SRL to achieve a high number of actions. For this reason, in this paper, we present an intelligent brain-controlled SRL that has context-aware capabilities in order to complement BMI-based commands and increase the number of actions that it can perform with the same BMI-based command. The proposed system consists of a human-like robotic limb that can be activated (i.e. grasp action) with a non-invasive EEG-based BMI when the human operator imagines the action. Since there are different ways that the SRL can perform the action (i.e. different grasping configurations) depending on the context (i.e. type of the object), we provided vision capabilities to the SRL so it can recognize the context and optimize its behavior in order to match the user intention. The proposed hybrid BMI-SRL system opens up the possibilities to explore more complex and realistic human augmentation applications.},
}
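As a rough illustration of the context-aware behaviour described in the entry above (one coarse BMI command, several grasp configurations chosen from vision), the sketch below combines a decoded command with an object category. The function name, the command vocabulary, and the category-to-grasp table are hypothetical placeholders, not the system reported in the paper.
from dataclasses import dataclass
from typing import Optional

# Hypothetical mapping from a recognized object category to a grasp configuration.
GRASP_BY_CATEGORY = {
    "bottle": "cylindrical_grasp",
    "card": "pinch_grasp",
    "ball": "spherical_grasp",
}

@dataclass
class GraspPlan:
    activate: bool
    configuration: Optional[str]

def plan_grasp(bmi_command: str, object_category: str) -> GraspPlan:
    """Combine a coarse decoded BMI command ('grasp' or 'rest') with the
    object category reported by a vision module to pick a grasp type."""
    if bmi_command != "grasp":
        return GraspPlan(activate=False, configuration=None)
    config = GRASP_BY_CATEGORY.get(object_category, "power_grasp")  # default fallback
    return GraspPlan(activate=True, configuration=config)

# The same imagined 'grasp' command leads to different grasp configurations
# depending on what the camera sees in front of the limb.
print(plan_grasp("grasp", "bottle"))
print(plan_grasp("grasp", "card"))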
Maryam Alimardani, Soheil Keshmiri, Hidenobu Sumioka, Kazuo Hiraki, "Classification of EEG signals for a hypnotrack BCI system", In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018), Madrid, Spain, October, 2018.
Abstract: In this paper, we extracted differential entropy (DE) of the recorded EEGs from two groups of subjects with high and low hypnotic susceptibility and built a support vector machine on these DE features for the classification of susceptibility trait. Moreover, we proposed a clustering-based feature refinement strategy to improve the estimation of such trait. Results showed a high classification performance in detection of subjects’ level of susceptibility before and during hypnosis. Our results suggest the usefulness of this classifier in development of future BCI systems applied in the domain of therapy and healthcare.
BibTeX:
@Inproceedings{Alimardani2018a,
  author    = {Maryam Alimardani and Soheil Keshmiri and Hidenobu Sumioka and Kazuo Hiraki},
  title     = {Classification of EEG signals for a hypnotrack BCI system},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018)},
  year      = {2018},
  address   = {Madrid, Spain},
  month     = Oct,
  day       = {1-5},
  url       = {https://www.iros2018.org/},
  abstract  = {In this paper, we extracted differential entropy (DE) of the recorded EEGs from two groups of subjects with high and low hypnotic susceptibility and built a support vector machine on these DE features for the classification of susceptibility trait. Moreover, we proposed a clustering-based feature refinement strategy to improve the estimation of such trait. Results showed a high classification performance in detection of subjects’ level of susceptibility before and during hypnosis. Our results suggest the usefulness of this classifier in development of future BCI systems applied in the domain of therapy and healthcare.},
}
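The feature pipeline named in the abstract above (differential entropy of EEG plus a support vector machine) can be sketched generically as follows. Band choices, filter order, channel count, and the dummy labels are assumptions, and the paper's clustering-based refinement step is omitted; this only shows the DE-plus-SVM idea.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed bands

def differential_entropy(x: np.ndarray) -> float:
    """DE of an (approximately) Gaussian signal: 0.5 * ln(2*pi*e*var)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def de_features(eeg: np.ndarray, fs: float) -> np.ndarray:
    """eeg: (channels, samples). Returns one DE value per channel and band."""
    feats = []
    for low, high in BANDS.values():
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        band = filtfilt(b, a, eeg, axis=1)
        feats.extend(differential_entropy(ch) for ch in band)
    return np.asarray(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 250.0
    # Dummy data: 20 trials of 8-channel EEG, labels 0 = low, 1 = high susceptibility.
    X = np.stack([de_features(rng.standard_normal((8, 1000)), fs) for _ in range(20)])
    y = np.array([0] * 10 + [1] * 10)
    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.score(X, y))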
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Masataka Okubo, Hiroshi Ishiguro, "Similarity of Impact of Humanoid and In-Person Communication on Frontal Brain Activity of Elderly Adults", In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018), Madrid, Spain, pp. 2286-2291, October, 2018.
Abstract: We report results of the analyses of the effect of communication through a humanoid robot, in comparison with in-person, video-chat, and speaker, on frontal brain activity of elderly adults during a storytelling experiment. Our results suggest that whereas communicating through a physically embodied medium potentially induces a significantly higher pattern of brain activity with respect to video-chat and speaker, its difference is non-significant in comparison with in-person communication. These results imply that communicating through a humanoid robot induces effects on brain activity of elderly adults that are potentially similar in their patterns to in-person communication. Our findings benefit researchers and practitioners in rehabilitation and elderly care facilities in search of effective means of communication with their patients to increase their involvement in the incremental steps of their treatments. Moreover, they imply the utility of brain information as a promising sensory gateway in characterization of the behavioural responses in human-robot interaction.
BibTeX:
@Inproceedings{Keshmiri2018,
  author    = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Masataka Okubo and Hiroshi Ishiguro},
  title     = {Similarity of Impact of Humanoid and In-Person Communication on Frontal Brain Activity of Elderly Adults},
  booktitle = {2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018)},
  year      = {2018},
  pages     = {2286-2291},
  address   = {Madrid, Spain},
  month     = Oct,
  day       = {1-5},
  url       = {https://www.iros2018.org/},
  abstract  = {We report results of the analyses of the effect of communication through a humanoid robot, in comparison with in-person, video-chat, and speaker, on frontal brain activity of elderly adults during a storytelling experiment. Our results suggest that whereas communicating through a physically embodied medium potentially induces a significantly higher pattern of brain activity with respect to video-chat and speaker, its difference is non-significant in comparison with in-person communication. These results imply that communicating through a humanoid robot induces effects on brain activity of elderly adults that are potentially similar in their patterns to in-person communication. Our findings benefit researchers and practitioners in rehabilitation and elderly care facilities in search of effective means of communication with their patients to increase their involvement in the incremental steps of their treatments. Moreover, they imply the utility of brain information as a promising sensory gateway in characterization of the behavioural responses in human-robot interaction.},
}
Soheil Keshmiri, Hidenobu Sumioka, Masataka Okubo, Ryuji Yamazaki, Aya Nakae, Hiroshi Ishiguro, "Potential Health Benefit of Physical Embodiment in Elderly Counselling: a Longitudinal Case Study", In The 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC2018), Seagaia Convention Center, Miyazaki, pp. 1022-1028, October, 2018.
Abstract: We present results of the effect of humanoid in comparison with voice-only communication on frontal brain activity of elderly adults. Our results indicate that use of a humanoid induces an increase in frontal brain activity. Additionally, these results imply an increase in their Immunoglobulin A antibody (sIgA), thereby suggesting physical embodiment as a potential health factor in communication with elderly individuals. Such increases in hormonal as well as frontal brain activity, as observed in healthy conditions, suggest the potential that physical embodiment can offer to the solution concept of sustaining the process of cognitive decline associated with aging and its consequential diseases such as Alzheimer's disease.
BibTeX:
@Inproceedings{Keshmiri2018c,
  author    = {Soheil Keshmiri and Hidenobu Sumioka and Masataka Okubo and Ryuji Yamazaki and Aya Nakae and Hiroshi Ishiguro},
  title     = {Potential Health Benefit of Physical Embodiment in Elderly Counselling: a Longitudinal Case Study},
  booktitle = {The 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC2018)},
  year      = {2018},
  pages     = {1022-1028},
  address   = {Seagaia Convention Center, Miyazaki},
  month     = Oct,
  day       = {7-10},
  doi       = {10.1109/SMC.2018.00183},
  url       = {http://www.smc2018.org/},
  abstract  = {We present results of the effect of humanoid in comparison with voice-only communication on frontal brain activity of elderly adults. Our results indicate that use of a humanoid induces an increase in frontal brain activity. Additionally, these results imply an increase in their Immunoglobulin A antibody (sIgA), thereby suggesting physical embodiment as a potential health factor in communication with elderly individuals. Such increases in hormonal as well as frontal brain activity, as observed in healthy conditions, suggest the potential that physical embodiment can offer to the solution concept of sustaining the process of cognitive decline associated with aging and its consequential diseases such as Alzheimer's disease.},
}
Abdelkader Nasreddine Belkacem, Shuichi Nishio, Takafumi Suzuki, Hiroshi Ishiguro, Masayuki Hirata, "Neuromagnetic Geminoid Control by BCI based on Four Bilateral Hand Movements", In 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC2018), Seagaia Convention Center, Miyazaki, pp. 524-527, October, 2018.
Abstract: The present study describes a neuromagnetic Geminoid control system based on single-trial decoding of bilateral hand movements, a new approach to enhance a user’s ability to interact with a complex environment through a multidimensional brain-computer interface (BCI).
BibTeX:
@Inproceedings{Belkacem2018b,
  author    = {Abdelkader Nasreddine Belkacem and Shuichi Nishio and Takafumi Suzuki and Hiroshi Ishiguro and Masayuki Hirata},
  title     = {Neuromagnetic Geminoid Control by BCI based on Four Bilateral Hand Movements},
  booktitle = {2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC2018)},
  year      = {2018},
  pages     = {524-527},
  address   = {Seagaia Convention Center, Miyazaki},
  month     = Oct,
  day       = {7-10},
  doi       = {10.1109/SMC.2018.00183},
  url       = {http://www.smc2018.org/},
  abstract  = {The present study describes a neuromagnetic Geminoid control system based on single-trial decoding of bilateral hand movements, a new approach to enhance a user’s ability to interact with a complex environment through a multidimensional brain-computer interface (BCI).},
}
Masahiro Shiomi, Kodai Shatani, Takashi Minato, Hiroshi Ishiguro, "Does a Robot's Subtle Pause in Reaction Time to People's Touch Contribute to Positive Influences?", In the 27th IEEE International Conference on Robot and Human Interactive Communication, (RO-MAN 2018), Nanjing and Tai'an, China, August, 2018.
Abstract: This paper addresses the effects of a subtle pause in reactions during human-robot touch interactions. Based on the human scientific literature, people's reaction times to touch stimuli range from 150 to 400 msec. Therefore, we decided to use a subtle pause with a similar length for reactions for more natural human-robot touch interactions. On the other hand, in the human-robot interaction research field, a past study reports that people prefer reactions from a robot in touch interaction that are as quick as possible, i.e., a 0-second reaction time is preferred to 1- or 2-second reaction times. We note that since the resolution of the study's time slices was every second, it remains unknown whether a robot should take a pause of hundreds of milliseconds for a more natural reaction time. To investigate the effects of subtle pauses in touch interaction, we experimentally investigated the effects of reaction time to people's touch with a 200-msec resolution of time slices between 0 second and 1 second: 0 second, and 200, 400, 600, and 800 msec. The number of people who preferred the reactions with subtle pauses exceeded the number who preferred the 0-second reactions. However, the questionnaire scores did not show any significant differences because of individual differences, even though the 400-msec pause was slightly preferred to the others from the preference perspective.
BibTeX:
@Inproceedings{Shiomi2018,
  author    = {Masahiro Shiomi and Kodai Shatani and Takashi Minato and Hiroshi Ishiguro},
  title     = {Does a Robot's Subtle Pause in Reaction Time to People's Touch Contribute to Positive Influences?},
  booktitle = {the 27th IEEE International Conference on Robot and Human Interactive Communication, (RO-MAN 2018)},
  year      = {2018},
  address   = {Nanjing and Tai'an, China},
  month     = Aug,
  day       = {27-31},
  url       = {http://ro-man2018.org/},
  abstract  = {This paper addresses the effects of a subtle pause in reactions during human-robot touch interactions. Based on the human scientific literature, people's reaction times to touch stimuli range from 150 to 400 msec. Therefore, we decided to use a subtle pause with a similar length for reactions for more natural human-robot touch interactions. On the other hand, in the human-robot interaction research field, a past study reports that people prefer reactions from a robot in touch interaction that are as quick as possible, i.e., a 0-second reaction time is preferred to 1- or 2-second reaction times. We note that since the resolution of the study's time slices was every second, it remains unknown whether a robot should take a pause of hundreds of milliseconds for a more natural reaction time. To investigate the effects of subtle pauses in touch interaction, we experimentally investigated the effects of reaction time to people's touch with a 200-msec resolution of time slices between 0 second and 1 second: 0 second, and 200, 400, 600, and 800 msec. The number of people who preferred the reactions with subtle pauses exceeded the number who preferred the 0-second reactions. However, the questionnaire scores did not show any significant differences because of individual differences, even though the 400-msec pause was slightly preferred to the others from the preference perspective.},
}
Masataka Okubo, Hidenobu Sumioka, Soheil Keshmiri, Hiroshi Ishiguro, "Intimate touch conversation through teleoperated android increases interpersonal closeness in elderly people", In The 27th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2018), Nanjing and Tai'an, China, August, 2018.
Abstract: We propose Intimate Touch Conversation (ITC) as a new remote communication paradigm in which an individual who is holding a telepresence humanoid engages in a conversation-over-distance with a remote partner that is tele-operating the humanoid. We compare the effect of this new communication paradigm on interpersonal closeness in comparison with in-person and video-chat. Our results suggest that ITC significantly enhances the feeling of interpersonal closeness, as opposed to video-chat and in-person. In addition, they show the intimate touch conversation allows elderly people to find their conversation more interesting. These results imply that feeling of intimate touch that is evoked by the presence of teleoperated android enables elderly users to establish a closer relationship with their conversational partners over distance, thereby reducing their feeling of loneliness. Our findings benefit researchers and engineers in elderly care facilities in search of effective means of establishing a social relation with their elderly users to reduce their feeling of social isolation and loneliness.
BibTeX:
@Inproceedings{Okubo2018,
  author    = {Masataka Okubo and Hidenobu Sumioka and Soheil Keshmiri and Hiroshi Ishiguro},
  title     = {Intimate touch conversation through teleoperated android increases interpersonal closeness in elderly people},
  booktitle = {The 27th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2018)},
  year      = {2018},
  address   = {Nanjing and Tai'an, China},
  month     = Aug,
  day       = {27-31},
  url       = {http://ro-man2018.org/},
  abstract  = {We propose Intimate Touch Conversation (ITC) as a new remote communication paradigm in which an individual who is holding a telepresence humanoid engages in a conversation-over-distance with a remote partner that is tele-operating the humanoid. We compare the effect of this new communication paradigm on interpersonal closeness in comparison with in-person and video-chat. Our results suggest that ITC significantly enhances the feeling of interpersonal closeness, as opposed to video-chat and in-person. In addition, they show the intimate touch conversation allows elderly people to find their conversation more interesting. These results imply that feeling of intimate touch that is evoked by the presence of teleoperated android enables elderly users to establish a closer relationship with their conversational partners over distance, thereby reducing their feeling of loneliness. Our findings benefit researchers and engineers in elderly care facilities in search of effective means of establishing a social relation with their elderly users to reduce their feeling of social isolation and loneliness.},
}
Carlos T. Ishi, Ryusuke Mikata, Hiroshi Ishiguro, "Analysis of relations between hand gestures and dialogue act categories", In Speech Prosody 2018, Poznan, Poland, pp. 473-477, June, 2018.
Abstract: Hand gestures commonly occur in daily dialogue interactions, and have important functions in communication. In this study, we analyzed a multimodal database of three-party conversations, and investigated the relations between the occurrence of hand gestures and speech, with special focus on dialogue act categories. Analysis results revealed that hand gestures occur with highest frequency in turn-keeping phrases, and seldom occur in backchannel-type utterances. On the other hand, self-touch hand motions (adapters) occur more often in backchannel utterances and in laughter intervals, in comparison to other dialogue act categories.
BibTeX:
@Inproceedings{Ishi2018a,
  author    = {Carlos T. Ishi and Ryusuke Mikata and Hiroshi Ishiguro},
  title     = {Analysis of relations between hand gestures and dialogue act categories},
  booktitle = {Speech Prosody 2018},
  year      = {2018},
  pages     = {473-477},
  address   = {Poznan, Poland},
  month     = Jun,
  day       = {13-16},
  url       = {https://www.isca-speech.org/archive/SpeechProsody_2018/},
  abstract  = {Hand gestures commonly occur in daily dialogue interactions, and have important functions in communication. In this study, we analyzed a multimodal database of three-party conversations, and investigated the relations between the occurrence of hand gestures and speech, with special focus on dialogue act categories. Analysis results revealed that hand gestures occur with highest frequency in turn-keeping phrases, and seldom occur in backchannel-type utterances. On the other hand, self-touch hand motions (adapters) occur more often in backchannel utterances and in laughter intervals, in comparison to other dialogue act categories.},
}
Jakub Zlotowski, Hidenobu Sumioka, Christoph Bartneck, Shuichi Nishio, Hiroshi Ishiguro, "Understanding Anthropomorphism: Anthropomorphism is not a Reverse Process of Dehumanization", In The Ninth International Conference on Social Robotics (ICSR 2017), Tsukuba, Japan, pp. 618-627, November, 2017.
Abstract: Anthropomorphism plays an important role in affecting human interaction with a robot. However, our understanding of this process is still limited. We argue that it is not possible to understand anthropomorphism without understanding what humanness is. In previous research, we proposed to look at the work on dehumanization in order to understand what factors can affect a robot's anthropomorphism. Moreover, considering that there are two distinct dimensions of humanness, a two-dimensional model of anthropomorphism was proposed. We conducted a study in which we manipulated the perceived intentionality of a robot and its appearance, and measured how they affected the anthropomorphization of a robot on two dimensions of humanness and its perceived moral agency. The results do not support a two-dimensional model of anthropomorphism and indicate that the distinction between positive and negative traits may be more relevant in HRI studies in Japan.
BibTeX:
@Inproceedings{Zlotowski2017a,
  author    = {Jakub Zlotowski and Hidenobu Sumioka and Christoph Bartneck and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Understanding Anthropomorphism: Anthropomorphism is not a Reverse Process of Dehumanization},
  booktitle = {The Ninth International Conference on Social Robotics (ICSR 2017)},
  year      = {2017},
  series    = {LNAI 10652},
  pages     = {618-627},
  address   = {Tsukuba, Japan},
  month     = Nov,
  day       = {22-24},
  doi       = {10.1007/978-3-319-70022-9_61},
  url       = {http://www.icsr2017.org/index.html},
  abstract  = {Anthropomorphism plays an important role in affecting human interaction with a robot. However, our understanding of this process is still limited. We argue that it is not possible to understand anthropomorphism without understanding what humanness is. In previous research, we proposed to look at the work on dehumanization in order to understand what factors can affect a robot's anthropomorphism. Moreover, considering that there are two distinct dimensions of humanness, a two-dimensional model of anthropomorphism was proposed. We conducted a study in which we manipulated the perceived intentionality of a robot and its appearance, and measured how they affected the anthropomorphization of a robot on two dimensions of humanness and its perceived moral agency. The results do not support a two-dimensional model of anthropomorphism and indicate that the distinction between positive and negative traits may be more relevant in HRI studies in Japan.},
  file      = {Zlotowski2017a.pdf:pdf/Zlotowski2017a.pdf:PDF},
}
Takashi Suegami, Hidenobu Sumioka, Fumio Obayashi, Kyonosuke Ichii, Yoshinori Harada, Hiroshi Daimoto, Aya Nakae, Hiroshi Ishiguro, "Endocrinological Responses to a New Interactive HMI for a Straddle-type Vehicle - A Pilot Study", In 5th annual International Conference on Human-Agent Interaction (HAI 2017), Bielefeld, Germany, pp. 463-467, October, 2017.
Abstract: This paper hypothesized that a straddle-type vehicle (e.g., a motorcycle) would be a suitable platform for haptic human-machine interactions that elicit affective responses or positive modulations of human emotion. Based on this idea, a new human-machine interface (HMI) for a straddle-type vehicle was proposed for haptically interacting with a rider, together with other visual (design), tactile (texture and heat), and auditory features (sound). We investigated endocrine changes after playing a riding simulator with either the new interactive HMI or a typical HMI. The results showed that, in comparison with the typical HMI, a significant decrease in salivary cortisol level was found after riding with the interactive HMI. Salivary testosterone also tended to be reduced after riding with the interactive HMI, with a significant reduction in salivary DHEA. The results demonstrated that haptic interaction from a vehicle, as we hypothesized, can endocrinologically influence a rider and thus may mitigate the rider's stress and aggression.
BibTeX:
@Inproceedings{Suegami2017,
  author    = {Takashi Suegami and Hidenobu Sumioka and Fumio Obayashi and Kyonosuke Ichii and Yoshinori Harada and Hiroshi Daimoto and Aya Nakae and Hiroshi Ishiguro},
  title     = {Endocrinological Responses to a New Interactive HMI for a Straddle-type Vehicle - A Pilot Study},
  booktitle = {5th annual International Conference on Human-Agent Interaction (HAI 2017)},
  year      = {2017},
  pages     = {463-467},
  address   = {Bielefeld, Germany},
  month     = Oct,
  day       = {17-20},
  doi       = {10.1145/3125739.3132588},
  url       = {http://hai-conference.net/hai2017/},
  abstract  = {This paper hypothesized that a straddle-type vehicle (e.g., a motorcycle) would be a suitable platform for haptic human-machine interactions that elicit affective responses or positive modulations of human emotion. Based on this idea, a new human-machine interface (HMI) for a straddle-type vehicle was proposed for haptically interacting with a rider, together with other visual (design), tactile (texture and heat), and auditory features (sound). We investigated endocrine changes after playing a riding simulator with either the new interactive HMI or a typical HMI. The results showed that, in comparison with the typical HMI, a significant decrease in salivary cortisol level was found after riding with the interactive HMI. Salivary testosterone also tended to be reduced after riding with the interactive HMI, with a significant reduction in salivary DHEA. The results demonstrated that haptic interaction from a vehicle, as we hypothesized, can endocrinologically influence a rider and thus may mitigate the rider's stress and aggression.},
  file      = {Suegami2017.pdf:pdf/Suegami2017.pdf:PDF},
}
Masa Jazbec, Shuichi Nishio, Hiroshi Ishiguro, Hideaki Kuzuoka, Masataka Okubo, Christian Penaloza, "Body-swapping experiment with an android robot: Investigation of the relationship between agency and a sense of ownership toward a different body", In 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC2017), Banff, Canada, pp. 1471-1476, October, 2017.
Abstract: This study extends existing Rubber Hand Illusion (RHI) experiments to employ a life-size, full-body humanlike android robot to investigate the body ownership illusion and the sense of agency.
BibTeX:
@Inproceedings{Jazbec2017a,
  author    = {Masa Jazbec and Shuichi Nishio and Hiroshi Ishiguro and Hideaki Kuzuoka and Masataka Okubo and Christian Penaloza},
  title     = {Body-swapping experiment with an android robot: Investigation of the relationship between agency and a sense of ownership toward a different body},
  booktitle = {2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC2017)},
  year      = {2017},
  pages     = {1471-1476},
  address   = {Banff, Canada},
  month     = Oct,
  day       = {5-8},
  url       = {http://www.smc2017.org/},
  abstract  = {This study extends existing Rubber Hand Illusion (RHI) experiments to employ a life-size, full-body humanlike android robot to investigate the body ownership illusion and the sense of agency.},
  file      = {Jazbec2017a.pdf:pdf/Jazbec2017a.pdf:PDF},
}
Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, "Probabilistic nod generation model based on estimated utterance categories", In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, BC, Canada, pp. 5333-5339, September, 2017.
Abstract: We propose a probabilistic model that generates nod motions based on utterance categories estimated from the speech input. The model comprises two main blocks. In the first block, dialogue act-related categories are estimated from the input speech. Considering the correlations between dialogue acts and head motions, the utterances are classified into three categories having distinct nod distributions. Linguistic information extracted from the input speech is fed to a cluster of classifiers which are combined to estimate the utterance categories. In the second block, nod motion parameters are generated based on the categories estimated by the classifiers. The nod motion parameters are represented as probability distribution functions (PDFs) inferred from human motion data. By using speech energy features, the parameters are sampled from the PDFs belonging to the estimated categories. Subjective experiment results indicate that the motions generated by our proposed approach are considered more natural than those of a previous model using fixed nod shapes and hand-labeled utterance categories.
BibTeX:
@Inproceedings{Liu2017b,
  author    = {Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {Probabilistic nod generation model based on estimated utterance categories},
  booktitle = {2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017)},
  year      = {2017},
  pages     = {5333-5339},
  address   = {Vancouver, BC, Canada},
  month     = Sep,
  day       = {24-28},
  url       = {http://www.iros2017.org/},
  abstract  = {We propose a probabilistic model that generates nod motions based on utterance categories estimated from the speech input. The model comprises two main blocks. In the first block, dialogue act-related categories are estimated from the input speech. Considering the correlations between dialogue acts and head motions, the utterances are classified into three categories having distinct nod distributions. Linguistic information extracted from the input speech is fed to a cluster of classifiers which are combined to estimate the utterance categories. In the second block, nod motion parameters are generated based on the categories estimated by the classifiers. The nod motion parameters are represented as probability distribution functions (PDFs) inferred from human motion data. By using speech energy features, the parameters are sampled from the PDFs belonging to the estimated categories. Subjective experiment results indicate that the motions generated by our proposed approach are considered more natural than those of a previous model using fixed nod shapes and hand-labeled utterance categories.},
  file      = {Liu2017b.pdf:pdf/Liu2017b.pdf:PDF},
}
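A toy version of the two-block idea described in the entry above can be sketched as follows. The category names, the Gaussian parameters, and the placeholder classifier are all invented for illustration and stand in for the paper's trained classifier cluster and motion-data-derived PDFs; the sketch only shows sampling nod parameters from per-category distributions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-category PDFs over (nod amplitude in degrees, duration in seconds).
NOD_PDFS = {
    "backchannel": {"mean": [12.0, 0.35], "cov": [[4.0, 0.0], [0.0, 0.01]]},
    "statement":   {"mean": [6.0, 0.50],  "cov": [[2.0, 0.0], [0.0, 0.02]]},
    "question":    {"mean": [3.0, 0.45],  "cov": [[1.0, 0.0], [0.0, 0.02]]},
}

def estimate_category(utterance_text: str) -> str:
    """Placeholder for the classifier cluster described in the abstract."""
    return "backchannel" if utterance_text.strip() in {"uh-huh", "yeah"} else "statement"

def sample_nod(utterance_text: str) -> tuple[float, float]:
    """Sample (amplitude, duration) from the PDF of the estimated category."""
    pdf = NOD_PDFS[estimate_category(utterance_text)]
    amplitude, duration = rng.multivariate_normal(pdf["mean"], pdf["cov"])
    return max(amplitude, 0.0), max(duration, 0.1)

print(sample_nod("yeah"))         # deeper, shorter nod typical of a backchannel
print(sample_nod("I went home"))  # shallower nod for a plain statement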
Carlos T. Ishi, Takashi Minato, Hiroshi Ishiguro, "Motion analysis in vocalized surprise expressions", In Interspeech 2017, Stockholm, Sweden, pp. 874-878, August, 2017.
Abstract: The background of our research is the generation of natural human-like motions during speech in android robots that have a highly human-like appearance. Mismatches in speech and motion are sources of unnaturalness, especially when emotion expressions are involved. Surprise expressions often occur in dialogue interactions, and they are often accompanied by verbal interjectional utterances. In this study, we analyze facial, head and body motions during several types of vocalized surprise expressions appearing in human-human dialogue interactions. Analysis results indicate inter-dependence between motion types and different types of surprise expression (such as emotional, social or quoted) as well as different degrees of surprise expression. The synchronization between motion and surprise utterances is also analyzed.
BibTeX:
@Inproceedings{Ishi2017b,
  author    = {Carlos T. Ishi and Takashi Minato and Hiroshi Ishiguro},
  title     = {Motion analysis in vocalized surprise expressions},
  booktitle = {Interspeech 2017},
  year      = {2017},
  pages     = {874-878},
  address   = {Stockholm, Sweden},
  month     = Aug,
  day       = {20-24},
  doi       = {10.21437/Interspeech.2017-631},
  url       = {http://www.interspeech2017.org/},
  abstract  = {The background of our research is the generation of natural human-like motions during speech in android robots that have a highly human-like appearance. Mismatches in speech and motion are sources of unnaturalness, especially when emotion expressions are involved. Surprise expressions often occur in dialogue interactions, and they are often accompanied by verbal interjectional utterances. In this study, we analyze facial, head and body motions during several types of vocalized surprise expressions appearing in human-human dialogue interactions. Analysis results indicate inter-dependence between motion types and different types of surprise expression (such as emotional, social or quoted) as well as different degrees of surprise expression. The synchronization between motion and surprise utterances is also analyzed.},
  file      = {Ishi2017b.pdf:pdf/Ishi2017b.pdf:PDF},
}
Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, "Turn-taking Estimation Model Based on Joint Embedding of Lexical and Prosodic Contents", In Interspeech 2017, Stockholm, Sweden, August, 2017.
Abstract: A natural conversation involves rapid exchanges of turns while talking. Taking turns at appropriate timing or intervals is a requisite feature for a dialog system as a conversation partner. This paper proposes a model that estimates the timing of turn-taking during verbal interactions. Unlike previous studies, our proposed model does not rely on a silence region between sentences since a dialog system must respond without large gaps or overlaps. We propose a Recurrent Neural Network (RNN) based model that takes the joint embedding of lexical and prosodic contents as its input to classify utterances into turn-taking related classes and estimates the turn-taking timing. To this end, we trained a neural network to embed the lexical contents, the fundamental frequencies, and the speech power into a joint embedding space. To learn meaningful embedding spaces, the prosodic features from each single utterance are pre-trained using RNN and combined with utterance lexical embedding as the input of our proposed model. We tested this model on a spontaneous conversation dataset and confirmed that it outperformed the use of word embedding-based features.
BibTeX:
@Inproceedings{Liu2017c,
  author    = {Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {Turn-taking Estimation Model Based on Joint Embedding of Lexical and Prosodic Contents},
  booktitle = {Interspeech 2017},
  year      = {2017},
  address   = {Stockholm, Sweden},
  month     = Aug,
  day       = {20-24},
  doi       = {10.21437/Interspeech.2017-965},
  url       = {http://www.interspeech2017.org/},
  abstract  = {A natural conversation involves rapid exchanges of turns while talking. Taking turns at appropriate timing or intervals is a requisite feature for a dialog system as a conversation partner. This paper proposes a model that estimates the timing of turn-taking during verbal interactions. Unlike previous studies, our proposed model does not rely on a silence region between sentences since a dialog system must respond without large gaps or overlaps. We propose a Recurrent Neural Network (RNN) based model that takes the joint embedding of lexical and prosodic contents as its input to classify utterances into turn-taking related classes and estimates the turn-taking timing. To this end, we trained a neural network to embed the lexical contents, the fundamental frequencies, and the speech power into a joint embedding space. To learn meaningful embedding spaces, the prosodic features from each single utterance are pre-trained using RNN and combined with utterance lexical embedding as the input of our proposed model. We tested this model on a spontaneous conversation dataset and confirmed that it outperformed the use of word embedding-based features.},
  file      = {Liu2017c.pdf:pdf/Liu2017c.pdf:PDF},
}
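A schematic PyTorch sketch of a joint lexical-prosodic utterance classifier in the spirit of the model described in the entry above is given below. The vocabulary size, feature dimensions, recurrent units, and the three-way class set are assumptions, not the authors' configuration, and the pre-training of the prosodic branch is omitted.
import torch
import torch.nn as nn

class TurnTakingClassifier(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, prosody_dim=2,
                 hidden=128, n_classes=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        # One RNN over word embeddings, one over frame-level F0/power features.
        self.lex_rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.pro_rnn = nn.GRU(prosody_dim, hidden, batch_first=True)
        # Joint embedding: concatenate the two final hidden states.
        self.classifier = nn.Linear(2 * hidden, n_classes)  # e.g. take / hold / backchannel

    def forward(self, word_ids, prosody_frames):
        _, h_lex = self.lex_rnn(self.word_emb(word_ids))  # (1, B, hidden)
        _, h_pro = self.pro_rnn(prosody_frames)           # (1, B, hidden)
        joint = torch.cat([h_lex[-1], h_pro[-1]], dim=-1)
        return self.classifier(joint)                     # turn-taking class logits

# Example with dummy inputs: a batch of 4 utterances, 12 tokens each, and
# 50 prosody frames of (F0, power) per utterance.
model = TurnTakingClassifier()
logits = model(torch.randint(0, 10000, (4, 12)), torch.randn(4, 50, 2))
print(logits.shape)  # torch.Size([4, 3])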
Carlos T. Ishi, Jun Arai, Norihiro Hagita, "Prosodic analysis of attention-drawing speech", In Interspeech 2017, Stockholm, Sweden, pp. 909-913, August, 2017.
Abstract: The term “attention drawing” refers to the action of sellers who call out to get the attention of people passing by in front of their stores or shops to invite them inside to buy or sample products. Since the speaking styles exhibited in such attention-drawing speech are clearly different from conversational speech, in this study, we focused on prosodic analyses of attention-drawing speech and collected the speech data of multiple people with previous attention-drawing experience by simulating several situations. We then investigated the effects of several factors, including background noise, interaction phases, and shop categories on the prosodic features of attention-drawing utterances. Analysis results indicate that compared to dialogue interaction utterances, attention-drawing utterances usually have higher power, higher mean F0s, smaller F0 ranges, and do not drop at the end of sentences, regardless of the presence or absence of background noise. Analysis of sentence-final syllable intonation indicates the presence of lengthened flat or rising tones in attention-drawing utterances.
BibTeX:
@Inproceedings{Ishi2017c,
  author    = {Carlos T. Ishi and Jun Arai and Norihiro Hagita},
  title     = {Prosodic analysis of attention-drawing speech},
  booktitle = {Interspeech 2017},
  year      = {2017},
  pages     = {909-913},
  address   = {Stockholm, Sweden},
  month     = Aug,
  day       = {20-24},
  doi       = {10.21437/Interspeech.2017-623},
  url       = {http://www.interspeech2017.org/},
  abstract  = {The term “attention drawing” refers to the action of sellers who call out to get the attention of people passing by in front of their stores or shops to invite them inside to buy or sample products. Since the speaking styles exhibited in such attention-drawing speech are clearly different from conversational speech, in this study, we focused on prosodic analyses of attention-drawing speech and collected the speech data of multiple people with previous attention-drawing experience by simulating several situations. We then investigated the effects of several factors, including background noise, interaction phases, and shop categories on the prosodic features of attention-drawing utterances. Analysis results indicate that compared to dialogue interaction utterances, attention-drawing utterances usually have higher power, higher mean F0s, smaller F0 ranges, and do not drop at the end of sentences, regardless of the presence or absence of background noise. Analysis of sentence-final syllable intonation indicates the presence of lengthened flat or rising tones in attention-drawing utterances.},
}
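The prosodic measurements mentioned in the entry above (utterance power, mean F0, F0 range) can be computed generically as in the sketch below, which uses librosa's pyin pitch tracker. The sampling rate, F0 limits, and the dB conversion are assumptions, and this is not the analysis tooling used in the paper.
import numpy as np
import librosa

def prosodic_features(wav_path: str) -> dict:
    """Mean F0 (Hz), F0 range (semitones), and mean RMS power (dB) for one utterance."""
    y, sr = librosa.load(wav_path, sr=16000)
    f0, voiced_flag, voiced_probs = librosa.pyin(y, fmin=70.0, fmax=400.0, sr=sr)
    f0 = f0[~np.isnan(f0)]                 # keep voiced frames only
    rms = librosa.feature.rms(y=y)[0]
    return {
        "mean_f0_hz": float(np.mean(f0)) if f0.size else None,
        "f0_range_semitones": float(12 * np.log2(np.max(f0) / np.min(f0))) if f0.size else None,
        "mean_power_db": float(20 * np.log10(np.mean(rms) + 1e-10)),
    }

# Usage (hypothetical file): print(prosodic_features("utterance_001.wav"))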
Rosario Sorbello, Salvatore Tramonte, Marcello Giardina, Carmelo Cali, Shuichi Nishio, Hiroshi Ishiguro, Antonio Chella, "Augmented Embodied Emotions by Geminoid Robot induced by Human Bio-feedback Brain Features in a Musical Experience", In Biologically Inspired Cognitive Architectures 2017 (BICA 2017), Moscow, Russia, August, 2017.
Abstract: This paper presents the conceptual framework for a study of musical experience and the associated architecture centred on Human-Humanoid Interaction (HHI). We discuss the state of the art of the theoretical and the experimental research into the cognitive capacity of music. We overview the results that point to the correspondence between the perceptual structures, the cognitive organization of sounds in music, and the motor and affective behaviour. On such grounds we bring in the concepts of musical tensions and functional connections as the constructs that account for such correspondence in music experience. Finally we describe the architecture as a model generator system whose modules can be employed to test this correspondence from which the perceptual, cognitive, affective and motor constituents of musical capacity may emerge.
BibTeX:
@Inproceedings{Sorbello2017a,
  author    = {Rosario Sorbello and Salvatore Tramonte and Marcello Giardina and Carmelo Cali and Shuichi Nishio and Hiroshi Ishiguro and Antonio Chella},
  title     = {Augmented Embodied Emotions by Geminoid Robot induced by Human Bio-feedback Brain Features in a Musical Experience},
  booktitle = {Biologically Inspired Cognitive Architectures 2017 (BICA 2017)},
  year      = {2017},
  address   = {Moscow, Russia},
  month     = Aug,
  day       = {1-6},
  url       = {http://bica2017.bicasociety.org/},
  abstract  = {This paper presents the conceptual framework for a study of musical experience and the associated architecture centred on Human-Humanoid Interaction (HHI). We discuss the state of the art of the theoretical and the experimental research into the cognitive capacity of music. We overview the results that point to the correspondence between the perceptual structures, the cognitive organization of sounds in music, and the motor and affective behaviour. On such grounds we bring in the concepts of musical tensions and functional connections as the constructs that account for such correspondence in music experience. Finally we describe the architecture as a model generator system whose modules can be employed to test this correspondence from which the perceptual, cognitive, affective and motor constituents of musical capacity may emerge.},
  file      = {Sorbello2017a.pdf:pdf/Sorbello2017a.pdf:PDF},
}
Rosario Sorbello, Salvatore Tramonte, Carmelo Cali, Marcello Giardina, Shuichi Nishio, Hiroshi Ishiguro, Antonio Chella, "An Android Architecture for Bio-inspired Honest Signalling in Human-Humanoid Interaction", In Biologically Inspired Cognitive Architectures 2017 (BICA 2017), Moscow, Russia, August, 2017.
Abstract: This paper outlines an augmented robotic architecture to study the conditions of successful Human-Humanoid Interaction (HHI). The architecture is designed as a testable model generator for interaction centred on the ability to emit, display and detect honest signals. First we overview the biological theory in which the concept of honest signals has been put forward in order to assess its explanatory power. We reconstruct the application of the concept of honest signalling in accounting for interaction in strategic contexts and in laying bare the foundation for an automated social metrics. We describe the modules of the architecture, which is intended to implement the concept of honest signalling in connection with a refinement provided by delivering the sense of co-presence in a shared environment. Finally, an analysis of Honest Signals, in terms of body postures, exhibited by participants during the preliminary experiment with the Geminoid Hi-1 is provided.
BibTeX:
@Inproceedings{Sorbello2017,
  author    = {Rosario Sorbello and Salvatore Tramonte and Carmelo Cali and Marcello Giardina and Shuichi Nishio and Hiroshi Ishiguro and Antonio Chella},
  title     = {An Android Architecture for Bio-inspired Honest Signalling in Human-Humanoid Interaction},
  booktitle = {Biologically Inspired Cognitive Architectures 2017 (BICA 2017)},
  year      = {2017},
  address   = {Moscow, Russia},
  month     = Aug,
  day       = {1-6},
  url       = {http://bica2017.bicasociety.org/},
  abstract  = {This paper outlines an augmented robotic architecture to study the conditions of successful Human-Humanoid Interaction (HHI). The architecture is designed as a testable model generator for interaction centred on the ability to emit, display and detect honest signals. First we overview the biological theory in which the concept of honest signals has been put forward in order to assess its explanatory power. We reconstruct the application of the concept of honest signalling in accounting for interaction in strategic contexts and in laying bare the foundation for an automated social metrics. We describe the modules of the architecture, which is intended to implement the concept of honest signalling in connection with a refinement provided by delivering the sense of co-presence in a shared environment. Finally, an analysis of Honest Signals, in terms of body postures, exhibited by participants during the preliminary experiment with the Geminoid Hi-1 is provided.},
  file      = {Sorbello2017.pdf:pdf/Sorbello2017.pdf:PDF},
}
Soheil Keshmiri, Hidenobu Sumioka, Junya Nakanishi, Hiroshi Ishiguro, "Emotional State Estimation Using a Modified Gradient-Based Neural Architecture with Weighted Estimates", In The International Joint Conference on Neural Networks (IJCNN 2017), Anchorage, Alaska, USA, May, 2017.
Abstract: We present a minimalist two-hidden-layer neural architecture for emotional state estimation using electroencephalogram (EEG) data. Our model introduces a new meta-parameter, referred to as the reinforced gradient coefficient, to overcome the peculiar vanishing gradient behaviour exhibited by deep neural architectures. This allows our model to further reduce its deviation from the expected prediction and significantly minimize its estimation error. Furthermore, it adopts a weighting step that captures the discrepancy between two consecutive predictions during training. The value of this weighting factor is learned throughout the training phase, given its positive effect on the overall prediction accuracy of the model. We validate our approach through comparative analysis of its performance in contrast with state-of-the-art techniques in the literature, using two well-known EEG databases. Our model shows significant improvement in prediction accuracy of the emotional states of human subjects, while maintaining a highly simple, minimalist architecture.
BibTeX:
@Inproceedings{Keshmiri2017a,
  author    = {Soheil Keshmiri and Hidenobu Sumioka and Junya Nakanishi and Hiroshi Ishiguro},
  title     = {Emotional State Estimation Using a Modified Gradient-Based Neural Architecture with Weighted Estimates},
  booktitle = {The International Joint Conference on Neural Networks (IJCNN 2017)},
  year      = {2017},
  address   = {Anchorage, Alaska, USA},
  month     = May,
  day       = {18},
  url       = {http://www.ijcnn.org/},
  abstract  = {We present a minimalist two-hidden-layer neural architecture for emotional state estimation using electroencephalogram (EEG) data. Our model introduces a new meta-parameter, referred to as the reinforced gradient coefficient, to overcome the peculiar vanishing gradient behaviour exhibited by deep neural architectures. This allows our model to further reduce its deviation from the expected prediction and significantly minimize its estimation error. Furthermore, it adopts a weighting step that captures the discrepancy between two consecutive predictions during training. The value of this weighting factor is learned throughout the training phase, given its positive effect on the overall prediction accuracy of the model. We validate our approach through comparative analysis of its performance in contrast with state-of-the-art techniques in the literature, using two well-known EEG databases. Our model shows significant improvement in prediction accuracy of the emotional states of human subjects, while maintaining a highly simple, minimalist architecture.},
  file      = {Keshmiri2017a.pdf:pdf/Keshmiri2017a.pdf:PDF},
}
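The Keshmiri et al. entry above names two ingredients, a reinforced gradient coefficient and a weighted blend of consecutive estimates, without spelling out the update rule. Purely as a hedged Python sketch (the function names, the fixed coefficient rho, and the blending form are assumptions for illustration, not the authors' implementation):

import numpy as np

def reinforced_step(params, grad, lr=0.01, rho=2.0):
    # Ordinary gradient step whose gradient is amplified by a fixed coefficient
    # rho, one plausible reading of a "reinforced gradient coefficient" used to
    # counteract vanishing gradients. All values here are illustrative only.
    return np.asarray(params) - lr * rho * np.asarray(grad)

def weighted_estimate(prev_pred, curr_pred, w=0.7):
    # Blend two consecutive predictions with a weight w that would itself be
    # learned during training, per the weighting step described in the abstract.
    return w * np.asarray(curr_pred) + (1.0 - w) * np.asarray(prev_pred)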
Dylan F. Glas, Malcolm Doering, Phoebe Liu, Takayuki Kanda, Hiroshi Ishiguro, "Robot's Delight - A Lyrical Exposition on Learning by Imitation from Human-human Interaction", In 2017 Conference on Human-Robot Interaction (HRI2017) Video Presentation, Vienna, Austria, March, 2017.
Abstract: Now that social robots are beginning to appear in the real world, the question of how to program social behavior is becoming more pertinent than ever. Yet, manual design of interaction scripts and rules can be time-consuming and strongly dependent on the aptitude of a human designer in anticipating the social situations a robot will face. To overcome these challenges, we have proposed the approach of learning interaction logic directly from data captured from natural human-human interactions. While similar in some ways to crowdsourcing approaches like [1], our approach has the benefit of capturing the naturalness and immersion of real interactions, but it faces the added challenges of dealing with sensor noise and an unconstrained action space. In the form of a musical tribute to The Sugarhill Gang's 1979 hit “Rapper's Delight", this video presents a summary of our technique for capturing and reproducing multimodal interactive social behaviors, originally presented in [2], as well as preliminary progress from a new study in which we apply this technique to a stationary android for interactive spoken dialogue.
BibTeX:
@Inproceedings{Glas2017,
  author    = {Dylan F. Glas and Malcolm Doering and Phoebe Liu and Takayuki Kanda and Hiroshi Ishiguro},
  title     = {Robot's Delight - A Lyrical Exposition on Learning by Imitation from Human-human Interaction},
  booktitle = {2017 Conference on Human-Robot Interaction (HRI2017) Video Presentation},
  year      = {2017},
  address   = {Vienna, Austria},
  month     = Mar,
  doi       = {10.1145/3029798.3036646},
  url       = {https://youtu.be/CY1WIfPJHqI},
  abstract  = {Now that social robots are beginning to appear in the real world, the question of how to program social behavior is becoming more pertinent than ever. Yet, manual design of interaction scripts and rules can be time-consuming and strongly dependent on the aptitude of a human designer in anticipating the social situations a robot will face. To overcome these challenges, we have proposed the approach of learning interaction logic directly from data captured from natural human-human interactions. While similar in some ways to crowdsourcing approaches like [1], our approach has the benefit of capturing the naturalness and immersion of real interactions, but it faces the added challenges of dealing with sensor noise and an unconstrained action space. In the form of a musical tribute to The Sugarhill Gang's 1979 hit “Rapper's Delight", this video presents a summary of our technique for capturing and reproducing multimodal interactive social behaviors, originally presented in [2], as well as preliminary progress from a new study in which we apply this technique to a stationary android for interactive spoken dialogue.},
  file      = {Glas2017.pdf:pdf/Glas2017.pdf:PDF},
}
Masa Jazbec, Shuichi Nishio, Hiroshi Ishiguro, Masataka Okubo, Christian Penaloza, "Body-swapping Experiment with an Android - Investigation of The Relationship Between Agency and a Sense of Ownership Toward a Different Body", In The 2017 Conference on Human-Robot Interaction (HRI2017), Vienna, Austria, pp. 143-144, March, 2017.
Abstract: The experiment described in this paper is performed within a system that provides a human with the possibility and capability to be physically immersed in the body of an android robot, Geminoid HI-2.
BibTeX:
@Inproceedings{Jazbec2017,
  author    = {Masa Jazbec and Shuichi Nishio and Hiroshi Ishiguro and Masataka Okubo and Christian Penaloza},
  title     = {Body-swapping Experiment with an Android - Investigation of The Relationship Between Agency and a Sense of Ownership Toward a Different Body},
  booktitle = {The 2017 Conference on Human-Robot Interaction (HRI2017)},
  year      = {2017},
  pages     = {143-144},
  address   = {Vienna, Austria},
  month     = Mar,
  url       = {http://humanrobotinteraction.org/2017/},
  abstract  = {The experiment described in this paper is performed within a system that provides a human with the possibility and capability to be physically immersed in the body of an android robot, Geminoid HI-2.},
}
内田貴久, 港隆史, 石黒浩, "対話意欲を喚起する要因の人-アンドロイド間比較", 第171回 情報処理学会ヒューマンコンピュータインタラクション研究会(SIGHCI171), vol. 2017-HCI-171, no. 11, 大濱信泉記念館, 沖縄, pp. 1-5, January, 2017.
Abstract: 本稿では,ユーザの対話意欲を喚起する自律対話アンドロイドの対話戦略を検討する.対話の本質である主観的な意見のやりとりを行う上では,相手の意思性をどれだけ感じるかが対話意欲の喚起に重要である.さらに,相手に親和性を感じなければ,相手との関係構築意欲が生じず,対話意欲も生じない.本稿では,対話における相手との同調割合(対話戦略)と,親和性,意思性,対話意欲の関係を人間同士の対話,人間とアンドロイドの対話で調べたところ,両者間で異なる結果を得た.これをもとに,人間・アンドロイドそれぞれに対する対話の動機や期待する関係性が異なることを考察した.
BibTeX:
@Inproceedings{内田貴久2017,
  author    = {内田貴久 and 港隆史 and 石黒浩},
  title     = {対話意欲を喚起する要因の人-アンドロイド間比較},
  booktitle = {第171回 情報処理学会ヒューマンコンピュータインタラクション研究会(SIGHCI171)},
  year      = {2017},
  volume    = {2017-HCI-171},
  number    = {11},
  pages     = {1-5},
  address   = {大濱信泉記念館, 沖縄},
  month     = Jan,
  url       = {http://www.sighci.jp/events/sig/171},
  abstract  = {本稿では,ユーザの対話意欲を喚起する自律対話アンドロイドの対話戦略を検討する.対話の本質である主観的な意見のやりとりを行う上では,相手の意思性をどれだけ感じるかが対話意欲の喚起に重要である.さらに,相手に親和性を感じなければ,相手との関係構築意欲が生じず,対話意欲も生じない.本稿では,対話における相手との同調割合(対話戦略)と,親和性,意思性,対話意欲の関係を人間同士の対話,人間とアンドロイドの対話で調べたところ,両者間で異なる結果を得た.これをもとに,人間・アンドロイドそれぞれに対する対話の動機や期待する関係性が異なることを考察した.},
  file      = {内田貴久2017.pdf:pdf/内田貴久2017.pdf:PDF},
}
Takahisa Uchida, Takashi Minato, Hiroshi Ishiguro, "Does a Conversational Robot Need to Have its own Values? A Study of Dialogue Strategy to Enhance People's Motivation to Use Autonomous Conversational Robots", In the 4th annual International Conference on Human-Agent Interaction (iHAI2016), Singapore, pp. 187-192, October, 2016.
Abstract: This work studies a dialogue strategy aimed at building people's motivation to talk with autonomous conversational robots. Spoken dialogue systems have recently been developed rapidly, but the existing systems are insufficient for continuous use because they fail to inspire the user's motivation to talk with them. One of the reasons is that users fail to interpret the intention of the system's utterances as being based on its values. People come to know each other's values, and change their own values, in human-human conversations; we therefore hypothesize that a dialogue strategy that makes the user saliently feel the difference between his and the system's values promotes the motivation for dialogue. The experiment evaluating human-human dialogue supported our hypothesis. However, the experiment with human-android dialogue did not produce the same result, suggesting that people did not attribute values to the android. For a conversational robot, we need further techniques to make people believe the robot speaks based on its values.
BibTeX:
@Inproceedings{Uchida2016a,
  author    = {Takahisa Uchida and Takashi Minato and Hiroshi Ishiguro},
  title     = {Does a Conversational Robot Need to Have its own Values? A Study of Dialogue Strategy to Enhance People's Motivation to Use Autonomous Conversational Robots},
  booktitle = {the 4th annual International Conference on Human-Agent Interaction (iHAI2016)},
  year      = {2016},
  pages     = {187-192},
  address   = {Singapore},
  month     = Oct,
  url       = {http://hai-conference.net/hai2016/},
  abstract  = {This work studies a dialogue strategy aimed at building people's motivation to talk with autonomous conversational robots. Spoken dialogue systems have recently been developed rapidly, but the existing systems are insufficient for continuous use because they fail to inspire the user's motivation to talk with them. One of the reasons is that users fail to interpret the intention of the system's utterances as being based on its values. People come to know each other's values, and change their own values, in human-human conversations; we therefore hypothesize that a dialogue strategy that makes the user saliently feel the difference between his and the system's values promotes the motivation for dialogue. The experiment evaluating human-human dialogue supported our hypothesis. However, the experiment with human-android dialogue did not produce the same result, suggesting that people did not attribute values to the android. For a conversational robot, we need further techniques to make people believe the robot speaks based on its values.},
  file      = {Uchida2016a.pdf:pdf/Uchida2016a.pdf:PDF},
}
Carlos T. Ishi, Tomo Funayama, Takashi Minato, Hiroshi Ishiguro, "Motion generation in android robots during laughing speech", In The 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, DaeJeon, Korea, pp. 3327-3332, October, 2016.
Abstract: We are dealing with the problem of generating natural human-like motions during speech in android robots, which have human-like appearances. So far, automatic generation methods have been proposed for lip and head motions of tele-presence robots, based on the speech signal of the tele-operator. In the present study, we aim at extending the speech-driven motion generation methods to laughing speech, since laughter often occurs in natural dialogue interactions and may cause miscommunication if there is a mismatch between audio and visual modalities. Based on analysis results of human behaviors during laughing speech, we proposed a motion generation method given the speech signal and the laughing speech intervals. Subjective experiments were conducted using our android robot by generating five different motion types, considering several modalities. Evaluation results show the effectiveness of controlling different parts of the face, head and upper body (eyelid narrowing, lip corner/cheek raising, eye blinking, head motion and upper body motion control).
BibTeX:
@Inproceedings{Ishi2016b,
  author    = {Carlos T. Ishi and Tomo Funayama and Takashi Minato and Hiroshi Ishiguro},
  title     = {Motion generation in android robots during laughing speech},
  booktitle = {The 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year      = {2016},
  pages     = {3327-3332},
  address   = {DaeJeon, Korea},
  month     = Oct,
  url       = {http://www.iros2016.org/},
  abstract  = {We are dealing with the problem of generating natural human-like motions during speech in android robots, which have human-like appearances. So far, automatic generation methods have been proposed for lip and head motions of tele-presence robots, based on the speech signal of the tele-operator. In the present study, we aim at extending the speech-driven motion generation methods to laughing speech, since laughter often occurs in natural dialogue interactions and may cause miscommunication if there is a mismatch between audio and visual modalities. Based on analysis results of human behaviors during laughing speech, we proposed a motion generation method given the speech signal and the laughing speech intervals. Subjective experiments were conducted using our android robot by generating five different motion types, considering several modalities. Evaluation results show the effectiveness of controlling different parts of the face, head and upper body (eyelid narrowing, lip corner/cheek raising, eye blinking, head motion and upper body motion control).},
  file      = {Ishi2016b.pdf:pdf/Ishi2016b.pdf:PDF},
}
Junya Nakanishi, Hidenobu Sumioka, Hiroshi Ishiguro, "Can children anthropomorphize human-shaped communication media?: a pilot study on co-sleeping with a huggable communication medium", In The 4th annual International Conference on Human-Agent Interaction (HAI 2016), Biopolis, Singapore, pp. 103-106, October, 2016.
Abstract: This pilot study reports an experiment in which we introduced huggable communication media into daytime sleep in a co-sleeping situation. The purpose of the experiment was to investigate whether it would help soothe child users to sleep and how the hugging experience with an anthropomorphic communication medium affects children's anthropomorphic impressions of the medium in co-sleeping. In the experiment, nursery teachers read two-year-old or five-year-old children to sleep through a huggable communication medium called Hugvie, and asked the children to draw Hugvie before and after the reading to evaluate changes in their impressions of it. The results show differences between the two classes in sleeping behavior with Hugvie and in impressions of Hugvie. Moreover, they also suggest the possibility that co-sleeping with a humanlike communication medium induces children to sleep deeply.
BibTeX:
@Inproceedings{Nakanishi2016a,
  author    = {Junya Nakanishi and Hidenobu Sumioka and Hiroshi Ishiguro},
  title     = {Can children anthropomorphize human-shaped communication media?: a pilot study on co-sleeping with a huggable communication medium},
  booktitle = {The 4th annual International Conference on Human-Agent Interaction (HAI 2016)},
  year      = {2016},
  pages     = {103-106},
  address   = {Biopolis, Singapore},
  month     = Oct,
  url       = {http://hai-conference.net/hai2016/},
  abstract  = {This pilot study reports an experiment in which we introduced huggable communication media into daytime sleep in a co-sleeping situation. The purpose of the experiment was to investigate whether it would help soothe child users to sleep and how the hugging experience with an anthropomorphic communication medium affects children's anthropomorphic impressions of the medium in co-sleeping. In the experiment, nursery teachers read two-year-old or five-year-old children to sleep through a huggable communication medium called Hugvie, and asked the children to draw Hugvie before and after the reading to evaluate changes in their impressions of it. The results show differences between the two classes in sleeping behavior with Hugvie and in impressions of Hugvie. Moreover, they also suggest the possibility that co-sleeping with a humanlike communication medium induces children to sleep deeply.},
  file      = {Nakanishi2016a.pdf:pdf/Nakanishi2016a.pdf:PDF},
}
Carlos T. Ishi, Chaoran Liu, Jani Even, Norihiro Hagita, "Hearing support system using environment sensor network", In The 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, DaeJeon, Korea, pp. 1275-1280, October, 2016.
Abstract: In order to solve the problems of current hearing aid devices, we make use of sound environment intelligence technologies, and propose a hearing support system, where individual target and anti-target sound sources in the environment can be selected, and spatial information of the target sound sources is reconstructed. The performance of the sound separation module was evaluated for different noise conditions. Results showed that signal-to-noise ratios of around 15dB could be achieved by the proposed system for a 65dB babble noise plus directional music noise condition. In the same noise condition, subjective intelligibility tests were conducted, and an improvement of 65 to 90% word intelligibility rates could be achieved by using the proposed hearing support system.
BibTeX:
@Inproceedings{Ishi2016c,
  author    = {Carlos T. Ishi and Chaoran Liu and Jani Even and Norihiro Hagita},
  title     = {Hearing support system using environment sensor network},
  booktitle = {The 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year      = {2016},
  pages     = {1275-1280},
  address   = {DaeJeon, Korea},
  month     = Oct,
  url       = {http://www.iros2016.org/},
  abstract  = {In order to solve the problems of current hearing aid devices, we make use of sound environment intelligence technologies, and propose a hearing support system, where individual target and anti-target sound sources in the environment can be selected, and spatial information of the target sound sources is reconstructed. The performance of the sound separation module was evaluated for different noise conditions. Results showed that signal-to-noise ratios of around 15dB could be achieved by the proposed system for a 65dB babble noise plus directional music noise condition. In the same noise condition, subjective intelligibility tests were conducted, and an improvement of 65 to 90% word intelligibility rates could be achieved by using the proposed hearing support system.},
  file      = {Ishi2016c.pdf:pdf/Ishi2016c.pdf:PDF},
}
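For reference, the 15 dB figure in the entry above is the standard decibel ratio of separated-speech power to residual-noise power; a minimal Python sketch of that computation (variable names are mine, not the paper's) is:

import numpy as np

def snr_db(separated_voice, residual_noise):
    # Signal-to-noise ratio in dB between two equally long waveforms, e.g. a
    # separated target voice versus the remaining babble-plus-music noise.
    p_signal = np.mean(np.square(separated_voice))
    p_noise = np.mean(np.square(residual_noise))
    return 10.0 * np.log10(p_signal / p_noise)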
Phoebe Liu, Dylan F. Glas, Takayuki Kanda, Hiroshi Ishiguro, "Learning Interactive Behavior for Service Robots - The Challenge of Mixed-Initiative Interaction", In The 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016) Workshop on Behavior Adaptation, Interaction and Learning for Assistive Robotics (BAILAR), New York, NY, USA, August, 2016.
Abstract: Learning-by-imitation approaches for developing human-robot interaction logic are relatively new, but they have been gaining popularity in the research community in recent years. Learning interaction logic from human-human interaction data provides several benefits over explicit programming, including a reduced level of effort for interaction design and the ability to capture unconscious, implicit social rules that are difficult to articulate or program. In previous work, we have shown a technique capable of learning behavior logic for a service robot in a shopping scenario, based on non-annotated speech and motion data from human-human example interactions. That approach was effective in reproducing reactive behavior, such as question-answer interactions. In our current work (still in progress), we are focusing on reproducing mixed-initiative interactions which include proactive behavior on the part of the robot. We have collected a much more challenging data set featuring high variability of behavior and proactive behavior in response to backchannel utterances. We are currently investigating techniques for reproducing this mixed-initiative behavior and for adapting the robot's behavior to customers with different needs.
BibTeX:
@Inproceedings{Liu2016a,
  author    = {Phoebe Liu and Dylan F. Glas and Takayuki Kanda and Hiroshi Ishiguro},
  title     = {Learning Interactive Behavior for Service Robots - The Challenge of Mixed-Initiative Interaction},
  booktitle = {The 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016) Workshop on Behavior Adaptation, Interaction and Learning for Assistive Robotics (BAILAR)},
  year      = {2016},
  address   = {New York, NY, USA},
  month     = Aug,
  abstract  = {Learning-by-imitation approaches for developing human-robot interaction logic are relatively new, but they have been gaining popularity in the research community in recent years. Learning interaction logic from human-human interaction data provides several benefits over explicit programming, including a reduced level of effort for interaction design and the ability to capture unconscious, implicit social rules that are difficult to articulate or program. In previous work, we have shown a technique capable of learning behavior logic for a service robot in a shopping scenario, based on non-annotated speech and motion data from human-human example interactions. That approach was effective in reproducing reactive behavior, such as question-answer interactions. In our current work (still in progress), we are focusing on reproducing mixed-initiative interactions which include proactive behavior on the part of the robot. We have collected a much more challenging data set featuring high variability of behavior and proactive behavior in response to backchannel utterances. We are currently investigating techniques for reproducing this mixed-initiative behavior and for adapting the robot's behavior to customers with different needs.},
  file      = {Liu2016a.pdf:pdf/Liu2016a.pdf:PDF},
}
Kurima Sakai, Takashi Minato, Carlos T. Ishi, Hiroshi Ishiguro, "Speech Driven Trunk Motion Generating System Based on Physical Constraint", In the IEEE International Symposium on Robot and Human Interactive Communication for 2016, Teachers College, Columbia University, USA, pp. 232-239, August, 2016.
Abstract: We developed a method to automatically generate humanlike trunk motions (neck and waist motions) of a conversational android from its speech in real time. It is based on a spring-damper dynamical model that simulates the human trunk movement involved in speech. Differing from existing methods based on machine learning, our system can easily modulate the generated motions according to speech patterns, since the parameters in the model correspond to muscular hardness. The experimental result showed that the android motions generated by our model can be more natural and enhance the participants' motivation to talk more, compared with directly copied human motions.
BibTeX:
@Inproceedings{Sakai2016,
  author    = {Kurima Sakai and Takashi Minato and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {Speech Driven Trunk Motion Generating System Based on Physical Constraint},
  booktitle = {the IEEE International Symposium on Robot and Human Interactive Communication for 2016},
  year      = {2016},
  pages     = {232-239},
  address   = {Teachers College, Columbia University, USA},
  month     = Aug,
  url       = {http://ro-man2016.org/},
  abstract  = {We developed a method to automatically generate humanlike trunk motions (neck and waist motions) of a conversational android from its speech in real time. It is based on a spring-damper dynamical model that simulates the human trunk movement involved in speech. Differing from existing methods based on machine learning, our system can easily modulate the generated motions according to speech patterns, since the parameters in the model correspond to muscular hardness. The experimental result showed that the android motions generated by our model can be more natural and enhance the participants' motivation to talk more, compared with directly copied human motions.},
  file      = {Sakai2016.pdf:pdf/Sakai2016.pdf:PDF},
}
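The Sakai et al. entry above drives the trunk with a spring-damper model excited by the speech signal. As a minimal Python illustration of that idea (the forcing term, gains and parameter values are assumptions, not the published model), one joint angle can be simulated like this:

import numpy as np

def trunk_angle_from_speech(envelope, dt=0.01, k=40.0, c=8.0, gain=0.5):
    # Integrate a 1-DoF spring-damper, theta'' = -k*theta - c*theta' + gain*f(t),
    # where f(t) is the frame-wise speech amplitude envelope; the stiffness k
    # plays the role of the "muscular hardness" mentioned in the abstract.
    theta, vel = 0.0, 0.0
    angles = []
    for f in envelope:
        acc = -k * theta - c * vel + gain * f
        vel += acc * dt
        theta += vel * dt
        angles.append(theta)
    return np.array(angles)

Raising k stiffens the simulated trunk, which is the kind of speech-dependent modulation the abstract says is easier here than with learned models.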
Dylan F. Glas, Takashi Minato, Carlos T. Ishi, Tatsuya Kawahara, Hiroshi Ishiguro, "ERICA: The ERATO Intelligent Conversational Android", In the IEEE International Symposium on Robot and Human Interactive Communication for 2016, New York, NY, USA, pp. 22-29, August, 2016.
Abstract: The development of an android with convincingly lifelike appearance and behavior has been a long-standing goal in robotics, and recent years have seen great progress in many of the technologies needed to create such androids. However, it is necessary to actually integrate these technologies into a robot system in order to assess the progress that has been made towards this goal and to identify important areas for future work. To this end, we are developing ERICA, an autonomous android system capable of conversational interaction, comprised of state-of-the-art component technologies, and arguably the most humanlike android built to date. Although the project is ongoing, initial development of the basic android platform has been completed. In this paper we present an overview of the requirements and design of the platform, describe the development process of an interactive application, report on ERICA's first autonomous public demonstration, and discuss the main technical challenges that remain to be addressed in order to create humanlike, autonomous androids.
BibTeX:
@Inproceedings{Glas2016b,
  author    = {Dylan F. Glas and Takashi Minato and Carlos T. Ishi and Tatsuya Kawahara and Hiroshi Ishiguro},
  title     = {ERICA: The ERATO Intelligent Conversational Android},
  booktitle = {the IEEE International Symposium on Robot and Human Interactive Communication for 2016},
  year      = {2016},
  pages     = {22-29},
  address   = {New York, NY, USA},
  month     = Aug,
  url       = {http://www.ro-man2016.org/},
  abstract  = {The development of an android with convincingly lifelike appearance and behavior has been a long-standing goal in robotics, and recent years have seen great progress in many of the technologies needed to create such androids. However, it is necessary to actually integrate these technologies into a robot system in order to assess the progress that has been made towards this goal and to identify important areas for future work. To this end, we are developing ERICA, an autonomous android system capable of conversational interaction, comprised of state-of-the-art component technologies, and arguably the most humanlike android built to date. Although the project is ongoing, initial development of the basic android platform has been completed. In this paper we present an overview of the requirements and design of the platform, describe the development process of an interactive application, report on ERICA's first autonomous public demonstration, and discuss the main technical challenges that remain to be addressed in order to create humanlike, autonomous androids.},
  file      = {Glas2016b.pdf:pdf/Glas2016b.pdf:PDF},
}
Takahisa Uchida, Takashi Minato, Hiroshi Ishiguro, "A Values-based Dialogue Strategy to Build Motivation for Conversation with Autonomous Conversational Robots", In the IEEE International Symposium on Robot and Human Interactive Communication for 2016, Teachers College, Columbia University, USA, pp. 206-211, August, 2016.
Abstract: The goal of this study is to develop a humanoid robot that can continuously hold conversations with people. Recent spoken dialogue systems have been developed quickly; however, the existing systems are not used continuously since they are not sufficient to promote users' motivation to talk with them. This is because a user cannot feel that a robot has its own intention; it is therefore necessary that a robot has its own values, whereby users feel intentionality in what it says. This paper focuses on a dialogue strategy to promote people's motivation when the robot is assumed to have a values-based dialogue system. People's motivation can be influenced by the intentionality and also by the affinity of the robot. We hypothesized that there is a good disagreement/agreement ratio in the conversation that nicely balances people's feelings of intentionality and affinity. The result of a psychological experiment using an android robot partially supported our hypothesis.
BibTeX:
@Inproceedings{Uchida2016,
  author    = {Takahisa Uchida and Takashi Minato and Hiroshi Ishiguro},
  title     = {A Values-based Dialogue Strategy to Build Motivation for Conversation with Autonomous Conversational Robots},
  booktitle = {the IEEE International Symposium on Robot and Human Interactive Communication for 2016},
  year      = {2016},
  pages     = {206-211},
  address   = {Teachers College, Columbia University, USA},
  month     = Aug,
  url       = {http://ro-man2016.org/},
  abstract  = {The goal of this study is to develop a humanoid robot that can continuously hold conversations with people. Recent spoken dialogue systems have been developed quickly; however, the existing systems are not used continuously since they are not sufficient to promote users' motivation to talk with them. This is because a user cannot feel that a robot has its own intention; it is therefore necessary that a robot has its own values, whereby users feel intentionality in what it says. This paper focuses on a dialogue strategy to promote people's motivation when the robot is assumed to have a values-based dialogue system. People's motivation can be influenced by the intentionality and also by the affinity of the robot. We hypothesized that there is a good disagreement/agreement ratio in the conversation that nicely balances people's feelings of intentionality and affinity. The result of a psychological experiment using an android robot partially supported our hypothesis.},
  file      = {Uchida2016.pdf:pdf/Uchida2016.pdf:PDF},
}
Hiroaki Hatano, Carlos T. Ishi, Tsuyoshi Komatsubara, Masahiro Shiomi, Takayuki Kanda, "Analysis of laughter events and social status of children in classrooms", In Speech Prosody 2016 Boston (Speech Prosody 8), Boston, USA, pp. 1004-1008, May, 2016.
Abstract: Aiming at analyzing the social interactions of children, we have collected data in a science classroom of an elementary school, using our developed system, which is able to obtain information about who is talking, when and where in an environment, based on the integration of multiple microphone arrays and human tracking technologies. In the present work, among the sound activities in the classroom, we focused on laughter events, since laughter conveys important social functions in communication and is a possible cue for identifying social status. Social status is often studied in educational and developmental research, as it is closely related to children's social and academic life. Laughter events were extracted by making use of visual displays of spatial-temporal information provided by the developed system, while social status was quantified based on a sociometry questionnaire. Analysis results revealed that the number of laughter events in children with high social status was significantly higher than in those with low social status. The relationship between laughter type and social status was also investigated.
BibTeX:
@Inproceedings{Hatano2016,
  author          = {Hiroaki Hatano and Carlos T. Ishi and Tsuyoshi Komatsubara and Masahiro Shiomi and Takayuki Kanda},
  title           = {Analysis of laughter events and social status of children in classrooms},
  booktitle       = {Speech Prosody 2016 Boston (Speech Prosody 8)},
  year            = {2016},
  pages           = {1004-1008},
  address         = {Boston, USA},
  month           = May,
  url             = {http://sites.bu.edu/speechprosody2016/},
  abstract        = {Aiming at analyzing the social interactions of children, we have collected data in a science classroom of an elementary school, using our developed system, which is able to obtain information about who is talking, when and where in an environment, based on the integration of multiple microphone arrays and human tracking technologies. In the present work, among the sound activities in the classroom, we focused on laughter events, since laughter conveys important social functions in communication and is a possible cue for identifying social status. Social status is often studied in educational and developmental research, as it is closely related to children's social and academic life. Laughter events were extracted by making use of visual displays of spatial-temporal information provided by the developed system, while social status was quantified based on a sociometry questionnaire. Analysis results revealed that the number of laughter events in children with high social status was significantly higher than in those with low social status. The relationship between laughter type and social status was also investigated.},
  file            = {Hatano2016.pdf:pdf/Hatano2016.pdf:PDF},
  keywords        = {laughter, social status, children, natural conversation, real environment},
}
Carlos T. Ishi, Hiroaki Hatano, Hiroshi Ishiguro, "Audiovisual analysis of relations between laughter types and laughter motions", In Speech Prosody 2016 Boston (Speech Prosody 8), Boston, USA, pp. 806-810, May, 2016.
Abstract: Laughter commonly occurs in daily interactions, and is not simply related to funny situations but also expresses certain attitudes, having important social functions in communication. The background of the present work is the generation of natural motions in a humanoid robot, where miscommunication might be caused if there is a mismatch between audio and visual modalities, especially in laughter intervals. In the present work, we analyze a multimodal dialogue database and investigate the relations between different types of laughter (such as production type, laughing style, and laughter functions) and the facial expressions, head and body motions during laughter.
BibTeX:
@Inproceedings{Ishi2016,
  author    = {Carlos T. Ishi and Hiroaki Hatano and Hiroshi Ishiguro},
  title     = {Audiovisual analysis of relations between laughter types and laughter motions},
  booktitle = {Speech Prosody 2016 Boston (Speech Prosody 8)},
  year      = {2016},
  pages     = {806-810},
  address   = {Boston, USA},
  month     = May,
  url       = {http://sites.bu.edu/speechprosody2016/},
  abstract  = {Laughter commonly occurs in daily interactions, and is not simply related to funny situations but also expresses certain attitudes, having important social functions in communication. The background of the present work is the generation of natural motions in a humanoid robot, where miscommunication might be caused if there is a mismatch between audio and visual modalities, especially in laughter intervals. In the present work, we analyze a multimodal dialogue database and investigate the relations between different types of laughter (such as production type, laughing style, and laughter functions) and the facial expressions, head and body motions during laughter.},
  file      = {Ishi2016.pdf:pdf/Ishi2016.pdf:PDF},
}
Dylan F. Glas, Takayuki Kanda, Hiroshi Ishiguro, "Human-Robot Interaction Design using Interaction Composer - Eight Years of Lessons Learned", In 11th ACM/IEEE International Conference on Human-Robot Interaction, Christchurch, New Zealand, pp. 303-310, March, 2016.
Abstract: Interaction Composer, a visual programming environment designed to enable programmers and non-programmers to collaboratively design human-robot interactions in the form of state-based flows, has been in use at our laboratory for eight years. The system architecture and the design principles behind the framework have been presented in other work. In this paper, we take a case-study approach, examining several actual examples of the use of this toolkit over an eight-year period. We examine the structure and content of interaction flows, identify recurring design patterns, and observe which elements of the framework have proven valuable, as well as documenting its failures: features which did not solve their intended purposes, and workarounds which might be better addressed by different approaches. It is hoped that the insights gained from this study will contribute to the development of more effective and more usable tools and frameworks for interaction design.
BibTeX:
@Inproceedings{Glas2016a,
  author    = {Dylan F. Glas and Takayuki Kanda and Hiroshi Ishiguro},
  title     = {Human-Robot Interaction Design using Interaction Composer - Eight Years of Lessons Learned},
  booktitle = {11th ACM/IEEE International Conference on Human-Robot Interaction},
  year      = {2016},
  pages     = {303-310},
  address   = {Christchurch, New Zealand},
  month     = Mar,
  url       = {http://humanrobotinteraction.org/2016/},
  abstract  = {Interaction Composer, a visual programming environment designed to enable programmers and non-programmers to collaboratively design human-robot interactions in the form of state-based flows, has been in use at our laboratory for eight years. The system architecture and the design principles behind the framework have been presented in other work. In this paper, we take a case-study approach, examining several actual examples of the use of this toolkit over an eight-year period. We examine the structure and content of interaction flows, identify recurring design patterns, and observe which elements of the framework have proven valuable, as well as documenting its failures: features which did not solve their intended purposes, and workarounds which might be better addressed by different approaches. It is hoped that the insights gained from this study will contribute to the development of more effective and more usable tools and frameworks for interaction design.},
  file      = {Glas2016a.pdf:pdf/Glas2016a.pdf:PDF},
}
港隆史, 石黒浩, "人と自然に対話できる自律型アンドロイドの開発", 日本音響学会2016年春季研究発表会, 桐蔭横浜大学, 神奈川, pp. 1481-1482, March, 2016.
Abstract: 石黒ERATOプロジェクトで開発している自律対話型アンドロイドの対話システムについて紹介する
BibTeX:
@Inproceedings{港隆史2016,
  author    = {港隆史 and 石黒浩},
  title     = {人と自然に対話できる自律型アンドロイドの開発},
  booktitle = {日本音響学会2016年春季研究発表会},
  year      = {2016},
  pages     = {1481-1482},
  address   = {桐蔭横浜大学, 神奈川},
  month     = Mar,
  url       = {http://www.asj.gr.jp/annualmeeting/index.html},
  abstract  = {石黒ERATOプロジェクトで開発している自律対話型アンドロイドの対話システムについて紹介する},
  file      = {港隆史2016.pdf:pdf/港隆史2016.pdf:PDF},
}
波多野博顕, 石井カルロス寿憲, 石黒浩, "対話相手の違いに応じた発話スタイルの変化:ジェミノイド対話の分析", 日本音響学会2016年春季研究発表会, 桐蔭横浜大学, 神奈川県, pp. 343-344, March, 2016.
Abstract: 同一の被験者が異なる見かけを持つアンドロイドと対話したとき,相槌などの韻律がどのように変化するのかについて,対人スタイルという観点から分析した結果の報告を行う。
BibTeX:
@Inproceedings{波多野博顕2016,
  author    = {波多野博顕 and 石井カルロス寿憲 and 石黒浩},
  title     = {対話相手の違いに応じた発話スタイルの変化:ジェミノイド対話の分析},
  booktitle = {日本音響学会2016年春季研究発表会},
  year      = {2016},
  pages     = {343-344},
  address   = {桐蔭横浜大学, 神奈川県},
  month     = Mar,
  url       = {http://www.asj.gr.jp/annualmeeting/index.html},
  abstract  = {同一の被験者が異なる見かけを持つアンドロイドと対話したとき,相槌などの韻律がどのように変化するのかについて,対人スタイルという観点から分析した結果の報告を行う。},
}
Hidenobu Sumioka, Yuichiro Yoshikawa, Yasuo Wada, Hiroshi Ishiguro, "Teachers' impressions on robots for therapeutic applications", In International Workshop on Intervention of Children with Autism Spectrum Disorders using a Humanoid Robot, Kanagawa, Japan, pp. (ASD-HR2), November, 2015.
Abstract: Autism spectrum disorders (ASD) can cause lifelong challenges. However, there are a variety of therapeutic and educational approaches, any of which may have educational benefits in some, but not all, individuals with ASD. Given recent rapid technological advances, it has been argued that specific robotic applications could be effectively harnessed to provide innovative clinical treatments for children with ASD. There have, however, been few exchanges between psychiatrists and robotics researchers, although such exchanges are now beginning to occur. In this symposium, to promote a worldwide interdisciplinary discussion about potential robotic applications in ASD fields, pioneering research activities using robots for children with ASD are introduced by psychiatrists and robotics researchers.
BibTeX:
@Inproceedings{Sumioka2015c,
  author    = {Hidenobu Sumioka and Yuichiro Yoshikawa and Yasuo Wada and Hiroshi Ishiguro},
  title     = {Teachers' impressions on robots for therapeutic applications},
  booktitle = {International Workshop on Intervention of Children with Autism Spectrum Disorders using a Humanoid Robot},
  year      = {2015},
  pages     = {(ASD-HR2)},
  address   = {Kanagawa, Japan},
  month     = NOV,
  url       = {https://sites.google.com/site/asdhr2015/home},
  abstract  = {Autism spectrum disorders (ASD) can cause lifelong challenges. However, there are a variety of therapeutic and educational approaches, any of which may have educational benefits in some, but not all, individuals with ASD. Given recent rapid technological advances, it has been argued that specific robotic applications could be effectively harnessed to provide innovative clinical treatments for children with ASD. There have, however, been few exchanges between psychiatrists and robotics researchers, although such exchanges are now beginning to occur. In this symposium, to promote a worldwide interdisciplinary discussion about potential robotic applications in ASD fields, pioneering research activities using robots for children with ASD are introduced by psychiatrists and robotics researchers.},
  file      = {Sumioka2015c.pdf:pdf/Sumioka2015c.pdf:PDF},
}
北村達也, 能田由紀子, 吐師道子, 波多野博顕, 梅谷智弘, "磁気センサシステムに基づく調音運動と口蓋形状の関係の観測", 第60回 日本音声言語医学会総会・学術講演会, 愛知県産業労働センター, pp. 63, October, 2015.
Abstract: 本研究では磁気センサシステム(NDI社 Wave Speech Research System)を用いて正中矢状面上および冠状面上の調音運動を観測し,口蓋との位置関係を可視化した.この磁気センサシステムは,磁気を利用して調音器官に貼り付けた小型のセンサ(サイズ:3 mm×3 mm×2 mm)の位置をリアルタイム計測することができる.従来の研究では主として正中矢状面上の調音運動の観測に用いられてきたが,本研究では舌の正中矢状面方向に加え冠状面方向にセンサを貼り付けることによって舌運動の3次元的な計測を実現した.また,歯科用印象材を用いて口蓋形状を含む歯型を採取し,それを利用して咬合面を計測した(昨年度の発表).さらに,3次元プロッタ(Roland社 MDX-20)を用いてこの歯型の3次元形状をスキャンし,得られた3次元口蓋形状を磁気センサシステムにより計測した調音空間上にスーパーインポーズした.この手法によって,3次元口蓋形状と調音運動の関係を明らかにすることができる.
BibTeX:
@Inproceedings{波多野博顕2015,
  author    = {北村達也 and 能田由紀子 and 吐師道子 and 波多野博顕 and 梅谷智弘},
  title     = {磁気センサシステムに基づく調音運動と口蓋形状の関係の観測},
  booktitle = {第60回 日本音声言語医学会総会・学術講演会},
  year      = {2015},
  pages     = {63},
  address   = {愛知県産業労働センター},
  month     = OCT,
  url       = {http://www.jslp.org/soukai/index.htm},
  abstract  = {本研究では磁気センサシステム(NDI社 Wave Speech Research System)を用いて正中矢状面上および冠状面上の調音運動を観測し,口蓋との位置関係を可視化した.この磁気センサシステムは,磁気を利用して調音器官に貼り付けた小型のセンサ(サイズ:3 mm×3 mm×2 mm)の位置をリアルタイム計測することができる.従来の研究では主として正中矢状面上の調音運動の観測に用いられてきたが,本研究では舌の正中矢状面方向に加え冠状面方向にセンサを貼り付けることによって舌運動の3次元的な計測を実現した.また,歯科用印象材を用いて口蓋形状を含む歯型を採取し,それを利用して咬合面を計測した(昨年度の発表).さらに,3次元プロッタ(Roland社 MDX-20)を用いてこの歯型の3次元形状をスキャンし,得られた3次元口蓋形状を磁気センサシステムにより計測した調音空間上にスーパーインポーズした.この手法によって,3次元口蓋形状と調音運動の関係を明らかにすることができる.},
  file      = {波多野博顕2015.pdf:pdf/波多野博顕2015.pdf:PDF},
}
Hiroaki Hatano, Carlos T. Ishi, Makiko Matsuda, "Automatic evaluation for accentuation of Japanese read speech", In International Workshop Construction of Digital Resources for Learning Japanese, Italy, pp. 4-5 (Abstracts), October, 2015.
Abstract: The purpose of our research is to develop a method for automatically evaluating Japanese accentuation based on acoustic features. For this purpose, we use "Julius", a large-vocabulary continuous speech recognition decoder, to segment speech into phonemes. We employed an open-source database for the analysis, selecting read speech by 10 native speakers of Japanese and Chinese from "The Contrastive Linguistic Database for Japanese Language Learners' Spoken Language in Japanese and their First Languages". The accent unit is the "bunsetsu", which consists of a word and its particles. The total number of units is about 2,500 (10 speakers * 2 native languages * about 125 "bunsetsu"). The accent type of each unit was judged by a native speaker of Japanese (a Japanese-language teacher) and a native speaker of Chinese (a Japanese-language student who has N1). We use these judgments as reference data for verifying our method. We extracted fundamental frequencies (F0) from each vowel portion of the read speech and compared adjacent vowels to check whether the difference in F0 exceeded a threshold. We used not only the mean F0 of each vowel section, but also the median and extrapolated values. As a result, our method showed 70-80% agreement with the human assessments. It seems reasonable to conclude that our proposed method for evaluating accentuation has native-like accuracy.
BibTeX:
@Inproceedings{Hatano2015a,
  author    = {Hiroaki Hatano and Carlos T. Ishi and Makiko Matsuda},
  title     = {Automatic evaluation for accentuation of Japanese read speech},
  booktitle = {International Workshop Construction of Digital Resources for Learning Japanese},
  year      = {2015},
  pages     = {4-5 (Abstracts)},
  address   = {Italy},
  month     = Oct,
  url       = {https://events.unibo.it/dit-workshop-japanese-digital-resources},
  abstract  = {The purpose of our research is to develop a method for automatically evaluating Japanese accentuation based on acoustic features. For this purpose, we use "Julius", a large-vocabulary continuous speech recognition decoder, to segment speech into phonemes. We employed an open-source database for the analysis, selecting read speech by 10 native speakers of Japanese and Chinese from "The Contrastive Linguistic Database for Japanese Language Learners' Spoken Language in Japanese and their First Languages". The accent unit is the "bunsetsu", which consists of a word and its particles. The total number of units is about 2,500 (10 speakers * 2 native languages * about 125 "bunsetsu"). The accent type of each unit was judged by a native speaker of Japanese (a Japanese-language teacher) and a native speaker of Chinese (a Japanese-language student who has N1). We use these judgments as reference data for verifying our method. We extracted fundamental frequencies (F0) from each vowel portion of the read speech and compared adjacent vowels to check whether the difference in F0 exceeded a threshold. We used not only the mean F0 of each vowel section, but also the median and extrapolated values. As a result, our method showed 70-80% agreement with the human assessments. It seems reasonable to conclude that our proposed method for evaluating accentuation has native-like accuracy.},
  file      = {Hatano2015a.pdf:pdf/Hatano2015a.pdf:PDF},
}
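The Hatano et al. entry above compares the F0 of adjacent vowels against a threshold to score accentuation. A small Python sketch of that comparison (the semitone threshold and the H/L/- labelling are illustrative assumptions; the paper only states that adjacent F0 differences are checked against a threshold):

import numpy as np

def accent_pattern(vowel_f0_hz, threshold_st=1.5):
    # Label each adjacent vowel pair in a bunsetsu as a pitch rise ('H'),
    # fall ('L') or level ('-') from one F0 value per vowel section.
    f0 = np.asarray(vowel_f0_hz, dtype=float)
    diffs_st = 12.0 * np.log2(f0[1:] / f0[:-1])  # adjacent differences in semitones
    return ''.join('H' if d > threshold_st else 'L' if d < -threshold_st else '-'
                   for d in diffs_st)

# e.g. median F0 (Hz) of the vowels in one bunsetsu:
# accent_pattern([180, 220, 210, 150])  ->  'H-L'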
Carlos T. Ishi, Even Jani, Norihiro Hagita, "Speech activity detection and face orientation estimation using multiple microphone arrays and human position information", In The 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Germany, pp. 5574-5579, September, 2015.
Abstract: We developed a system for detecting the speech intervals of multiple speakers by combining multiple microphone arrays and human tracking technologies. We also proposed a method for estimating the face orientation of the detected speakers. The developed system was evaluated in two steps: individual utterances in different positions and orientations; and simultaneous dialogues by multiple speakers. Evaluation results revealed that the proposed system could detect speech intervals with more than 94% of accuracy, and face orientations with standard deviations within 30 degrees, in situations excluding the cases where all arrays are in the opposite direction to the speaker's face orientation.
BibTeX:
@Inproceedings{Ishi2015b,
  author    = {Carlos T. Ishi and Even Jani and Norihiro Hagita},
  title     = {Speech activity detection and face orientation estimation using multiple microphone arrays and human position information},
  booktitle = {The 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year      = {2015},
  pages     = {5574-5579},
  address   = {Hamburg, Germany},
  month     = SEP,
  abstract  = {We developed a system for detecting the speech intervals of multiple speakers by combining multiple microphone arrays and human tracking technologies. We also proposed a method for estimating the face orientation of the detected speakers. The developed system was evaluated in two steps: individual utterances in different positions and orientations; and simultaneous dialogues by multiple speakers. Evaluation results revealed that the proposed system could detect speech intervals with more than 94% of accuracy, and face orientations with standard deviations within 30 degrees, in situations excluding the cases where all arrays are in the opposite direction to the speaker's face orientation.},
  file      = {Ishi2015b.pdf:pdf/Ishi2015b.pdf:PDF},
}
Jani Even, Florent B.B. Ferreri, Atsushi Watanabe, Luis Y. S. Morales, Carlos T. Ishi, Norihiro Hagita, "Audio Augmented Point Clouds for Applications in Robotics", In The 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Germany, pp. 4846-4851, September, 2015.
Abstract: This paper presents a method for representing acoustic information with point clouds by tying it to geometrical features. The motivation is to create a representation of this information that is well suited for mobile robotic applications. In particular, the proposed approach is designed to take advantage of the use of multiple coordinate frames. As an illustrative example, we present a way to create an audio augmented point cloud by adding estimated audio power to the point cloud created by an RGB-D camera. A few applications of this method are presented.
BibTeX:
@Inproceedings{Jani2015a,
  author    = {Jani Even and Florent B.B. Ferreri and Atsushi Watanabe and Luis Y. S. Morales and Carlos T. Ishi and Norihiro Hagita},
  title     = {Audio Augmented Point Clouds for Applications in Robotics},
  booktitle = {The 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year      = {2015},
  pages     = {4846-4851},
  address   = {Hamburg, Germany},
  month     = SEP,
  abstract  = {This paper presents a method for representing acoustic information with point clouds by tying it to geometrical features. The motivation is to create a representation of this information that is well suited for mobile robotic applications. In particular, the proposed approach is designed to take advantage of the use of multiple coordinate frames. As an illustrative example, we present a way to create an audio augmented point cloud by adding estimated audio power to the point cloud created by an RGB-D camera. A few applications of this method are presented.},
  file      = {Jani2015a.pdf:pdf/Jani2015a.pdf:PDF},
}
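As a rough sketch of the augmentation described in the entry above (the direction-indexed power grid and its bin layout are assumptions for illustration, not the authors' data structure), audio power can be attached to RGB-D points in Python like this:

import numpy as np

def augment_points_with_audio(points_xyz, power_map_db, az_bins, el_bins):
    # Attach an audio-power value (dB) to each 3-D point by looking up the
    # estimated power in the direction of that point, e.g. from a microphone
    # array's steered-response power map over azimuth/elevation bins.
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    az = np.arctan2(y, x)                    # azimuth of each point
    el = np.arctan2(z, np.hypot(x, y))       # elevation of each point
    ai = np.clip(np.digitize(az, az_bins) - 1, 0, power_map_db.shape[0] - 1)
    ei = np.clip(np.digitize(el, el_bins) - 1, 0, power_map_db.shape[1] - 1)
    return np.column_stack([points_xyz, power_map_db[ai, ei]])  # N x 4: x, y, z, dB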
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "BCI-teleoperated androids; A study of embodiment and its effect on motor imagery learning", In Workshop "Quo Vadis Robotics & Intelligent Systems" in IEEE 19th International Conference on Intelligent Engineering Systems 2015, Bratislava, Slovakia, September, 2015.
Abstract: This paper presents a brain computer interface (BCI) system developed for the tele-operation of a very humanlike android. Employing this system, we review two studies that give insights into the cognitive mechanism of agency and body ownership during BCI control, as well as feedback designs for optimization of user's BCI skills. In the first experiment operators experienced an illusion of embodiment (in terms of body ownership and agency) for the robot's body only by imagining a movement (motor imagery) and watching the robot perform it. Also using the same setup, we could further discover that during BCI operation of the android, biasing the timing and accuracy of the performance feedback could improve operators' modulation of brain activities during the motor imagery task. Our experiments showed that the motor imagery skills acquired through this technique were not limited to the android robot, and had long-lasting effects for other BCI usage as well. Therefore, by focusing on the human side of BCIs and demonstrating a relationship between the body ownership sensation and motor imagery learning, our BCI-teleoperation system offers a new and efficient platform for general BCI application.
BibTeX:
@Inproceedings{Alimardani2015,
  author    = {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {BCI-teleoperated androids; A study of embodiment and its effect on motor imagery learning},
  booktitle = {Workshop "Quo Vadis Robotics \& Intelligent Systems" in IEEE 19th International Conference on Intelligent Engineering Systems 2015},
  year      = {2015},
  address   = {Bratislava, Slovakia},
  month     = Sep,
  abstract  = {This paper presents a brain computer interface (BCI) system developed for the tele-operation of a very humanlike android. Employing this system, we review two studies that give insights into the cognitive mechanism of agency and body ownership during BCI control, as well as feedback designs for optimization of user's BCI skills. In the first experiment operators experienced an illusion of embodiment (in terms of body ownership and agency) for the robot's body only by imagining a movement (motor imagery) and watching the robot perform it. Also using the same setup, we could further discover that during BCI operation of the android, biasing the timing and accuracy of the performance feedback could improve operators' modulation of brain activities during the motor imagery task. Our experiments showed that the motor imagery skills acquired through this technique were not limited to the android robot, and had long-lasting effects for other BCI usage as well. Therefore, by focusing on the human side of BCIs and demonstrating a relationship between the body ownership sensation and motor imagery learning, our BCI-teleoperation system offers a new and efficient platform for general BCI application.},
  file      = {Alimardani2015.pdf:pdf/Alimardani2015.pdf:PDF},
}
Kurima Sakai, Carlos T. Ishi, Takashi Minato, Hiroshi Ishiguro, "Online speech-driven head motion generating system and evaluation on a tele-operated robot", In IEEE International Symposium on Robot and Human Interactive Communication, Kobe, Japan, pp. 529-534, August, 2015.
Abstract: We developed a tele-operated robot system where the head motions of the robot are controlled by combining those of the operator with the ones which are automatically generated from the operator's voice. The head motion generation is based on dialogue act functions which are estimated from linguistic and prosodic information extracted from the speech signal. The proposed system was evaluated through an experiment where participants interact with a tele-operated robot. Subjective scores indicated the effectiveness of the proposed head motion generation system, even under limitations in the dialogue act estimation
BibTeX:
@Inproceedings{Sakai2015,
  author    = {Kurima Sakai and Carlos T. Ishi and Takashi Minato and Hiroshi Ishiguro},
  title     = {Online speech-driven head motion generating system and evaluation on a tele-operated robot},
  booktitle = {IEEE International Symposium on Robot and Human Interactive Communication},
  year      = {2015},
  pages     = {529-534},
  address   = {Kobe, Japan},
  month     = AUG,
  abstract  = {We developed a tele-operated robot system where the head motions of the robot are controlled by combining those of the operator with the ones which are automatically generated from the operator's voice. The head motion generation is based on dialogue act functions which are estimated from linguistic and prosodic information extracted from the speech signal. The proposed system was evaluated through an experiment where participants interact with a tele-operated robot. Subjective scores indicated the effectiveness of the proposed head motion generation system, even under limitations in the dialogue act estimation},
  file      = {Sakai2015.pdf:pdf/Sakai2015.pdf:PDF},
}
Dylan F. Glas, Phoebe Liu, Takayuki Kanda, Hiroshi Ishiguro, "Can a social robot train itself just by observing human interactions?", In IEEE International Conference on Robotics and Automation, Seattle, WA, USA, May, 2015.
Abstract: In HRI research, game simulations and teleoperation interfaces have been used as tools for collecting example behaviors which can be used for creating robot interaction logic. We believe that by using sensor networks and wearable devices it will be possible to use observations of live human-human interactions to create even more humanlike robot behavior in a scalable way. We present here a fully-automated method for reproducing speech and locomotion behaviors observed from natural human-human social interactions in a robot through machine learning. The proposed method includes techniques for representing the speech and locomotion observed in training interactions, using clustering to identify typical behavior elements and identifying spatial formations using established HRI proxemics models. Behavior logic is learned based on discretized actions captured from the sensor data stream, using a naïve Bayesian classifier, and we propose ways to generate stable robot behaviors from noisy tracking and speech recognition inputs. We show an example of how our technique can train a robot to play the role of a shop clerk in a simple camera shop scenario.
BibTeX:
@Inproceedings{Glas2015a,
  author    = {Dylan F. Glas and Phoebe Liu and Takayuki Kanda and Hiroshi Ishiguro},
  title     = {Can a social robot train itself just by observing human interactions?},
  booktitle = {IEEE International Conference on Robotics and Automation},
  year      = {2015},
  address   = {Seattle, WA, USA},
  month     = May,
  abstract  = {In HRI research, game simulations and teleoperation interfaces have been used as tools for collecting example behaviors which can be used for creating robot interaction logic. We believe that by using sensor networks and wearable devices it will be possible to use observations of live human-human interactions to create even more humanlike robot behavior in a scalable way. We present here a fully-automated method for reproducing speech and locomotion behaviors observed from natural human-human social interactions in a robot through machine learning. The proposed method includes techniques for representing the speech and locomotion observed in training interactions, using clustering to identify typical behavior elements and identifying spatial formations using established HRI proxemics models. Behavior logic is learned based on discretized actions captured from the sensor data stream, using a naïve Bayesian classifier, and we propose ways to generate stable robot behaviors from noisy tracking and speech recognition inputs. We show an example of how our technique can train a robot to play the role of a shop clerk in a simple camera shop scenario.},
  file      = {Glas2015a.pdf:pdf/Glas2015a.pdf:PDF},
}
Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, "Bringing the Scene Back to the Tele-operator: Auditory Scene Manipulation for Tele-presence Systems", In 10th ACM/IEEE International Conference on Human-Robot Interaction 2015, Portland, Oregon, USA, pp. 279-286, March, 2015.
Abstract: In a tele-operated robot system, the reproduction of auditory scenes, conveying 3D spatial information of sound sources in the remote robot environment, is important for the transmission of remote presence to the tele-operator. We proposed a tele-presence system which is able to reproduce and manipulate the auditory scenes of a remote robot environment, based on the spatial information of human voices around the robot, matched with the operator's head orientation. On the robot side, voice sources are localized and separated by using multiple microphone arrays and human tracking technologies, while on the operator side, the operator's head movement is tracked and used to relocate the spatial positions of the separated sources. Interaction experiments with humans in the robot environment indicated that the proposed system had significantly higher accuracy rates for perceived direction of sounds, and higher subjective scores for sense of presence and listenability, compared to a baseline system using stereo binaural sounds obtained by two microphones located at the humanoid robot's ears. We also proposed three different user interfaces for augmented auditory scene control. Evaluation results indicated higher subjective scores for sense of presence and usability in two of the interfaces (control of voice amplitudes based on virtual robot positioning, and amplification of voices in the frontal direction).
BibTeX:
@Inproceedings{Liu2015,
  author    = {Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {Bringing the Scene Back to the Tele-operator: Auditory Scene Manipulation for Tele-presence Systems},
  booktitle = {10th ACM/IEEE International Conference on Human-Robot Interaction 2015},
  year      = {2015},
  pages     = {279-286},
  address   = {Portland, Oregon, USA},
  month     = Mar,
  abstract  = {In a tele-operated robot system, the reproduction of auditory scenes, conveying 3D spatial information of sound sources in the remote robot environment, is important for the transmission of remote presence to the tele-operator. We proposed a tele-presence system which is able to reproduce and manipulate the auditory scenes of a remote robot environment, based on the spatial information of human voices around the robot, matched with the operator's head orientation. On the robot side, voice sources are localized and separated by using multiple microphone arrays and human tracking technologies, while on the operator side, the operator's head movement is tracked and used to relocate the spatial positions of the separated sources. Interaction experiments with humans in the robot environment indicated that the proposed system had significantly higher accuracy rates for perceived direction of sounds, and higher subjective scores for sense of presence and listenability, compared to a baseline system using stereo binaural sounds obtained by two microphones located at the humanoid robot's ears. We also proposed three different user interfaces for augmented auditory scene control. Evaluation results indicated higher subjective scores for sense of presence and usability in two of the interfaces (control of voice amplitudes based on virtual robot positioning, and amplification of voices in the frontal direction).},
  file      = {Liu2015.pdf:pdf/Liu2015.pdf:PDF},
}
Junya Nakanishi, Hidenobu Sumioka, Kurima Sakai, Daisuke Nakamichi, Masahiro Shiomi, Hiroshi Ishiguro, "Huggable Communication Medium Encourages Listening to Others", In 2nd International Conference on Human-Agent Interaction, Tsukuba, Japan, pp. 249-252, October, 2014.
Abstract: We propose that a huggable communication device helps children concentrate on listening to others by reducing their stress and letting them feel a storyteller's presence close to them. Our observation of storytelling to preschool children suggests that Hugvie, one such device, facilitates children's attention to the story. This indicates the usefulness of Hugvie in relieving the educational problem that children show selfish behavior during class. We discuss Hugvie's effect on learning and memory and its potential application to children who need special support.
BibTeX:
@Inproceedings{Nakanishi2014,
  author    = {Junya Nakanishi and Hidenobu Sumioka and Kurima Sakai and Daisuke Nakamichi and Masahiro Shiomi and Hiroshi Ishiguro},
  title     = {Huggable Communication Medium Encourages Listening to Others},
  booktitle = {2nd International Conference on Human-Agent Interaction},
  year      = {2014},
  pages     = {249-252},
  address   = {Tsukuba, Japan},
  month     = Oct,
  url       = {http://hai-conference.net/hai2014/},
  abstract  = {We propose that a huggable communication device helps children concentrate on listening to others by reducing their stress and letting them feel a storyteller's presence close to them. Our observation of storytelling to preschool children suggests that Hugvie, one such device, facilitates children's attention to the story. This indicates the usefulness of Hugvie in relieving the educational problem that children show selfish behavior during class. We discuss Hugvie's effect on learning and memory and its potential application to children who need special support.},
  file      = {Nakanishi2014.pdf:pdf/Nakanishi2014.pdf:PDF},
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "The effect of feedback presentation on motor imagery performance during BCI-teleoperation of a humanlike robot", In IEEE International Conference on Biomedical Robotics and Biomechatronics, Sao Paulo, Brazil, pp. 403-408, August, 2014.
Abstract: Users of a brain-computer interface (BCI) learn to co-adapt with the system through the feedback they receive. Particularly in the case of motor imagery BCIs, feedback design can play an important role in the course of motor imagery training. In this paper we investigated the effect of biased visual feedback on the performance and motor imagery skills of users during BCI control of a pair of humanlike robotic hands. Although the subject-specific classifier, which was set up at the beginning of the experiment, detected no significant change in the subjects' online performance, evaluation of brain activity patterns revealed that the subjects' self-regulation of motor imagery features improved due to a positive bias of feedback. We discuss how this effect could possibly be due to the humanlike design of the feedback and the occurrence of a body ownership illusion. Our findings suggest that in general training protocols for BCIs, realistic feedback design and the subject's self-evaluation of performance can play an important role in the optimization of motor imagery skills.
BibTeX:
@Inproceedings{Alimardani2014,
  author          = {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {The effect of feedback presentation on motor imagery performance during BCI-teleoperation of a humanlike robot},
  booktitle       = {IEEE International Conference on Biomedical Robotics and Biomechatronics},
  year            = {2014},
  pages           = {403-408},
  address         = {Sao Paulo, Brazil},
  month           = Aug,
  day             = {12-15},
  doi             = {10.1109/BIOROB.2014.6913810},
  abstract        = {Users of a brain-computer interface (BCI) learn to co-adapt with the system through the feedback they receive. Particularly in the case of motor imagery BCIs, feedback design can play an important role in the course of motor imagery training. In this paper we investigated the effect of biased visual feedback on the performance and motor imagery skills of users during BCI control of a pair of humanlike robotic hands. Although the subject-specific classifier, which was set up at the beginning of the experiment, detected no significant change in the subjects' online performance, evaluation of brain activity patterns revealed that the subjects' self-regulation of motor imagery features improved due to a positive bias of feedback. We discuss how this effect could possibly be due to the humanlike design of the feedback and the occurrence of a body ownership illusion. Our findings suggest that in general training protocols for BCIs, realistic feedback design and the subject's self-evaluation of performance can play an important role in the optimization of motor imagery skills.},
  file            = {Alimardani2014b.pdf:pdf/Alimardani2014b.pdf:PDF},
}
Daisuke Nakamichi, Shuichi Nishio, Hiroshi Ishiguro, "Training of telecommunication through teleoperated android "Telenoid" and its effect", In The 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, Scotland, UK, pp. 1083-1088, August, 2014.
Abstract: Telenoid, a teleoperated android, is a medium through which its teleoperators can transmit both verbal and nonverbal information to interlocutors. Telenoid promotes conversation with its interlocutors, especially elderly people. However, since teleoperators admit that they have difficulty feeling that they are actually teleoperating their robots, they cannot use them effectively to transmit nonverbal information, even though such nonverbal information is one of Telenoid's biggest merits. In this paper, we propose a training program for teleoperators so that they can understand Telenoid's teleoperation and how to transmit nonverbal information through it. We investigated its effect on teleoperation and communication and identified three results. First, our training improved Telenoid's head motions for clearer transmission of nonverbal information. Second, our training revealed different effects between genders: females communicated with their interlocutors more smoothly than males, while males communicated more smoothly simply through more talking practice. Third, correlations exist among freely controlling the robot, regarding the robot as themselves, and tele-presence in the interlocutor's room, as well as correlations between the interactions and themselves. However, there were no correlations between feelings about Telenoid's teleoperation and the head movements.
BibTeX:
@Inproceedings{Nakamichi2014,
  author          = {Daisuke Nakamichi and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Training of telecommunication through teleoperated android "Telenoid" and its effect},
  booktitle       = {The 23rd IEEE International Symposium on Robot and Human Interactive Communication},
  year            = {2014},
  pages           = {1083-1088},
  address         = {Edinburgh, Scotland, UK},
  month           = Aug,
  day             = {25-29},
  url             = {http://rehabilitationrobotics.net/ro-man14/},
  abstract        = {Telenoid, a teleoperated android, is a medium through which its teleoperators can transmit both verbal and nonverbal information to interlocutors. Telenoid promotes conversation with its interlocutors, especially elderly people. However, since teleoperators admit that they have difficulty feeling that they are actually teleoperating their robots, they cannot use them effectively to transmit nonverbal information, even though such nonverbal information is one of Telenoid's biggest merits. In this paper, we propose a training program for teleoperators so that they can understand Telenoid's teleoperation and how to transmit nonverbal information through it. We investigated its effect on teleoperation and communication and identified three results. First, our training improved Telenoid's head motions for clearer transmission of nonverbal information. Second, our training revealed different effects between genders: females communicated with their interlocutors more smoothly than males, while males communicated more smoothly simply through more talking practice. Third, correlations exist among freely controlling the robot, regarding the robot as themselves, and tele-presence in the interlocutor's room, as well as correlations between the interactions and themselves. However, there were no correlations between feelings about Telenoid's teleoperation and the head movements.},
  file            = {Nakamichi2014.pdf:pdf/Nakamichi2014.pdf:PDF},
}
Marco Nørskov, "Human-Robot Interaction and Human Self-Realization: Reflections on the Epistemology of Discrimination", In Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014, IOS Press, vol. 273, Aarhus, Denmark, pp. 319-327, August, 2014.
BibTeX:
@Inproceedings{Noerskov2014,
  author    = {Marco N{\o}rskov},
  title     = {Human-Robot Interaction and Human Self-Realization: Reflections on the Epistemology of Discrimination},
  booktitle = {Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014},
  year      = {2014},
  editor    = {Johanna Seibt and Raul Hakli and Marco N{\o}rskov},
  volume    = {273},
  pages     = {319-327},
  address   = {Aarhus, Denmark},
  month     = Aug,
  publisher = {IOS Press},
  doi       = {10.3233/978-1-61499-480-0-319},
  url       = {http://ebooks.iospress.nl/publication/38578},
}
Ryuji Yamazaki, "Conditions of Empathy in Human-Robot Interaction", In Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014, IOS Press, vol. 273, Aarhus, Denmark, pp. 179-186, August, 2014.
BibTeX:
@Inproceedings{Yamazaki2014c,
  author    = {Ryuji Yamazaki},
  title     = {Conditions of Empathy in Human-Robot Interaction},
  booktitle = {Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014},
  year      = {2014},
  editor    = {Johanna Seibt and Raul Hakli and Marco N{\o}rskov},
  volume    = {273},
  pages     = {179-186},
  address   = {Aarhus, Denmark},
  month     = Aug,
  publisher = {IOS Press},
  doi       = {10.3233/978-1-61499-480-0-179},
  url       = {http://ebooks.iospress.nl/publication/38560},
}
Rosario Sorbello, Antonio Chella, Marcello Giardina, Shuichi Nishio, Hiroshi Ishiguro, "An Architecture for Telenoid Robot as Empathic Conversational Android Companion for Elderly People", In the 13th International Conference on Intelligent Autonomous Systems, Padova, Italy, July, 2014.
Abstract: In Human-Humanoid Interaction (HHI), empathy is a crucial key to overcoming the current limitations of social robots. In fact, a principal defining characteristic of human social behaviour is empathy. The present paper presents a robotic architecture for an android robot as a basis for natural empathic human-android interaction. We start from the hypothesis that robots, in order to become personal companions, need to know how to interact empathically with human beings. To validate our research, we have used the proposed system with the minimalistic humanoid robot Telenoid. We have conducted human-robot interaction tests with elderly people with no prior interaction experience with robots. During the experiment the elderly persons engaged in a stimulated conversation with the humanoid robot. Our goal is to overcome the state of loneliness of elderly people using this minimalistic humanoid robot, which is capable of exhibiting a dialogue similar to what usually happens in real life between human beings. The experimental results have shown a humanoid robotic system capable of exhibiting a natural and empathic interaction and conversation with a human user.
BibTeX:
@Inproceedings{Sorbello2014,
  author    = {Rosario Sorbello and Antonio Chella and Marcello Giardina and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {An Architecture for Telenoid Robot as Empathic Conversational Android Companion for Elderly People},
  booktitle = {the 13th International Conference on Intelligent Autonomous Systems},
  year      = {2014},
  address   = {Padova, Italy},
  month     = Jul,
  day       = {15-19},
  abstract  = {In Human-Humanoid Interaction (HHI), empathy is a crucial key to overcoming the current limitations of social robots. In fact, a principal defining characteristic of human social behaviour is empathy. The present paper presents a robotic architecture for an android robot as a basis for natural empathic human-android interaction. We start from the hypothesis that robots, in order to become personal companions, need to know how to interact empathically with human beings. To validate our research, we have used the proposed system with the minimalistic humanoid robot Telenoid. We have conducted human-robot interaction tests with elderly people with no prior interaction experience with robots. During the experiment the elderly persons engaged in a stimulated conversation with the humanoid robot. Our goal is to overcome the state of loneliness of elderly people using this minimalistic humanoid robot, which is capable of exhibiting a dialogue similar to what usually happens in real life between human beings. The experimental results have shown a humanoid robotic system capable of exhibiting a natural and empathic interaction and conversation with a human user.},
  file      = {Sorbello2014.pdf:pdf/Sorbello2014.pdf:PDF},
  keywords  = {Humanoid Robot; Humanoid Robot Interaction; Life Support Empathic Robot; Telenoid},
}
Kaiko Kuwamura, Shuichi Nishio, Hiroshi Ishiguro, "Designing Robots for Positive Communication with Senior Citizens", In The 13th Intelligent Autonomous Systems conference, Padova, Italy, July, 2014.
Abstract: Several previous studies indicated that the elderly, especially those with cognitive disorders, have positive impressions of Telenoid, a teleoperated android covered with soft vinyl. Senior citizens with cognitive disorders have low cognitive ability and duller senses due to their age. To communicate, we believe that they have to imagine the information that is missing because they failed to completely receive it in their mind. We hypothesize that Telenoid triggers and enhances such an ability to imagine and positively complete the information, and so they become attracted to Telenoid. Based on this hypothesis, we discuss the factors that trigger imagination and complete positive impressions toward a robot for elderly care.
BibTeX:
@Inproceedings{Kuwamura2014c,
  author          = {Kaiko Kuwamura and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Designing Robots for Positive Communication with Senior Citizens},
  booktitle       = {The 13th Intelligent Autonomous Systems conference},
  year            = {2014},
  address         = {Padova, Italy},
  month           = Jul,
  day             = {15-19},
  url             = {http://www.ias-13.org/},
  abstract        = {Several previous studies indicated that the elderly, especially those with cognitive disorders, have positive impressions of Telenoid, a teleoperated android covered with soft vinyl. Senior citizens with cognitive disorders have low cognitive ability and duller senses due to their age. To communicate, we believe that they have to imagine the information that is missing because they failed to completely receive it in their mind. We hypothesize that Telenoid triggers and enhances such an ability to imagine and positively complete the information, and so they become attracted to Telenoid. Based on this hypothesis, we discuss the factors that trigger imagination and complete positive impressions toward a robot for elderly care.},
  file            = {Kuwamura2014c.pdf:pdf/Kuwamura2014c.pdf:PDF},
}
Ryuji Yamazaki, Kaiko Kuwamura, Shuichi Nishio, Takashi Minato, Hiroshi Ishiguro, "Activating Embodied Communication: A Case Study of People with Dementia Using a Teleoperated Android Robot", In The 9th World Conference of Gerontechnology, vol. 13, no. 2, Taipei, Taiwan, pp. 311, June, 2014.
BibTeX:
@Inproceedings{Yamazaki2014a,
  author    = {Ryuji Yamazaki and Kaiko Kuwamura and Shuichi Nishio and Takashi Minato and Hiroshi Ishiguro},
  title     = {Activating Embodied Communication: A Case Study of People with Dementia Using a Teleoperated Android Robot},
  booktitle = {The 9th World Conference of Gerontechnology},
  year      = {2014},
  volume    = {13},
  number    = {2},
  pages     = {311},
  address   = {Taipei, Taiwan},
  month     = Jun,
  day       = {18-21},
  doi       = {10.4017/gt.2014.13.02.166.00},
  url       = {http://gerontechnology.info/index.php/journal/article/view/gt.2014.13.02.166.00/0},
  file      = {Yamazaki2014a.pdf:pdf/Yamazaki2014a.pdf:PDF},
  keywords  = {Elderly care robot; social isolation; embodied communication; community design},
}
Kaiko Kuwamura, Ryuji Yamazaki, Shuichi Nishio, Hiroshi Ishiguro, "Elderly Care Using Teleoperated Android Telenoid", In The 9th World Conference of Gerontechnology, vol. 13, no. 2, Taipei, Taiwan, pp. 226, June, 2014.
BibTeX:
@Inproceedings{Kuwamura2014,
  author    = {Kaiko Kuwamura and Ryuji Yamazaki and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Elderly Care Using Teleoperated Android Telenoid},
  booktitle = {The 9th World Conference of Gerontechnology},
  year      = {2014},
  volume    = {13},
  number    = {2},
  pages     = {226},
  address   = {Taipei, Taiwan},
  month     = Jun,
  day       = {18-21},
  doi       = {10.4017/gt.2014.13.02.091.00},
  url       = {http://gerontechnology.info/index.php/journal/article/view/gt.2014.13.02.091.00},
  file      = {Kuwamura2014.pdf:pdf/Kuwamura2014.pdf:PDF},
  keywords  = {Elderly care robot; teleoperated android; cognitive disorder},
}
Carlos T. Ishi, Hiroaki Hatano, Miyako Kiso, "Acoustic-prosodic and paralinguistic analyses of “uun" and “unun"", In Speech Prosody 7, Dublin, Ireland, pp. 100-104, May, 2014.
Abstract: The speaking style of an interjection contains discriminative features on its expressed intention, attitude or emotion. In the present work, we analyzed acoustic-prosodic features and the paralinguistic functions of two variations of the interjection “un", a lengthened pattern “uun" and a repeated pattern “unun", which are often found in Japanese conversational speech. Analysis results indicate that there are differences in the paralinguistic function expressed by “uun" and “unun", as well as different trends on F0 contour types according to the conveyed paralinguistic information.
BibTeX:
@Inproceedings{Ishi2014,
  author          = {Carlos T. Ishi and Hiroaki Hatano and Miyako Kiso},
  title           = {Acoustic-prosodic and paralinguistic analyses of “uun" and “unun"},
  booktitle       = {Speech Prosody 7},
  year            = {2014},
  pages           = {100-104},
  address         = {Dublin, Ireland},
  month           = May,
  day             = {20-23},
  abstract        = {The speaking style of an interjection contains discriminative features on its expressed intention, attitude or emotion. In the present work, we analyzed acoustic-prosodic features and the paralinguistic functions of two variations of the interjection “un", a lengthened pattern “uun" and a repeated pattern “unun", which are often found in Japanese conversational speech. Analysis results indicate that there are differences in the paralinguistic function expressed by “uun" and “unun", as well as different trends on F0 contour types according to the conveyed paralinguistic information.},
  file            = {Ishi2014.pdf:pdf/Ishi2014.pdf:PDF},
  keywords        = {interjections; acoustic-prosodic features; paralinguistic information; spontaneous conversational speech},
}
Kaiko Kuwamura, Shuichi Nishio, "Modality reduction for enhancing human likeliness", In Selected papers of the 50th annual convention of the Artificial Intelligence and the Simulation of Behaviour, London, UK, pp. 83-89, April, 2014.
Abstract: We proposed a method to enhance one's affection by reducing the number of transferred modalities. When we dream of an artificial partner for “love", its appearance is the first thing of concern: a very humanlike, beautiful robot. However, we did not design a medium with a beautiful appearance but a medium which ignores the appearance and lets users imagine and complete the appearance. By reducing the number of transferred modalities, we can enhance one's affection toward a robot. Moreover, not just by transmitting, but by inducing active, unconscious behavior of users, we can increase this effect. In this paper, we introduce supporting results from our experiments and discuss the further applicability of our findings.
BibTeX:
@Inproceedings{Kuwamura2014b,
  author          = {Kaiko Kuwamura and Shuichi Nishio},
  title           = {Modality reduction for enhancing human likeliness},
  booktitle       = {Selected papers of the 50th annual convention of the Artificial Intelligence and the Simulation of Behaviour},
  year            = {2014},
  pages           = {83-89},
  address         = {London, UK},
  month           = Apr,
  day             = {1-4},
  url             = {http://doc.gold.ac.uk/aisb50/AISB50-S16/AISB50-S16-Kuwamura-paper.pdf},
  abstract        = {We proposed a method to enhance one's affection by reducing the number of transferred modalities. When we dream of an artificial partner for “love", its appearance is the first thing of concern: a very humanlike, beautiful robot. However, we did not design a medium with a beautiful appearance but a medium which ignores the appearance and lets users imagine and complete the appearance. By reducing the number of transferred modalities, we can enhance one's affection toward a robot. Moreover, not just by transmitting, but by inducing active, unconscious behavior of users, we can increase this effect. In this paper, we introduce supporting results from our experiments and discuss the further applicability of our findings.},
  file            = {Kuwamura2014b.pdf:pdf/Kuwamura2014b.pdf:PDF},
}
Hidenobu Sumioka, Kensuke Koda, Shuichi Nishio, Takashi Minato, Hiroshi Ishiguro, "Revisiting ancient design of human form for communication avatar: Design considerations from chronological development of Dogu", In IEEE International Symposium on Robot and Human Interactive Communication, Gyeongju, Korea, pp. 726-731, August, 2013.
Abstract: Robot avatar systems give the feeling we share a space with people who are actually at a distant location. Since our cognitive system specializes in recognizing a human, avatars of the distant people can make us strongly feel that we share space with them, provided that their appearance has been designed to sufficiently resemble humans. In this paper, we investigate the minimal requirements of robot avatars for distant people to feel their presence. Toward this aim, we give an overview of the chronological development of Dogu, which are human figurines made in ancient Japan. This survey of the Dogu shows that the torso, not the face, was considered the primary element for representing a human. It also suggests that some body parts can be represented in a simple form. Following the development of Dogu, we also use a conversation task to examine what kind of body representation is necessary to feel a distant person's presence. The experimental results show that the forms for the torso and head are required to enhance this feeling, while other body parts have less impact. We discuss the connection between our findings and an avatar's facial expression and motion.
BibTeX:
@Inproceedings{Sumioka2013b,
  author          = {Hidenobu Sumioka and Kensuke Koda and Shuichi Nishio and Takashi Minato and Hiroshi Ishiguro},
  title           = {Revisiting ancient design of human form for communication avatar: Design considerations from chronological development of Dogu},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2013},
  pages           = {726-731},
  address         = {Gyeongju, Korea},
  month           = Aug,
  day             = {26-29},
  doi             = {10.1109/ROMAN.2013.6628399},
  abstract        = {Robot avatar systems give the feeling we share a space with people who are actually at a distant location. Since our cognitive system specializes in recognizing a human, avatars of the distant people can make us strongly feel that we share space with them, provided that their appearance has been designed to sufficiently resemble humans. In this paper, we investigate the minimal requirements of robot avatars for distant people to feel their presence. Toward this aim, we give an overview of the chronological development of Dogu, which are human figurines made in ancient Japan. This survey of the Dogu shows that the torso, not the face, was considered the primary element for representing a human. It also suggests that some body parts can be represented in a simple form. Following the development of Dogu, we also use a conversation task to examine what kind of body representation is necessary to feel a distant person's presence. The experimental results show that the forms for the torso and head are required to enhance this feeling, while other body parts have less impact. We discuss the connection between our findings and an avatar's facial expression and motion.},
  file            = {Sumioka2013b.pdf:pdf/Sumioka2013b.pdf:PDF},
}
Junya Nakanishi, Kaiko Kuwamura, Takashi Minato, Shuichi Nishio, Hiroshi Ishiguro, "Evoking Affection for a Communication Partner by a Robotic Communication Medium", In the First International Conference on Human-Agent Interaction, Hokkaido University, Sapporo, Japan, pp. III-1-4, August, 2013.
Abstract: This paper reveals a new effect of robotic communication media that can function as avatars of communication partners. Users' interaction with a medium may alter their feelings toward their partners. The paper hypothesized that talking while hugging a robotic medium increases romantic feelings or attraction toward a partner in robot-mediated tele-communication. Our experiment used Hugvie, a human-shaped medium, for talking in a hugging state. We found that people subconsciously increased their romantic attraction toward opposite-sex partners by hugging Hugvie. This effect is novel because we revealed the effect of the user's hugging on the user's own feelings, rather than the effect of being hugged by a partner.
BibTeX:
@Inproceedings{Nakanishi2013,
  author          = {Junya Nakanishi and Kaiko Kuwamura and Takashi Minato and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Evoking Affection for a Communication Partner by a Robotic Communication Medium},
  booktitle       = {the First International Conference on Human-Agent Interaction},
  year            = {2013},
  pages           = {III-1-4},
  address         = {Hokkaido University, Sapporo, Japan},
  month           = Aug,
  day             = {7-9},
  url             = {http://hai-conference.net/ihai2013/proceedings/html/paper/paper-III-1-4.html},
  abstract        = {This paper reveals a new effect of robotic communication media that can function as avatars of communication partners. Users' interaction with a medium may alter their feelings toward their partners. The paper hypothesized that talking while hugging a robotic medium increases romantic feelings or attraction toward a partner in robot-mediated tele-communication. Our experiment used Hugvie, a human-shaped medium, for talking in a hugging state. We found that people subconsciously increased their romantic attraction toward opposite-sex partners by hugging Hugvie. This effect is novel because we revealed the effect of the user's hugging on the user's own feelings, rather than the effect of being hugged by a partner.},
  file            = {Nakanishi2013.pdf:pdf/Nakanishi2013.pdf:PDF},
}
Kaiko Kuwamura, Kurima Sakai, Takashi Minato, Shuichi Nishio, Hiroshi Ishiguro, "Hugvie: A medium that fosters love", In IEEE International Symposium on Robot and Human Interactive Communication, Gyeongju, Korea, pp. 70-75, August, 2013.
Abstract: We introduce a communication medium that encourages users to fall in love with their counterparts. Hugvie, the huggable tele-presence medium, enables users to feel like hugging their counterparts while chatting. In this paper, we report that when a participant talks to his communication partner during their first encounter while hugging Hugvie, he mistakenly feels as if they are establishing a good relationship and that he is being loved rather than just being liked.
BibTeX:
@Inproceedings{Kuwamura2013,
  author          = {Kaiko Kuwamura and Kurima Sakai and Takashi Minato and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Hugvie: A medium that fosters love},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2013},
  pages           = {70-75},
  address         = {Gyeongju, Korea},
  month           = Aug,
  day             = {26-29},
  doi             = {10.1109/ROMAN.2013.6628533},
  abstract        = {We introduce a communication medium that encourages users to fall in love with their counterparts. Hugvie, the huggable tele-presence medium, enables users to feel like hugging their counterparts while chatting. In this paper, we report that when a participant talks to his communication partner during their first encounter while hugging Hugvie, he mistakenly feels as if they are establishing a good relationship and that he is being loved rather than just being liked.},
  file            = {Kuwamura2013.pdf:pdf/Kuwamura2013.pdf:PDF},
}
Shuichi Nishio, Koichi Taura, Hidenobu Sumioka, Hiroshi Ishiguro, "Effect of Social Interaction on Body Ownership Transfer to Teleoperated Android", In IEEE International Symposium on Robot and Human Interactive Communication, Gyeongju, Korea, pp. 565-570, August, 2013.
Abstract: Body Ownership Transfer (BOT) is an illusion in which we feel external objects as parts of our own body; it occurs when teleoperating android robots. In past studies, we have been investigating under what conditions this illusion occurs. However, those studies were only conducted with simple operation tasks, such as only moving the robot's hand. Does this illusion occur with more complex tasks such as having a conversation? What kind of conversation setting is required to invoke this illusion? In this paper, we examined how factors in social interaction affect the occurrence of BOT. Participants had conversations using the teleoperated robot under different situations and teleoperation settings. The results revealed that BOT does occur through the act of having a conversation, and that the conversation partner's presence and appropriate responses are necessary for the enhancement of BOT.
BibTeX:
@Inproceedings{Nishio2013,
  author          = {Shuichi Nishio and Koichi Taura and Hidenobu Sumioka and Hiroshi Ishiguro},
  title           = {Effect of Social Interaction on Body Ownership Transfer to Teleoperated Android},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2013},
  pages           = {565-570},
  address         = {Gyeongju, Korea},
  month           = Aug,
  day             = {26-29},
  doi             = {10.1109/ROMAN.2013.6628539},
  abstract        = {Body Ownership Transfer (BOT) is an illusion in which we feel external objects as parts of our own body; it occurs when teleoperating android robots. In past studies, we have been investigating under what conditions this illusion occurs. However, those studies were only conducted with simple operation tasks, such as only moving the robot's hand. Does this illusion occur with more complex tasks such as having a conversation? What kind of conversation setting is required to invoke this illusion? In this paper, we examined how factors in social interaction affect the occurrence of BOT. Participants had conversations using the teleoperated robot under different situations and teleoperation settings. The results revealed that BOT does occur through the act of having a conversation, and that the conversation partner's presence and appropriate responses are necessary for the enhancement of BOT.},
  file            = {Nishio2013.pdf:pdf/Nishio2013.pdf:PDF},
}
Rosario Sorbello, Hiroshi Ishiguro, Antonio Chella, Shuichi Nishio, Giovan Battista Presti, Marcello Giardina, "Telenoid mediated ACT Protocol to Increase Acceptance of Disease among Siblings of Autistic Children", In HRI2013 Workshop on Design of Humanlikeness in HRI : from uncanny valley to minimal design, Tokyo, Japan, pp. 26, March, 2013.
Abstract: We introduce a novel research proposal project that aims to build a robotic setup in which the Telenoid[1] is used as a therapist for the siblings of children with autism. Many existing research studies have shown good results relating to the important impact of Acceptance and Commitment Therapy (ACT)[2] applied to siblings of children with autism. The overall behaviors of the siblings may potentially benefit from treatment with a humanoid robot therapist instead of a real one. In particular, in the present study the Telenoid humanoid robot[3] is used as a therapist to achieve a specific therapeutic objective: the acceptance of diversity by the siblings of children with autism. In the proposed architecture, the Telenoid acts[4] in teleoperated mode[5] during the learning phase, while it becomes more and more autonomous during the working phase with patients. A goal of the research is to improve the siblings' tolerance and acceptance towards their brothers. The use of ACT[6] will reinforce the acceptance of diversity and will create psychological flexibility along the dimension of diversity. In the present article, we briefly introduce Acceptance and Commitment Therapy (ACT) as a clinical model and its theoretical foundations (Relational Frame Theory). We then explain the six core processes of the Hexaflex model of ACT adapted to the behaviors of Telenoid acting as a humanoid robotic therapist. Finally, we present an experimental example of how Telenoid could apply the six processes[7] of the Hexaflex model of ACT to the patient during human-humanoid interaction (HHI) in order to realize an applied clinical behavior analysis[8] that increases the siblings' acceptance of their brother's disease.
BibTeX:
@Inproceedings{Sorbello2013,
  author    = {Rosario Sorbello and Hiroshi Ishiguro and Antonio Chella and Shuichi Nishio and Giovan Battista Presti and Marcello Giardina},
  title     = {Telenoid mediated {ACT} Protocol to Increase Acceptance of Disease among Siblings of Autistic Children},
  booktitle = {{HRI}2013 Workshop on Design of Humanlikeness in {HRI} : from uncanny valley to minimal design},
  year      = {2013},
  pages     = {26},
  address   = {Tokyo, Japan},
  month     = Mar,
  day       = {3},
  abstract  = {We introduce a novel research proposal project that aims to build a robotic setup in which the Telenoid[1] is used as a therapist for the siblings of children with autism. Many existing research studies have shown good results relating to the important impact of Acceptance and Commitment Therapy (ACT)[2] applied to siblings of children with autism. The overall behaviors of the siblings may potentially benefit from treatment with a humanoid robot therapist instead of a real one. In particular, in the present study the Telenoid humanoid robot[3] is used as a therapist to achieve a specific therapeutic objective: the acceptance of diversity by the siblings of children with autism. In the proposed architecture, the Telenoid acts[4] in teleoperated mode[5] during the learning phase, while it becomes more and more autonomous during the working phase with patients. A goal of the research is to improve the siblings' tolerance and acceptance towards their brothers. The use of ACT[6] will reinforce the acceptance of diversity and will create psychological flexibility along the dimension of diversity. In the present article, we briefly introduce Acceptance and Commitment Therapy (ACT) as a clinical model and its theoretical foundations (Relational Frame Theory). We then explain the six core processes of the Hexaflex model of ACT adapted to the behaviors of Telenoid acting as a humanoid robotic therapist. Finally, we present an experimental example of how Telenoid could apply the six processes[7] of the Hexaflex model of ACT to the patient during human-humanoid interaction (HHI) in order to realize an applied clinical behavior analysis[8] that increases the siblings' acceptance of their brother's disease.},
  file      = {Sorbello2013.pdf:pdf/Sorbello2013.pdf:PDF},
}
Christian Becker-Asano, Severin Gustorff, Kai Oliver Arras, Kohei Ogawa, Shuichi Nishio, Hiroshi Ishiguro, Bernhard Nebel, "Robot Embodiment, Operator Modality, and Social Interaction in Tele-Existence: A Project Outline", In 8th ACM/IEEE International Conference on Human-Robot Interaction, National Museum of Emerging Science and Innovation (Miraikan), Tokyo, pp. 79-80, March, 2013.
Abstract: This paper outlines our ongoing project, which aims to investigate the effects of robot embodiment and operator modality on an operator's task efficiency and concomitant level of copresence in remote social interaction. After a brief introduction to related work has been given, five research questions are presented. We discuss how these relate to our choice of the two robotic embodiments "DARYL" and "Geminoid F" and the two operator modalities “console interface" and “head-mounted display". Finally, we postulate that the usefulness of one operator modality over the other will depend on the type of situation an operator has to deal with. This hypothesis is currently being investigated empirically using DARYL at Freiburg University.
BibTeX:
@Inproceedings{Becker-Asano2013,
  author          = {Christian Becker-Asano and Severin Gustorff and Kai Oliver Arras and Kohei Ogawa and Shuichi Nishio and Hiroshi Ishiguro and Bernhard Nebel},
  title           = {Robot Embodiment, Operator Modality, and Social Interaction in Tele-Existence: A Project Outline},
  booktitle       = {8th ACM/IEEE International Conference on Human-Robot Interaction},
  year            = {2013},
  pages           = {79-80},
  address         = {National Museum of Emerging Science and Innovation (Miraikan), Tokyo},
  month           = Mar,
  day             = {3-6},
  doi             = {10.1109/HRI.2013.6483510},
  url             = {http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6483510},
  abstract        = {This paper outlines our ongoing project, which aims to investigate the effects of robot embodiment and operator modality on an operator's task efficiency and concomitant level of copresence in remote social interaction. After a brief introduction to related work has been given, five research questions are presented. We discuss how these relate to our choice of the two robotic embodiments "DARYL" and "Geminoid F" and the two operator modalities “console interface" and “head-mounted display". Finally, we postulate that the usefulness of one operator modality over the other will depend on the type of situation an operator has to deal with. This hypothesis is currently being investigated empirically using DARYL at Freiburg University.},
  file            = {Becker-Asano2013.pdf:pdf/Becker-Asano2013.pdf:PDF},
  keywords        = {Tele-existence; Copresence; Tele-robotic; Social robotics},
}
幸田健介, 住岡英信, 西尾修一, 石黒浩, "土偶の変遷から見るコミュニケーションメディアのミニマルデザイン", HAIシンポジウム, 京都工芸繊維大学, pp. 1C-4, December, 2012.
Abstract: Investigation of minimal elements of human-like appearance is important not only for designing communication media but also for considering how appearances affect interaction. This paper addresses design issues of communication media to convey a person's presence to a distant location. Inspired by analogies between design strategies of such media and "Dogu", clay dolls built during the Jomon period, we investigate minimal elements of human-like appearance by tracing the evolution of Dogu. The survey of Dogu's evolution suggests that 1) the human-like torso had a priority over other body representations including facial expression; 2) the arms and legs were represented in abstract forms; 3) eyes, mouth, and nose were essential; and 4) portability was a key feature to change design strategy. Minimal elements for communication media to convey a person's presence are discussed based on the results of the survey.
BibTeX:
@Inproceedings{幸田健介2012,
  author          = {幸田健介 and 住岡英信 and 西尾修一 and 石黒浩},
  title           = {土偶の変遷から見るコミュニケーションメディアのミニマルデザイン},
  booktitle       = {{HAI}シンポジウム},
  year            = {2012},
  pages           = {1{C}-4},
  address         = {京都工芸繊維大学},
  month           = Dec,
  day             = {7-9},
  url             = {http://www.ii.is.kit.ac.jp/hai2012/proceedings/pdf/1C-4.pdf},
  etitle          = {Minimal Design of Communication Media Based on "Dogu" Evolution},
  abstract        = {Investigation of minimal elements of human-like appearance is important not only for designing communication media but also for considering how appearances affect interaction. This paper addresses design issues of communication media to convey a person's presence to a distant location. Inspired by analogies between design strategies of such media and "Dogu", clay dolls built during the Jomon period, we investigate minimal elements of human-like appearance by tracing the evolution of Dogu. The survey of Dogu's evolution suggests that 1) the human-like torso had a priority over other body representations including facial expression; 2) the arms and legs were represented in abstract forms; 3) eyes, mouth, and nose were essential; and 4) portability was a key feature to change design strategy. Minimal elements for communication media to convey a person's presence are discussed based on the results of the survey.},
  file            = {幸田健介2012.pdf:pdf/幸田健介2012.pdf:PDF},
}
桑村海光, 境くりま, 港隆史, 西尾修一, 石黒浩, "遠隔コミュニケーションにおける抱擁の効果", HAIシンポジウム, 京都工芸繊維大学, pp. 1B-3, December, 2012.
Abstract: The act of physical interaction, such as hugging, is one important factor in communication for establishing and maintaining a good relationship. At the same time, it is one of the factors that are often lost in telecommunication between distant locations. Recently, several studies attempted to provide the feeling of being hugged by a person at a remote location and showed their effectiveness. However, there has been no study focusing on the effect of hugging a telecommunication medium. In this study, we investigated the effect of hugging on telecommunication by using Hugvie. The experiment with Hugvie revealed that a person who talks to someone they meet for the first time while virtually hugging them through the medium feels loved rather than merely liked.
BibTeX:
@Inproceedings{桑村海光2012,
  author          = {桑村海光 and 境くりま and 港隆史 and 西尾修一 and 石黒浩},
  title           = {遠隔コミュニケーションにおける抱擁の効果},
  booktitle       = {{HAI}シンポジウム},
  year            = {2012},
  pages           = {1B-3},
  address         = {京都工芸繊維大学},
  month           = Dec,
  day             = {7-9},
  url             = {http://www.ii.is.kit.ac.jp/hai2012/proceedings/pdf/1B-3.pdf},
  etitle          = {The Effect of Hugging on Telecommunication},
  abstract        = {The act of physical interaction, such as hugging, is one important factor in communication for establishing and maintaining a good relationship. At the same time, it is one of the factors that are often lost in telecommunication between distant locations. Recently, several studies attempted to provide the feeling of being hugged by a person at a remote location and showed their effectiveness. However, there has been no study focusing on the effect of hugging a telecommunication medium. In this study, we investigated the effect of hugging on telecommunication by using Hugvie. The experiment with Hugvie revealed that a person who talks to someone they meet for the first time while virtually hugging them through the medium feels loved rather than merely liked.},
  file            = {桑村海光2012.pdf:pdf/桑村海光2012.pdf:PDF},
}
田浦康一, 住岡英信, 西尾修一, 石黒浩, "遠隔操作アンドロイドへの身体感覚転移における対話の影響", HAIシンポジウム, 京都工芸繊維大学, pp. 2C-3, December, 2012.
Abstract: Body ownership transfer is an illusion in which we feel an external object as a part of our own body; it is evoked by operating an android robot. We consider body ownership transfer a crucial factor for feeling telepresence in communication between distant places. This paper examines the influence of social interaction on body ownership transfer and telepresence while operating an android robot, and investigates the relation between body ownership transfer and telepresence. Participants talked about given topics with/without their partners and with/without operating the android robot. Our results show that the feeling of operating the robot and the partner's presence are crucial for experiencing a strong sense of body ownership transfer, and that there is a positive correlation between body ownership transfer and telepresence. Furthermore, response reactions from partners enhance body ownership transfer when the operators observed that the android and the partner were in the same place during operation.
BibTeX:
@Inproceedings{田浦康一2012,
  author          = {田浦康一 and 住岡英信 and 西尾修一 and 石黒浩},
  title           = {遠隔操作アンドロイドへの身体感覚転移における対話の影響},
  booktitle       = {{HAI}シンポジウム},
  year            = {2012},
  pages           = {2C-3},
  address         = {京都工芸繊維大学},
  month           = Dec,
  day             = {7-9},
  url             = {http://www.ii.is.kit.ac.jp/hai2012/proceedings/pdf/2C-3.pdf},
  etitle          = {Social interaction enhance body ownership transfer to android robot},
  abstract        = {Body ownership transfer is an illusion in which we feel an external object as a part of our own body; it is evoked by operating an android robot. We consider body ownership transfer a crucial factor for feeling telepresence in communication between distant places. This paper examines the influence of social interaction on body ownership transfer and telepresence while operating an android robot, and investigates the relation between body ownership transfer and telepresence. Participants talked about given topics with/without their partners and with/without operating the android robot. Our results show that the feeling of operating the robot and the partner's presence are crucial for experiencing a strong sense of body ownership transfer, and that there is a positive correlation between body ownership transfer and telepresence. Furthermore, response reactions from partners enhance body ownership transfer when the operators observed that the android and the partner were in the same place during operation.},
  eabstract       = {Body ownership transfer is an illusion in which we feel an external object as a part of our own body; it is evoked by operating an android robot. We consider body ownership transfer a crucial factor for feeling telepresence in communication between distant places. This paper examines the influence of social interaction on body ownership transfer and telepresence while operating an android robot, and investigates the relation between body ownership transfer and telepresence. Participants talked about given topics with/without their partners and with/without operating the android robot. Our results show that the feeling of operating the robot and the partner's presence are crucial for experiencing a strong sense of body ownership transfer, and that there is a positive correlation between body ownership transfer and telepresence. Furthermore, response reactions from partners enhance body ownership transfer when the operators observed that the android and the partner were in the same place during operation.},
  file            = {田浦康一2012.pdf:pdf/田浦康一2012.pdf:PDF},
}
Shuichi Nishio, Koichi Taura, Hiroshi Ishiguro, "Regulating Emotion by Facial Feedback from Teleoperated Android Robot", In International Conference on Social Robotics, Chengdu, China, pp. 388-397, October, 2012.
Abstract: In this paper, we experimentally examined whether facial expression changes in teleoperated androids can affect and regulate operators' emotions, based on the facial feedback theory of emotion and the phenomenon of body ownership transfer to teleoperated android robots. We created a conversational situation where participants felt anger and, during the conversation, the android's facial expressions were automatically changed. We examined whether such changes affected the operators' emotions. As a result, we found that when one can operate the robot well, the operator's emotional states are affected by the android's facial expression changes.
BibTeX:
@Inproceedings{Nishio2012b,
  author    = {Shuichi Nishio and Koichi Taura and Hiroshi Ishiguro},
  title     = {Regulating Emotion by Facial Feedback from Teleoperated Android Robot},
  booktitle = {International Conference on Social Robotics},
  year      = {2012},
  pages     = {388-397},
  address   = {Chengdu, China},
  month     = Oct,
  day       = {29-31},
  doi       = {10.1007/978-3-642-34103-8_39},
  url       = {http://link.springer.com/chapter/10.1007/978-3-642-34103-8_39},
  abstract  = {In this paper, we experimentally examined whether facial expression changes in teleoperated androids can affect and regulate operators' emotions, based on the facial feedback theory of emotion and the phenomenon of body ownership transfer to teleoperated android robots. We created a conversational situation where participants felt anger and, during the conversation, the android's facial expressions were automatically changed. We examined whether such changes affected the operators' emotions. As a result, we found that when one can operate the robot well, the operator's emotional states are affected by the android's facial expression changes.},
  file      = {Nishio2012b.pdf:pdf/Nishio2012b.pdf:PDF},
}
Shuichi Nishio, Tetsuya Watanabe, Kohei Ogawa, Hiroshi Ishiguro, "Body Ownership Transfer to Teleoperated Android Robot", In International Conference on Social Robotics, Chengdu, China, pp. 398-407, October, 2012.
Abstract: Teleoperators of android robots occasionally feel as if the robotic bodies are extensions of their own. When others touch the tele-operated android, even without tactile feedback, some operators feel as if they themselves have been touched. In the past, a similar phenomenon named “Rubber Hand Illusion" has been studied for its reflection of a three-way interaction among vision, touch and proprioception. In this study, we examined whether a similar interaction occurs when replacing a tactile sensation with android robot teleoperation; that is, whether the interaction among vision, motion and proprioception occurs. The result showed that when the operator's and the android's motions are synchronized, operators feel as if their sense of body ownership is transferred to the android robot.
BibTeX:
@Inproceedings{Nishio2012a,
  author    = {Shuichi Nishio and Tetsuya Watanabe and Kohei Ogawa and Hiroshi Ishiguro},
  title     = {Body Ownership Transfer to Teleoperated Android Robot},
  booktitle = {International Conference on Social Robotics},
  year      = {2012},
  pages     = {398-407},
  address   = {Chengdu, China},
  month     = Oct,
  day       = {29-31},
  doi       = {10.1007/978-3-642-34103-8_40},
  url       = {http://link.springer.com/chapter/10.1007/978-3-642-34103-8_40},
  abstract  = {Teleoperators of android robots occasionally feel as if the robotic bodies are extensions of their own. When others touch the tele-operated android, even without tactile feedback, some operators feel as if they themselves have been touched. In the past, a similar phenomenon named the "Rubber Hand Illusion" has been studied for its reflection of a three-way interaction among vision, touch and proprioception. In this study, we examined whether a similar interaction occurs when a tactile sensation is replaced with android robot teleoperation; that is, whether an interaction among vision, motion and proprioception occurs. The result showed that when the operator's and the android's motions are synchronized, operators feel as if their sense of body ownership is transferred to the android robot.},
  file      = {Nishio2012a.pdf:pdf/Nishio2012a.pdf:PDF},
}
Ryuji Yamazaki, Shuichi Nishio, Hiroshi Ishiguro, Marco Nørskov, Nobu Ishiguro, Giuseppe Balistreri, "Social Acceptance of a Teleoperated Android: Field Study on Elderly's Engagement with an Embodied Communication Medium in Denmark", In International Conference on Social Robotics, Chengdu, China, pp. 428-437, October, 2012.
Abstract: We explored the potential of teleoperated android robots, which are embodied telecommunication media with humanlike appearances, and how they affect people in the real world when they are employed to express a telepresence and a sense of ‘being there'. In Denmark, our exploratory study focused on the social aspects of Telenoid, a teleoperated android, which might facilitate communication between senior citizens and Telenoid's operator. After applying it to the elderly in their homes, we found that the elderly assumed positive attitudes toward Telenoid, and their positivity and strong attachment to its huggable minimalistic human design were cross-culturally shared in Denmark and Japan. Contrary to the negative reactions by non-users in media reports, our result suggests that teleoperated androids can be accepted by the elderly as a kind of universal design medium for social inclusion.
BibTeX:
@Inproceedings{Yamazaki2012c,
  author          = {Ryuji Yamazaki and Shuichi Nishio and Hiroshi Ishiguro and Marco N{\o}rskov and Nobu Ishiguro and Giuseppe Balistreri},
  title           = {Social Acceptance of a Teleoperated Android: Field Study on Elderly's Engagement with an Embodied Communication Medium in Denmark},
  booktitle       = {International Conference on Social Robotics},
  year            = {2012},
  pages           = {428-437},
  address         = {Chengdu, China},
  month           = Oct,
  day             = {29-31},
  doi             = {10.1007/978-3-642-34103-8_43},
  url             = {http://link.springer.com/chapter/10.1007/978-3-642-34103-8_43},
  abstract        = {We explored the potential of teleoperated android robots, which are embodied telecommunication media with humanlike appearances, and how they affect people in the real world when they are employed to express a telepresence and a sense of ‘being there'. In Denmark, our exploratory study focused on the social aspects of Telenoid, a teleoperated android, which might facilitate communication between senior citizens and Telenoid's operator. After applying it to the elderly in their homes, we found that the elderly assumed positive attitudes toward Telenoid, and their positivity and strong attachment to its huggable minimalistic human design were cross-culturally shared in Denmark and Japan. Contrary to the negative reactions by non-users in media reports, our result suggests that teleoperated androids can be accepted by the elderly as a kind of universal design medium for social inclusion.},
  file            = {Yamazaki2012c.pdf:pdf/Yamazaki2012c.pdf:PDF},
  keywords        = {android;teleoperation;minimal design;communication;embodiment;inclusion;acceptability;elderly care},
}
Hiroshi Ishiguro, Shuichi Nishio, Antonio Chella, Rosario Sorbello, Giuseppe Balistreri, Marcello Giardina, Carmelo Cali, "Investigating Perceptual Features for a Natural Human - Humanoid Robot Interaction inside a Spontaneous Setting", In Biologically Inspired Cognitive Architectures 2012, Palermo, Italy, October, 2012.
Abstract: The present paper aims to validate our research on human-humanoid interaction (HHI) using the minimalistic humanoid robot Telenoid. We conducted human-robot interaction tests with 100 young people with no prior interaction experience with this robot. The main goal is the analysis of two social dimensions (perception and believability) useful for increasing natural behavior between users and Telenoid. We administered our custom questionnaire to these subjects after a well-defined experimental setting (ordinary and goal-guided tasks). After analyzing the questionnaires, we obtained proof that perceptual and believability conditions are necessary social dimensions for successful and efficient HHI in everyday life activities.
BibTeX:
@Inproceedings{Ishiguro2012a,
  author    = {Hiroshi Ishiguro and Shuichi Nishio and Antonio Chella and Rosario Sorbello and Giuseppe Balistreri and Marcello Giardina and Carmelo Cali},
  title     = {Investigating Perceptual Features for a Natural Human - Humanoid Robot Interaction inside a Spontaneous Setting},
  booktitle = {Biologically Inspired Cognitive Architectures 2012},
  year      = {2012},
  address   = {Palermo, Italy},
  month     = Oct,
  abstract  = {The present paper aims to validate our research on human-humanoid interaction (HHI) using the minimalistic humanoid robot Telenoid. We conducted human-robot interaction tests with 100 young people with no prior interaction experience with this robot. The main goal is the analysis of two social dimensions (perception and believability) useful for increasing natural behavior between users and Telenoid. We administered our custom questionnaire to these subjects after a well-defined experimental setting (ordinary and goal-guided tasks). After analyzing the questionnaires, we obtained proof that perceptual and believability conditions are necessary social dimensions for successful and efficient HHI in everyday life activities.},
}
Ryuji Yamazaki, Shuichi Nishio, Hiroshi Ishiguro, Takashi Minato, Marco Nørskov, Nobu Ishiguro, Masaru Nishikawa, Tsutomu Fujinami, "Social Inclusion of Senior Citizens by a Teleoperated Android: Toward Inter-generational TeleCommunity Creation", In 2012 IEEE International Workshop on Assistance and Service Robotics in a Human Environment, International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, pp. 53-58, October, 2012.
Abstract: As populations continue to age, there is a growing need for assistive technologies that help senior citizens maintain their autonomy and enjoy their lives. We explore the potential of teleoperated androids, which are embodied telecommunication media with humanlike appearances. Our exploratory study focused on the social aspects of Telenoid, a teleoperated android designed as a minimalistic human, which might facilitate communication between senior citizens and its operators. We conducted cross-cultural field trials in Japan and Denmark by introducing Telenoid into care facilities and the private homes of seniors to observe how they responded to it. In Japan, we set up a teleoperation system in an elementary school and investigated how it shaped communication through the internet between the elderly in a care facility and the children who acted as teleoperators. In both countries, the elderly commonly assumed positive attitudes toward Telenoid and imaginatively developed various dialogue strategies. Telenoid lowered the barriers for the children, as operators, to communicate with demented seniors, so that they became more relaxed about participating in and positively continuing conversations using Telenoid. Our results suggest that its minimalistic human design is inclusive for seniors with or without dementia and facilitates inter-generational communication, which may be expanded to a social network of trans-national supportive relationships among all generations.
BibTeX:
@Inproceedings{Yamazaki2012d,
  author    = {Ryuji Yamazaki and Shuichi Nishio and Hiroshi Ishiguro and Takashi Minato and Marco N{\o}rskov and Nobu Ishiguro and Masaru Nishikawa and Tsutomu Fujinami},
  title     = {Social Inclusion of Senior Citizens by a Teleoperated Android: Toward Inter-generational TeleCommunity Creation},
  booktitle = {2012 {IEEE} International Workshop on Assistance and Service Robotics in a Human Environment, International Conference on Intelligent Robots and Systems},
  year      = {2012},
  pages     = {53--58},
  address   = {Vilamoura, Algarve, Portugal},
  month     = Oct,
  day       = {7-12},
  abstract  = {As populations continue to age, there is a growing need for assistive technologies that help senior citizens maintain their autonomy and enjoy their lives. We explore the potential of teleoperated androids, which are embodied telecommunication media with humanlike appearances. Our exploratory study focused on the social aspects of Telenoid, a teleoperated android designed as a minimalistic human, which might facilitate communication between senior citizens and its operators. We conducted cross-cultural field trials in Japan and Denmark by introducing Telenoid into care facilities and the private homes of seniors to observe how they responded to it. In Japan, we set up a teleoperation system in an elementary school and investigated how it shaped communication through the internet between the elderly in a care facility and the children who acted as teleoperators. In both countries, the elderly commonly assumed positive attitudes toward Telenoid and imaginatively developed various dialogue strategies. Telenoid lowered the barriers for the children, as operators, to communicate with demented seniors, so that they became more relaxed about participating in and positively continuing conversations using Telenoid. Our results suggest that its minimalistic human design is inclusive for seniors with or without dementia and facilitates inter-generational communication, which may be expanded to a social network of trans-national supportive relationships among all generations.},
  file      = {Yamazaki2012d.pdf:Yamazaki2012d.pdf:PDF},
}
Martin Cooney, Shuichi Nishio, Hiroshi Ishiguro, "Recognizing Affection for a Touch-based Interaction with a Humanoid Robot", In IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, pp. 1420-1427, October, 2012.
Abstract: In order to facilitate integration into domestic and public environments, companion robots can seek to communicate in a familiar, 'socially intelligent' manner, recognizing typical behaviors which people direct toward them. One important type of behavior to recognize is the displaying and seeking of affection, which is fundamentally associated with the modality of touch. This paper identifies how people communicate affection through touching a humanoid robot appearance, and reports on the development of a recognition system exploring the modalities of touch and vision. Results of evaluation indicate the proposed system can recognize people's affectionate behavior in the designated context.
BibTeX:
@Inproceedings{Cooney2012a,
  author          = {Martin Cooney and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Recognizing Affection for a Touch-based Interaction with a Humanoid Robot},
  booktitle       = {{IEEE/RSJ} International Conference on Intelligent Robots and Systems},
  year            = {2012},
  pages           = {1420--1427},
  address         = {Vilamoura, Algarve, Portugal},
  month           = Oct,
  day             = {7-12},
  abstract        = {In order to facilitate integration into domestic and public environments, companion robots can seek to communicate in a familiar, 'socially intelligent' manner, recognizing typical behaviors which people direct toward them. One important type of behavior to recognize is the displaying and seeking of affection, which is fundamentally associated with the modality of touch. This paper identifies how people communicate affection through touching a humanoid robot appearance, and reports on the development of a recognition system exploring the modalities of touch and vision. Results of evaluation indicate the proposed system can recognize people's affectionate behavior in the designated context.},
  file            = {Cooney2012a.pdf:Cooney2012a.pdf:PDF},
}
Carlos T. Ishi, Chaoran Liu, Hiroshi Ishiguro, Norihiro Hagita, "Evaluation of formant-based lip motion generation in tele-operated humanoid robots", In IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, pp. 2377-2382, October, 2012.
Abstract: Generating natural motion in robots is important for improving human-robot interaction. We developed a tele-operation system where the lip motion of a remote humanoid robot is automatically controlled from the operator's voice. In the present work, we introduce an improved version of our proposed speech-driven lip motion generation method, where lip height and width degrees are estimated based on vowel formant information. The method requires the calibration of only one parameter for speaker normalization. Lip height control is evaluated in two types of humanoid robots (Telenoid-R2 and Geminoid-F). Subjective evaluation indicated that the proposed audio-based method can generate lip motion with naturalness superior to vision-based and motion capture-based approaches. Partial lip width control was shown to improve lip motion naturalness in Geminoid-F, which also has an actuator for stretching the lip corners. Issues regarding online real-time processing are also discussed.
BibTeX:
@Inproceedings{Ishi2012,
  author    = {Carlos T. Ishi and Chaoran Liu and Hiroshi Ishiguro and Norihiro Hagita},
  title     = {Evaluation of formant-based lip motion generation in tele-operated humanoid robots},
  booktitle = {{IEEE/RSJ} International Conference on Intelligent Robots and Systems},
  year      = {2012},
  pages     = {2377--2382},
  address   = {Vilamoura, Algarve, Portugal},
  month     = Oct,
  day       = {7-12},
  abstract  = {Generating natural motion in robots is important for improving human-robot interaction. We developed a tele-operation system where the lip motion of a remote humanoid robot is automatically controlled from the operator's voice. In the present work, we introduce an improved version of our proposed speech-driven lip motion generation method, where lip height and width degrees are estimated based on vowel formant information. The method requires the calibration of only one parameter for speaker normalization. Lip height control is evaluated in two types of humanoid robots (Telenoid-R2 and Geminoid-F). Subjective evaluation indicated that the proposed audio-based method can generate lip motion with naturalness superior to vision-based and motion capture-based approaches. Partial lip width control was shown to improve lip motion naturalness in Geminoid-F, which also has an actuator for stretching the lip corners. Issues regarding online real-time processing are also discussed.},
  file      = {Ishi2012.pdf:pdf/Ishi2012.pdf:PDF},
}
Ilona Straub, Shuichi Nishio, Hiroshi Ishiguro, "From an Object to a Subject -- Transitions of an Android Robot into a Social Being", In IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 821-826, September, 2012.
Abstract: What are the characteristics that make something appear as a social entity? Is sociality limited to human beings? The following article deals with the borders of sociality and the characterization of animating a physical object (here: an android robot) into a living being. The transition is attributed during interactive encounters. We introduce implications of an ethnomethodological analysis that shows characteristics of transitions in social attribution towards an android robot, which is treated and perceived as gradually shifting from an object to a social entity. These characteristics should a) fill the gap in current anthropological and sociological research dealing with the limits and characteristics of social entities, and b) contribute to the discussion of the specifics of human-android interaction compared to human-human interaction.
BibTeX:
@Inproceedings{Straub2012,
  author          = {Ilona Straub and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {From an Object to a Subject -- Transitions of an Android Robot into a Social Being},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2012},
  pages           = {821--826},
  address         = {Paris, France},
  month           = Sep,
  day             = {9-13},
  doi             = {10.1109/ROMAN.2012.6343853},
  abstract        = {What are the characteristics that make something appear as a social entity? Is sociality limited to human beings? The following article deals with the borders of sociality and the characterization of animating a physical object (here: an android robot) into a living being. The transition is attributed during interactive encounters. We introduce implications of an ethnomethodological analysis that shows characteristics of transitions in social attribution towards an android robot, which is treated and perceived as gradually shifting from an object to a social entity. These characteristics should a) fill the gap in current anthropological and sociological research dealing with the limits and characteristics of social entities, and b) contribute to the discussion of the specifics of human-android interaction compared to human-human interaction.},
  file            = {Straub2012.pdf:Strabu2012.pdf:PDF},
}
Shuichi Nishio, Kohei Ogawa, Yasuhiro Kanakogi, Shoji Itakura, Hiroshi Ishiguro, "Do robot appearance and speech affect people's attitude? Evaluation through the Ultimatum Game", In IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 809-814, September, 2012.
Abstract: In this study, we examine the factors with which robots are recognized as social beings. Participants joined sessions of the Ultimatum Game, a procedure commonly used for examining attitudes toward others in the fields of economics and social psychology. Several agents differing in their appearances were tested with speech stimuli that are expected to induce a mentalizing effect toward the agents. As a result, we found that while appearance itself did not show a significant difference in the attitudes, the mentalizing stimuli affected the attitudes in different ways depending on the robots' appearances. These results showed that such elements as simple conversation with the agents and their appearance are important factors in robots being treated as more humanlike and as social beings.
BibTeX:
@Inproceedings{Nishio2012,
  author          = {Shuichi Nishio and Kohei Ogawa and Yasuhiro Kanakogi and Shoji Itakura and Hiroshi Ishiguro},
  title           = {Do robot appearance and speech affect people's attitude? Evaluation through the {U}ltimatum {G}ame},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2012},
  pages           = {809--814},
  address         = {Paris, France},
  month           = Sep,
  day             = {9-13},
  doi             = {10.1109/ROMAN.2012.6343851},
  abstract        = {In this study, we examine the factors with which robots are recognized as social beings. Participants joined sessions of the Ultimatum Game, a procedure commonly used for examining attitudes toward others in the fields of economics and social psychology. Several agents differing in their appearances were tested with speech stimuli that are expected to induce a mentalizing effect toward the agents. As a result, we found that while appearance itself did not show a significant difference in the attitudes, the mentalizing stimuli affected the attitudes in different ways depending on the robots' appearances. These results showed that such elements as simple conversation with the agents and their appearance are important factors in robots being treated as more humanlike and as social beings.},
  file            = {Nishio2012.pdf:Nishio2012.pdf:PDF},
}
Kohei Ogawa, Koichi Taura, Shuichi Nishio, Hiroshi Ishiguro, "Effect of perspective change in body ownership transfer to teleoperated android robot", In IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 1072-1077, September, 2012.
Abstract: We previously investigated body ownership transfer to a teleoperated android body caused by motion synchronization between the robot and its operator. Although visual feedback is the only information provided from the robot, due to body ownership transfer, some operators feel as if they were touched when the robot's body was touched. This illusion can help operators transfer their presence to the robotic body during teleoperation. By enhancing this phenomenon, we can improve our communication interface and the quality of the interaction between operator and interlocutor. In this paper, we examined how a change in the operator's perspective affects body ownership transfer during teleoperation. Based on past studies on the rubber hand illusion, we hypothesized that the perspective change would suppress the body ownership transfer. Our results, however, showed that in every perspective condition, the participants felt the body ownership transfer. This shows that the generation process differs between teleoperated androids and the rubber hand illusion.
BibTeX:
@Inproceedings{Ogawa2012c,
  author          = {Kohei Ogawa and Koichi Taura and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Effect of perspective change in body ownership transfer to teleoperated android robot},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2012},
  pages           = {1072--1077},
  address         = {Paris, France},
  month           = Sep,
  day             = {9-13},
  doi             = {10.1109/ROMAN.2012.6343891},
  abstract        = {We previously investigated body ownership transfer to a teleoperated android body caused by motion synchronization between the robot and its operator. Although visual feedback is the only information provided from the robot, due to body ownership transfer, some operators feel as if they were touched when the robot's body was touched. This illusion can help operators transfer their presence to the robotic body during teleoperation. By enhancing this phenomenon, we can improve our communication interface and the quality of the interaction between operator and interlocutor. In this paper, we examined how a change in the operator's perspective affects body ownership transfer during teleoperation. Based on past studies on the rubber hand illusion, we hypothesized that the perspective change would suppress the body ownership transfer. Our results, however, showed that in every perspective condition, the participants felt the body ownership transfer. This shows that the generation process differs between teleoperated androids and the rubber hand illusion.},
  file            = {Ogawa2012c.pdf:Ogawa2012c.pdf:PDF},
}
Kohei Ogawa, Koichi Taura, Hiroshi Ishiguro, "Possibilities of Androids as Poetry-reciting Agent", Poster presentation at IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 565-570, September, 2012.
Abstract: In recent years, research has increased on very human-like androids, generally investigating the following: (1) how people treat such very human-like androids and (2) whether it is possible to replace such existing communication media as telephones or TV conference systems with androids as a communication medium. We found that androids have advantages over humans in specific contexts. For example, in a collaboration theatrical project between artists and androids, audiences were impressed by the androids that read poetry. We, therefore, experimentally compared androids and humans in a poetry-reciting context by conducting an experiment to illustrate the influence of an android who recited poetry. Participants listened to poetry that was read by three poetry-reciting agents: the android, the human model on which the android was based, and a box. The experiment results showed that the enjoyment of the poetry gained the highest score under the android condition, indicating that the android has an advantage for communicating the meaning of poetry.
BibTeX:
@Inproceedings{Ogawa2012d,
  author          = {Kohei Ogawa and Koichi Taura and Hiroshi Ishiguro},
  title           = {Possibilities of Androids as Poetry-reciting Agent},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2012},
  pages           = {565--570},
  address         = {Paris, France},
  month           = Sep,
  day             = {9-13},
  doi             = {10.1109/ROMAN.2012.6343811},
  abstract        = {In recent years, research has increased on very human-like androids, generally investigating the following: (1) how people treat such very human-like androids and (2) whether it is possible to replace such existing communication media as telephones or TV conference systems with androids as a communication medium. We found that androids have advantages over humans in specific contexts. For example, in a collaboration theatrical project between artists and androids, audiences were impressed by the androids that read poetry. We, therefore, experimentally compared androids and humans in a poetry-reciting context by conducting an experiment to illustrate the influence of an android who recited poetry. Participants listened to poetry that was read by three poetry-reciting agents: the android, the human model on which the android was based, and a box. The experiment results showed that the enjoyment of the poetry gained the highest score under the android condition, indicating that the android has an advantage for communicating the meaning of poetry.},
  file            = {Ogawa2012d.pdf:Ogawa2012d.pdf:PDF},
  keywords        = {Robot; Android; Art; Geminoid; Poetry},
}
Kaiko Kuwamura, Takashi Minato, Shuichi Nishio, Hiroshi Ishiguro, "Personality Distortion in Communication through Teleoperated Robots", In IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 49-54, September, 2012.
Abstract: Recent research has focused on such physical communication media as teleoperated robots, which provide a feeling of being with people in remote places. Recent invented media resemble cute animals or imaginary creatures that quickly attract attention. However, such appearances could distort tele-communications because they are different from human beings. This paper studies the effect on the speaker's personality that is transmitted through physical media by regarding appearances as a function that transmits the speaker's information. Although communication media's capability to transmit information reportedly influences conversations in many aspects, the effect of appearances remains unclear. To reveal the effect of appearance, we compared three appearances of communication media: stuffed-bear teleoperated robot, human-like teleoperated robot, and video chat. Our results show that communication media whose appearance greatly differs from that of the speaker distorts the personality perceived by interlocutors. This paper suggests that the design of the appearance of physical communication media needs to be carefully selected.
BibTeX:
@Inproceedings{Kuwamura2012,
  author    = {Kaiko Kuwamura and Takashi Minato and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Personality Distortion in Communication through Teleoperated Robots},
  booktitle = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year      = {2012},
  pages     = {49--54},
  address   = {Paris, France},
  month     = Sep,
  day       = {9-13},
  abstract  = {Recent research has focused on such physical communication media as teleoperated robots, which provide a feeling of being with people in remote places. Recent invented media resemble cute animals or imaginary creatures that quickly attract attention. However, such appearances could distort tele-communications because they are different from human beings. This paper studies the effect on the speaker's personality that is transmitted through physical media by regarding appearances as a function that transmits the speaker's information. Although communication media's capability to transmit information reportedly influences conversations in many aspects, the effect of appearances remains unclear. To reveal the effect of appearance, we compared three appearances of communication media: stuffed-bear teleoperated robot, human-like teleoperated robot, and video chat. Our results show that communication media whose appearance greatly differs from that of the speaker distorts the personality perceived by interlocutors. This paper suggests that the design of the appearance of physical communication media needs to be carefully selected.},
  file      = {Kuwamura2012.pdf:pdf/Kuwamura2012.pdf:PDF},
}
Carlos T. Ishi, Chaoran Liu, Hiroshi Ishiguro, Norihiro Hagita, "Evaluation of a formant-based speech-driven lip motion generation", In 13th Annual Conference of the International Speech Communication Association, Portland, Oregon, pp. P1a.04, September, 2012.
Abstract: The background of the present work is the development of a tele-presence robot system where the lip motion of a remote humanoid robot is automatically controlled from the operator's voice. In the present paper, we introduce an improved version of our proposed speech-driven lip motion generation method, where lip height and width degrees are estimated based on vowel formant information. The method requires the calibration of only one parameter for speaker normalization, so that no training of dedicated models is necessary. Lip height control is evaluated in a female android robot and in animated lips. Subjective evaluation indicated that naturalness of lip motion generated in the robot is improved by the inclusion of a partial lip width control (with stretching of the lip corners). Highest naturalness scores were achieved for the animated lips, showing the effectiveness of the proposed method.
BibTeX:
@Inproceedings{Ishi2012b,
  author          = {Carlos T. Ishi and Chaoran Liu and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {Evaluation of a formant-based speech-driven lip motion generation},
  booktitle       = {13th Annual Conference of the International Speech Communication Association},
  year            = {2012},
  pages           = {P1a.04},
  address         = {Portland, Oregon},
  month           = Sep,
  day             = {9-13},
  abstract        = {The background of the present work is the development of a tele-presence robot system where the lip motion of a remote humanoid robot is automatically controlled from the operator's voice. In the present paper, we introduce an improved version of our proposed speech-driven lip motion generation method, where lip height and width degrees are estimated based on vowel formant information. The method requires the calibration of only one parameter for speaker normalization, so that no training of dedicated models is necessary. Lip height control is evaluated in a female android robot and in animated lips. Subjective evaluation indicated that naturalness of lip motion generated in the robot is improved by the inclusion of a partial lip width control (with stretching of the lip corners). Highest naturalness scores were achieved for the animated lips, showing the effectiveness of the proposed method.},
  file            = {Ishi2012b.pdf:pdf/Ishi2012b.pdf:PDF},
  keywords        = {lip motion, formant, tele-operation, humanoid robot},
}
Martin Cooney, Francesco Zanlungo, Shuichi Nishio, Hiroshi Ishiguro, "Designing a Flying Humanoid Robot (FHR): Effects of Flight on Interactive Communication", In IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 364-371, September, 2012.
Abstract: This research constitutes an initial investigation into key issues which arise in designing a flying humanoid robot (FHR), with a focus on human-robot interaction (HRI). The humanoid form offers an interface for natural communication; flight offers excellent mobility. Combining both will yield companion robots capable of approaching, accompanying, and communicating naturally with humans in difficult environments. Problematic is how such a robot should best fly around humans, and what effect the robot's flight will have on a person in terms of paralinguistic (non-verbal) cues. To answer these questions, we propose an extension to existing proxemics theory (“z-proxemics") and predict how typical humanoid flight motions will be perceived. Data obtained from participants watching animated sequences are analyzed to check our predictions. The paper also reports on the building of a flying humanoid robot, which we will use in interactions.
BibTeX:
@Inproceedings{Cooney2012b,
  author          = {Martin Cooney and Francesco Zanlungo and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Designing a Flying Humanoid Robot ({FHR}): Effects of Flight on Interactive Communication},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2012},
  pages           = {364--371},
  address         = {Paris, France},
  month           = Sep,
  day             = {9-13},
  doi             = {10.1109/ROMAN.2012.6343780},
  abstract        = {This research constitutes an initial investigation into key issues which arise in designing a flying humanoid robot ({FHR}), with a focus on human-robot interaction ({HRI}). The humanoid form offers an interface for natural communication; flight offers excellent mobility. Combining both will yield companion robots capable of approaching, accompanying, and communicating naturally with humans in difficult environments. Problematic is how such a robot should best fly around humans, and what effect the robot's flight will have on a person in terms of paralinguistic (non-verbal) cues. To answer these questions, we propose an extension to existing proxemics theory (“z-proxemics") and predict how typical humanoid flight motions will be perceived. Data obtained from participants watching animated sequences are analyzed to check our predictions. The paper also reports on the building of a flying humanoid robot, which we will use in interactions.},
  file            = {Cooney2012b.pdf:Cooney2012b.pdf:PDF},
}
Ryuji Yamazaki, Shuichi Nishio, Kohei Ogawa, Hiroshi Ishiguro, "Teleoperated Android as an Embodied Communication Medium: A Case Study with Demented Elderlies in a Care Facility", In IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 1066-1071, September, 2012.
Abstract: Teleoperated androids, which are robots with humanlike appearances, are being produced as new media of human relationships. We explored the potential of humanoid robots and how they affect people in the real world when they are employed to express a telecommunication presence and a sense of ‘being there'. We introduced Telenoid, a teleoperated android, to a residential care facility to see how the elderly with dementia respond to it. Our exploratory study focused on the social aspects that might facilitate communication between the elderly and Telenoid's operator. Telenoid elicited positive images and interactive reactions from the elderly with mild dementia, even from those with severe cognitive impairment. They showed strong attachment to its child-like huggable design and became willing to converse with it. Our result suggests that an affectionate bond may be formed between the elderly and the android to provide the operator with easy communication to elicit responses from senior citizens.
BibTeX:
@Inproceedings{Yamazaki2012b,
  author    = {Ryuji Yamazaki and Shuichi Nishio and Kohei Ogawa and Hiroshi Ishiguro},
  title     = {Teleoperated Android as an Embodied Communication Medium: A Case Study with Demented Elderlies in a Care Facility},
  booktitle = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year      = {2012},
  pages     = {1066--1071},
  address   = {Paris, France},
  month     = Sep,
  day       = {9-13},
  abstract  = {Teleoperated androids, which are robots with humanlike appearances, are being produced as new media of human relationships. We explored the potential of humanoid robots and how they affect people in the real world when they are employed to express a telecommunication presence and a sense of ‘being there'. We introduced Telenoid, a teleoperated android, to a residential care facility to see how the elderly with dementia respond to it. Our exploratory study focused on the social aspects that might facilitate communication between the elderly and Telenoid's operator. Telenoid elicited positive images and interactive reactions from the elderly with mild dementia, even from those with severe cognitive impairment. They showed strong attachment to its child-like huggable design and became willing to converse with it. Our result suggests that an affectionate bond may be formed between the elderly and the android to provide the operator with easy communication to elicit responses from senior citizens.},
  file      = {Yamazaki2012b.pdf:Yamazaki2012b.pdf:PDF},
}
Takashi Minato, Hidenobu Sumioka, Shuichi Nishio, Hiroshi Ishiguro, "Studying the Influence of Handheld Robotic Media on Social Communications", In the RO-MAN 2012 workshop on social robotic telepresence, Paris, France, pp. 15-16, September, 2012.
Abstract: This paper describes research issues on social robotic telepresence using "Elfoid". It is a portable tele-operated humanoid that is designed to transfer individuals' presence to remote places at any time, anywhere, to provide a new communication style in which individuals talk with persons in remote locations in such a way that they feel each other's presence. However, it is not known how people adapt to the new communication style and how social communication is changed by Elfoid. Investigating the influence of Elfoid on social communication is very interesting from the viewpoint of social robotic telepresence. This paper introduces Elfoid and shows the position of our studies in social robotic telepresence.
BibTeX:
@Inproceedings{Minato2012c,
  author    = {Takashi Minato and Hidenobu Sumioka and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Studying the Influence of Handheld Robotic Media on Social Communications},
  booktitle = {the {RO-MAN} 2012 workshop on social robotic telepresence},
  year      = {2012},
  pages     = {15--16},
  address   = {Paris, France},
  month     = Sep,
  day       = {9-13},
  abstract  = {This paper describes research issues on social robotic telepresence using "Elfoid". It is a portable tele-operated humanoid that is designed to transfer individuals' presence to remote places at any time, anywhere, to provide a new communication style in which individuals talk with persons in remote locations in such a way that they feel each other's presence. However, it is not known how people adapt to the new communication style and how social communication is changed by Elfoid. Investigating the influence of Elfoid on social communication is very interesting from the viewpoint of social robotic telepresence. This paper introduces Elfoid and shows the position of our studies in social robotic telepresence.},
  file      = {Minato2012c.pdf:Minato2012c.pdf:PDF},
}
Hidenobu Sumioka, Shuichi Nishio, Hiroshi Ishiguro, "Teleoperated android for mediated communication : body ownership, personality distortion, and minimal human design", In the RO-MAN 2012 workshop on social robotic telepresence, Paris, France, pp. 32-39, September, 2012.
Abstract: In this paper we discuss the impact of humanlike appearance on telecommunication, giving an overview of studies with teleoperated androids. We show that, due to their humanlike appearance, teleoperated androids affect not only the interlocutors communicating with them but also the teleoperators controlling them from another location. They enhance the teleoperator's feeling of telepresence by inducing a sense of ownership over their body parts. It is also pointed out that a mismatch in appearance between an android and its teleoperator distorts the teleoperator's personality as conveyed to an interlocutor. To overcome this problem, the concept of minimal human likeness design is introduced. We demonstrate that a new teleoperated android developed with this concept reduces the distortion in telecommunication. Finally, some research issues are discussed concerning the sense of ownership over the telerobot's body, minimal human likeness design, and interface design.
BibTeX:
@Inproceedings{Sumioka2012c,
  author          = {Hidenobu Sumioka and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Teleoperated android for mediated communication: body ownership, personality distortion, and minimal human design},
  booktitle       = {the {RO-MAN} 2012 workshop on social robotic telepresence},
  year            = {2012},
  pages           = {32--39},
  address         = {Paris, France},
  month           = Sep,
  day             = {9-13},
  abstract        = {In this paper we discuss the impact of humanlike appearance on telecommunication, giving an overview of studies with teleoperated androids. We show that, due to their humanlike appearance, teleoperated androids affect not only the interlocutors communicating with them but also the teleoperators controlling them from another location. They enhance the teleoperator's feeling of telepresence by inducing a sense of ownership over their body parts. It is also pointed out that a mismatch in appearance between an android and its teleoperator distorts the teleoperator's personality as conveyed to an interlocutor. To overcome this problem, the concept of minimal human likeness design is introduced. We demonstrate that a new teleoperated android developed with this concept reduces the distortion in telecommunication. Finally, some research issues are discussed concerning the sense of ownership over the telerobot's body, minimal human likeness design, and interface design.},
  file            = {Sumioka2012c.pdf:Sumioka2012c.pdf:PDF},
}
Hidenobu Sumioka, Shuichi Nishio, Erina Okamoto, Hiroshi Ishiguro, "Isolation of physical traits and conversational content for personality design", Poster presentation at IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 596-601, September, 2012.
Abstract: In this paper, we propose the "Doppel teleoperation system", which isolates several physical traits from a speaker, to investigate how personal information is conveyed to others during conversation. An underlying problem in designing personality in social robots is that it remains unclear how humans judge the personalities of conversation partners. With the Doppel system, for each of the communication channels to be transferred, one can choose whether it is transmitted in its original form or in one generated by the system. For example, voice and body motions can be replaced by the Doppel system while preserving the speech content. This allows us to analyze the individual effects of the physical traits of the speaker and the content of the speaker's speech on the identification of personality. This selectivity of personal traits provides a useful approach to investigating which information conveys our personality through conversation. To show the potential of our system, we experimentally tested how much the conversation content conveys the personality of speakers to interlocutors without any of their physical traits. Preliminary results show that although interlocutors have difficulty identifying speakers only from conversational content, they can recognize their acquaintances when their acquaintances are the speakers. We point out some potential physical traits that convey personality.
BibTeX:
@Inproceedings{Sumioka2012d,
  author          = {Hidenobu Sumioka and Shuichi Nishio and Erina Okamoto and Hiroshi Ishiguro},
  title           = {Isolation of physical traits and conversational content for personality design},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2012},
  pages           = {596--601},
  address         = {Paris, France},
  month           = Sep,
  day             = {9-13},
  doi             = {10.1109/ROMAN.2012.6343816},
  abstract        = {In this paper, we propose the "Doppel teleoperation system", which isolates several physical traits from a speaker, to investigate how personal information is conveyed to others during conversation. An underlying problem in designing personality in social robots is that it remains unclear how humans judge the personalities of conversation partners. With the Doppel system, for each of the communication channels to be transferred, one can choose whether it is transmitted in its original form or in one generated by the system. For example, voice and body motions can be replaced by the Doppel system while preserving the speech content. This allows us to analyze the individual effects of the physical traits of the speaker and the content of the speaker's speech on the identification of personality. This selectivity of personal traits provides a useful approach to investigating which information conveys our personality through conversation. To show the potential of our system, we experimentally tested how much the conversation content conveys the personality of speakers to interlocutors without any of their physical traits. Preliminary results show that although interlocutors have difficulty identifying speakers only from conversational content, they can recognize their acquaintances when their acquaintances are the speakers. We point out some potential physical traits that convey personality.},
  file            = {Sumioka2012d.pdf:Sumioka2012d.pdf:PDF},
}
Antonio Chella, Haris Dindo, Rosario Sorbello, Shuichi Nishio, Hiroshi Ishiguro, "Sing with the Telenoid", In CogSci 2012 Workshop on Teleoperated Android as a Tool for Cognitive Studies, Communication and Art, Sapporo Convention Center, pp. 16-20, August, 2012.
BibTeX:
@Inproceedings{Chella2012,
  author    = {Antonio Chella and Haris Dindo and Rosario Sorbello and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Sing with the Telenoid},
  booktitle = {{C}og{S}ci 2012 Workshop on Teleoperated Android as a Tool for Cognitive Studies, Communication and Art},
  year      = {2012},
  pages     = {16--20},
  address   = {Sapporo Convention Center},
  month     = Aug,
  day       = {1-4},
  file      = {Chella2012.pdf:Chella2012.pdf:PDF},
  keywords  = {Computer Music; Embodiment; Emotions; Imitation learning; Creativity; Human-robot Interaction},
}
Takashi Minato, Shuichi Nishio, Kohei Ogawa, Hiroshi Ishiguro, "Development of Cellphone-type Tele-operated Android", Poster presentation at The 10th Asia Pacific Conference on Computer Human Interaction, Matsue, Japan, pp. 665-666, August, 2012.
Abstract: This paper presents a newly developed portable human-like robotic avatar, "Elfoid", which can be a novel communication medium in that a user can talk with another person in a remote location in such a way that they feel each other's presence. It is designed to convey individuals' presence using voice, human-like appearance, and touch. Thanks to its cellphone capability, it can be used at any time, anywhere. The paper describes the design concept of Elfoid and discusses research issues concerning this communication medium.
BibTeX:
@Inproceedings{Minato2012b,
  author    = {Takashi Minato and Shuichi Nishio and Kohei Ogawa and Hiroshi Ishiguro},
  title     = {Development of Cellphone-type Tele-operated Android},
  booktitle = {The 10th Asia Pacific Conference on Computer Human Interaction},
  year      = {2012},
  pages     = {665-666},
  address   = {Matsue, Japan},
  month     = Aug,
  day       = {28-31},
  abstract  = {This paper presents a newly developed portable human-like robotic avatar, "Elfoid", which can be a novel communication medium in that a user can talk with another person in a remote location in such a way that they feel each other's presence. It is designed to convey individuals' presence using voice, human-like appearance, and touch. Thanks to its cellphone capability, it can be used at any time, anywhere. The paper describes the design concept of Elfoid and discusses research issues concerning this communication medium.},
  file      = {Minato2012b.pdf:Minato2012b.pdf:PDF},
  keywords  = {Communication media; minimal design; human's presence},
}
Shuichi Nishio, "Transmitting human presence with teleoperated androids: from proprioceptive transfer to elderly care", In CogSci2012 Workshop on Teleopearted Android as a Tool for Cognitive Studies, Communication and Art, Sapporo, Japan, August, 2012.
Abstract: Teleoperated androids, robots with a humanlike appearance equipped with a semi-autonomous teleoperation facility, were first introduced in 2007 with the public release of Geminoid HI-1. Both its appearance, which resembles the source person, and its teleoperation functionality make Geminoid a research tool for seeking the nature of human presence and personality traits, tracing their origins and implementing them in robots. Since the development of the first teleoperated android, we have been using them in a variety of domains, from studies on basic human nature to practical applications such as elderly care. In this talk, I will introduce some of our findings and ongoing projects.
BibTeX:
@Inproceedings{Nishio2012d,
  author    = {Shuichi Nishio},
  title     = {Transmitting human presence with teleoperated androids: from proprioceptive transfer to elderly care},
  booktitle = {CogSci2012 Workshop on Teleoperated Android as a Tool for Cognitive Studies, Communication and Art},
  year      = {2012},
  address   = {Sapporo, Japan},
  month     = Aug,
  abstract  = {Teleoperated androids, robots with a humanlike appearance equipped with a semi-autonomous teleoperation facility, were first introduced in 2007 with the public release of Geminoid HI-1. Both its appearance, which resembles the source person, and its teleoperation functionality make Geminoid a research tool for seeking the nature of human presence and personality traits, tracing their origins and implementing them in robots. Since the development of the first teleoperated android, we have been using them in a variety of domains, from studies on basic human nature to practical applications such as elderly care. In this talk, I will introduce some of our findings and ongoing projects.},
}
Hidenobu Sumioka, Shuichi Nishio, Erina Okamoto, Hiroshi Ishiguro, "Doppel Teleoperation System: Isolation of physical traits and intelligence for personality study", In Annual meeting of the Cognitive Science Society (CogSci2012), Sapporo Convention Center, pp. 2375-2380, August, 2012.
Abstract: We introduce the "Doppel teleoperation system", which isolates several physical traits from a speaker, to investigate how personal information is conveyed to other people during conversation. With the Doppel system, one can choose, for each of the communication channels to be transferred, whether it is transmitted in its original form or in one generated by the system. For example, the voice and body motion can be replaced by the Doppel system while the speech content is preserved. This allows us to analyze the individual effects of the physical traits of the speaker and the content of the speaker's speech on the identification of personality. This selectivity of personal traits provides us with a useful approach to investigating which information conveys our personality through conversation. To show the potential of the proposed system, we conducted an experiment to test how much the content of conversation conveys the personality of speakers to interlocutors without any physical traits of the speakers. Preliminary results show that although interlocutors have difficulty identifying speakers only from conversational content, they can recognize their acquaintances when their acquaintances are the speakers. We point out some potential physical traits that convey our personality.
BibTeX:
@Inproceedings{Sumioka2012,
  author          = {Hidenobu Sumioka and Shuichi Nishio and Erina Okamoto and Hiroshi Ishiguro},
  title           = {Doppel Teleoperation System: Isolation of physical traits and intelligence for personality study},
  booktitle       = {Annual meeting of the Cognitive Science Society ({C}og{S}ci2012)},
  year            = {2012},
  pages           = {2375-2380},
  address         = {Sapporo Convention Center},
  month           = Aug,
  day             = {1-4},
  url             = {http://mindmodeling.org/cogsci2012/papers/0413/paper0413.pdf},
  abstract        = {We introduce the "Doppel teleoperation system", which isolates several physical traits from a speaker, to investigate how personal information is conveyed to other people during conversation. With the Doppel system, one can choose, for each of the communication channels to be transferred, whether it is transmitted in its original form or in one generated by the system. For example, the voice and body motion can be replaced by the Doppel system while the speech content is preserved. This allows us to analyze the individual effects of the physical traits of the speaker and the content of the speaker's speech on the identification of personality. This selectivity of personal traits provides us with a useful approach to investigating which information conveys our personality through conversation. To show the potential of the proposed system, we conducted an experiment to test how much the content of conversation conveys the personality of speakers to interlocutors without any physical traits of the speakers. Preliminary results show that although interlocutors have difficulty identifying speakers only from conversational content, they can recognize their acquaintances when their acquaintances are the speakers. We point out some potential physical traits that convey our personality.},
  file            = {Sumioka2012.pdf:Sumioka2012.pdf:PDF},
  keywords        = {social cognition; android science; human-robot interaction; personality psychology; personal presence},
}
Hidenobu Sumioka, Takashi Minato, Kurima Sakai, Shuichi Nishio, Hiroshi Ishiguro, "Motion Design of an Interactive Small Humanoid Robot with Visual Illusion", In The 10th Asia Pacific Conference on Computer Human Interaction, Matsue, Japan, pp. 93-100, August, 2012.
Abstract: We propose a method that enables users to convey nonverbal information, especially their gestures, through a portable robot avatar based on illusory motion. The illusory motion of head nodding is realized with blinking lights for a human-like mobile phone called Elfoid. Two blinking patterns of LEDs are designed based on biological motion and illusory motion from shadows. The patterns are compared to select an appropriate pattern for the illusion of motion in terms of the naturalness of movements and quick perception. The result shows that illusory motions perform better than biological motion. We also test whether the illusory motion of head nodding provides a positive effect compared with just blinking lights. In experiments, subjects engaged in a role-playing game are asked to complain to Elfoids about their unpleasant situation. The results show that the subjects' frustration is eased by Elfoid's illusory head nodding.
BibTeX:
@Inproceedings{Sumioka2012a,
  author    = {Hidenobu Sumioka and Takashi Minato and Kurima Sakai and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Motion Design of an Interactive Small Humanoid Robot with Visual Illusion},
  booktitle = {The 10th Asia Pacific Conference on Computer Human Interaction},
  year      = {2012},
  pages     = {93-100},
  address   = {Matsue, Japan},
  month     = Aug,
  day       = {28-31},
  url       = {http://dl.acm.org/authorize?6720741},
  abstract  = {We propose a method that enables users to convey nonverbal information, especially their gestures, through a portable robot avatar based on illusory motion. The illusory motion of head nodding is realized with blinking lights on a human-like mobile phone called Elfoid. Two LED blinking patterns are designed based on biological motion and on illusory motion from shadows. The patterns are compared, in terms of the naturalness of the movement and the speed of perception, to select an appropriate pattern for the illusion of motion. The results show that the illusory motions perform better than biological motion. We also test whether the illusory motion of head nodding provides a positive effect compared with simple blinking lights. In the experiments, subjects engaged in a role-playing game are asked to complain to Elfoids about an unpleasant situation. The results show that the subjects' frustration is eased by Elfoid's illusory head nodding.},
  file      = {Sumioka2012a.pdf:Sumioka2012a.pdf:PDF},
  keywords  = {telecommunication; nonverbal communication; portable robot avatar; visual illusion of motion},
}
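The APCHI paper above evokes an illusion of head nodding on Elfoid with LED blinking patterns, including one based on a shadow moving over the face. The actual patterns are not reproduced here; the following Python sketch is only a hypothetical illustration of the general idea, generating a brightness sweep in which a dark band travels down a small vertical LED array. The LED count, frame count, and dimming depth are assumptions.

# Hypothetical sketch: generate a brightness sweep over a vertical LED strip so that
# a "shadow" appears to move downward, suggesting a nodding motion.
# LED count, frame timing, and dimming depth are illustrative, not taken from the paper.
NUM_LEDS = 4          # LEDs arranged top (index 0) to bottom (index 3)
FRAMES = 20           # frames in one nod cycle
MAX_BRIGHTNESS = 255

def nod_frames(num_leds: int = NUM_LEDS, frames: int = FRAMES):
    """Yield per-frame brightness lists; a dark band travels from top to bottom."""
    for t in range(frames):
        band = (t / (frames - 1)) * (num_leds - 1)   # position of the shadow band
        frame = []
        for i in range(num_leds):
            # LEDs close to the band are dimmed, producing a moving shadow.
            dim = max(0.0, 1.0 - abs(i - band))
            frame.append(int(MAX_BRIGHTNESS * (1.0 - 0.8 * dim)))
        yield frame

if __name__ == "__main__":
    for frame in nod_frames():
        print(frame)   # in a real device, send these values to the LED driver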
Hiroshi Ishiguro, Shuichi Nishio, Antonio Chella, Rosario Sorbello, Giuseppe Balistreri, Marcello Giardina, Carmelo Cali, "Perceptual Social Dimensions of Human-Humanoid Robot Interaction", In The 12th International Conference on Intelligent Autonomous Systems, Springer Berlin Heidelberg, vol. 194, Jeju International Convention Center, Korea, pp. 409-421, June, 2012.
Abstract: The present paper aims at a descriptive analysis of the main perceptual and social features of the natural conditions of agent interaction, which can be specified per agent in human-humanoid robot interaction. A principled approach to human-robot interaction may be assumed to comply with the natural conditions of agents' overt perceptual and social behaviour. To validate our research we used the minimalistic humanoid robot Telenoid. We conducted human-robot interaction tests with people who had no prior interaction experience with robots. By administering our questionnaire to subjects after well-defined experimental conditions, an analysis of significant variance correlation among dimensions in ordinary and goal-guided contexts of interaction was performed in order to prove that perception and believability are indicators of social interaction and increase the degree of interaction in human-humanoid interaction. The experimental results showed that Telenoid is seen by users as an autonomous agent in its own right rather than a teleoperated artificial agent, and as a believable agent because it acts naturally in response to human actions.
BibTeX:
@Inproceedings{Ishiguro2012,
  author    = {Hiroshi Ishiguro and Shuichi Nishio and Antonio Chella and Rosario Sorbello and Giuseppe Balistreri and Marcello Giardina and Carmelo Cali},
  title     = {Perceptual Social Dimensions of Human-Humanoid Robot Interaction},
  booktitle = {The 12th International Conference on Intelligent Autonomous Systems},
  year      = {2012},
  volume    = {194},
  series    = {Advances in Intelligent Systems and Computing},
  pages     = {409-421},
  address   = {Jeju International Convention Center, Korea},
  month     = Jun,
  publisher = {Springer Berlin Heidelberg},
  day       = {26-29},
  doi       = {10.1007/978-3-642-33932-5_38},
  url       = {http://link.springer.com/chapter/10.1007/978-3-642-33932-5_38},
  abstract  = {The present paper aims at a descriptive analysis of the main perceptual and social features of the natural conditions of agent interaction, which can be specified per agent in human-humanoid robot interaction. A principled approach to human-robot interaction may be assumed to comply with the natural conditions of agents' overt perceptual and social behaviour. To validate our research we used the minimalistic humanoid robot Telenoid. We conducted human-robot interaction tests with people who had no prior interaction experience with robots. By administering our questionnaire to subjects after well-defined experimental conditions, an analysis of significant variance correlation among dimensions in ordinary and goal-guided contexts of interaction was performed in order to prove that perception and believability are indicators of social interaction and increase the degree of interaction in human-humanoid interaction. The experimental results showed that Telenoid is seen by users as an autonomous agent in its own right rather than a teleoperated artificial agent, and as a believable agent because it acts naturally in response to human actions.},
  file      = {Ishiguro2012.pdf:Ishiguro2012.pdf:PDF},
  keywords  = {Telenoid, Geminoid, Human Robot Interaction, Social Robot, Humanoid Robot},
}
Ryuji Yamazaki, Shuichi Nishio, Kohei Ogawa, Hiroshi Ishiguro, Kohei Matsumura, Kensuke Koda, Tsutomu Fujinami, "How Does Telenoid Affect the Communication between Children in Classroom Setting ?", In Extended Abstracts of the Conference on Human Factors in Computing Systems, Austin, Texas, USA, pp. 351-366, May, 2012.
Abstract: Recent advances in robotics have produced robots that are not only autonomous but can also be tele-operated and have humanlike appearances. However, it has not been sufficiently investigated how tele-operated humanoid robots can affect, and be accepted by, people in the real world. In the present study, we investigated how elementary school children accepted Telenoid R1, a tele-operated humanoid robot. We conducted a school-based action research project to explore their responses to the robot. Our research theme was the social aspects that might facilitate communication, and the purpose was problem finding. There have been considerable studies on resolving the disadvantages of remote communication; although face-to-face is usually supposed to be the best way to communicate, we ask whether remote communication can take primacy over face-to-face. As a result of the field experiment in a school, the structure of the children's group work changed and their attitude became more positive than usual. Their spontaneity was brought out and role differentiation occurred among them. Mainly due to Telenoid's limitations, the children changed their attitude and could work cooperatively. The results suggest that remote communication that limits our capabilities could help us learn, and be trained in, effective ways to work more cooperatively than in usual face-to-face settings. Comparing Telenoid with various media and exploring the conditions that promote cooperation remain as future work.
BibTeX:
@Inproceedings{Yamazaki2012,
  author          = {Ryuji Yamazaki and Shuichi Nishio and Kohei Ogawa and Hiroshi Ishiguro and Kohei Matsumura and Kensuke Koda and Tsutomu Fujinami},
  title           = {How Does Telenoid Affect the Communication between Children in Classroom Setting ?},
  booktitle       = {Extended Abstracts of the Conference on Human Factors in Computing Systems},
  year            = {2012},
  pages           = {351-366},
  address         = {Austin, Texas, {USA}},
  month           = May,
  day             = {5-10},
  doi             = {10.1145/2212776.2212814},
  url             = {http://dl.acm.org/authorize?6764060},
  abstract        = {Recent advances in robotics have produced robots that are not only autonomous but can also be tele-operated and have humanlike appearances. However, it has not been sufficiently investigated how tele-operated humanoid robots can affect, and be accepted by, people in the real world. In the present study, we investigated how elementary school children accepted Telenoid R1, a tele-operated humanoid robot. We conducted a school-based action research project to explore their responses to the robot. Our research theme was the social aspects that might facilitate communication, and the purpose was problem finding. There have been considerable studies on resolving the disadvantages of remote communication; although face-to-face is usually supposed to be the best way to communicate, we ask whether remote communication can take primacy over face-to-face. As a result of the field experiment in a school, the structure of the children's group work changed and their attitude became more positive than usual. Their spontaneity was brought out and role differentiation occurred among them. Mainly due to Telenoid's limitations, the children changed their attitude and could work cooperatively. The results suggest that remote communication that limits our capabilities could help us learn, and be trained in, effective ways to work more cooperatively than in usual face-to-face settings. Comparing Telenoid with various media and exploring the conditions that promote cooperation remain as future work.},
  file            = {Yamazaki2012.pdf:Yamazaki2012.pdf:PDF},
  keywords        = {Tele-operation; android; minimal design; human interaction; role differentiation; cooperation},
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "BMI-teleoperation of androids can transfer the sense of body ownership", Poster presentation at Cognitive Neuroscience Society's Annual Meeting, Chicago, Illinois, USA, April, 2012.
Abstract: This work examines whether body ownership transfer can be induced by mind-controlling android robots. Body ownership transfer is an illusion that happens for some people while tele-operating an android. They occasionally feel that the robot's body has become a part of their own body and may feel a touch or a poke on the robot's body or face even in the absence of tactile feedback. Previous studies have demonstrated that this feeling of ownership over an agent's hand can be induced when the robot's hand motions are in perfect synchronization with the operator's motions. However, it was not known whether this occurs due to the agency of the motion or to proprioceptive feedback from the real limb. In this work, however, subjects imagine their own right- or left-hand movement while watching the android's corresponding hand move according to the analysis of their brain activity. Through this research, we investigated whether eliminating proprioceptive feedback from the operator's real limb can result in the illusion of ownership over an external agent's body. Evaluation was made with two measurement methods, a questionnaire and skin conductance response, and results from both methods showed a significant difference in the intensity of bodily feeling transfer when the robot's hands moved according to the participants' imagination.
BibTeX:
@Inproceedings{Alimardani2012,
  author    = {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {{BMI}-teleoperation of androids can transfer the sense of body ownership},
  booktitle = {Cognitive Neuroscience Society's Annual Meeting},
  year      = {2012},
  address   = {Chicago, Illinois, {USA}},
  month     = Apr,
  day       = {1},
  abstract  = {This work examines whether body ownership transfer can be induced by mind-controlling android robots. Body ownership transfer is an illusion that happens for some people while tele-operating an android. They occasionally feel that the robot's body has become a part of their own body and may feel a touch or a poke on the robot's body or face even in the absence of tactile feedback. Previous studies have demonstrated that this feeling of ownership over an agent's hand can be induced when the robot's hand motions are in perfect synchronization with the operator's motions. However, it was not known whether this occurs due to the agency of the motion or to proprioceptive feedback from the real limb. In this work, however, subjects imagine their own right- or left-hand movement while watching the android's corresponding hand move according to the analysis of their brain activity. Through this research, we investigated whether eliminating proprioceptive feedback from the operator's real limb can result in the illusion of ownership over an external agent's body. Evaluation was made with two measurement methods, a questionnaire and skin conductance response, and results from both methods showed a significant difference in the intensity of bodily feeling transfer when the robot's hands moved according to the participants' imagination.},
  file      = {Alimardani2012.pdf:Alimardani2012.pdf:PDF},
}
Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, Norihiro Hagita, "Generation of nodding, head tilting and eye gazing for human-robot dialogue interaction", In ACM/IEEE International Conference on Human Robot Interaction, Boston, USA, pp. 285-292, March, 2012.
Abstract: Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, ``Geminoid F'', a typical humanoid robot with fewer facial degrees of freedom, ``Robovie R2'', and a robot with a 3-axis rotatable neck and movable lips, ``Telenoid R2''). Analysis of subjective scores shows that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only and to directly mapping people's original motions without gaze information. We also find that an upwards motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a consequence, we verify that our generation model performs equally to directly mapping people's original motions with gaze information in terms of perceived naturalness.
BibTeX:
@Inproceedings{Liu2012,
  author          = {Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {Generation of nodding, head tilting and eye gazing for human-robot dialogue interaction},
  booktitle       = {{ACM/IEEE} International Conference on Human Robot Interaction},
  year            = {2012},
  pages           = {285--292},
  address         = {Boston, USA},
  month           = Mar,
  day             = {5-8},
  doi             = {10.1145/2157689.2157797},
  abstract        = {Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, ``Geminoid F'', a typical humanoid robot with less facial degrees of freedom, ``Robovie R2'', and a robot with a 3- axis rotatable neck and movable lips, ``Telenoid R2''). Analysis of subjective scores shows that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only and directly mapping people's original motions without gaze information. We also find that an upwards motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a consequence, we verify that our generation model performs equally to directly mapping people's original motions with gaze information in terms of perceived naturalness.},
  file            = {Liu2012.pdf:Liu2012.pdf:PDF},
  keywords        = {Head motion; dialogue acts; eye gazing; motion generation.},
}
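The head-motion model above is built from rules inferred from analyses of dialogue acts, but those rules are not reproduced in this list. The Python sketch below is only a schematic illustration of how dialogue-act tags might be mapped to nod and tilt commands; the tags, angles, and probabilities are invented for the example.

# Schematic illustration of rule-based nod/tilt generation keyed on dialogue-act tags.
# The tags, angles, and probabilities below are invented for illustration and are not
# the rules reported in the paper.
import random

RULES = {
    "backchannel":   {"nod_deg": 15, "tilt_deg": 0,  "prob": 0.9},
    "question":      {"nod_deg": 0,  "tilt_deg": 10, "prob": 0.6},
    "statement_end": {"nod_deg": 10, "tilt_deg": 0,  "prob": 0.7},
    "filler":        {"nod_deg": 0,  "tilt_deg": 5,  "prob": 0.3},
}

def generate_head_motion(dialogue_act: str, rng: random.Random = random.Random(0)):
    """Return a (nod, tilt) command in degrees for one utterance, or None for no motion."""
    rule = RULES.get(dialogue_act)
    if rule is None or rng.random() > rule["prob"]:
        return None
    return rule["nod_deg"], rule["tilt_deg"]

if __name__ == "__main__":
    for act in ["backchannel", "question", "filler", "greeting"]:
        print(act, "->", generate_head_motion(act))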
田浦康一, 西尾修一, 小川浩平, 石黒浩, "遠隔操作型アンドロイドを用いた身体感覚の同調と視点による影響の検証", HAIシンポジウム, 京都工芸繊維大学, pp. I-2A-2, December, 2011.
Abstract: Previously, we investigated body ownership transfer to a teleoperated android body caused by motion synchronization between robot and operator. Although visual feedback is the only information provided from the robot, due to body ownership transfer some operators feel as if they were touched when the robot's body was touched. This illusion can help operators transfer their presence to the robotic body during teleoperation. By enhancing this phenomenon, we can improve our communication interface and the quality of interaction between operator and interlocutor. This can eventually contribute to a novel treatment method for autistic patients. In this paper, we examined how a change in the operator's perspective affects body ownership transfer during teleoperation. Based on past studies on the rubber hand illusion, it was hypothesized that a perspective change would suppress the body ownership transfer. The results, however, showed that the participants felt body ownership transfer in every perspective condition. This shows that body ownership transfer to teleoperated androids and the rubber hand illusion differ in their generation process.
BibTeX:
@Inproceedings{田浦康一2011,
  author          = {田浦康一 and 西尾修一 and 小川浩平 and 石黒浩},
  title           = {遠隔操作型アンドロイドを用いた身体感覚の同調と視点による影響の検証},
  booktitle       = {{HAI}シンポジウム},
  year            = {2011},
  pages           = {I-2{A}-2},
  address         = {京都工芸繊維大学},
  month           = Dec,
  day             = {3-5},
  url             = {http://www.ii.is.kit.ac.jp/hai2011/proceedings/html/paper/paper-1-2a-2.html},
  etitle          = {Effect of perspective change in body ownership transfer to teleoperated android robot},
  abstract        = {Previously, we investigated body ownership transfer to a teleoperated android body caused by motion synchronization between robot and operator. Although visual feedback is the only information provided from the robot, due to body ownership transfer some operators feel as if they were touched when the robot's body was touched. This illusion can help operators transfer their presence to the robotic body during teleoperation. By enhancing this phenomenon, we can improve our communication interface and the quality of interaction between operator and interlocutor. This can eventually contribute to a novel treatment method for autistic patients. In this paper, we examined how a change in the operator's perspective affects body ownership transfer during teleoperation. Based on past studies on the rubber hand illusion, it was hypothesized that a perspective change would suppress the body ownership transfer. The results, however, showed that the participants felt body ownership transfer in every perspective condition. This shows that body ownership transfer to teleoperated androids and the rubber hand illusion differ in their generation process.},
  file            = {田浦康一2011.pdf:田浦康一2011.pdf:PDF;I-2A-2.pdf:http\://www.ii.is.kit.ac.jp/hai2011/proceedings/pdf/I-2A-2.pdf:PDF},
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "Body ownership transfer to tele-operated android through mind controlling", In HAI-2011, Kyoto Institute of Technology, pp. I-2A-1, December, 2011.
Abstract: This work examines whether body ownership transfer can be induced by mind-controlling android robots. Body ownership transfer is an illusion that happens for some people while tele-operating an android. They occasionally feel that the robot's body has become a part of their own body and may feel a touch or a poke on the robot's body or face even in the absence of tactile feedback. Previous studies have demonstrated that this feeling of ownership over an agent's hand can be induced when the robot's hand motions are in synchronization with the operator's motions. However, it was not known whether this occurs due to the agency of the motion or to proprioceptive feedback from the real hand. In this work, subjects imagine their own right- or left-hand movement while watching the android's corresponding hand move according to the analysis of their brain activity. Through this research, we investigated whether eliminating proprioceptive feedback from the operator's real limb can result in the illusion of ownership over an external agent's body. Evaluation was made with two measurement methods, a questionnaire and skin conductance response, and results from both methods showed a significant difference in the intensity of bodily feeling transfer when the robot's hands moved according to the participants' imagination.
BibTeX:
@Inproceedings{Alimardani2011,
  author          = {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Body ownership transfer to tele-operated android through mind controlling},
  booktitle       = {{HAI}-2011},
  year            = {2011},
  pages           = {I-2{A}-1},
  address         = {Kyoto Institute of Technology},
  month           = Dec,
  day             = {3-5},
  url             = {http://www.ii.is.kit.ac.jp/hai2011/proceedings/html/paper/paper-1-2a-1.html},
  abstract        = {This work examines whether body ownership transfer can be induced by mind-controlling android robots. Body ownership transfer is an illusion that happens for some people while tele-operating an android. They occasionally feel that the robot's body has become a part of their own body and may feel a touch or a poke on the robot's body or face even in the absence of tactile feedback. Previous studies have demonstrated that this feeling of ownership over an agent's hand can be induced when the robot's hand motions are in synchronization with the operator's motions. However, it was not known whether this occurs due to the agency of the motion or to proprioceptive feedback from the real hand. In this work, subjects imagine their own right- or left-hand movement while watching the android's corresponding hand move according to the analysis of their brain activity. Through this research, we investigated whether eliminating proprioceptive feedback from the operator's real limb can result in the illusion of ownership over an external agent's body. Evaluation was made with two measurement methods, a questionnaire and skin conductance response, and results from both methods showed a significant difference in the intensity of bodily feeling transfer when the robot's hands moved according to the participants' imagination.},
  file            = {Alimardani2011.pdf:Alimardani2011.pdf:PDF;I-2A-1.pdf:http\://www.ii.is.kit.ac.jp/hai2011/proceedings/pdf/I-2A-1.pdf:PDF},
}
境くりま, 港隆史, 西尾修一, 石黒浩, "LED点滅による運動錯覚を用いた携帯型アンドロイドの運動錯覚の生成", HAIシンポジウム, 京都工芸繊維大学, pp. II-1B-1, December, 2011.
Abstract: Compared with face-to-face communication, we cannot communicate naturally through media such as cell phones because they cannot transmit human presence. We try to transmit human presence with a small humanoid communication medium, ``Elfoid''. Elfoid is a new information medium that harmonizes humans with the information environment beyond existing personal computers and cellphones, and is designed according to the minimum requirements for expressing humanlike appearance and motion revealed in our past studies. To transmit human presence, Elfoid needs to express motion. However, it is too small to be equipped with actuators. Instead of actuators, we propose a way to have users recognize Elfoid's motion by using a motion illusion created with LED blinking. In this paper, we focus on the nodding motion, which is important in conversation. The experimental results reveal that a blinking pattern that manipulates a shadow on the face naturally elicits an illusion of nodding motion.
BibTeX:
@Inproceedings{境くりま2011,
  author          = {境くりま and 港隆史 and 西尾修一 and 石黒浩},
  title           = {{LED}点滅による運動錯覚を用いた携帯型アンドロイドの運動錯覚の生成},
  booktitle       = {{HAI}シンポジウム},
  year            = {2011},
  pages           = {II-1B-1},
  address         = {京都工芸繊維大学},
  month           = Dec,
  day             = {3-5},
  url             = {http://www.ii.is.kit.ac.jp/hai2011/proceedings/html/paper/paper-2-1b-1.html},
  etitle          = {Creating Motion of Mobile Android by Motion Illusions with LED Blinking},
  abstract        = {Compared with face-to-face communication, we cannot communicate naturally through media such as cell phones because they cannot transmit human presence. We try to transmit human presence with a small humanoid communication medium, ``Elfoid''. Elfoid is a new information medium that harmonizes humans with the information environment beyond existing personal computers and cellphones, and is designed according to the minimum requirements for expressing humanlike appearance and motion revealed in our past studies. To transmit human presence, Elfoid needs to express motion. However, it is too small to be equipped with actuators. Instead of actuators, we propose a way to have users recognize Elfoid's motion by using a motion illusion created with LED blinking. In this paper, we focus on the nodding motion, which is important in conversation. The experimental results reveal that a blinking pattern that manipulates a shadow on the face naturally elicits an illusion of nodding motion.},
  file            = {境くりま2011.pdf:境くりま2011.pdf:PDF;II-1B-1.pdf:http\://www.ii.is.kit.ac.jp/hai2011/proceedings/pdf/II-1B-1.pdf:PDF},
}
Giuseppe Balistreri, Shuichi Nishio, Rosario Sorbello, Antonio Chella, Hiroshi Ishiguro, "A Natural Human Robot Meta-comunication through the Integration of Android's Sensors with Environment Embedded Sensors", In Biologically Inspired Cognitive Architectures 2011- Proceedings of the Second Annual Meeting of the BICA Society, IOS Press, vol. 233, Arlington, Virginia, USA, pp. 26-38, November, 2011.
Abstract: Building robots that closely resemble humans allows us to study phenomena in our daily human-to-human natural interactions that cannot be studied using mechanical-looking robots. This is supported by the fact that human-like devices can more easily elicit the same kind of responses that people use in their natural interactions. However, several studies have shown that there is a strict and complex relationship between outer appearance and the behavior shown by the robot and that, as Masahiro Mori observed, a human-like appearance is not enough to give a positive impression. The robot should behave similarly to humans and should have a sense of perception that enables it to communicate with humans. Our past experience with the android "Geminoid HI-1" demonstrated that the sensors equipping the robot are not enough to perform human-like communication, mainly because of their limited sensing range. To overcome this problem, we endowed the environment around the robot with perceptive capabilities by embedding sensors such as cameras into it. This paper reports a preliminary study on improving the controlling system by integrating cameras in the surrounding environment, so that human-like perception can be provided to the android. The integration of the development of androids and the investigation of human behaviors constitutes a new research area fusing engineering and cognitive sciences.
BibTeX:
@Inproceedings{Balistreri2011a,
  author    = {Giuseppe Balistreri and Shuichi Nishio and Rosario Sorbello and Antonio Chella and Hiroshi Ishiguro},
  title     = {A Natural Human Robot Meta-comunication through the Integration of Android's Sensors with Environment Embedded Sensors},
  booktitle = {Biologically Inspired Cognitive Architectures 2011- Proceedings of the Second Annual Meeting of the {BICA} Society},
  year      = {2011},
  volume    = {233},
  pages     = {26-38},
  address   = {Arlington, Virginia, {USA}},
  month     = Nov,
  publisher = {{IOS} Press},
  day       = {5-6},
  abstract  = {Building robots that closely resemble humans allows us to study phenomena in our daily human-to-human natural interactions that cannot be studied using mechanical-looking robots. This is supported by the fact that human-like devices can more easily elicit the same kind of responses that people use in their natural interactions. However, several studies have shown that there is a strict and complex relationship between outer appearance and the behavior shown by the robot and that, as Masahiro Mori observed, a human-like appearance is not enough to give a positive impression. The robot should behave similarly to humans and should have a sense of perception that enables it to communicate with humans. Our past experience with the android "Geminoid HI-1" demonstrated that the sensors equipping the robot are not enough to perform human-like communication, mainly because of their limited sensing range. To overcome this problem, we endowed the environment around the robot with perceptive capabilities by embedding sensors such as cameras into it. This paper reports a preliminary study on improving the controlling system by integrating cameras in the surrounding environment, so that human-like perception can be provided to the android. The integration of the development of androids and the investigation of human behaviors constitutes a new research area fusing engineering and cognitive sciences.},
  file      = {Balistreri2011a.pdf:Balistreri2011a.pdf:PDF},
  keywords  = {Android; gaze; sensor network},
}
Martin Cooney, Takayuki Kanda, Aris Alissandrakis, Hiroshi Ishiguro, "Interaction Design for an Enjoyable Play Interaction with a Small Humanoid Robot", In IEEE-RAS International Conference on Humanoid Robots (Humanoids), Bled, Slovenia, pp. 112-119, October, 2011.
Abstract: Robots designed to act as companions are expected to be able to interact with people in an enjoyable fashion. In particular, our aim is to enable small companion robots to respond in a pleasant way when people pick them up and play with them. To this end, we developed a gesture recognition system capable of recognizing play gestures which involve a person moving a small humanoid robot's full body ("full-body gestures"). However, such recognition by itself is not enough to provide a nice interaction. In fact, interactions with an initial, naive version of our system frequently fail. The question then becomes: what more is required? I.e., what sort of interaction design is required in order to create successful interactions? To answer this question, we analyze typical failures which occur and compile a list of guidelines. Then, we implement this model in our robot, proposing strategies for how a robot can provide ``reward'' and suggest goals for the interaction. As a consequence, we conduct a validation experiment. We find that our interaction design with ``persisting intentions'' can be used to establish an enjoyable play interaction.
BibTeX:
@Inproceedings{Cooney2011,
  author          = {Martin Cooney and Takayuki Kanda and Aris Alissandrakis and Hiroshi Ishiguro},
  title           = {Interaction Design for an Enjoyable Play Interaction with a Small Humanoid Robot},
  booktitle       = {{IEEE-RAS} International Conference on Humanoid Robots (Humanoids)},
  year            = {2011},
  pages           = {112--119},
  address         = {Bled, Slovenia},
  month           = Oct,
  day             = {26-28},
  abstract        = {Robots designed to act as companions are expected to be able to interact with people in an enjoyable fashion. In particular, our aim is to enable small companion robots to respond in a pleasant way when people pick them up and play with them. To this end, we developed a gesture recognition system capable of recognizing play gestures which involve a person moving a small humanoid robot's full body ("full-body gestures"). However, such recognition by itself is not enough to provide a nice interaction. In fact, interactions with an initial, naive version of our system frequently fail. The question then becomes: what more is required? I.e., what sort of interaction design is required in order to create successful interactions? To answer this question, we analyze typical failures which occur and compile a list of guidelines. Then, we implement this model in our robot, proposing strategies for how a robot can provide ``reward'' and suggest goals for the interaction. As a consequence, we conduct a validation experiment. We find that our interaction design with ``persisting intentions'' can be used to establish an enjoyable play interaction.},
  file            = {Cooney2011.pdf:Cooney2011.pdf:PDF},
  keywords        = {interaction design; enjoyment; playful human-robot interaction; small humanoid robot},
}
Giuseppe Balistreri, Shuichi Nishio, Rosario Sorbello, Hiroshi Ishiguro, "Integrating Built-in Sensors of an Android with Sensors Embedded in the Environment for Studying a More Natural Human-Robot Interaction", In Lecture Notes in Computer Science (12th International Conference of the Italian Association for Artificial Intelligence), Springer, vol. 6934, Palermo, Italy, pp. 432-437, September, 2011.
Abstract: Several studies have shown that there is a strict and complex relationship between outer appearance and the behavior shown by the robot, and that a human-like appearance is not enough to give a positive impression. The robot should behave similarly to humans and should have a sense of perception that enables it to communicate with humans. Our past experience with the android ``Geminoid HI-1'' demonstrated that the sensors equipping the robot are not enough to perform human-like communication, mainly because of their limited sensing range. To overcome this problem, we endowed the environment around the robot with perceptive capabilities by embedding sensors such as cameras into it. This paper reports a preliminary study on improving the controlling system by integrating cameras in the surrounding environment, so that human-like perception can be provided to the android. The integration of the development of androids and the investigation of human behaviors constitutes a new research area fusing engineering and cognitive sciences.
BibTeX:
@Inproceedings{Balistreri2011,
  author    = {Giuseppe Balistreri and Shuichi Nishio and Rosario Sorbello and Hiroshi Ishiguro},
  title     = {Integrating Built-in Sensors of an Android with Sensors Embedded in the Environment for Studying a More Natural Human-Robot Interaction},
  booktitle = {Lecture Notes in Computer Science (12th International Conference of the Italian Association for Artificial Intelligence)},
  year      = {2011},
  volume    = {6934},
  pages     = {432--437},
  address   = {Palermo, Italy},
  month     = Sep,
  publisher = {Springer},
  doi       = {10.1007/978-3-642-23954-0_43},
  url       = {http://www.springerlink.com/content/c015680178436107/},
  abstract  = {Several studies have shown that there is a strict and complex relationship between outer appearance and the behavior shown by the robot, and that a human-like appearance is not enough to give a positive impression. The robot should behave similarly to humans and should have a sense of perception that enables it to communicate with humans. Our past experience with the android ``Geminoid HI-1'' demonstrated that the sensors equipping the robot are not enough to perform human-like communication, mainly because of their limited sensing range. To overcome this problem, we endowed the environment around the robot with perceptive capabilities by embedding sensors such as cameras into it. This paper reports a preliminary study on improving the controlling system by integrating cameras in the surrounding environment, so that human-like perception can be provided to the android. The integration of the development of androids and the investigation of human behaviors constitutes a new research area fusing engineering and cognitive sciences.},
  bibsource = {DBLP, http://dblp.uni-trier.de},
  file      = {Balistreri2011.pdf:Balistreri2011.pdf:PDF},
  keywords  = {Android; gaze; sensor network},
}
Panikos Heracleous, Miki Sato, Carlos T. Ishi, Hiroshi Ishiguro, Norihiro Hagita, "Speech Production in Noisy Environments and the Effect on Automatic Speech Recognition", In International Congress of Phonetic Sciences, Hong Kong, China, pp. 855-858, August, 2011.
Abstract: Speech is bimodal in nature and includes the audio and visual modalities. In addition to acoustic speech perception, speech can be also perceived using visual information provided by the mouth/face (i.e., automatic lipreading). In this study, the visual speech production in noisy environments is investigated. The authors show that the Lombard effect plays an important role not only in audio speech but also in visual speech production. Experimental results show that when visual speech is produced in noisy environments, the visual parameters of the mouth/face change. As a result, the performance of a visual speech recognizer decreases.
BibTeX:
@Inproceedings{Heracleous2011e,
  author          = {Panikos Heracleous and Miki Sato and Carlos T. Ishi and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {Speech Production in Noisy Environments and the Effect on Automatic Speech Recognition},
  booktitle       = {International Congress of Phonetic Sciences},
  year            = {2011},
  pages           = {855--858},
  address         = {Hong Kong, China},
  month           = Aug,
  day             = {18-21},
  abstract        = {Speech is bimodal in nature and includes the audio and visual modalities. In addition to acoustic speech perception, speech can be also perceived using visual information provided by the mouth/face (i.e., automatic lipreading). In this study, the visual speech production in noisy environments is investigated. The authors show that the Lombard effect plays an important role not only in audio speech but also in visual speech production. Experimental results show that when visual speech is produced in noisy environments, the visual parameters of the mouth/face change. As a result, the performance of a visual speech recognizer decreases.},
  file            = {Heracleous2011e.pdf:Heracleous2011e.pdf:PDF;Heracleous.pdf:http\://www.icphs2011.hk/resources/OnlineProceedings/RegularSession/Heracleous/Heracleous.pdf:PDF},
  keywords        = {speech; noisy environments; Lombard effect; lipreading},
}
Kohei Ogawa, Shuichi Nishio, Kensuke Koda, Koichi Taura, Takashi Minato, Carlos T. Ishi, Hiroshi Ishiguro, "Telenoid: Tele-presence android for communication", In SIGGRAPH Emerging Technology, Vancouver, Canada, pp. 15, August, 2011.
Abstract: In this research, a new telecommunication system called "Telenoid" is presented, which focuses on the idea of transferring a human's "presence". Telenoid was developed to appear and behave as a minimal design of human features. A minimal human conveys the impression of human existence at first glance, but it does not suggest anything about personal features such as being male or female, old or young. Previously, an android with more realistic features, called Geminoid, was proposed. However, because of its unique appearance, which is a copy of a particular model, it is difficult to imagine other people's presence through Geminoid while they are operating it. Telenoid, on the other hand, is designed to hold an anonymous identity, which allows people to communicate with acquaintances far away regardless of gender and age. We expect that Telenoid can be used as a medium that transfers a human's presence through its minimal feature design.
BibTeX:
@Inproceedings{Ogawa2011a,
  author          = {Kohei Ogawa and Shuichi Nishio and Kensuke Koda and Koichi Taura and Takashi Minato and Carlos T. Ishi and Hiroshi Ishiguro},
  title           = {Telenoid: Tele-presence android for communication},
  booktitle       = {{SIGGRAPH} Emerging Technology},
  year            = {2011},
  pages           = {15},
  address         = {Vancouver, Canada},
  month           = Aug,
  day             = {7-11},
  doi             = {10.1145/2048259.2048274},
  url             = {http://dl.acm.org/authorize?6594082},
  abstract        = {In this research, a new telecommunication system called "Telenoid" is presented, which focuses on the idea of transferring a human's "presence". Telenoid was developed to appear and behave as a minimal design of human features. A minimal human conveys the impression of human existence at first glance, but it does not suggest anything about personal features such as being male or female, old or young. Previously, an android with more realistic features, called Geminoid, was proposed. However, because of its unique appearance, which is a copy of a particular model, it is difficult to imagine other people's presence through Geminoid while they are operating it. Telenoid, on the other hand, is designed to hold an anonymous identity, which allows people to communicate with acquaintances far away regardless of gender and age. We expect that Telenoid can be used as a medium that transfers a human's presence through its minimal feature design.},
  file            = {Ogawa2011a.pdf:Ogawa2011a.pdf:PDF},
}
Carlos T. Ishi, Chaoran Liu, Hiroshi Ishiguro, Norihiro Hagita, "Speech-driven lip motion generation for tele-operated humanoid robots", In the International Conference on Audio-Visual Speech Processing 2011, Volterra, Italy, pp. 131-135, August, 2011.
Abstract: To automatically generate the lip motions of a tele-operated humanoid robot (such as an android) from the utterances of the operator, we developed a speech-driven lip motion generation method. The proposed method is based on the rotation of the vowel space, given by the first and second formants, around the center vowel, and on a mapping to lip opening degrees. The method requires the calibration of only one parameter for speaker normalization, so no other model training is required. In a pilot experiment, the proposed audio-based method was perceived as more natural than vision-based approaches, regardless of the language.
BibTeX:
@Inproceedings{Ishi2011a,
  author          = {Carlos T. Ishi and Chaoran Liu and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {Speech-driven lip motion generation for tele-operated humanoid robots},
  booktitle       = {the International Conference on Audio-Visual Speech Processing 2011},
  year            = {2011},
  pages           = {131-135},
  address         = {Volterra, Italy},
  month           = Aug,
  day             = {31-3},
  abstract        = {To automatically generate the lip motions of a tele-operated humanoid robot (such as an android) from the utterances of the operator, we developed a speech-driven lip motion generation method. The proposed method is based on the rotation of the vowel space, given by the first and second formants, around the center vowel, and on a mapping to lip opening degrees. The method requires the calibration of only one parameter for speaker normalization, so no other model training is required. In a pilot experiment, the proposed audio-based method was perceived as more natural than vision-based approaches, regardless of the language.},
  file            = {Ishi2011a.pdf:pdf/Ishi2011a.pdf:PDF},
  keywords        = {lip motion; formant; humanoid robot; tele-operation; synchronization},
}
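The lip-motion method above rotates the vowel space given by the first and second formants around a center vowel and maps the result to lip-opening degrees, with a single speaker-normalization parameter. The Python sketch below only illustrates that general shape under assumed center-vowel formants, rotation angle, and scaling; it is not the paper's calibration.

# Illustrative sketch of mapping (F1, F2) formants to a lip-opening degree in [0, 1].
# The center-vowel formants, rotation angle, and scaling are assumptions for the sketch,
# not the calibration described in the paper.
import math

F1_CENTER, F2_CENTER = 500.0, 1500.0   # assumed formants of a "center" vowel (Hz)
ROTATION_RAD = math.radians(-20.0)     # assumed rotation of the vowel space

def lip_opening(f1: float, f2: float, speaker_scale: float = 700.0) -> float:
    """Map formants to a lip-opening degree; speaker_scale stands in for the single
    per-speaker normalization parameter assumed by this sketch."""
    # Shift so the center vowel sits at the origin, then rotate the vowel space.
    x, y = f1 - F1_CENTER, f2 - F2_CENTER
    rotated = x * math.cos(ROTATION_RAD) - y * math.sin(ROTATION_RAD)
    # Larger rotated F1-like values roughly correspond to a more open mouth.
    opening = 0.5 + rotated / speaker_scale
    return min(1.0, max(0.0, opening))

if __name__ == "__main__":
    for vowel, (f1, f2) in {"i": (280, 2250), "a": (850, 1200), "u": (310, 870)}.items():
        print(vowel, round(lip_opening(f1, f2), 2))   # "a" should come out most open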
Panikos Heracleous, Hiroshi Ishiguro, Norihiro Hagita, "Visual-speech to text conversion applicable to telephone communication for deaf individuals", In International Conference on Telecommunications, Ayia Napa, Cyprus, pp. 130-133, May, 2011.
Abstract: Access to communication technologies has become essential for handicapped people. This study introduces the initial step of an automatic translation system able to translate visual speech used by deaf individuals into text or auditory speech. Such a system would enable deaf users to communicate with each other and with normal-hearing people through telephone networks or the Internet using only telephone devices equipped with simple cameras. In particular, this paper introduces automatic recognition and conversion to text of Cued Speech for French. Cued Speech is a visual mode used for communication in the deaf community. Using hand shapes placed in different positions near the face as a complement to lipreading, all the sounds of a spoken language can be visually distinguished and perceived. Experimental results show high recognition rates for both isolated word and continuous phoneme recognition experiments in Cued Speech for French.
BibTeX:
@Inproceedings{Heracleous2011f,
  author    = {Panikos Heracleous and Hiroshi Ishiguro and Norihiro Hagita},
  title     = {Visual-speech to text conversion applicable to telephone communication for deaf individuals},
  booktitle = {International Conference on Telecommunications},
  year      = {2011},
  pages     = {130--133},
  address   = {Ayia Napa, Cyprus},
  month     = May,
  day       = {8-11},
  doi       = {10.1109/CTS.2011.5898904},
  url       = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5898904},
  abstract  = {Access to communication technologies has become essential for handicapped people. This study introduces the initial step of an automatic translation system able to translate visual speech used by deaf individuals into text or auditory speech. Such a system would enable deaf users to communicate with each other and with normal-hearing people through telephone networks or the Internet using only telephone devices equipped with simple cameras. In particular, this paper introduces automatic recognition and conversion to text of Cued Speech for French. Cued Speech is a visual mode used for communication in the deaf community. Using hand shapes placed in different positions near the face as a complement to lipreading, all the sounds of a spoken language can be visually distinguished and perceived. Experimental results show high recognition rates for both isolated word and continuous phoneme recognition experiments in Cued Speech for French.},
  file      = {Heracleous2011f.pdf:Heracleous2011f.pdf:PDF},
}
Panikos Heracleous, Norihiro Hagita, "Automatic Recognition of Speech without any audio information", In IEEE International Conference on Acoustics, Speech and Signal Processing, Prague, Czech Republic, pp. 2392-2395, May, 2011.
Abstract: This article introduces automatic recognition of speech without any audio information. Movements of the tongue, lips, and jaw are tracked by an Electro-Magnetic Articulography (EMA) device and are used as features to create hidden Markov models (HMMs) and conduct automatic speech recognition in a conventional way. The results obtained are promising and confirm that phonetic features characterizing articulation are as discriminating as those characterizing acoustics (except for voicing). The results also show that using tongue parameters results in higher accuracy than using the lip parameters.
BibTeX:
@Inproceedings{Heracleous2011a,
  author    = {Panikos Heracleous and Norihiro Hagita},
  title     = {Automatic Recognition of Speech without any audio information},
  booktitle = {{IEEE} International Conference on Acoustics, Speech and Signal Processing},
  year      = {2011},
  pages     = {2392--2395},
  address   = {Prague, Czech Republic},
  month     = May,
  day       = {22-27},
  doi       = {10.1109/ICASSP.2011.5946965},
  url       = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5946965},
  abstract  = {This article introduces automatic recognition of speech without any audio information. Movements of the tongue, lips, and jaw are tracked by an Electro-Magnetic Articulography ({EMA}) device and are used as features to create hidden Markov models ({HMM}s) and conduct automatic speech recognition in a conventional way. The results obtained are promising and confirm that phonetic features characterizing articulation are as discriminating as those characterizing acoustics (except for voicing). The results also show that using tongue parameters results in higher accuracy than using the lip parameters.},
  file      = {Heracleous2011a.pdf:Heracleous2011a.pdf:PDF},
}
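The recognizer above builds conventional HMMs from EMA-tracked tongue, lip, and jaw movements. As a minimal sketch of that kind of pipeline, assuming the hmmlearn library, synthetic feature sequences, and arbitrary model sizes rather than the authors' setup, the Python code below trains one Gaussian HMM per word class on articulatory feature vectors and classifies a test sequence by log-likelihood.

# Minimal sketch: one Gaussian HMM per word class over articulatory feature vectors
# (e.g., x/y coordinates of tongue, lip, and jaw coils). Data here is synthetic and the
# shapes/parameters are assumptions, not the authors' configuration.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
N_FEATURES = 6            # e.g., x/y coordinates of three articulator coils
FRAMES_PER_UTTERANCE = 40

def synthetic_utterances(offset: float, n_utts: int = 20) -> list:
    """Generate toy articulatory trajectories for one word class."""
    return [offset + rng.normal(scale=0.3, size=(FRAMES_PER_UTTERANCE, N_FEATURES))
            for _ in range(n_utts)]

def train_word_model(utterances: list) -> GaussianHMM:
    """Fit a left-to-right-agnostic Gaussian HMM on concatenated utterances."""
    X = np.vstack(utterances)
    lengths = [len(u) for u in utterances]
    model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    model.fit(X, lengths)
    return model

if __name__ == "__main__":
    models = {word: train_word_model(synthetic_utterances(offset))
              for word, offset in {"yes": 0.0, "no": 1.0}.items()}
    test = synthetic_utterances(1.0, n_utts=1)[0]          # an unseen "no"-like utterance
    scores = {word: m.score(test) for word, m in models.items()}
    print(max(scores, key=scores.get))                      # expected: "no"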
Panikos Heracleous, Miki Sato, Carlos Toshinori Ishi, Hiroshi Ishiguro, Norihiro Hagita, "The effect of environmental noise to automatic lip-reading", In Spring Meeting Acoustical Society of Japan, Waseda University, Tokyo, Japan, pp. 5-8, March, 2011.
Abstract: In automatic visual speech recognition, verbal messages can be interpreted by monitoring a talker's lip and facial movements using automated tools based on statistical methods (i.e., automatic visual speech recognition). Automatic visual speech recognition has applications in audiovisual speech recognition and in lip shape synthesis. This study investigates the automatic visual and audiovisual speech recognition in the presence of noise. The authors show that the Lombard effect plays an important role not only in audio, but also in automatic visual speech recognition. Experimental results of a multispeaker continuous phoneme recognition experiment show that the performance of a visual and an audiovisual speech recognition system further increases when the visual Lombard effect is also considered.
BibTeX:
@Inproceedings{Heracleous2011c,
  author          = {Panikos Heracleous and Miki Sato and Carlos Toshinori Ishi and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {The effect of environmental noise to automatic lip-reading},
  booktitle       = {Spring Meeting Acoustical Society of Japan},
  year            = {2011},
  series          = {1-5-3},
  pages           = {5--8},
  address         = {Waseda University, Tokyo, Japan},
  month           = Mar,
  abstract        = {In automatic visual speech recognition, verbal messages can be interpreted by monitoring a talker's lip and facial movements using automated tools based on statistical methods (i.e., automatic visual speech recognition). Automatic visual speech recognition has applications in audiovisual speech recognition and in lip shape synthesis. This study investigates the automatic visual and audiovisual speech recognition in the presence of noise. The authors show that the Lombard effect plays an important role not only in audio, but also in automatic visual speech recognition. Experimental results of a multispeaker continuous phoneme recognition experiment show that the performance of a visual and an audiovisual speech recognition system further increases when the visual Lombard effect is also considered.},
  file            = {Heracleous2011c.pdf:Heracleous2011c.pdf:PDF},
}
Astrid M. von der Pütten, Nicole C. Krämer, Christian Becker-Asano, Hiroshi Ishiguro, "An Android in the Field", In the 6th ACM/IEEE International Conference on Human-Robot Interaction, Lausanne, Switzerland, pp. 283-284, March, 2011.
Abstract: Since most robots are not easily displayable in real-life scenarios, only a few studies investigate users' behavior towards humanoids or androids in a natural environment. We present an observational field study and data on unscripted interactions between humans and the android robot "Geminoid HI-1". First results show that almost half of the subjects mistook Geminoid HI-1 for a human. Even those who recognized the android as a robot showed interest rather than negative emotions and explored the robot's capabilities.
BibTeX:
@Inproceedings{Putten2011,
  author    = {Astrid M. von der P\"{u}tten and Nicole C. Kr\"{a}mer and Christian Becker-Asano and Hiroshi Ishiguro},
  title     = {An Android in the Field},
  booktitle = {the 6th {ACM/IEEE} International Conference on Human-Robot Interaction},
  year      = {2011},
  pages     = {283--284},
  address   = {Lausanne, Switzerland},
  month     = Mar,
  day       = {6-9},
  doi       = {10.1145/1957656.1957772},
  abstract  = {Since most robots are not easily displayable in real-life scenarios, only a few studies investigate users' behavior towards humanoids or androids in a natural environment. We present an observational field study and data on unscripted interactions between humans and the android robot "Geminoid HI-1". First results show that almost half of the subjects mistook Geminoid HI-1 for a human. Even those who recognized the android as a robot showed interest rather than negative emotions and explored the robot's capabilities.},
}
松下光次郎, Maryam Alimardani, 山本知幸, "P300-BMIによる実空間オブジェクトのポインティング装置", インタラクション, 東京, pp. 343-346, March, 2011.
Abstract: 頭皮脳波の分析を利用したブレインマシンインターフェイスであるP300-BMIを応用して実空間のオブジェクトを指定するポインティング装置を開発した。ディスプレイ上に点滅する文字などの視覚刺激を選択してスペリングなどを行う一般的なP300-BMIと異なり、実世界の空間内にフラッシャーを配置してオブジェクトを選択するが、ディスプレイ上のような黒バックを使用せずとも視覚刺激の弁別が可能であることが実験により検証された。BMIの新たな応用として、日常的に使用することができるインターフェイスを提案する。
BibTeX:
@Inproceedings{松下光次郎2011,
  author          = {松下光次郎 and Maryam Alimardani and 山本知幸},
  title           = {{P300-BMI}による実空間オブジェクトのポインティング装置},
  booktitle       = {インタラクション},
  year            = {2011},
  pages           = {343--346},
  address         = {東京},
  month           = Mar,
  etitle          = {A Pointing Device for Real World Objects Using {P300-BMI}},
  abstract        = {頭皮脳波の分析を利用したブレインマシンインターフェイスであるP300-BMIを応用して実空間のオブジェクトを指定するポインティング装置を開発した。ディスプレイ上に点滅する文字などの視覚刺激を選択してスペリングなどを行う一般的なP300-BMIと異なり、実世界の空間内にフラッシャーを配置してオブジェクトを選択するが、ディスプレイ上のような黒バックを使用せずとも視覚刺激の弁別が可能であることが実験により検証された。BMIの新たな応用として、日常的に使用することができるインターフェイスを提案する。},
  eabstract       = {In this research, we propose a novel pointing device based on P300-BMI. This device presents an extension of the conventional P300-BMI ``P300-Speller'' from a 2D display to 3D life space, by using LED flashing markers as visual stimuli. A performance comparison between the proposed P300-BMI and a typical P300-Speller was made, and experimental results showed the same selection accuracy for the two methods. This proves that LED-flashing markers can work effectively in 3D life space despite major disturbances in the visual field. It is therefore concluded that the proposed device can potentially contribute to the realization of a practical P300-BMI in 3D life space.},
  file            = {松下光次郎2011.pdf:松下光次郎2011.pdf:PDF},
}
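The pointing device above selects real-world objects from EEG responses to LED flashers using the P300 component. The paper's classifier is not described in this list; the Python sketch below shows only a generic target-selection scheme, averaging the epochs time-locked to each flasher and picking the largest mean amplitude in a 250-450 ms window, with an assumed sampling rate and synthetic single-channel data.

# Generic sketch of P300 target selection: average EEG epochs per flashing target and
# choose the target whose average shows the largest amplitude in the P300 window.
# Sampling rate, window, and the synthetic data are assumptions, not from the paper.
import numpy as np

FS = 250                       # assumed sampling rate (Hz)
EPOCH_SAMPLES = FS             # 1-second epochs after each flash
P300_WINDOW = slice(int(0.25 * FS), int(0.45 * FS))   # 250-450 ms after the flash

rng = np.random.default_rng(1)

def synthetic_epochs(is_target: bool, n_epochs: int = 15) -> np.ndarray:
    """Toy single-channel epochs; target epochs contain a small positive deflection."""
    epochs = rng.normal(scale=1.0, size=(n_epochs, EPOCH_SAMPLES))
    if is_target:
        epochs[:, P300_WINDOW] += 0.8      # simulated P300 component
    return epochs

def select_target(epochs_per_flasher: dict) -> str:
    """Return the flasher whose averaged response is largest in the P300 window."""
    scores = {name: ep.mean(axis=0)[P300_WINDOW].mean()
              for name, ep in epochs_per_flasher.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    data = {"lamp": synthetic_epochs(False), "door": synthetic_epochs(True),
            "fan": synthetic_epochs(False)}
    print(select_target(data))             # expected: "door"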
Ilona Straub, Shuichi Nishio, Hiroshi Ishiguro, "Incorporated identity in interaction with a teleoperated android robot: A case study", In IEEE International Symposium on Robot and Human Interactive Communication, Viareggio, Italy, pp. 139-144, September, 2010.
Abstract: In the near future, artificial social agents, embodied as virtual agents or as robots with a humanoid appearance, will be placed in public settings and used as interaction tools. Considering the uncanny-valley effect and images of robots as a threat to humanity, a study of the acceptance and handling of such an interaction tool by the broad public is of great interest. The following study is based on qualitative methods of interaction analysis, focusing on the ways people control or perceive a teleoperated android robot in an open public space. This field study shows the tendency of users to ascribe an identity of its own to the teleoperated android robot Geminoid HI-1, independent of the identity of the controlling person. Both sides of the interaction unit were analyzed for 1) verbal cues about identity presentation on the side of the teleoperator controlling the robot and 2) verbal cues about identity perception of Geminoid HI-1 on the side of the interlocutor talking to the robot. The study reveals identity creation, identity switching, identity mediation and identity imitation of the teleoperators' own identity cues, and the interlocutors' use of metaphorical language showing forms of anthropomorphizing and mentalizing the android robot during interaction. Both sides of the interaction unit thus confer an `incorporated identity' on the android robot Geminoid HI-1 and reveal tendencies to treat the android robot as a social agent.
BibTeX:
@Inproceedings{Straub2010a,
  author          = {Ilona Straub and Shuichi Nishio and Hiroshi Ishiguro},
  title           = {Incorporated identity in interaction with a teleoperated android robot: A case study},
  booktitle       = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year            = {2010},
  pages           = {139--144},
  address         = {Viareggio, Italy},
  month           = Sep,
  doi             = {10.1109/ROMAN.2010.5598695},
  url             = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5598695},
  abstract        = {In the near future, artificial social agents, embodied as virtual agents or as robots with a humanoid appearance, will be placed in public settings and used as interaction tools. Considering the uncanny-valley effect and images of robots as a threat to humanity, a study of the acceptance and handling of such an interaction tool by the broad public is of great interest. The following study is based on qualitative methods of interaction analysis, focusing on the ways people control or perceive a teleoperated android robot in an open public space. This field study shows the tendency of users to ascribe an identity of its own to the teleoperated android robot Geminoid HI-1, independent of the identity of the controlling person. Both sides of the interaction unit were analyzed for 1) verbal cues about identity presentation on the side of the teleoperator controlling the robot and 2) verbal cues about identity perception of Geminoid HI-1 on the side of the interlocutor talking to the robot. The study reveals identity creation, identity switching, identity mediation and identity imitation of the teleoperators' own identity cues, and the interlocutors' use of metaphorical language showing forms of anthropomorphizing and mentalizing the android robot during interaction. Both sides of the interaction unit thus confer an `incorporated identity' on the android robot Geminoid HI-1 and reveal tendencies to treat the android robot as a social agent.},
  file            = {Straub2010a.pdf:Straub2010a.pdf:PDF},
  issn            = {1944-9445},
  keywords        = {Geminoid HI-1;artificial social agent robot;identity-creation;identity-imitation;identity-mediation;identity-switching;interaction tool analysis;metaphorical language;qualitative methods;teleoperated android robot;virtual agents;human-robot interaction;humanoid robots;telerobotics;},
}
Christian Becker-Asano, Kohei Ogawa, Shuichi Nishio, Hiroshi Ishiguro, "Exploring the uncanny valley with Geminoid HI-1 in a real-world application", In IADIS International Conference on Interfaces and Human Computer Interaction, Freiburg, Germany, pp. 121-128, July, 2010.
Abstract: This paper presents a qualitative analysis of 24 interviews with visitors of the ARS Electronica festival in September 2009 in Linz, Austria, who interacted with the android robot Geminoid HI-1, while it was tele-operated by the first author. Only 37.5% of the interviewed visitors reported an uncanny feeling with 29% even enjoying the conversation. In five cases the interviewees' feelings even changed during the interaction with Geminoid HI-1. A number of possible improvements regarding Geminoid's bodily movements, facial expressivity, and ability to direct its gaze became apparent, which inform our future research with and development of android robots.
BibTeX:
@Inproceedings{Becker-Asano2010,
  author    = {Christian Becker-Asano and Kohei Ogawa and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Exploring the uncanny valley with Geminoid {HI}-1 in a real-world application},
  booktitle = {{IADIS} International Conference on Interfaces and Human Computer Interaction},
  year      = {2010},
  pages     = {121--128},
  address   = {Freiburg, Germany},
  month     = Jul,
  url       = {http://www.iadisportal.org/digital-library/exploring-the-uncanny-valley-with-geminoid-hi-1-in-a-real-world-application},
  abstract  = {This paper presents a qualitative analysis of 24 interviews with visitors of the ARS Electronica festival in September 2009 in Linz, Austria, who interacted with the android robot Geminoid {HI-1}, while it was tele-operated by the first author. Only 37.5\% of the interviewed visitors reported an uncanny feeling with 29\% even enjoying the conversation. In five cases the interviewees' feelings even changed during the interaction with Geminoid {HI-1}. A number of possible improvements regarding Geminoid's bodily movements, facial expressivity, and ability to direct its gaze became apparent, which inform our future research with and development of android robots.},
  file      = {Becker-Asano2010.pdf:Becker-Asano2010.pdf:PDF},
}
Ilona Straub, Shuichi Nishio, Hiroshi Ishiguro, "Incorporated Identity in Interaction with a Teleoperated Android Robot: A Case Study", In International Conference on Culture and Computing, Kyoto, Japan, pp. 63-75, February, 2010.
Abstract: In the near future, artificial social agents embodied as virtual agents or as robots with humanoid appearance will be placed in public settings and used as interaction tools. Considering the uncanny-valley effect or images of robots as a threat to humanity, a study of the acceptance and handling of such an interaction tool by the broad public is of great interest. The following study is based on qualitative methods of interaction analysis, focusing on tendencies in people's ways of controlling or perceiving a teleoperated android robot in an open public space. This field study shows the tendency of users to ascribe a distinct identity to the teleoperated android robot Geminoid HI-1, one that is independent of the identity of the controlling person. Both sides of the interaction unit were analyzed for 1) verbal cues about identity presentation on the side of the teleoperator controlling the robot and 2) verbal cues about identity perception of Geminoid HI-1 on the side of the interlocutor talking to the robot. The study reveals identity-creation, identity-switching, identity-mediation and identity-imitation of the teleoperators' own identity cues, as well as the interlocutors' use of metaphorical language that anthropomorphizes and mentalizes the android robot during interaction. Both sides of the interaction unit thus confer an `incorporated identity' on the android robot Geminoid HI-1 and reveal a tendency to treat the android robot as a social agent.
BibTeX:
@Inproceedings{Straub2010,
  author    = {Ilona Straub and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Incorporated Identity in Interaction with a Teleoperated Android Robot: A Case Study},
  booktitle = {International Conference on Culture and Computing},
  year      = {2010},
  pages     = {63--75},
  address   = {Kyoto, Japan},
  month     = Feb,
  abstract  = {In near future artificial social agents embodied as virtual agents or as robots with humanoid appearance, will be placed in public settings and used as interaction tools. Considering the uncanny-valley-effect or images of robots as threat for humanity, a study about the acceptance and handling of such an interaction tool in the broad public is of great interest. The following study is based on qualitative methods of interaction analysis focusing on tendencies of peoples' ways to control or perceive a teleoperated android robot in an open public space. This field study shows the tendency of the users to ascribe an own identity to the teleoperated android robot Geminoid HI-1, which is independent from the identity of the controlling person. Both sides of the interaction unit were analyzed for 1) verbal cues about identity presentation on the side of the teleoperator, controlling the robot and for 2) verbal cues about identity perception of Geminoid HI-1 from the side of the interlocutor talking to the robot. The study unveils identity-creation, identity-switching, identity-mediation and identity-imitation of the teleoperators' own identity cues and the use of metaphorical language of the interlocutors showing forms to anthropomorphize and mentalize the android robot whilst interaction. Both sides of the interaction unit thus confer an `incorporated identity' towards the android robot Geminoid HI-1 and unveil tendencies to treat the android robot as social agent.},
  file      = {Straub2010.pdf:Straub2010.pdf:PDF},
}
Christian Becker-Asano, Hiroshi Ishiguro, "Laughter in Social Robotics - no laughing matter", In International Workshop on Social Intelligence Design, Kyoto, Japan, pp. 287-300, November, 2009.
Abstract: In this paper we describe our work in progress on investigating an understudied aspect of social interaction, namely laughter. In social interaction between humans laughter occurs in a variety of contexts featuring diverse meanings and connotations. Thus, we started to investigate the usefulness of this auditory and behavioral signal applied to social robotics. We first report on results of two surveys conducted to assess the subjectively evaluated naturalness of different types of laughter applied to two humanoid robots. Then we describe the effects of laughter when combined with an android's motion and presented to uninformed participants, during playful interaction with another human. In essence, we learned that the social effect of laughter heavily depends on at least the following three factors: First, the situational context, which is not only determined by the task at hand, but also by linguistic content as well as non-verbal expressions; second, the type and quality of laughter synthesis in combination with an artificial laugher's outer appearance; and third, the interaction dynamics, which is partly depending on a perceiver's gender, personality, and cultural as well as educational background.
BibTeX:
@Inproceedings{Becker-Asano2009,
  author          = {Christian Becker-Asano and Hiroshi Ishiguro},
  title           = {Laughter in Social Robotics - no laughing matter},
  booktitle       = {International Workshop on Social Intelligence Design},
  year            = {2009},
  pages           = {287--300},
  address         = {Kyoto, Japan},
  month           = Nov,
  url             = {http://www.becker-asano.de/SID09_LaughterInSocialRoboticsCameraReady.pdf},
  abstract        = {In this paper we describe our work in progress on investigating an understudied aspect of social interaction, namely laughter. In social interaction between humans laughter occurs in a variety of contexts featuring diverse meanings and connotations. Thus, we started to investigate the usefulness of this auditory and behavioral signal applied to social robotics. We first report on results of two surveys conducted to assess the subjectively evaluated naturalness of different types of laughter applied to two humanoid robots. Then we describe the effects of laughter when combined with an android's motion and presented to uninformed participants, during playful interaction with another human. In essence, we learned that the social effect of laughter heavily depends on at least the following three factors: First, the situational context, which is not only determined by the task at hand, but also by linguistic content as well as non-verbal expressions; second, the type and quality of laughter synthesis in combination with an artificial laugher's outer appearance; and third, the interaction dynamics, which is partly depending on a perceiver's gender, personality, and cultural as well as educational background.},
  file            = {Becker-Asano2009.pdf:Becker-Asano2009.pdf:PDF},
  keywords        = {Affective Computing; Natural Interaction; Laughter; Social Robotics.},
}
Kohei Ogawa, Christoph Bartneck, Daisuke Sakamoto, Takayuki Kanda, Tetsuo Ono, Hiroshi Ishiguro, "Can an android persuade you?", In IEEE International Symposium on Robot and Human Interactive Communication, Toyama, Japan, pp. 516-521, September, 2009.
Abstract: The first robotic copies of real humans have become available. They enable their users to be physically present in multiple locations at the same time. This study investigates what influence the embodiment of an agent has on its persuasiveness and its perceived personality. Is a robotic copy as persuasive as its human counterpart? Does it have the same personality? We performed an experiment in which the embodiment of the agent was the independent variable and the persuasiveness and perceived personality were the dependent measurement. The persuasive agent advertised a Bluetooth headset. The results show that an android is found to be as persuasive as a real human or a video recording of a real human. The personality of the participant had a considerable influence on the measurements. Participants that were more open to new experiences rated the persuasive agent lower on agreeableness and extroversion. They were also more willing to spend money on the advertised product.
BibTeX:
@Inproceedings{Ogawa2009,
  author    = {Kohei Ogawa and Christoph Bartneck and Daisuke Sakamoto and Takayuki Kanda and Tetsuo Ono and Hiroshi Ishiguro},
  title     = {Can an android persuade you?},
  booktitle = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year      = {2009},
  pages     = {516--521},
  address   = {Toyama, Japan},
  month     = Sep,
  doi       = {10.1109/ROMAN.2009.5326352},
  url       = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5326352},
  abstract  = {The first robotic copies of real humans have become available. They enable their users to be physically present in multiple locations at the same time. This study investigates what influence the embodiment of an agent has on its persuasiveness and its perceived personality. Is a robotic copy as persuasive as its human counterpart? Does it have the same personality? We performed an experiment in which the embodiment of the agent was the independent variable and the persuasiveness and perceived personality were the dependent measurement. The persuasive agent advertised a Bluetooth headset. The results show that an android is found to be as persuasive as a real human or a video recording of a real human. The personality of the participant had a considerable influence on the measurements. Participants that were more open to new experiences rated the persuasive agent lower on agreeableness and extroversion. They were also more willing to spend money on the advertised product.},
  file      = {Ogawa2009.pdf:Ogawa2009.pdf:PDF},
  issn      = {1944-9445},
  keywords  = {Bluetooth headset;human counterpart;persuasive agent;persuasive android robot;robotic copy;Bluetooth;humanoid robots;},
}
小川浩平, Christoph Bartneck, 坂本大介, 神田崇之, 小野哲雄, 石黒浩, "コマーシャルエージェントとしてのアンドロイドの可能性", HAIシンポジウム, 慶応義塾大学, pp. 2B-3, December, 2008.
Abstract: 近年,人間の姿を完全にコピーしたロボットが登場した.そこで我々は,コピー元の人間,コピーロボット,コピー元の人間の姿を撮影したビデオ,という3種類のコマーシャルエージェントの身体性の違いが,被験者のエージェントに対する認知や態度変容にどのような影響を与えるかを検証した.その結果エージェントが持つ身体性の違いが,被験者の態度変容やエージェントへの印象に対して一定の影響を与えることが分かった.
BibTeX:
@Inproceedings{小川浩平2008,
  author          = {小川浩平 and Christoph Bartneck and 坂本大介 and 神田崇之 and 小野哲雄 and 石黒浩},
  title           = {コマーシャルエージェントとしてのアンドロイドの可能性},
  booktitle       = {{HAI}シンポジウム},
  year            = {2008},
  pages           = {2B-3},
  address         = {慶応義塾大学},
  month           = Dec,
  url             = {http://www.ii.is.kit.ac.jp/hai2011/proceedings/HAI2008/html/paper/paper-2b-3.html},
  etitle          = {Possibilities of an Android as a Commercial Agent},
  abstract        = {近年,人間の姿を完全にコピーしたロボットが登場した.そこで我々は,コピー元の人間,コピーロボット,コピー元の人間の姿を撮影したビデオ,という3種類のコマーシャルエージェントの身体性の違いが,被験者のエージェントに対する認知や態度変容にどのような影響を与えるかを検証した.その結果エージェントが持つ身体性の違いが,被験者の態度変容やエージェントへの印象に対して一定の影響を与えることが分かった.},
  eabstract       = {In recent years, androids that are exact copies of real humans have become available. With such a robot, we have the opportunity to exist in multiple locations at the same time. The purpose of this study is to investigate how the embodiment of a persuasive agent affects humans' change of attitude and perception of personality. Does the copy android have the same persuasive power as the original human? We conducted an experiment to investigate this question. The persuasive agents presented a product, a Bluetooth headset, to the participants. We asked the participants the value of the product presented by the persuasive agent. The results of the experiment indicate that a persuasive agent with an android appearance might be the most efficient commercial medium for the participants.},
  file            = {小川浩平2008.pdf:小川浩平2008.pdf:PDF;2b-3.pdf:http\://www.ii.is.kit.ac.jp/hai2011/proceedings/HAI2008/pdf/2b-3.pdf:PDF},
}
Shuichi Nishio, Hiroshi Ishiguro, Miranda Anderson, Norihiro Hagita, "Expressing individuality through teleoperated android: a case study with children", In IASTED International Conference on Human Computer Interaction, ACTA Press, Innsbruck, Austria, pp. 297-302, March, 2008.
Abstract: When utilizing robots as communication interface medium, the appearance of the robots, and the atmosphere or sense of presence they express will be one of the key issues in their design. Just like each person holds his/her own individual impressions they give when having a conversation with others, it might be effective for robots to hold a suitable sense of individuality, in order to effectively communicate with humans. In this paper, we report our investigation on the key elements for representing personal presence, which we define as the sense of being with a certain individual, and eventually implement them into robots. A case study is reported in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. Different responses to the geminoid and the original person are examined, especially concentrating on the case where the target child was the daughter of the geminoid source. Results showed that children gradually became adapted to conversation with the geminoid, but the operator's personal presence was not completely represented. Further research topics on the adaptation process to androids and on seeking for the key elements on personal presence are discussed.
BibTeX:
@Inproceedings{Nishio2008,
  author    = {Shuichi Nishio and Hiroshi Ishiguro and Miranda Anderson and Norihiro Hagita},
  title     = {Expressing individuality through teleoperated android: a case study with children},
  booktitle = {{IASTED} International Conference on Human Computer Interaction},
  year      = {2008},
  pages     = {297--302},
  address   = {Innsbruck, Austria},
  month     = Mar,
  publisher = {{ACTA} Press},
  url       = {http://dl.acm.org/citation.cfm?id=1722359.1722414},
  abstract  = {When utilizing robots as communication interface medium, the appearance of the robots, and the atmosphere or sense of presence they express will be one of the key issues in their design. Just like each person holds his/her own individual impressions they give when having a conversation with others, it might be effective for robots to hold a suitable sense of individuality, in order to effectively communicate with humans. In this paper, we report our investigation on the key elements for representing personal presence, which we define as the sense of being with a certain individual, and eventually implement them into robots. A case study is reported in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. Different responses to the geminoid and the original person are examined, especially concentrating on the case where the target child was the daughter of the geminoid source. Results showed that children gradually became adapted to conversation with the geminoid, but the operator's personal presence was not completely represented. Further research topics on the adaptation process to androids and on seeking for the key elements on personal presence are discussed.},
  file      = {Nishio2008.pdf:Nishio2008.pdf:PDF},
  keywords  = {android; human individuality; human-robot interaction; personal presence},
}
Shuichi Nishio, Hiroshi Ishiguro, Miranda Anderson, Norihiro Hagita, "Representing Personal Presence with a Teleoperated Android: A Case Study with Family", In AAAI Spring Symposium on Emotion, Personality, and Social Behavior, Stanford University, Palo Alto, California, USA, March, 2008.
Abstract: Our purpose is to investigate the key elements for representing personal presence, which we define as the sense of being with a certain individual, and eventually implement them into robots. In this research, a case study is reported in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. Different responses to the geminoid and the original person are examined, especially concentrating on the case where the target child was the daughter of the geminoid source. Results showed that children gradually became adapted to conversation with the geminoid, but the operator's personal presence was not completely represented. Further research topics on the adaptation process to androids and on seeking for the key elements on personal presence are discussed.
BibTeX:
@Inproceedings{Nishio2008a,
  author          = {Shuichi Nishio and Hiroshi Ishiguro and Miranda Anderson and Norihiro Hagita},
  title           = {Representing Personal Presence with a Teleoperated Android: A Case Study with Family},
  booktitle       = {{AAAI} Spring Symposium on Emotion, Personality, and Social Behavior},
  year            = {2008},
  address         = {Stanford University, Palo Alto, California, {USA}},
  month           = Mar,
  abstract        = {Our purpose is to investigate the key elements for representing personal presence, which we define as the sense of being with a certain individual, and eventually implement them into robots. In this research, a case study is reported in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. Different responses to the geminoid and the original person are examined, especially concentrating on the case where the target child was the daughter of the geminoid source. Results showed that children gradually became adapted to conversation with the geminoid, but the operator's personal presence was not completely represented. Further research topics on the adaptation process to androids and on seeking for the key elements on personal presence are discussed.},
  file            = {Nishio2008a.pdf:Nishio2008a.pdf:PDF},
}
山森崇義, 坂本大介, 西尾修一, 石黒浩, 萩田紀博, "アンドロイドと「目が合う」条件の検証", HAIシンポジウム, 慶應義塾大学, pp. 1B-2, December, 2007.
Abstract: ロボットに人と目を合わせる機能を実装することで,人との円滑なコミュニケーションが期待できる.ここで,ロボットが人に「目が合う」感覚を与えるためにはどのような条件が必要なのか?本稿では,見かけが人間に酷似したロボット,アンドロイドを用いて目が合う条件を検証する実験を行った.本実験の結果から,目が合う条件はアンドロイドの視線偏位角に依存すること,眼球に微小な動作を加えることで目が合う視線偏位角の範囲が広がることが明らかとなった.
BibTeX:
@Inproceedings{山森崇義2007a,
  author    = {山森崇義 and 坂本大介 and 西尾修一 and 石黒浩 and 萩田紀博},
  title     = {アンドロイドと「目が合う」条件の検証},
  booktitle = {{HAI}シンポジウム},
  year      = {2007},
  pages     = {1B-2},
  address   = {慶應義塾大学},
  month     = Dec,
  url       = {http://www.ii.is.kit.ac.jp/hai2011/proceedings/HAI2007/html/paper/paper-1b-2.html},
  abstract  = {ロボットに人と目を合わせる機能を実装することで,人との円滑なコミュニケーションが期待できる.ここで,ロボットが人に「目が合う」感覚を与えるためにはどのような条件が必要なのか?本稿では,見かけが人間に酷似したロボット,アンドロイドを用いて目が合う条件を検証する実験を行った.本実験の結果から,目が合う条件はアンドロイドの視線偏位角に依存すること,眼球に微小な動作を加えることで目が合う視線偏位角の範囲が広がることが明らかとなった.},
  file      = {1b-2.pdf:http\://www.ii.is.kit.ac.jp/hai2011/proceedings/HAI2007/pdf/1b-2.pdf:PDF},
}
坂本大介, 神田崇行, 小野哲雄, 石黒浩, 萩田紀博, "アンドロイドロボットを用いた遠隔コミュニケーションシステムの開発と評価", エンタテインメントコンピューティング, 大阪, pp. 233-236, October, 2007.
Abstract: In this research, we realize human telepresence by developing a remote-controlled android system. This system employs a human-like robot called Geminoid HI-1. Experimental results confirmed that participants felt stronger presence of the operator when he talked through the android than when he appeared on a video monitor in a video conference system. In addition, participants talked with the robot naturally and evaluated its human-likeness as equal to a man on a video monitor.
BibTeX:
@Inproceedings{坂本大介2007a,
  author          = {坂本大介 and 神田崇行 and 小野哲雄 and 石黒浩 and 萩田紀博},
  title           = {アンドロイドロボットを用いた遠隔コミュニケーションシステムの開発と評価},
  booktitle       = {エンタテインメントコンピューティング},
  year            = {2007},
  pages           = {233--236},
  address         = {大阪},
  month           = Oct,
  etitle          = {Development of a tele-communication system employing an android robot},
  abstract        = {In this research, we realize human telepresence by developing a remote-controlled android system. This system employs a human-like robot called Geminoid HI-1. Experimental results confirmed that participants felt stronger presence of the operator when he talked through the android than when he appeared on a video monitor in a video conference system. In addition, participants talked with the robot naturally and evaluated its human-likeness as equal to a man on a video monitor.},
  eabstract       = {In this research, we realize human telepresence by developing a remote-controlled android system. This system employs a human-like robot called Geminoid HI-1. Experimental results confirmed that participants felt stronger presence of the operator when he talked through the android than when he appeared on a video monitor in a video conference system. In addition, participants talked with the robot naturally and evaluated its human-likeness as equal to a man on a video monitor.},
  file            = {坂本大介2007a.pdf:坂本大介2007a.pdf:PDF},
  keywords        = {Telecommunication system; Telepresence; Android Science},
}
Freerk P. Wilbers, Carlos T. Ishi, Hiroshi Ishiguro, "A Blendshape Model for Mapping Facial Motions to an Android", In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 542-547, October, 2007.
Abstract: An important part of natural, and therefore effective, communication is facial motion. The android Repliee Q2 should therefore display realistic facial motion. In computer graphics animation, such motion is created by mapping human motion to the animated character. This paper proposes a method for mapping human facial motion to the android. This is done using a linear model of the android, based on blendshape models used in computer graphics. The model is derived from motion capture of the android and therefore also models the android's physical limitations. The paper shows that the blendshape method can be successfully applied to the android. Also, it is shown that a linear model is sufficient for representing android facial motion, which means control can be very straightforward. Measurements of the produced motion identify the physical limitations of the android and allow identifying the main areas for improvement of the android design.
BibTeX:
@Inproceedings{Wilbers2007,
  author    = {Freerk P. Wilbers and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {A Blendshape Model for Mapping Facial Motions to an Android},
  booktitle = {{IEEE/RSJ} International Conference on Intelligent Robots and Systems},
  year      = {2007},
  pages     = {542--547},
  month     = Oct,
  doi       = {10.1109/IROS.2007.4399394},
  url       = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4399394},
  abstract  = {An important part of natural, and therefore effective, communication is facial motion. The android Repliee Q2 should therefore display realistic facial motion. In computer graphics animation, such motion is created by mapping human motion to the animated character. This paper proposes a method for mapping human facial motion to the android. This is done using a linear model of the android, based on blendshape models used in computer graphics. The model is derived from motion capture of the android and therefore also models the android's physical limitations. The paper shows that the blendshape method can be successfully applied to the android. Also, it is shown that a linear model is sufficient for representing android facial motion, which means control can be very straightforward. Measurements of the produced motion identify the physical limitations of the android and allow identifying the main areas for improvement of the android design.},
  file      = {Wilbers2007.pdf:Wilbers2007.pdf:PDF},
  keywords  = {Repliee Q2;android;animated character;blendshape model;computer graphics animation;facial motions mapping;computer animation;face recognition;motion compensation;},
}
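The entry above describes a linear blendshape model: the android's facial configuration is represented as a weighted sum of measured basis shapes, and human facial motion is mapped to the android by solving for the weights. Below is a minimal illustrative sketch of that idea on synthetic data; the function names, the plain least-squares fit, and the [0, 1] clipping as a stand-in for actuator limits are assumptions for illustration, not the paper's actual implementation.

Example (Python sketch):
import numpy as np

def fit_blendshape_weights(B, x, x0):
    """Estimate blendshape weights w such that x is approximately x0 + B @ w.

    B  : (n_features, n_blendshapes) matrix; column j is the feature displacement
         produced by fully activating blendshape j (e.g. measured by motion capture).
    x  : (n_features,) target facial feature positions from human motion capture.
    x0 : (n_features,) neutral-face feature positions.
    """
    w, *_ = np.linalg.lstsq(B, x - x0, rcond=None)
    # Clip to [0, 1] as a crude stand-in for the android's physical actuator limits.
    return np.clip(w, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_features, n_shapes = 30, 5          # e.g. 15 tracked 2-D facial points, 5 actuators
    B = rng.normal(size=(n_features, n_shapes))
    x0 = rng.normal(size=n_features)
    w_true = rng.uniform(0, 1, size=n_shapes)
    x = x0 + B @ w_true                   # synthetic "human" frame
    print(fit_blendshape_weights(B, x, x0).round(3))
    print(w_true.round(3))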
Carlos T. Ishi, Judith Haas, Freerk P. Wilbers, Hiroshi Ishiguro, Norihiro Hagita, "Analysis of head motions and speech, and head motion control in an android", In IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, California, USA, pp. 548-553, October, 2007.
Abstract: With the aim of automatically generating head motions during speech utterances, analyses are conducted for verifying the relations between head motions and linguistic and paralinguistic information carried by speech utterances. Motion captured data are recorded during natural dialogue, and the rotation angles are estimated from the head marker data. Analysis results showed that nods frequently occur during speech utterances, not only for expressing specific dialog acts such as agreement and affirmation, but also as indicative of syntactic or semantic units, appearing at the last syllable of the phrases, in strong phrase boundaries. Analyses are also conducted on the dependence on linguistic, prosodic and voice quality information of other head motions, like shakes and tilts, and discuss about the potentiality for their use in automatic generation of head motions. The paper also proposes a method for controlling the head actuators of an android based on the rotation angles, and evaluates the mapping from the human head motions.
BibTeX:
@Inproceedings{Ishi2007,
  author    = {Carlos T. Ishi and Judith Haas and Freerk P. Wilbers and Hiroshi Ishiguro and Norihiro Hagita},
  title     = {Analysis of head motions and speech, and head motion control in an android},
  booktitle = {{IEEE/RSJ} International Conference on Intelligent Robots and Systems},
  year      = {2007},
  pages     = {548--553},
  address   = {San Diego, California, USA},
  month     = Oct,
  doi       = {10.1109/IROS.2007.4399335},
  url       = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4399335},
  abstract  = {With the aim of automatically generating head motions during speech utterances, analyses are conducted for verifying the relations between head motions and linguistic and paralinguistic information carried by speech utterances. Motion captured data are recorded during natural dialogue, and the rotation angles are estimated from the head marker data. Analysis results showed that nods frequently occur during speech utterances, not only for expressing specific dialog acts such as agreement and affirmation, but also as indicative of syntactic or semantic units, appearing at the last syllable of the phrases, in strong phrase boundaries. Analyses are also conducted on the dependence on linguistic, prosodic and voice quality information of other head motions, like shakes and tilts, and discuss about the potentiality for their use in automatic generation of head motions. The paper also proposes a method for controlling the head actuators of an android based on the rotation angles, and evaluates the mapping from the human head motions.},
  file      = {Ishi2007.pdf:Ishi2007.pdf:PDF},
  keywords  = {android;head motion control;natural dialogue;paralinguistic information;phrase boundaries;speech analysis;speech utterances;voice quality information;humanoid robots;motion control;speech synthesis;},
}
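The entry above estimates head rotation angles from motion-captured head markers and maps them onto the android's head actuators. The sketch below illustrates one common way to obtain such angles: build an orthonormal head frame from three markers and extract Euler angles relative to a reference frame. The marker layout, axis conventions, and z-y-x Euler decomposition are assumptions for illustration and are not taken from the paper.

Example (Python sketch):
import numpy as np

def head_frame(forehead, left_ear, right_ear):
    """Orthonormal head frame (columns = x: right, y: forward, z: up) from 3 markers."""
    origin = (left_ear + right_ear) / 2.0
    x = right_ear - left_ear              # ear-to-ear axis
    x /= np.linalg.norm(x)
    f = forehead - origin                 # roughly the forward direction
    z = np.cross(x, f)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)
    return np.column_stack([x, y, z])

def head_angles(markers, ref_frame):
    """Rotation of the current head frame w.r.t. ref_frame as z-y-x Euler angles [deg]."""
    R = ref_frame.T @ head_frame(*markers)
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))    # shake-like rotation
    pitch = np.degrees(np.arcsin(-R[2, 0]))           # nod-like rotation
    roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))   # tilt-like rotation
    return yaw, pitch, roll

The resulting angle trajectories could then be filtered and scaled into actuator commands; the paper evaluates such a mapping, but its specific control law is not reproduced here.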
Daisuke Sakamoto, Takayuki Kanda, Tetsuo Ono, Hiroshi Ishiguro, Norihiro Hagita, "Android as a telecommunication medium with a human-like presence", In ACM/IEEE International Conference on Human Robot Interaction, Arlington, Virginia, USA, pp. 193-200, March, 2007.
Abstract: In this research, we realize human telepresence by developing a remote-controlled android system called Geminoid HI-1. Experimental results confirm that participants felt stronger presence of the operator when he talked through the android than when he appeared on a video monitor in a video conference system. In addition, participants talked with the robot naturally and evaluated its human likeness as equal to a man on a video monitor. At this paper's conclusion, we will discuss a remote-control system for telepresence that uses a human-like android robot as a new telecommunication medium.
BibTeX:
@Inproceedings{Sakamoto2007,
  author    = {Daisuke Sakamoto and Takayuki Kanda and Tetsuo Ono and Hiroshi Ishiguro and Norihiro Hagita},
  title     = {Android as a telecommunication medium with a human-like presence},
  booktitle = {{ACM/IEEE} International Conference on Human Robot Interaction},
  year      = {2007},
  pages     = {193--200},
  address   = {Arlington, Virginia, {USA}},
  month     = Mar,
  doi       = {10.1145/1228716.1228743},
  url       = {http://doi.acm.org/10.1145/1228716.1228743},
  abstract  = {In this research, we realize human telepresence by developing a remote-controlled android system called Geminoid HI-1. Experimental results confirm that participants felt stronger presence of the operator when he talked through the android than when he appeared on a video monitor in a video conference system. In addition, participants talked with the robot naturally and evaluated its human likeness as equal to a man on a video monitor. At this paper's conclusion, we will discuss a remote-control system for telepresence that uses a human-like android robot as a new telecommunication medium.},
  keywords  = {android science; humanoid robot; telecommunication; telepresence},
  numpages  = {8},
}
坂本大介, 神田崇行, 小野哲雄, 石黒浩, 萩田紀博, "遠隔存在感メディアとしてのアンドロイド・ロボットの可能性", インタラクション, 東京, pp. 97-104, March, 2007. (ベストペーパー賞受賞)
Abstract: 本研究では人間の存在感を伝達するために遠隔操作型アンドロイド・ロボットシステムを開発した.本システムでは非常に人に近い外見を持つアンドロイド・ロボットであるGeminoid HI-1を使用する.本システムを使用した実験の結果,Geminoid HI-1を通して伝わる人間の存在感はビデオ会議システムを使用した場合の人間の存在感を上回ったことが確認された.さらに,被験者はビデオ会議システムと同程度に本システムにおいて人間らしく自然な会話ができたことが確認された.本稿ではこれらのシステムと実験について述べたあと,遠隔操作型アンドロイド・ロボットシステムによる遠隔存在感の実現についての議論を行う.
BibTeX:
@Inproceedings{坂本大介2007b,
  author    = {坂本大介 and 神田崇行 and 小野哲雄 and 石黒浩 and 萩田紀博},
  title     = {遠隔存在感メディアとしてのアンドロイド・ロボットの可能性},
  booktitle = {インタラクション},
  year      = {2007},
  series    = {情報処理学会シンポジウムシリーズ},
  pages     = {97--104},
  address   = {東京},
  month     = Mar,
  abstract  = {本研究では人間の存在感を伝達するために遠隔操作型アンドロイド・ロボットシステムを開発した.本システムでは非常に人に近い外見を持つアンドロイド・ロボットであるGeminoid HI-1を使用する.本システムを使用した実験の結果,Geminoid HI-1を通して伝わる人間の存在感はビデオ会議システムを使用した場合の人間の存在感を上回ったことが確認された.さらに,被験者はビデオ会議システムと同程度に本システムにおいて人間らしく自然な会話ができたことが確認された.本稿ではこれらのシステムと実験について述べたあと,遠隔操作型アンドロイド・ロボットシステムによる遠隔存在感の実現についての議論を行う.},
  note      = {ベストペーパー賞受賞},
}
会議発表(査読なし)
東中竜一郎, 高橋哲朗, 稲葉通将, 斉志揚, 佐々木裕多, 船越孝太郎, 守屋彰二, 佐藤志貴, 港隆史, 境くりま, 船山智, 小室允人, 西川寛之, 牧野遼作, 菊池浩史, 宇佐美まゆみ, "対話システムライブコンペティション6", 第99回 人工知能学会 言語・音声理解と対話処理研究会(SLUD)(第14回対話システムシンポジウム), 国立国語研究所, 東京 (オンライン), December, 2023.
Abstract: Following the success of the five previous dialogue system live competitions, we held the sixth edition, titled “Dialogue System Live Competition 6”. The aim of this competition series is to highlight the challenges and limitations of human-computer dialogue in a live event setting. Similar to the previous edition, our focus was on multimodal dialogue systems. This year’s competition featured a single track, named the “Situation Track”, with the objective of developing a human-like dialogue system for a given situation. In the preliminary round, eight teams competed. This paper provides an overview of the event and details the results from the preliminary round. The final round is scheduled to take place as a live event at the 14th Dialogue System Symposium.
BibTeX:
@InProceedings{東中竜一郎2023,
  author    = {東中竜一郎 and 高橋哲朗 and 稲葉通将 and 斉志揚 and 佐々木裕多 and 船越孝太郎 and 守屋彰二 and 佐藤志貴 and 港隆史 and 境くりま and 船山智 and 小室允人 and 西川寛之 and 牧野遼作 and 菊池浩史 and 宇佐美まゆみ},
  booktitle = {第99回 人工知能学会 言語・音声理解と対話処理研究会(SLUD)(第14回対話システムシンポジウム)},
  title     = {対話システムライブコンペティション6},
  year      = {2023},
  address   = {国立国語研究所, 東京 (オンライン)},
  day       = {13-14},
  etitle    = {The Dialogue System Live Competition 6},
  month     = dec,
  url       = {https://jsai-slud.github.io/sig-slud/99th-sig.html},
  abstract  = {Following the success of the five previous dialogue system live competitions, we held the sixth edition, titled “Dialogue System Live Competition 6”. The aim of this competition series is to highlight the challenges and limitations of human-computer dialogue in a live event setting. Similar to the previous edition, our focus was on multimodal dialogue systems. This year’s competition featured a single track, named the “Situation Track”, with the objective of developing a human-like dialogue system for a given situation. In the preliminary round, eight teams competed. This paper provides an overview of the event and details the results from the preliminary round. The final round is scheduled to take place as a live event at the 14th Dialogue System Symposium.},
}
住岡英信, "人とロボットの触れ合いがもたらす影響について", 第41回日本ロボット学会学術講演会(RSJ2023), no. RSJ2023AC2F2-01, 仙台国際センター, 宮城, pp. 1, September, 2023.
Abstract: 本講演では、人とロボットが触れ合うことによる影響について良い影響だけでなく、悪い影響をもたらす可能性も紹介しながら、人と共生するロボットにとって重要な能力であるソーシャルタッチの可能性について議論する。
BibTeX:
@InProceedings{住岡英信2023d,
  author    = {住岡英信},
  booktitle = {第41回日本ロボット学会学術講演会(RSJ2023)},
  title     = {人とロボットの触れ合いがもたらす影響について},
  year      = {2023},
  address   = {仙台国際センター, 宮城},
  day       = {11-14},
  month     = sep,
  number    = {RSJ2023AC2F2-01},
  pages     = {1},
  url       = {https://ac.rsj-web.org/2023/},
  abstract  = {本講演では、人とロボットが触れ合うことによる影響について良い影響だけでなく、悪い影響をもたらす可能性も紹介しながら、人と共生するロボットにとって重要な能力であるソーシャルタッチの可能性について議論する。},
}
秋吉拓斗, 住岡英信, 中西惇也, 加藤博一, 塩見昌裕, "触れ合い対話を伴うカウンセリングロボット実現に向けた撫で・叩き動作のモデル化", 第41回日本ロボット学会学術講演会(RSJ2023), no. RSJ2023AC2D2-02, 仙台国際センター, 宮城, pp. 1-4, September, 2023.
Abstract: 思考整理を促すカウンセリング対話において信頼関係の構築は重要であり,共感的理解などの対話技術が活用されている.一方で,触れ合いの活用は適切な触れ方が未解明であるため実用化に至ってない.しかし,ロボットならば制御可能な装置・仕組みによって適切な触れ方を探索でき,安全かつ効果的なカウンセリング対話を実現できる可能性がある.本稿では,触れ合い対話を伴うカウンセリングロボットの設計指針を得るため,抱擁時の人間のカウンセリング対話における撫で・叩き動作のタイミングや動作時間,頻度のモデル化に取り組む.
BibTeX:
@InProceedings{秋吉拓斗2023a,
  author    = {秋吉拓斗 and 住岡英信 and 中西惇也 and 加藤博一 and 塩見昌裕},
  booktitle = {第41回日本ロボット学会学術講演会(RSJ2023)},
  title     = {触れ合い対話を伴うカウンセリングロボット実現に向けた撫で・叩き動作のモデル化},
  year      = {2023},
  address   = {仙台国際センター, 宮城},
  day       = {11-14},
  month     = sep,
  number    = {RSJ2023AC2D2-02},
  pages     = {1-4},
  url       = {https://ac.rsj-web.org/2023/},
  abstract  = {思考整理を促すカウンセリング対話において信頼関係の構築は重要であり,共感的理解などの対話技術が活用されている.一方で,触れ合いの活用は適切な触れ方が未解明であるため実用化に至ってない.しかし,ロボットならば制御可能な装置・仕組みによって適切な触れ方を探索でき,安全かつ効果的なカウンセリング対話を実現できる可能性がある.本稿では,触れ合い対話を伴うカウンセリングロボットの設計指針を得るため,抱擁時の人間のカウンセリング対話における撫で・叩き動作のタイミングや動作時間,頻度のモデル化に取り組む.},
}
住岡英信, 大和信夫, 塩見昌裕, "介護者が見守らないコミュニケーション支援実現に向けた対話ロボットの要素検討", 第41回日本ロボット学会学術講演会(RSJ2023), no. RSJ2023AC1A2-04, 仙台国際センター, 宮城, pp. 1, September, 2023.
Abstract: 本稿では、介護者が見守らないコミュニケーション支援実現を目指し、我々がこれまで介護現場とともに進めてきた赤ちゃん型対話ロボット開発を紹介する。開発を通して得られた現場で利用し続けてもらえるための要素について議論する.
BibTeX:
@InProceedings{住岡英信2023c,
  author    = {住岡英信 and 大和信夫 and 塩見昌裕},
  booktitle = {第41回日本ロボット学会学術講演会(RSJ2023)},
  title     = {介護者が見守らないコミュニケーション支援実現に向けた対話ロボットの要素検討},
  year      = {2023},
  address   = {仙台国際センター, 宮城},
  day       = {11-14},
  month     = sep,
  number    = {RSJ2023AC1A2-04},
  pages     = {1},
  url       = {https://ac.rsj-web.org/2023/},
  abstract  = {本稿では、介護者が見守らないコミュニケーション支援実現を目指し、我々がこれまで介護現場とともに進めてきた赤ちゃん型対話ロボット開発を紹介する。開発を通して得られた現場で利用し続けてもらえるための要素について議論する.},
}
住岡英信, 大和信夫, 塩見昌裕, "赤ちゃん型対話ロボットが介護者に与える影響", 第41回日本ロボット学会学術講演会(RSJ2023), no. RSJ2023AC1A2-03, 仙台国際センター, 宮城, pp. 1, September, 2023.
Abstract: 本稿では、介護者が見守らないコミュニケーション支援実現を目指し、我々がこれまで介護現場とともに進めてきた赤ちゃん型対話ロボット開発を紹介する。開発を通して得られた現場で利用し続けてもらえるための要素について議論する.
BibTeX:
@InProceedings{住岡英信2023b,
  author    = {住岡英信 and 大和信夫 and 塩見昌裕},
  booktitle = {第41回日本ロボット学会学術講演会(RSJ2023)},
  title     = {赤ちゃん型対話ロボットが介護者に与える影響},
  year      = {2023},
  address   = {仙台国際センター, 宮城},
  day       = {11-14},
  month     = sep,
  number    = {RSJ2023AC1A2-03},
  pages     = {1},
  url       = {https://ac.rsj-web.org/2023/},
  abstract  = {本稿では、介護者が見守らないコミュニケーション支援実現を目指し、我々がこれまで介護現場とともに進めてきた赤ちゃん型対話ロボット開発を紹介する。開発を通して得られた現場で利用し続けてもらえるための要素について議論する.},
}
酒井和紀, 光田航, 吉川雄一郎, 東中竜一郎, 港隆史, 石黒浩, "複数ロボット議論における議論展開と見かけの違いによるユーザの理解度への影響の調査", 2023年度 人工知能学会全国大会 (第37回) (JSAI2023), 熊本城ホール, 熊本 (online), pp. 1-4, June, 2023.
Abstract: これまでに本研究では、2台のロボットが、2つの主要な主張を持つトピックについて、議論構造を用いてユーザに議論を示す議論システムを開発した。しかし、ユーザーの意見を変化させる議論の展開方法や、ロボット利用による効果は不明であった。本稿では、議論展開とロボット利用がユーザーの理解に与える影響について検討する。展示会にて1ヶ月間のフィールド実験を実施した。得られた2925件の会話から、ユーザーと反対のスタンスのロボットが、同じスタンスのロボットと同意するインタラクションを見せることで、ユーザーの理解が深まることが示唆された。 Discussion capability is important for humans and robots. We have previously developed a discussion system where two robots showed a user discussions about topics with two main claims by using an argumentation structure. However, a method of developing discussion that changes the user’s opinion and the effect of robots’ appearances were unclear. In this study, we investigate the effects of discussion development and robots’ appearances on user’s understanding. Field experiments were conducted for one month in an exhibition. The results obtained from 2925 conversations suggest that showing interactions where the robot with the opposite stance of the user agreed with another robot with the same stance improved the user’s understanding. It is also suggested that when small humanoid robots rather than android robots with the same stance disagreed with another robot with the opposite stance, the user increases the confidence of the opinion.
BibTeX:
@InProceedings{酒井和紀2023,
  author    = {酒井和紀 and 光田航 and 吉川雄一郎 and 東中竜一郎 and 港隆史 and 石黒浩},
  booktitle = {2023年度 人工知能学会全国大会 (第37回) (JSAI2023)},
  title     = {複数ロボット議論における議論展開と見かけの違いによるユーザの理解度への影響の調査},
  year      = {2023},
  address   = {熊本城ホール, 熊本 (online)},
  day       = {6-9},
  etitle    = {Investigation of Effects of Discussion Development and Appearance on User’s Understanding in Multi-Robot Discussion},
  month     = jun,
  pages     = {1-4},
  url       = {https://www.ai-gakkai.or.jp/jsai2023/},
  abstract  = {これまでに本研究では、2台のロボットが、2つの主要な主張を持つトピックについて、議論構造を用いてユーザに議論を示す議論システムを開発した。しかし、ユーザーの意見を変化させる議論の展開方法や、ロボット利用による効果は不明であった。本稿では、議論展開とロボット利用がユーザーの理解に与える影響について検討する。展示会にて1ヶ月間のフィールド実験を実施した。得られた2925件の会話から、ユーザーと反対のスタンスのロボットが、同じスタンスのロボットと同意するインタラクションを見せることで、ユーザーの理解が深まることが示唆された。 
Discussion capability is important for humans and robots. We have previously developed a discussion system where two robots showed a user discussions about topics with two main claims by using an argumentation structure. However, a method of developing discussion that changes the user’s opinion and the effect of robots’ appearances were unclear. In this study, we investigate the effects of discussion development and robots’ appearances on user’s understanding. Field experiments were conducted for one month in an exhibition. The results obtained from 2925 conversations suggest that showing interactions where the robot with the opposite stance of the user agreed with another robot with the same stance improved the user’s understanding. It is also suggested that when small humanoid robots rather than android robots with the same stance disagreed with another robot with the opposite stance, the user increases the confidence of the opinion.},
}
春野幸輝, 田熊隆史, 住岡英信, 港隆史, 塩見昌裕, "導電性布を有するソフトロボットフィンガーによる把持対象物の非接触位置推定", 2022年度 計測自動制御学会関西支部・システム制御情報学会シンポジウム, 大阪公立大学I-siteなんば, 大阪, January, 2023.
Abstract: In order to estimate position of grasping object by the soft robot fingers, we adopt a flexible conductive cloth whose capacitance changes according to the distance between the cloth and the object. We test the possibility and accuracy of position estimation, and experimental results showed that the position of conductive object was estimated.
BibTeX:
@InProceedings{春野幸輝2023,
  author    = {春野幸輝 and 田熊隆史 and 住岡英信 and 港隆史 and 塩見昌裕},
  booktitle = {2022年度 計測自動制御学会関西支部・システム制御情報学会シンポジウム},
  title     = {導電性布を有するソフトロボットフィンガーによる把持対象物の非接触位置推定},
  year      = {2023},
  address   = {大阪公立大学I-siteなんば, 大阪},
  day       = {11},
  etitle    = {Touchless position estimation of grasping object for soft robot fingers with conductive cloth},
  month     = jan,
  url       = {https://www.sice.or.jp/org/kansai/22/sice-iscie-symp2022/},
  abstract  = {In order to estimate position of grasping object by the soft robot fingers, we adopt a flexible conductive cloth whose capacitance changes according to the distance between the cloth and the object. We test the possibility and accuracy of position estimation, and experimental results showed that the position of conductive object was estimated.},
}
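The entry above estimates the position of a grasped object from the capacitance of a flexible conductive cloth, which varies with the distance between the cloth and the object. As a rough first-order illustration (not the authors' calibration procedure), one can treat the cloth-object pair like a parallel-plate capacitor, C ≈ C_stray + ε0·A_eff/d, fit C_stray and the effective area from reference measurements, and invert the model to estimate distance. The model form, variable names, and synthetic numbers below are assumptions for illustration only.

Example (Python sketch):
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def calibrate(distances_m, capacitances_F):
    """Fit C = C_stray + k / d (with k = EPS0 * A_eff) by least squares on 1/d."""
    X = np.column_stack([np.ones_like(distances_m), 1.0 / distances_m])
    (c_stray, k), *_ = np.linalg.lstsq(X, capacitances_F, rcond=None)
    return c_stray, k

def estimate_distance(c_measured, c_stray, k):
    """Invert the calibrated model: d is approximately k / (C - C_stray)."""
    return k / (c_measured - c_stray)

if __name__ == "__main__":
    # Synthetic calibration data: conductive object held at known distances from the cloth.
    d_ref = np.array([0.01, 0.02, 0.05, 0.10])           # metres
    c_ref = 5e-12 + EPS0 * 0.01 / d_ref                   # A_eff = 0.01 m^2, 5 pF stray
    c_stray, k = calibrate(d_ref, c_ref)
    print(estimate_distance(5e-12 + EPS0 * 0.01 / 0.03, c_stray, k))  # about 0.03 m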
東中竜一郎, 高橋哲朗, 堀内颯太, 稲葉通将, 佐藤志貴, 船越孝太郎, 小室允人, 西川寛之, 宇佐美まゆみ, 港隆史, 境くりま, 船山智, "対話システムライブコンペティション5", 第96回 人工知能学会 言語・音声理解と対話処理研究会(第13回対話システムシンポジウム), 国立国語研究所, 東京 (online), December, 2022.
Abstract: 本稿では、「対話システムライブコンペティション5」の概要について述べる。このコンペティション・シリーズの狙いは 人間とコンピュータの対話の難しさと限界を、ライブで明らかにすることである。これまでの大会ではテキストベースの対話システムを対象としてきたが、今回はより難易度の高いマルチモーダル対話システムに焦点を当てた。オープントラックとシチュエーショントラックの2つのトラックを用意した。前者はオープンドメインのチャット指向の対話システム、後者は人間らしいチャット指向の対話システムの開発を目指すものである。予選では、オープントラックで9チーム、シチュエーショントラックで10チームが参加した。その大会の概要と予選の結果を報告する。
BibTeX:
@InProceedings{東中竜一郎2022,
  author    = {東中竜一郎 and 高橋哲朗 and 堀内颯太 and 稲葉通将 and 佐藤志貴 and 船越孝太郎 and 小室允人 and 西川寛之 and 宇佐美まゆみ and 港隆史 and 境くりま and 船山智},
  booktitle = {第96回 人工知能学会 言語・音声理解と対話処理研究会(第13回対話システムシンポジウム)},
  title     = {対話システムライブコンペティション5},
  year      = {2022},
  address   = {国立国語研究所, 東京 (online)},
  day       = {13-14},
  etitle    = {The Dialogue System Live Competition 5},
  month     = dec,
  url       = {https://jsai-slud.github.io/sig-slud/events/index.html},
  abstract  = {本稿では、「対話システムライブコンペティション5」の概要について述べる。このコンペティション・シリーズの狙いは 人間とコンピュータの対話の難しさと限界を、ライブで明らかにすることである。これまでの大会ではテキストベースの対話システムを対象としてきたが、今回はより難易度の高いマルチモーダル対話システムに焦点を当てた。オープントラックとシチュエーショントラックの2つのトラックを用意した。前者はオープンドメインのチャット指向の対話システム、後者は人間らしいチャット指向の対話システムの開発を目指すものである。予選では、オープントラックで9チーム、シチュエーショントラックで10チームが参加した。その大会の概要と予選の結果を報告する。},
}
王可心, 石井カルロス寿憲, 林良子, "自由会話における「楽しい笑い」と「愛想笑い」の音声的特徴:予備的分析", 第25回 日本音響学会 関西支部 若手研究者交流研究発表会, 同志社大学京田辺キャンパス, 京都, November, 2022.
Abstract: 笑いは、人間の社会的インタラクションにおいて、重要なコミュニケーションの要素の一つである。笑いは、心理学では、愉快な状態である笑いと社会的な笑いである微笑みと大分類される(志水2000)。本研究では、三者自由会話における「楽しい笑い」と「愛想笑い」に着目し、男性5名と女性4名のデータを分析した。音響分析の結果、愛想笑いの方がインテンシティー最大値が小さく、気息性が強く、声帯が緊張しているという傾向が見られた。F0平均値に関して、女性は愛想笑いの方が低いが、男性は有意差がなかった。笑いの長さには有意差がなかった。さらに、愛想笑いの方が状況によってバリエーションがより豊富である傾向が見られた。
BibTeX:
@InProceedings{王2022,
  author    = {王可心 and 石井カルロス寿憲 and 林良子},
  booktitle = {第25回 日本音響学会 関西支部 若手研究者交流研究発表会},
  title     = {自由会話における「楽しい笑い」と「愛想笑い」の音声的特徴:予備的分析},
  year      = {2022},
  address   = {同志社大学京田辺キャンパス, 京都},
  day       = {26},
  month     = nov,
  url       = {https://asj-kansai.acoustics.jp/event/25wakate/},
  abstract  = {笑いは、人間の社会的インタラクションにおいて、重要なコミュニケーションの要素の一つである。笑いは、心理学では、愉快な状態である笑いと社会的な笑いである微笑みと大分類される(志水2000)。本研究では、三者自由会話における「楽しい笑い」と「愛想笑い」に着目し、男性5名と女性4名のデータを分析した。音響分析の結果、愛想笑いの方がインテンシティー最大値が小さく、気息性が強く、声帯が緊張しているという傾向が見られた。F0平均値に関して、女性は愛想笑いの方が低いが、男性は有意差がなかった。笑いの長さには有意差がなかった。さらに、愛想笑いの方が状況によってバリエーションがより豊富である傾向が見られた。},
}
住岡英信, 田中彰人, 安琪, 倉爪亮, 塩見昌裕, "優しい介護を測る:ユマニチュード理解に向けた触れ合い計測スーツ", 第4回日本ユマニチュード学会総会, 京都大学国際科学イノベーション棟シンポジウムホール, 京都, September, 2022.
Abstract: ユマニチュードに基づく認知症ケアでは、被介護者に対して通常よりも近づき、触れ合いながら介護を行います。本研究では、こういった触れ合いの理解を深め、ケア技術の訓練やより優しい介護ロボットの実現を目指し、簡単に着用できる近接・接触センサスーツを開発しました。これを着てケアを行ってもらうことで、ケアにおける介護者と被介護者の触れあいの「見える化」が可能となります。介護現場でご利用いただき、データを集めることで、習得が難しいといわれるユマニチュードの訓練支援システムの実現にもつながると考えています。
BibTeX:
@InProceedings{住岡英信2022d,
  author    = {住岡英信 and 田中彰人 and 安琪 and 倉爪亮 and 塩見昌裕},
  booktitle = {第4回日本ユマニチュード学会総会},
  title     = {優しい介護を測る:ユマニチュード理解に向けた触れ合い計測スーツ},
  year      = {2022},
  address   = {京都大学国際科学イノベーション棟シンポジウムホール, 京都},
  day       = {24-25},
  month     = sep,
  url       = {https://jhuma.org/soukai4/},
  abstract  = {ユマニチュードに基づく認知症ケアでは、被介護者に対して通常よりも近づき、触れ合いながら介護を行います。本研究では、こういった触れ合いの理解を深め、ケア技術の訓練やより優しい介護ロボットの実現を目指し、簡単に着用できる近接・接触センサスーツを開発しました。これを着てケアを行ってもらうことで、ケアにおける介護者と被介護者の触れあいの「見える化」が可能となります。介護現場でご利用いただき、データを集めることで、習得が難しいといわれるユマニチュードの訓練支援システムの実現にもつながると考えています。},
}
Hiroshi Ishiguro, "Realisation of the Avatar Symbiotic Society: The Concept and Technologies", In ROBOPHILOSOPHY CONFERENCE 2022, WORKSHOP 3: ELSI of the Avatar Symbiotic Society, University of Helsinki, Finland (online), August, 2022.
Abstract: Part of WORKSHOP 3: ELSI of the Avatar Symbiotic Society. The author has long been engaged in research and development on robots that act as human surrogates. Moreover, the author has been addressing the issues of how to give robots a sense of presence, how to make them look and feel alive, how to enrich human-robot interaction, and how to design a society where humans and robots coexist. Recently, based on this research and development, the author is leading a project to realize the Avatar Symbiotic Society in which one can easily manipulate multiple avatars as one wishes and participate in various social activities through them. In this presentation, the author will introduce some of the technologies being developed in this research and introduce the concept of an avatar symbiotic society.
BibTeX:
@InProceedings{Ishiguro2022a,
  author    = {Hiroshi Ishiguro},
  booktitle = {ROBOPHILOSOPHY CONFERENCE 2022, WORKSHOP 3: ELSI of the Avatar Symbiotic Society},
  title     = {Realisation of the Avatar Symbiotic Society: The Concept and Technologies},
  year      = {2022},
  address   = {University of Helsinki, Finland (online)},
  day       = {16-19},
  month     = aug,
  url       = {https://cas.au.dk/robophilosophy/conferences/rpc2022/program/workshop-3-elsi-of-the-avatar-symbiotic-society},
  abstract  = {Part of WORKSHOP 3: ELSI of the Avatar Symbiotic Society
The author has long been engaged in research and development on robots that act as human surrogates. Moreover, the author has been addressing the issues of how to give robots a sense of presence, how to make them look and feel alive, how to enrich human-robot interaction, and how to design a society where humans and robots coexist. Recently, based on this research and development, the author is leading a project to realize the Avatar Symbiotic Society in which one can easily manipulate multiple avatars as one wishes and participate in various social activities through them. In this presentation, the author will introduce some of the technologies being developed in this research and introduce the concept of an avatar symbiotic society.},
}
Hiroshi Ishiguro, "Symbiotic Society with Avatars : Social Acceptance, Ethics, and Technologies (SSA)", In 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2022), Naples, Italy (hybrid), August, 2022.
Abstract: Part of the Morning Workshop (SALA ARAGONESE), held in hybrid format. This workshop aims to provide an opportunity for researchers in communication robots, avatars, psychology, ethics, and law to come together and discuss the issues described above toward realizing a symbiotic society with avatars.
BibTeX:
@InProceedings{Ishiguro2022b,
  author    = {Hiroshi Ishiguro},
  booktitle = {31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2022)},
  title     = {Symbiotic Society with Avatars : Social Acceptance, Ethics, and Technologies (SSA)},
  year      = {2022},
  address   = {Naples, Italy (hybrid)},
  day       = {29-02},
  month     = aug,
  url       = {http://www.smile.unina.it/ro-man2022/2-september-2022/},
  abstract  = {Part of the Morning Workshop (SALA ARAGONESE), held in hybrid format. This workshop aims to provide an opportunity for researchers in communication robots, avatars, psychology, ethics, and law to come together and discuss the issues described above toward realizing a symbiotic society with avatars.},
}
小谷尚輝, 内田貴久, 亀尾菜保子, 境くりま, 船山智, 港隆史, 菊池あかね, 石黒浩, "遠隔操作アンドロイドアバターを用いた講演会システムの印象と教育的効果の検討", 第199回ヒューマンコンピュータインタラクション研究会, vol. 2022-HCI-199, no. 14, 小樽市小樽経済センター, 北海道 (hybrid), pp. 1-6, August, 2022.
Abstract: 近年遠隔授業や遠隔講演会の社会的ニーズが高まり,登壇者及び聴講者の時間的,物理的制約を軽減することが期待される.アバターを用いることにより,本人が行う講演と同等またはそれ以上の質の遠隔講演が可能になると考えられる.特にアンドロイドアバターを用いれば,聴講者に対して人間が登壇するのと変わらない存在感を感じさせられると期待できる.本研究ではアンドロイドアバターが高校において数百人規模の講演会を行い,聴講者のアンドロイドアバターに対する印象を評価した.聴講者のアンドロイドに対する評価尺度として,擬人化,温かさ,能力,不快感を用い,さらに教育的観点から,ロボット講演に対するエンゲージメントと理解度の主観的評価を行った.これらから,現時点における遠隔操作アンドロイドアバターの効果とその発展性について議論する.
BibTeX:
@InProceedings{小谷尚輝2022,
  author    = {小谷尚輝 and 内田貴久 and 亀尾菜保子 and 境くりま and 船山智 and 港隆史 and 菊池あかね and 石黒浩},
  booktitle = {第199回ヒューマンコンピュータインタラクション研究会},
  title     = {遠隔操作アンドロイドアバターを用いた講演会システムの印象と教育的効果の検討},
  year      = {2022},
  address   = {小樽市小樽経済センター, 北海道 (hybrid)},
  day       = {22-23},
  etitle    = {Study on Impression and Educational Effect of Lecture by Teleoperated Android Avatar},
  month     = aug,
  number    = {14},
  pages     = {1-6},
  url       = {https://www.ipsj.or.jp/kenkyukai/event/hci199.html},
  volume    = {2022-HCI-199},
  abstract  = {近年遠隔授業や遠隔講演会の社会的ニーズが高まり,登壇者及び聴講者の時間的,物理的制約を軽減することが期待される.アバターを用いることにより,本人が行う講演と同等またはそれ以上の質の遠隔講演が可能になると考えられる.特にアンドロイドアバターを用いれば,聴講者に対して人間が登壇するのと変わらない存在感を感じさせられると期待できる.本研究ではアンドロイドアバターが高校において数百人規模の講演会を行い,聴講者のアンドロイドアバターに対する印象を評価した.聴講者のアンドロイドに対する評価尺度として,擬人化,温かさ,能力,不快感を用い,さらに教育的観点から,ロボット講演に対するエンゲージメントと理解度の主観的評価を行った.これらから,現時点における遠隔操作アンドロイドアバターの効果とその発展性について議論する.},
  keywords  = {アンドロイド, アバター, 遠隔授業, Android, Avatar, Remote Lectures},
}
大平義輝, 内田貴久, 港隆史, 石黒浩, "ユーザをモデル化するための社会モデルを用いた意見対話システム", 2022年度 人工知能学会全国大会 (第36回), no. 1P1-GS-10-01, 京都国際会館, 京都 (online), pp. 1-3, June, 2022.
Abstract: 本研究の目的は、日常的な対話におけるユーザの意見をモデル化する対話システムを開発することである。ユーザの意見をモデル化することは、ユーザの対話満足度を向上させるために重要である。本研究では、ユーザの意見をモデル化するために、複数の人の意見を抽象化したモデル(社会的意見モデル)を用い、相互の対応と相違の観点から個人の意見をモデル化する。また、個人の意見モデルから社会的意見モデルを更新する対話も実現した。社会的意見モデルの構築方法として、個人の意見モデルを複数の視点(何を、どこで、誰が、どのように)で抽象化する。これにより、様々な社会モデルを生成することが可能となる。まず、事前に収集した意見データを分析し、社会モデルを抽出した。その結果、特定の視点から対話で参照される社会モデルを構築できることが確認できた。次に、これを踏まえて、社会モデルと個人モデルの両方をモデル化した対話戦略を検討した。具体的には、社会的な意見モデルが複数存在する場合の対話戦略や、対話によって得られた個人モデルから社会的なモデルを更新するルールについて検討した。 The purpose of this research is to develop a dialogue system that models user opinions in daily dialogue. Modeling the user's opinion is important to increase the user's dialogue satisfaction. In this research, we use a model that abstracts the opinions of multiple people (social opinion model) to model the opinions of users, and model the opinions of individuals from the viewpoint of mutual correspondence and differences. We also realized a dialogue that updates the social opinion model from the individual opinion model. As a method of constructing a social opinion model, an individual opinion model is abstracted from multiple viewpoints (what, where, who, how). This makes it possible to generate various social models. First, we analyzed the opinion data collected in advance and extracted the social model. As a result, it was confirmed that a social model that is referred to in dialogue from a specific viewpoint can be constructed. Next, based on this, we examined a dialogue strategy that models both the social model and the individual model. Specifically, we explored dialogue strategies when there are multiple social opinion models, and rules for updating social models from individual models acquired through dialogue.
BibTeX:
@InProceedings{大平義輝2022,
  author    = {大平義輝 and 内田貴久 and 港隆史 and 石黒浩},
  booktitle = {2022年度 人工知能学会全国大会 (第36回)},
  title     = {ユーザをモデル化するための社会モデルを用いた意見対話システム},
  year      = {2022},
  address   = {京都国際会館, 京都 (online)},
  day       = {14-17},
  doi       = {10.11517/pjsai.JSAI2022.0_1P1GS1001},
  etitle    = {A Dialogue System for Modeling User’s Opinion Using Social Models},
  month     = jun,
  number    = {1P1-GS-10-01},
  pages     = {1-3},
  url       = {https://www.jstage.jst.go.jp/article/pjsai/JSAI2022/0/JSAI2022_1P1GS1001/_article/-char/ja/},
  abstract  = {本研究の目的は、日常的な対話におけるユーザの意見をモデル化する対話システムを開発することである。ユーザの意見をモデル化することは、ユーザの対話満足度を向上させるために重要である。本研究では、ユーザの意見をモデル化するために、複数の人の意見を抽象化したモデル(社会的意見モデル)を用い、相互の対応と相違の観点から個人の意見をモデル化する。また、個人の意見モデルから社会的意見モデルを更新する対話も実現した。社会的意見モデルの構築方法として、個人の意見モデルを複数の視点(何を、どこで、誰が、どのように)で抽象化する。これにより、様々な社会モデルを生成することが可能となる。まず、事前に収集した意見データを分析し、社会モデルを抽出した。その結果、特定の視点から対話で参照される社会モデルを構築できることが確認できた。次に、これを踏まえて、社会モデルと個人モデルの両方をモデル化した対話戦略を検討した。具体的には、社会的な意見モデルが複数存在する場合の対話戦略や、対話によって得られた個人モデルから社会的なモデルを更新するルールについて検討した。

The purpose of this research is to develop a dialogue system that models user opinions in daily dialogue. Modeling the user's opinion is important to increase the user's dialogue satisfaction. In this research, we use a model that abstracts the opinions of multiple people (social opinion model) to model the opinions of users, and model the opinions of individuals from the viewpoint of mutual correspondence and differences. We also realized a dialogue that updates the social opinion model from the individual opinion model. As a method of constructing a social opinion model, an individual opinion model is abstracted from multiple viewpoints (what, where, who, how). This makes it possible to generate various social models. First, we analyzed the opinion data collected in advance and extracted the social model. As a result, it was confirmed that a social model that is referred to in dialogue from a specific viewpoint can be constructed. Next, based on this, we examined a dialogue strategy that models both the social model and the individual model. Specifically, we explored dialogue strategies when there are multiple social opinion models, and rules for updating social models from individual models acquired through dialogue.},
  keywords  = {意見モデル, 個人モデル, 社会モデル, 対話戦略, 対話システム},
}
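The entry above abstracts individual opinion models over the viewpoints "what, where, who, how" into a social opinion model and updates the social model from individual models acquired through dialogue. The sketch below shows one possible data layout for such an aggregation; the dictionary structure, counting scheme, and example values are illustrative assumptions, not the authors' system.

Example (Python sketch):
from collections import Counter

VIEWPOINTS = ("what", "where", "who", "how")

def build_social_model(individual_models):
    """individual_models: list of dicts such as {"what": "ramen", "where": "home"}.
    Returns, per viewpoint, a Counter of how often each opinion value occurred."""
    social = {v: Counter() for v in VIEWPOINTS}
    for model in individual_models:
        for v in VIEWPOINTS:
            if v in model:
                social[v][model[v]] += 1
    return social

def update_social_model(social, new_individual_model):
    """Fold one individual opinion model, acquired through dialogue, into the social model."""
    for v, value in new_individual_model.items():
        if v in social:
            social[v][value] += 1
    return social

if __name__ == "__main__":
    models = [{"what": "ramen", "where": "home"}, {"what": "sushi", "where": "home"}]
    social = build_social_model(models)
    update_social_model(social, {"what": "ramen", "who": "family"})
    print(social["what"].most_common(1))   # [('ramen', 2)]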
李歆玥, 石井カルロス寿憲, 林良子, "日本語自然会話におけるフィラーの音響分析 -日本語母語話者および中国語を母語とする日本語学習者を対象に-", In 2022年3月日本音響学会音声コミュニケーション研究会, vol. 2, no. 2, online, pp. 27-30, March, 2022.
Abstract: The present study documents (1) how Japanese native speakers and L1-Chinese learners of L2 Japanese differ in the production of filled pauses during spontaneous conversations, and (2) how the vowels of filled pauses and ordinary lexical items differ in spontaneous conversation. Prosodic and voice quality measurements were extracted and the results of acoustic analyses indicated that there are significant differences in prosodic and voice quality measurements including duration, F0 mean, intensity, spectral tilt-related indices, jitter and shimmer, (1) between Japanese native speakers and Chinese learners of L2 Japanese, as well as (2) between filled pauses and ordinary lexical items. Furthermore, results of random forest classification analysis indicate that duration and intensity play the most significant role, while voice quality related features make a secondary contribution to the classification.
BibTeX:
@InProceedings{Li2022,
  author    = {李歆玥 and 石井カルロス寿憲 and 林良子},
  booktitle = {2022年3月日本音響学会音声コミュニケーション研究会},
  title     = {日本語自然会話におけるフィラーの音響分析 -日本語母語話者および中国語を母語とする日本語学習者を対象に-},
  year      = {2022},
  address   = {online},
  day       = {21},
  etitle    = {Prosodic and Voice Quality Analyses of Filled Pauses in Japanese Spontaneous Conversation -Japanese Native Speakers and L1-Chinese learners of L2 Japanese-},
  month     = mar,
  number    = {2},
  pages     = {27-30},
  url       = {https://asj-sccom.acoustics.jp/},
  volume    = {2},
  abstract  = {The present study documents (1) how Japanese native speakers and L1-Chinese learners of L2 Japanese differ in the production of filled pauses during spontaneous conversations, and (2) how the vowels of filled pauses and ordinary lexical items differ in spontaneous conversation. Prosodic and voice quality measurements were extracted and the results of acoustic analyses indicated that there are significant differences in prosodic and voice quality measurements including duration, F0 mean, intensity, spectral tilt-related indices, jitter and shimmer, (1) between Japanese native speakers and Chinese learners of L2 Japanese, as well as (2) between filled pauses and ordinary lexical items. Furthermore, results of random forest classification analysis indicate that duration and intensity play the most significant role, while voice quality related features make a secondary contribution to the classification.},
  keywords  = {Spontaneous conversation, Second language acquisition, Random Forest, Disfluency},
}
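The study above reports a random forest classification over prosodic and voice-quality features with duration and intensity as the most important features. As a hedged illustration of that kind of analysis, the sketch below trains a scikit-learn random forest on synthetic placeholder data and prints feature importances; the feature names follow the abstract, but the data and parameters are assumptions.

# Sketch: random-forest classification of speaker group from filled-pause
# acoustic features, with feature-importance inspection. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
features = ["duration", "f0_mean", "intensity", "spectral_tilt", "jitter", "shimmer"]
X = rng.normal(size=(200, len(features)))   # placeholder acoustic measurements
y = rng.integers(0, 2, size=200)            # 0 = native speaker, 1 = L2 learner

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

clf.fit(X, y)
for name, imp in sorted(zip(features, clf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:>14s}: {imp:.3f}")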
Yoji Kohda, Nobuo Yamato, Hidenobu Sumioka, "Role of Artificial Intelligence (AI) to Provide Quality Public Health Services", In International Conference On Sustainable Development : Opportunities And Challenges, American International University Bangladesh, Bangladesh (online), January, 2022.
Abstract: In this talk, I would like to talk about the role of AI in general from a knowledge science perspective, using the health care sector as an example. I discuss the role of AI to answer two questions: "Can doctors learn from AI?" and "Will patients listen to AI?".
BibTeX:
@InProceedings{Kohda2022,
  author    = {Yoji Kohda and Nobuo Yamato and Hidenobu Sumioka},
  booktitle = {International Conference On Sustainable Development : Opportunities And Challenges},
  title     = {Role of Artificial Intelligence (AI) to Provide Quality Public Health Services},
  year      = {2022},
  address   = {American International University Bangladesh, Bangladesh (online)},
  day       = {12-13},
  month     = jan,
  url       = {https://aicss.aiub.edu/},
  abstract  = {In this talk, I would like to talk about the role of AI in general from a knowledge science perspective, using the health care sector as an example. I discuss the role of AI to answer two questions: "Can doctors learn from AI?" and "Will patients listen to AI?".},
}
住岡英信, 大和信夫, 塩見昌裕, "介護施設への赤ちゃん型ロボットの継続的導入に向けた予備的調査 -パッシブソーシャルメディアとしての赤ちゃん型ロボットの可能性-", 第39回日本ロボット学会学術講演会 (RSJ2021), no. RSJ2021AC2G2-03, オンライン, pp. 1-4, September, 2021.
Abstract: 本研究では赤ちゃん型ロボットを実際の介護現場に2週間導入し,介護スタッフのみで運用してもらうことで,認知症高齢者のロボットに対する反応の変化や,運用の際の課題や影響などについての予備的調査を行った.
BibTeX:
@Inproceedings{住岡英信2021,
  author    = {住岡英信 and 大和信夫 and 塩見昌裕},
  title     = {介護施設への赤ちゃん型ロボットの継続的導入に向けた予備的調査 -パッシブソーシャルメディアとしての赤ちゃん型ロボットの可能性-},
  booktitle = {第39回日本ロボット学会学術講演会 (RSJ2021)},
  year      = {2021},
  number    = {RSJ2021AC2G2-03},
  pages     = {1-4},
  address   = {オンライン},
  month     = sep,
  day       = {8-11},
  url       = {https://ac.rsj-web.org/2021/index.html},
  abstract  = {本研究では赤ちゃん型ロボットを実際の介護現場に2週間導入し,介護スタッフのみで運用してもらうことで,認知症高齢者のロボットに対する反応の変化や,運用の際の課題や影響などについての予備的調査を行った.},
}
石井カルロス寿憲, "3者対話における視線の理由と視線逸らしの分析", In 日本音響学会2021年秋季研究発表会, no. 3-3-15, オンライン, pp. 1281-1282, September, 2021.
Abstract: 3者対話データベースを用いて、発話に伴う話者の顔に向けた視線および話者の顔以外に向けられた視線逸らしの理由を調べた。視線逸らしの場合は、黒目の動きの分布も分析した。
BibTeX:
@InProceedings{石井カルロス寿憲2021_,
  author    = {石井カルロス寿憲},
  booktitle = {日本音響学会2021年秋季研究発表会},
  title     = {3者対話における視線の理由と視線逸らしの分析},
  year      = {2021},
  address   = {オンライン},
  day       = {7-9},
  month     = sep,
  number    = {3-3-15},
  pages     = {1281-1282},
  url       = {https://acoustics.jp/annualmeeting/},
  abstract  = {3者対話データベースを用いて、発話に伴う話者の顔に向けた視線および話者の顔以外に向けられた視線逸らしの理由を調べた。視線逸らしの場合は、黒目の動きの分布も分析した。},
}
春野幸輝, 山田晃翼, 田熊隆史, 住岡英信, 港隆史, 塩見昌裕, "導電性布によるマルチモーダルセンシングの実現と実ロボットによる移動経路の形状スキャニング", ロボティクス・メカトロニクス 講演会 2021, no. 2P2-G06, オンライン, pp. 1-4, June, 2021.
Abstract: This paper explains the characteristics of a flexible and stretchable sensor in which a conductive cloth is embedded in silicone, and the “scanning” in which a water-driven robot equipping the sensor estimates a shape of pathway. The cloth measures the change in capacitance not only by the extension of the sensor but also by the approach of conductive object, that is, multi-modal sensing. In scanning, the shape of obstacles such as width and height of gap between the floor and obstacle is estimated from the profile in the capacitance of the conductive cloth according to the movement of the soft robot.
BibTeX:
@Inproceedings{春野幸輝2021,
  author    = {春野幸輝 and 山田晃翼 and 田熊隆史 and 住岡英信 and 港隆史 and 塩見昌裕},
  title     = {導電性布によるマルチモーダルセンシングの実現と実ロボットによる移動経路の形状スキャニング},
  booktitle = {ロボティクス・メカトロニクス 講演会 2021},
  year      = {2021},
  number    = {2P2-G06},
  pages     = {1-4},
  address   = {オンライン},
  month     = jun,
  day       = {6-8},
  url       = {https://robomech.org/2021/},
  etitle    = {Realization of multi-modal sensing with conductive cloth and shape estimation of pathway by mobile robot},
  abstract  = {This paper explains the characteristics of a flexible and stretchable sensor in which a conductive cloth is embedded in silicone, and the “scanning” in which a water-driven robot equipping the sensor estimates a shape of pathway. The cloth measures the change in capacitance not only by the extension of the sensor but also by the approach of conductive object, that is, multi-modal sensing. In scanning, the shape of obstacles such as width and height of gap between the floor and obstacle is estimated from the profile in the capacitance of the conductive cloth according to the movement of the soft robot.},
  keywords  = {Soft robot, Stretchable conductive cloth, Soft sensor},
}
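The abstract above estimates the shape of a gap from the capacitance profile of the conductive cloth recorded while the robot moves. As a rough illustration only, the sketch below thresholds a simulated capacitance profile against a free-space baseline to estimate the extent of the obstacle region; the threshold rule and all numbers are assumptions, not the paper's calibration.

# Sketch: estimating the extent of a gap/obstacle region along the travelled
# path from a conductive-cloth capacitance profile. Values are illustrative.
import numpy as np

def estimate_gap_width(capacitance, positions, baseline, threshold=0.2):
    """Return the path length over which capacitance deviates from its
    free-space baseline by more than `threshold` (relative deviation)."""
    deviation = np.abs(capacitance - baseline) / baseline
    inside = deviation > threshold              # sensor compressed / near a conductor
    if not inside.any():
        return 0.0
    return positions[inside].max() - positions[inside].min()

positions = np.linspace(0.0, 1.0, 101)          # robot position along the path [m]
baseline = 10.0                                 # free-space capacitance [pF], assumed
capacitance = np.full_like(positions, baseline)
capacitance[(positions > 0.4) & (positions < 0.6)] = 14.0   # simulated obstacle region
print("estimated gap width [m]:", estimate_gap_width(capacitance, positions, baseline))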
新谷太健, 石井カルロス寿憲, 石黒浩, "複数人対話における役割に応じた視線の振る舞いの解析とロボットへの実装", 第57回人工知能学会 AI チャレンジ研究会, no. 057-17, オンライン開催, pp. 106-114, November, 2020.
Abstract: In a multi-person face-to-face dialogue, people naturally gaze according to their roles. The goal of this research is to develop an agent that can control eye movement according to its role in a face-to-face dialogue with multiple users. In this study, we analyze the gaze behaviors in three-party dialogue data accounting for dialogue roles, implement gaze models on a robot based on the analysis results, and conduct evaluation experiments. We show that natural behaviors are achieved by our proposed gaze control system, which accounts for dialogue roles and eyeball movement control.
BibTeX:
@InProceedings{新谷太健2020a,
  author    = {新谷太健 and 石井カルロス寿憲 and 石黒浩},
  booktitle = {第57回人工知能学会 AI チャレンジ研究会},
  title     = {複数人対話における役割に応じた視線の振る舞いの解析とロボットへの実装},
  year      = {2020},
  address   = {オンライン開催},
  day       = {20-21},
  etitle    = {Analysis of Role-Based Gaze Behaviors and their Implementation in a Robot in Multiparty Conversation},
  month     = nov,
  number    = {057-17},
  pages     = {106-114},
  url       = {http://www.osaka-kyoiku.ac.jp/~challeng/SIG-Challenge-057/program.html},
  abstract  = {In a multi-person face-to-face dialogue, people naturally gaze according to their roles. The goal of this research is to develop an agent that can control eye movement according to its role in a face-to-face dialogue with multiple users. In this study, we analyze the gaze behaviors in three-party dialogue data accounting for dialogue roles, implement gaze models on a robot based on the analysis results, and conduct evaluation experiments. We show that natural behaviors are achieved by our proposed gaze control system, which accounts for dialogue roles and eyeball movement control.},
}
内田貴久, 港隆史, 石黒浩, "Autonomous Robots for Daily Dialogue Based on Preference and Experience Models", In The 3rd International Symposium on Symbiotic Intelligent Systems: "A New Era towards Responsible Robotics and Innovation" (3rd SISReC Symposium), online, November, 2020.
Abstract: This study develops robots that people want to engage in daily dialogue with. In this study, we hypothesize that “a dialogue robot that tries to understand human relationships improves both its human-likeness and the user’s motivation to talk with it.” In this presentation, we first explain a dialogue robot that estimates others’ preference models from its own preference model. Next, we propose a dialogue robot based on the similarity of personal preference models. Finally, we propose a dialogue robot based on the similarity of personal experience models. The experimental results of the three studies support the hypothesis. Future work needs to develop a human relationship model that considers cultural differences and types of desires.
BibTeX:
@InProceedings{内田貴久2020,
  author    = {内田貴久 and 港隆史 and 石黒浩},
  booktitle = {The 3rd International Symposium on Symbiotic Intelligent Systems: "A New Era towards Responsible Robotics and Innovation" (3rd SISReC Symposium)},
  title     = {Autonomous Robots for Daily Dialogue Based on Preference and Experience Models},
  year      = {2020},
  address   = {online},
  day       = {19-20},
  month     = nov,
  url       = {https://sisrec.otri.osaka-u.ac.jp/the-3rd-international-symposium-on-symbiotic-intelligent-systems/},
  abstract  = {This study develops robots that people want to engage in daily dialogue with. In this study, we hypothesize that “a dialogue robot that tries to understand human relationships improves both its human-likeness and the user’s motivation to talk with it.” In this presentation, we first explain a dialogue robot that estimates others’ preference models from its own preference model. Next, we propose a dialogue robot based on the similarity of personal preference models. Finally, we propose a dialogue robot based on the similarity of personal experience models. The experimental results of the three studies support the hypothesis. Future work needs to develop a human relationship model that considers cultural differences and types of desires.},
}
Bowen Wu, Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, "Improving Conditional-GAN using Unrolled-GAN for the Generation of Co-speech Upper Body Gesture", In 第57回人工知能学会 AI チャレンジ研究会, no. 057-15, オンライン開催, pp. 92-99, November, 2020.
Abstract: Co-speech gesture is a crucial non-verbal modality for humans to express ideas. Social agents also need such capability to be more human-like and comprehensive. This work aims to model the distribution of gesture conditioned on human speech features for the generation, instead of finding an injective mapping function from speech to gesture. We propose a novel conditional GAN-based generative model to not only realize the conversion from speech to gesture but also to approximate the distribution of gesture conditioned on speech through parameterization. Objective evaluation shows that the proposed model outperforms the existing deterministic model in terms of distribution, indicating that generative models can approximate the real patterns of co-speech gestures better than the existing deterministic model. Our result suggests that it is critical to consider the nature of randomness when modeling co-speech gestures.
BibTeX:
@InProceedings{Wu2020,
  author    = {Bowen Wu and Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro},
  booktitle = {第57回人工知能学会 AI チャレンジ研究会},
  title     = {Improving Conditional-GAN using Unrolled-GAN for the Generation of Co-speech Upper Body Gesture},
  year      = {2020},
  address   = {オンライン開催},
  day       = {20-21},
  month     = nov,
  number    = {057-15},
  pages     = {92-99},
  url       = {http://www.osaka-kyoiku.ac.jp/~challeng/SIG-Challenge-057/program.html},
  abstract  = {Co-speech gesture is a crucial non-verbal modality for humans to express ideas. Social agents also need such capability to be more human-like and comprehensive. This work aims to model the distribution of gesture conditioned on human speech features for the generation, instead of finding an injective mapping function from speech to gesture. We propose a novel conditional GAN-based generative model to not only realize the conversion from speech to gesture but also to approximate the distribution of gesture conditioned on speech through parameterization. Objective evaluation shows that the proposed model outperforms the existing deterministic model in terms of distribution, indicating that generative models can approximate the real patterns of co-speech gestures better than the existing deterministic model. Our result suggests that it is critical to consider the nature of randomness when modeling co-speech gestures.},
}
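The paper above conditions a GAN on speech features so that gesture generation stays one-to-many rather than a deterministic mapping. The PyTorch sketch below is a minimal conditional GAN training step in that spirit; network sizes, feature dimensions, and the training loop are placeholder assumptions, and the unrolled-discriminator refinement used in the paper is not reproduced here.

# Minimal conditional-GAN sketch for speech-conditioned gesture generation.
import torch
import torch.nn as nn

SPEECH_DIM, NOISE_DIM, POSE_DIM = 64, 16, 30    # assumed feature sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPEECH_DIM + NOISE_DIM, 128), nn.ReLU(),
            nn.Linear(128, POSE_DIM))
    def forward(self, speech, noise):
        return self.net(torch.cat([speech, noise], dim=-1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPEECH_DIM + POSE_DIM, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1))
    def forward(self, speech, pose):
        return self.net(torch.cat([speech, pose], dim=-1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

speech = torch.randn(32, SPEECH_DIM)            # placeholder speech features
real_pose = torch.randn(32, POSE_DIM)           # placeholder motion-capture poses

# one discriminator step
fake_pose = G(speech, torch.randn(32, NOISE_DIM)).detach()
loss_d = bce(D(speech, real_pose), torch.ones(32, 1)) + \
         bce(D(speech, fake_pose), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# one generator step (the noise input keeps the speech-to-gesture mapping one-to-many)
fake_pose = G(speech, torch.randn(32, NOISE_DIM))
loss_g = bce(D(speech, fake_pose), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()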
住岡英信, 港隆史, 塩見昌裕, "ユーザと触覚体験を共有する着用型エージェントの開発", 第38回日本ロボット学術講演会 (RSJ2020), online, pp. RSJ2020AC2I2-01 1-3, October, 2020.
Abstract: 本研究では、人の主観的体験である接触体験を人と共有する着用型エージェントを開発し、それとのインタラクションについて検討を行う
BibTeX:
@InProceedings{住岡2020a,
  author    = {住岡英信 and 港隆史 and 塩見昌裕},
  booktitle = {第38回日本ロボット学会学術講演会 (RSJ2020)},
  title     = {ユーザと触覚体験を共有する着用型エージェントの開発},
  year      = {2020},
  address   = {online},
  day       = {9-11},
  month     = oct,
  pages     = {RSJ2020AC2I2-01 1-3},
  url       = {https://ac.rsj-web.org/2020/},
  abstract  = {本研究では、人の主観的体験である接触体験を人と共有する着用型エージェントを開発し、それとのインタラクションについて検討を行う},
}
Hidenobu Sumioka, "A minimal design for intimate touch interaction toward interactive doll therapy", In Workshop on Socialware in human-robot collaboration and physical interaction (in the international conference on robot and human interactive communication), Online workshop (zoom), September, 2020.
BibTeX:
@Inproceedings{Sumioka2020a,
  author    = {Hidenobu Sumioka},
  title     = {A minimal design for intimate touch interaction toward interactive doll therapy},
  booktitle = {Workshop on Socialware in human-robot collaboration and physical interaction (in the international conference on robot and human interactive communication)},
  year      = {2020},
  address   = {Online workshop (zoom)},
  month     = sep,
  day       = {1},
  url       = {https://dil.atr.jp/crest2018_STI/socialware-in-roman2020/page.html},
}
草野翔悟, 住岡英信, 港隆史, 塩見昌裕, 田熊隆史, "導電性を有する布を用いた触覚センサの開発と評価", ロボティクス・メカトロニクス 講演会, オンライン開催, pp. 1P1-L110 1-4, May, 2020.
Abstract: This paper introduces novel soft chamber embedding stretchable conductive cloth. It measures the contacting information of the chamber in case that the soft robot equipping the chamber contacts the obstacle. The mechanism of the cloth and the chamber embedding the cloth are explained. The experimental results show that the chamber rapidly distinguishes the contacting position by measuring the capacitance of the cloth.
BibTeX:
@InProceedings{草野2020,
  author    = {草野翔悟 and 住岡英信 and 港隆史 and 塩見昌裕 and 田熊隆史},
  booktitle = {ロボティクス・メカトロニクス 講演会},
  title     = {導電性を有する布を用いた触覚センサの開発と評価},
  year      = {2020},
  address   = {オンライン開催},
  day       = {27-30},
  month     = may,
  pages     = {1P1-L110 1-4},
  url       = {https://robomech.org/2020/},
  abstract  = {This paper introduces novel soft chamber embedding stretchable conductive cloth. It measures the contacting information of the chamber in case that the soft robot equipping the chamber contacts the obstacle. The mechanism of the cloth and the chamber embedding the cloth are explained. The experimental results show that the chamber rapidly distinguishes the contacting position by measuring the capacitance of the cloth.},
}
石井カルロス寿憲, 三方瑠祐, 石黒浩, "雑談対話中の指示ジェスチャの分析:発話機能と対人関係との関連", 日本音響学会2020年春季研究発表会 (ASJ2020 Spring), no. 1-P-26, 埼玉大学, 埼玉, pp. 823-824, March, 2020.
Abstract: 本研究では、雑談対話に出現する指示(手のひら、人差し指などで相手を指す)ジェスチャに伴う印象に着目し、手の形や動きと、発話機能、対人関係などを考慮して、これらの関連性を調べた。
BibTeX:
@InProceedings{石井カルロス寿憲2020,
  author    = {石井カルロス寿憲 and 三方瑠祐 and 石黒浩},
  booktitle = {日本音響学会2020年春季研究発表会 (ASJ2020 Spring)},
  title     = {雑談対話中の指示ジェスチャの分析:発話機能と対人関係との関連},
  year      = {2020},
  address   = {埼玉大学, 埼玉},
  day       = {16-18},
  month     = mar,
  number    = {1-P-26},
  pages     = {823-824},
  url       = {https://acoustics.jp/annualmeeting/},
  abstract  = {本研究では、雑談対話に出現する指示(手のひら、人差し指などで相手を指す)ジェスチャに伴う印象に着目し、手の形や動きと、発話機能、対人関係などを考慮して、これらの関連性を調べた。},
}
Xinyue Li, Carlos Toshinori Ishi, Ryoko Hayashi, "中国語を母語とする日本語学習者による態度音声の韻律的特徴", 日本音響学会2020年春季研究発表会 (ASJ2020 Spring), no. 3-P-43, 埼玉大学, 埼玉, pp. 1191-1192, March, 2020.
Abstract: 本研究では,日本語母語話者と中国語を母語とする日本語学習者が発話した態度音声を分析することで,態度のペアである「友好」「敵対」,「丁寧」「失礼」,「本気」「冗談」,「賞賛」「非難」の音声が態度および発話者群によってどのように変化するのかについて検討する。
BibTeX:
@Inproceedings{Li2020,
  author    = {Xinyue Li and Carlos Toshinori Ishi and Ryoko Hayashi},
  title     = {中国語を母語とする日本語学習者による態度音声の韻律的特徴},
  booktitle = {日本音響学会2020年春季研究発表会 (ASJ2020 Spring)},
  year      = {2020},
  number    = {3-P-43},
  pages     = {1191-1192},
  address   = {埼玉大学, 埼玉},
  month     = mar,
  day       = {16-18},
  url       = {https://acoustics.jp/annualmeeting/},
  abstract  = {本研究では,日本語母語話者と中国語を母語とする日本語学習者が発話した態度音声を分析することで,態度のペアである「友好」「敵対」,「丁寧」「失礼」,「本気」「冗談」,「賞賛」「非難」の音声が態度および発話者群によってどのように変化するのかについて検討する。},
}
新谷太健, 石井カルロス寿憲, 劉超然, 石黒浩, "三者対話における遠隔操作型ロボットへの半自律視線制御支援システムの提案", HAIシンポジウム2020, 専修大学生田キャンパス, 神奈川, pp. P-55, March, 2020.
Abstract: In recent years, remote control of interactive agents has been studied. Gaze control is one important factor in dialogue robots. In this research, we propose a dialogue agent system that generates appropriate gaze movements using "speech act" and controls gaze to support remote operators in order to realize a smooth dialogue in three-party conversation.
BibTeX:
@InProceedings{新谷太健2020,
  author    = {新谷太健 and 石井カルロス寿憲 and 劉超然 and 石黒浩},
  booktitle = {HAIシンポジウム2020},
  title     = {三者対話における遠隔操作型ロボットへの半自律視線制御支援システムの提案},
  year      = {2020},
  address   = {専修大学生田キャンパス, 神奈川},
  day       = {6-7},
  etitle    = {A model of eye gaze in social robots for three-party interaction},
  month     = mar,
  pages     = {P-55},
  url       = {http://hai-conference.net/symp2020/index.php},
  abstract  = {In recent years, remote control of interactive agents has been studied. Gaze control is one important factor in dialogue robots. In this research, we propose a dialogue agent system that generates appropriate gaze movements using "speech act" and controls gaze to support remote operators in order to realize a smooth dialogue in three-party conversation.},
}
Soheil Keshmiri, "Higher Specificity of Multiscale Entropy than Permutation Entropy in Quantification of the Brain Activity in Response to Naturalistic Stimuli: a Comparative Study", In The 1st International Symposium on Human InformatiX: X-Dimensional Human Informatics and Biology, ATR, Kyoto, February, 2020.
Abstract: I provide results on the comparative analyses of these measures with the entropy of the human subjects’ EEG recordings who watched short movie clips that elicited negative, neutral, and positive affect. The analyses results identified significant anti-correlations between all MSE scales and the entropy of these EEG recordings that were stronger in the negative than the positive and the neutral states. They also showed that MSE significantly differentiated between the brain responses to these affect. On the other hand, these results indicated that PE failed to identify such significant correlations and differences between the negative, neutral, and positive affect. These results provide insights on the level of association between the entropy, the MSE, and the PE of the brain variability in response to naturalistic stimuli, thereby enabling researchers to draw more informed conclusions on quantification of the brain variability by these measures.
BibTeX:
@InProceedings{Keshmiri2020a,
  author    = {Soheil Keshmiri},
  booktitle = {The 1st International Symposium on Human InformatiX: X-Dimensional Human Informatics and Biology},
  title     = {Higher Specificity of Multiscale Entropy than Permutation Entropy in Quantification of the Brain Activity in Response to Naturalistic Stimuli: a Comparative Study},
  year      = {2020},
  address   = {ATR, Kyoto},
  day       = {27-28},
  month     = feb,
  abstract  = {I provide results on the comparative analyses of these measures with the entropy of the human subjects’ EEG recordings who watched short movie clips that elicited negative, neutral, and positive affect. The analyses results identified significant anti-correlations between all MSE scales and the entropy of these EEG recordings that were stronger in the negative than the positive and the neutral states. They also showed that MSE significantly differentiated between the brain responses to these affect. On the other hand, these results indicated that PE failed to identify such significant correlations and differences between the negative, neutral, and positive affect. These results provide insights on the level of association between the entropy, the MSE, and the PE of the brain variability in response to naturalistic stimuli, thereby enabling researchers to draw more informed conclusions on quantification of the brain variability by these measures.},
}
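The talk above compares multiscale entropy (MSE) and permutation entropy (PE) as measures of brain-signal variability. As a hedged illustration only, the sketch below computes both measures on a toy signal; the parameter choices (m, r, scales, order) are common defaults, not the ones used in the study.

# Sketch: multiscale (sample) entropy and permutation entropy on a toy signal.
import numpy as np
from itertools import permutations
from math import factorial, log

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, float)
    r = 0.2 * x.std() if r is None else r
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=-1)
        return (d <= r).sum() - len(templ)      # exclude self-matches
    B, A = count(m), count(m + 1)
    return -log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3, 4)):
    x = np.asarray(x, float)
    out = []
    for tau in scales:                          # coarse-grain, then sample entropy
        n = len(x) // tau
        coarse = x[:n * tau].reshape(n, tau).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

def permutation_entropy(x, order=3):
    x = np.asarray(x, float)
    patterns = {p: 0 for p in permutations(range(order))}
    for i in range(len(x) - order + 1):
        patterns[tuple(np.argsort(x[i:i + order]))] += 1
    p = np.array([c for c in patterns.values() if c > 0], float)
    p /= p.sum()
    return -(p * np.log(p)).sum() / log(factorial(order))   # normalized to [0, 1]

signal = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.3 * np.random.default_rng(0).normal(size=500)
print("MSE (scales 1-4):", [round(v, 3) for v in multiscale_entropy(signal)])
print("PE (order 3):", round(permutation_entropy(signal), 3))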
劉超然, 石井カルロス寿憲, "マイクロフォンアレイおよびデプスセンサーのオンラインキャリブレーションに関する考察", In 第55回人工知能学会 AI チャレンジ研究会, 慶応義塾大学 矢上キャンパス, 神奈川, pp. 18-23, November, 2019.
Abstract: RGB-D sensor and microphone array are widely used for providing an instantaneous representation of the current visual and auditory environment. Sensor pose is needed for sharing and combining sensing results together. However, manual calibration of different types of sensors is tedious and time consuming. In this paper, we propose an online calibration framework that can estimate sensors' 3D pose and works with RGB-D sensor and microphone array. In the proposed framework, the calibration problem is described as a factor graph inference problem and solved with a Graph Neural Network (GNN). Instead of frequently used visual markers, we use multiple moving people as reference objects to achieve automatic calibration.
BibTeX:
@InProceedings{劉超然2019b,
  author    = {劉超然 and 石井カルロス寿憲},
  booktitle = {第55回人工知能学会 AI チャレンジ研究会},
  title     = {マイクロフォンアレイおよびデプスセンサーのオンラインキャリブレーションに関する考察},
  year      = {2019},
  address   = {慶応義塾大学 矢上キャンパス, 神奈川},
  day       = {22},
  etitle    = {Online calibration of microphone array and depth sensors},
  month     = nov,
  pages     = {18-23},
  url       = {http://www.osaka-kyoiku.ac.jp/~challeng/SIG-Challenge-055/},
  abstract  = {RGB-D sensor and microphone array are widely used for providing an instantaneous representation of the current visual and auditory environment. Sensor pose is needed for sharing and combining sensing results together. However, manual calibration of different types of sensors is tedious and time consuming. In this paper, we propose an online calibration framework that can estimate sensors' 3D pose and works with RGB-D sensor and microphone array. In the proposed framework, the calibration problem is described as a factor graph inference problem and solved with a Graph Neural Network (GNN). Instead of frequently used visual markers, we use multiple moving people as reference objects to achieve automatic calibration.},
}
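The paper above formulates sensor calibration as factor-graph inference solved with a graph neural network, using moving people as reference objects. The sketch below only illustrates the geometric core of that problem under a strong simplification: given matched 2D positions of the same people seen by two sensors, it recovers the relative rotation and translation with the Kabsch (SVD) method. The correspondences, 2D setting, and simulated data are assumptions.

# Simplified stand-in for cross-sensor pose calibration from matched person positions.
import numpy as np

def estimate_relative_pose(p_src, p_dst):
    """Find R, t minimizing ||R @ p_src + t - p_dst|| over matched 2D points."""
    mu_s, mu_d = p_src.mean(axis=0), p_dst.mean(axis=0)
    H = (p_src - mu_s).T @ (p_dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Simulated people positions in the depth-sensor frame and in a rotated,
# shifted microphone-array frame.
rng = np.random.default_rng(1)
people_depth = rng.uniform(-2, 2, size=(30, 2))
theta = np.deg2rad(40)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
people_array = people_depth @ R_true.T + np.array([1.0, -0.5])

R_est, t_est = estimate_relative_pose(people_depth, people_array)
angle_err = np.arccos(np.clip(np.trace(R_est @ R_true.T) / 2, -1, 1))
print("rotation error [deg]:", np.rad2deg(angle_err))
print("translation estimate:", t_est)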
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Hiroshi Ishiguro, "Older People Prefrontal Cortex Activation Estimates Their Perceived Difficulty of a Humanoid-Mediated Conversation", In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), The Venetian Macau, China, November, 2019.
Abstract: In this article, we extend our recent results on prediction of the older peoples’ perceived difficulty of verbal communication during a humanoid-mediated storytelling experiment to the case of a longitudinal conversation that was conducted over a four-week period and included a battery of conversational topics. For this purpose, we used our model that estimates the older people’s perceived difficulty by mapping their prefrontal cortex (PFC) activity during the verbal communication onto fine-grained cluster spaces of a working memory (WM) task that induces loads on human’s PFC through modulation of its difficulty level. This enables us to differentially quantify the observed changes in PFC activity during the conversation based on the difficulty level of the WM task. We show that such a quantification forms a reliable basis for learning the PFC activation patterns in response to conversational contents. Our results indicate the ability of our model for predicting the older peoples’ perceived difficulty of a wide range of humanoid-mediated tele-conversations, regardless of their type, topic, and duration.
BibTeX:
@InProceedings{Keshmiri2019b,
  author    = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Hiroshi Ishiguro},
  booktitle = {2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019)},
  title     = {Older People Prefrontal Cortex Activation Estimates Their Perceived Difficulty of a Humanoid-Mediated Conversation},
  year      = {2019},
  address   = {The Venetian Macau, China},
  day       = {3-8},
  month     = nov,
  url       = {https://www.iros2019.org/},
  abstract  = {In this article, we extend our recent results on prediction of the older peoples’ perceived difficulty of verbal communication during a humanoid-mediated storytelling experiment to the case of a longitudinal conversation that was conducted over a four-week period and included a battery of conversational topics. For this purpose, we used our model that estimates the older people’s perceived difficulty by mapping their prefrontal cortex (PFC) activity during the verbal communication onto fine-grained cluster spaces of a working memory (WM) task that induces loads on human’s PFC through modulation of its difficulty level. This enables us to differentially quantify the observed changes in PFC activity during the conversation based on the difficulty level of the WM task. We show that such a quantification forms a reliable basis for learning the PFC activation patterns in response to conversational contents. Our results indicate the ability of our model for predicting the older peoples’ perceived difficulty of a wide range of humanoid-mediated tele-conversations, regardless of their type, topic, and duration.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
}
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Hiroshi Ishiguro, "Decoding the Perceived Difficulty of Communicated Contents by Older People: Toward Conversational Robot-Assistive Elderly Care", In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), The Venetian Macau, China, November, 2019.
Abstract: In this study, we propose a semi-supervised learning model for decoding of the perceived difficulty of communicated content by older people. Our model is based on mapping of the older people’s prefrontal cortex (PFC) activity during their verbal communication onto fine-grained cluster spaces of a working memory (WM) task that induces loads on human’s PFC through modulation of its difficulty level. This allows for differential quantification of the observed changes in pattern of PFC activation during verbal communication with respect to the difficulty level of the WM task. We show that such a quantification establishes a reliable basis for categorization and subsequently learning of the PFC responses to more naturalistic contents such as story comprehension. Our contribution is to present evidence on effectiveness of our method for estimation of the older peoples’ perceived difficulty of the communicated contents during an online storytelling scenario.
BibTeX:
@InProceedings{Keshmiri2019a,
  author    = {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Hiroshi Ishiguro},
  booktitle = {2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019)},
  title     = {Decoding the Perceived Difficulty of Communicated Contents by Older People: Toward Conversational Robot-Assistive Elderly Care},
  year      = {2019},
  address   = {The Venetian Macau, China},
  day       = {3-8},
  month     = nov,
  url       = {https://www.iros2019.org/},
  abstract  = {In this study, we propose a semi-supervised learning model for decoding of the perceived difficulty of communicated content by older people. Our model is based on mapping of the older people’s prefrontal cortex (PFC) activity during their verbal communication onto fine-grained cluster spaces of a working memory (WM) task that induces loads on human’s PFC through modulation of its difficulty level. This allows for differential quantification of the observed changes in pattern of PFC activation during verbal communication with respect to the difficulty level of the WM task. We show that such a quantification establishes a reliable basis for categorization and subsequently learning of the PFC responses to more naturalistic contents such as story comprehension. Our contribution is to present evidence on effectiveness of our method for estimation of the older peoples’ perceived difficulty of the communicated contents during an online storytelling scenario.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
}
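Both IROS papers above map prefrontal-cortex activity recorded during conversation onto cluster spaces built from a working-memory task with known difficulty levels. The sketch below is a much simplified, hedged illustration of that mapping idea: cluster WM-task features, label each cluster with its members' mean difficulty, then assign conversation epochs to the nearest cluster. Feature dimensions, data, and the clustering choice are assumptions, not the study's model.

# Sketch: estimating perceived difficulty by nearest-cluster lookup in a
# working-memory (WM) reference space. All data are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_epochs, n_channels = 120, 16
wm_features = rng.normal(size=(n_epochs, n_channels))    # PFC features during WM task
wm_difficulty = rng.integers(1, 4, size=n_epochs)        # e.g. 1-back, 2-back, 3-back

km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(wm_features)
# difficulty "label" of each cluster = mean WM difficulty of its members
cluster_difficulty = np.array(
    [wm_difficulty[km.labels_ == k].mean() for k in range(km.n_clusters)])

conv_features = rng.normal(size=(20, n_channels))        # PFC features during conversation
estimated = cluster_difficulty[km.predict(conv_features)]
print("estimated perceived difficulty per epoch:", np.round(estimated, 2))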
Xiqian Zheng, Masahiro Shiomi, Takashi Minato, Hiroshi Ishiguro, "What Kinds of Robot's Touch Will Match Expressed Emotions?", In The 2019 IEEE-RAS International Conference on Humanoid Robots, Toronto, Canada, pp. 755-762, October, 2019.
Abstract: This study investigated the effects of touch characteristics that change the strengths and the naturalness of the emotions perceived by people in human-robot touch interaction with an android robot that has a feminine, human-like appearance. Past studies on human-robot touch interaction mainly focused on understanding what kinds of human touches conveyed emotion to robots, i.e., the robot’s touch characteristics that can affect people’s perceived emotions received less focus. In this study, we focused on three kinds of touch characteristics (length, type, and part) based on arousal/valence perspectives, their effects toward the perceived strength/naturalness of a commonly used emotion in human-robot interaction, i.e., happy, and its counterpart emotion, (i.e., sad) based on Ekman’s definitions. Our results showed that the touch length and its type are useful to change the perceived strengths and the naturalness of the expressed emotions based on the arousal/valence perspective, although the touch part did not fit such perspective assumptions. Finally, our results suggested that a brief pat and a longer touch by the fingers are better combinations to express happy and sad emotions with our robot.
BibTeX:
@InProceedings{Zheng2019,
  author    = {Xiqian Zheng and Masahiro Shiomi and Takashi Minato and Hiroshi Ishiguro},
  booktitle = {The 2019 IEEE-RAS International Conference on Humanoid Robots},
  title     = {What Kinds of Robot's Touch Will Match Expressed Emotions?},
  year      = {2019},
  address   = {Toronto, Canada},
  day       = {15-17},
  month     = oct,
  pages     = {755-762},
  url       = {http://humanoids2019.loria.fr/},
  abstract  = {This study investigated the effects of touch characteristics that change the strengths and the naturalness of the emotions perceived by people in human-robot touch interaction with an android robot that has a feminine, human-like appearance. Past studies on human-robot touch interaction mainly focused on understanding what kinds of human touches conveyed emotion to robots, i.e., the robot’s touch characteristics that can affect people’s perceived emotions received less focus. In this study, we focused on three kinds of touch characteristics (length, type, and part) based on arousal/valence perspectives, their effects toward the perceived strength/naturalness of a commonly used emotion in human-robot interaction, i.e., happy, and its counterpart emotion, (i.e., sad) based on Ekman’s definitions. Our results showed that the touch length and its type are useful to change the perceived strengths and the naturalness of the expressed emotions based on the arousal/valence perspective, although the touch part did not fit such perspective assumptions. Finally, our results suggested that a brief pat and a longer touch by the fingers are better combinations to express happy and sad emotions with our robot.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
}
Soheil Keshmiri, "HRI and the Aging Society:Recent Findings on the Utility of Embodied Media for Stimulating the Brain Functioning", In Workshop on Socialware in human-robot interaction for symbiotic society in 7th annual International Conference on Human-Agent Interaction(HAI 2019), 京都工芸繊維大学, 京都, pp. 1-28, October, 2019.
Abstract: Physical embodiment of the media plays a crucial role in generating detectable brain responses to conversational interaction Entropic measures appear as reliable mathematical tools for quantification of such brain responses.
BibTeX:
@InProceedings{Keshmiri2019k,
  author    = {Soheil Keshmiri},
  booktitle = {Workshop on Socialware in human-robot interaction for symbiotic society in 7th annual International Conference on Human-Agent Interaction (HAI 2019)},
  title     = {HRI and the Aging Society: Recent Findings on the Utility of Embodied Media for Stimulating the Brain Functioning},
  year      = {2019},
  address   = {京都工芸繊維大学, 京都},
  day       = {6},
  month     = oct,
  pages     = {1-28},
  url       = {http://hai-conference.net/hai2019/},
  abstract  = {Physical embodiment of the media plays a crucial role in generating detectable brain responses to conversational interaction. Entropic measures appear as reliable mathematical tools for quantification of such brain responses.},
}
Hidenobu Sumioka, Soheil Keshmiri, Masahiro Shiomi, "The influence of virtual Hug in human-human interaction", In Workshop on Socialware in human-robot interaction for symbiotic society in 7th annual International Conference on Human-Agent Interaction (HAI 2019), 京都工芸繊維大学, 京都, October, 2019.
Abstract: In this presentation, we will talk about what is required to achieve social touch between a human and a robot.
BibTeX:
@InProceedings{Sumioka2019c,
  author    = {Hidenobu Sumioka and Soheil Keshmiri and Masahiro Shiomi},
  booktitle = {Workshop on Socialware in human-robot interaction for symbiotic society in 7th annual International Conference on Human-Agent Interaction (HAI 2019)},
  title     = {The influence of virtual Hug in human-human interaction},
  year      = {2019},
  address   = {京都工芸繊維大学, 京都},
  month     = oct,
  url       = {http://hai-conference.net/hai2019/},
  abstract  = {In this presentation, we will talk about what is required to achieve social touch between a human and a robot.},
}
Hidenobu Sumioka, "Mediated Social Touch to Build Human Intimate Relationship", In Emotional Attachment to Machines: New Ways of Relationship-Building in Japan, Freie Universiat, Germany, October, 2019.
Abstract: Interpersonal touch is a fundamental component of emotional attachment in social interaction and shows several effects such as stress reduction, a calming effect, and impression formation. Despite such effects on human, human-robot interactions have mainly focused on visual-auditory information. Although studies in machine-mediated interaction are developing various devices that provide tactile stimuli to human users, serious validation studies are scarce. In my talk, I present how touch interaction with our teleoperated robot and huggable communication medium affects our feeling, behavior, and physiological states, and discuss the potential for intimate interaction between human and robot at close distance.
BibTeX:
@Inproceedings{Sumioka2019d,
  author    = {Hidenobu Sumioka},
  title     = {Mediated Social Touch to Build Human Intimate Relationship},
  booktitle = {Emotional Attachment to Machines: New Ways of Relationship-Building in Japan},
  year      = {2019},
  address   = {Freie Universität, Germany},
  month     = oct,
  day       = {25-26},
  abstract  = {Interpersonal touch is a fundamental component of emotional attachment in social interaction and shows several effects such as stress reduction, a calming effect, and impression formation. Despite such effects on humans, human-robot interaction has mainly focused on visual-auditory information. Although studies in machine-mediated interaction are developing various devices that provide tactile stimuli to human users, serious validation studies are scarce. In my talk, I present how touch interaction with our teleoperated robot and huggable communication medium affects our feeling, behavior, and physiological states, and discuss the potential for intimate interaction between human and robot at close distance.},
}
石井カルロス寿憲, 内海章, 長澤勇, "車内搭載マイクロホンアレイによる車内音響アクティビティの分析", 日本音響学会2019年秋季研究発表会 (ASJ2019 Autumn), 立命館大学びわこ・くさつキャンパス, 滋賀, pp. 231-232, September, 2019.
Abstract: 車内の複数のマイクロホンアレイを搭載し、走行中に収録した運転者、助手席、車内騒音、車外騒音における音響イベントの分析を行った。運転者の発話は精度よく検出できることも検証した。
BibTeX:
@InProceedings{石井カルロス寿憲2019a,
  author    = {石井カルロス寿憲 and 内海章 and 長澤勇},
  booktitle = {日本音響学会2019年秋季研究発表会 (ASJ2019 Autumn)},
  title     = {車内搭載マイクロホンアレイによる車内音響アクティビティの分析},
  year      = {2019},
  address   = {立命館大学びわこ・くさつキャンパス, 滋賀},
  day       = {4-6},
  month     = sep,
  pages     = {231-232},
  url       = {https://acoustics.jp/},
  abstract  = {車内の複数のマイクロホンアレイを搭載し、走行中に収録した運転者、助手席、車内騒音、車外騒音における音響イベントの分析を行った。運転者の発話は精度よく検出できることも検証した。},
}
住岡英信, Sara Invitto, Alberto Grasso, Fabio Bona, Soheil Keshmiri, 港隆史, 塩見昌裕, 石黒浩, "抱擁型コミュニケーションメディアから呈示される音声と性関連フェロモンがユーザに与える心理生理的影響", 第37回日本ロボット学会学術講演会(RSJ2019), vol. RSJ2019AC3F2-02, 早稲田大学早稲田キャンパス, 東京, pp. 1-2, September, 2019.
Abstract: 本研究では開発した静電容量布型センサの距離特性について調査した結果を報告する
BibTeX:
@Inproceedings{住岡英信2019d,
  author    = {住岡英信 and Sara Invitto and Alberto Grasso and Fabio Bona and Soheil Keshmiri and 港隆史 and 塩見昌裕 and 石黒浩},
  title     = {抱擁型コミュニケーションメディアから呈示される音声と性関連フェロモンがユーザに与える心理生理的影響},
  booktitle = {第37回日本ロボット学会学術講演会(RSJ2019)},
  year      = {2019},
  volume    = {RSJ2019AC3F2-02},
  pages     = {1-2},
  address   = {早稲田大学早稲田キャンパス, 東京},
  month     = sep,
  day       = {3-7},
  url       = {https://ac.rsj-web.org/2019/},
  abstract  = {本研究では開発した静電容量布型センサの距離特性について調査した結果を報告する},
}
Li Xinyue, 石井カルロス寿憲, 林良子, "EGGを用いた日本語感情音声の分析 -日本語母語話者および中国人学習者による発話を対象に-", 2019年日本音響学会秋季研究発表会(ASJ2019), no. 2-Q-26, 立命館大学 びわこ・くさつキャンパス, 滋賀, pp. 1079-1080, September, 2019.
Abstract: EGGを用いて収録した日本語母語話者および中国語を母語とする日本語学習者による感情音声から音響特徴量(F0とpower)と声質(open quotient: Oq)を抽出し,それらの感情間および発話者グループ間の違いについて検討した。
BibTeX:
@InProceedings{Xinyue2019,
  author    = {Li Xinyue and 石井カルロス寿憲 and 林良子},
  booktitle = {2019年日本音響学会秋季研究発表会(ASJ2019)},
  title     = {EGGを用いた日本語感情音声の分析 -日本語母語話者および中国人学習者による発話を対象に-},
  year      = {2019},
  address   = {立命館大学 びわこ・くさつキャンパス, 滋賀},
  day       = {4-6},
  etitle    = {An analysis of Japanese Emotional Speech with EGG: Using speech produced by Japanese native speaker and Chinese learner speaker},
  month     = sep,
  number    = {2-Q-26},
  pages     = {1079-1080},
  url       = {https://acoustics.jp/},
  abstract  = {EGGを用いて収録した日本語母語話者および中国語を母語とする日本語学習者による感情音声から音響特徴量(F0とpower)と声質(open quotient: Oq)を抽出し,それらの感情間および発話者グループ間の違いについて検討した。},
}
Sara Invitto, Alberto Grasso, Fabio Bona, Soheil Keshmiri, Hidenobu Sumioka, Masahiro Shiomi, Hiroshi Ishiguro, "Embodied communication through social odor, cortical spectral power and co-presence technology", In XXV Congresso AIP Sezione Sperimentale, Milano, Italy, September, 2019.
Abstract: Embodied communication (EC) happens through multisensory channels, involving not only linguistic and cognitive processes, but also complex cross-modal perceptive pathways. This type of bidirectional communication is applicable both to human interactions and to human-robot interaction (HRI). A cross-modal technological interface can increase the interaction and the feeling of co-presence (CP), highly related to an interactive relationship. Information Communication Technology (ICT) developed, in virtual interfaces, some embodied ‘communicative’ senses, placing little attention to the olfactory sense, which, instead, is developmentally and evolutionistically linked to social and affective relation. The purpose of this work is to investigate the EC through social odor (SO), EEG cortical spectral power and CP technology.
BibTeX:
@InProceedings{Invitto2019,
  author    = {Sara Invitto and Alberto Grasso and Fabio Bona and Soheil Keshmiri and Hidenobu Sumioka and Masahiro Shiomi and Hiroshi Ishiguro},
  booktitle = {XXV Congresso AIP Sezione Sperimentale},
  title     = {Embodied communication through social odor, cortical spectral power and co-presence technology},
  year      = {2019},
  address   = {Milano, Italy},
  day       = {18-20},
  month     = sep,
  url       = {https://aipass.org/xxv-congresso-aip-sezione-sperimentale-milano-san-raffaele-18-20-settembre-2019},
  abstract  = {Embodied communication (EC) happens through multisensory channels, involving not only linguistic and cognitive processes, but also complex cross-modal perceptive pathways. This type of bidirectional communication is applicable both to human interactions and to human-robot interaction (HRI). A cross-modal technological interface can increase the interaction and the feeling of co-presence (CP), highly related to an interactive relationship. Information Communication Technology (ICT) developed, in virtual interfaces, some embodied ‘communicative’ senses, placing little attention to the olfactory sense, which, instead, is developmentally and evolutionistically linked to social and affective relation. The purpose of this work is to investigate the EC through social odor (SO), EEG cortical spectral power and CP technology.},
}
内田貴久, 港隆史, 中村泰, 吉川雄一郎, 石黒浩, "共感を目的とした対話におけるユーザの選好に対する概念獲得手法に関する検討", 2019年度 人工知能学会全国大会(第33回)(JSAI2019), 朱鷺メッセ 新潟コンベンションセンター, 新潟, pp. 1-4, June, 2019.
Abstract: 本研究の目的は,ロボットがユーザと共感を目的とした対話を行うことによって,ユーザの対話意欲を喚起することである.対話においてユーザの満足度を向上させるためには,共感的な発話を生成するだけでなく,ロボットが共感対象に関して理解していることを示す必要があると考えられる.そこで本稿では,あるアイテムに関するユーザの選好(好き嫌い)を対象として,共感を目的とした対話におけるユーザの選好に対する概念獲得手法について検討を行う.提案手法では,選好に関するデータと選好の属性ごとに類似性に関するデータを用意し,ユーザの選好に対する概念を獲得する.また,観測された少ないデータで選好や類似性に関する推定を行うためのルールを整理した.今後は,提案手法を適用した対話ロボットが行う共感による満足度を高めることができるかどうか,ユーザの対話意欲を喚起するかどうかを検証することが課題となる.
BibTeX:
@InProceedings{内田貴久2019a,
  author    = {内田貴久 and 港隆史 and 中村泰 and 吉川雄一郎 and 石黒浩},
  booktitle = {2019年度 人工知能学会全国大会(第33回)(JSAI2019)},
  title     = {共感を目的とした対話におけるユーザの選好に対する概念獲得手法に関する検討},
  year      = {2019},
  address   = {朱鷺メッセ 新潟コンベンションセンター, 新潟},
  day       = {4-7},
  etitle    = {A Study on Concept Acquisition Method for User Preference in Dialogue for Empathy},
  month     = jun,
  pages     = {1-4},
  series    = {3G4-OS-18b-02},
  url       = {https://www.ai-gakkai.or.jp/jsai2019/},
  abstract  = {本研究の目的は,ロボットがユーザと共感を目的とした対話を行うことによって,ユーザの対話意欲を喚起することである.対話においてユーザの満足度を向上させるためには,共感的な発話を生成するだけでなく,ロボットが共感対象に関して理解していることを示す必要があると考えられる.そこで本稿では,あるアイテムに関するユーザの選好(好き嫌い)を対象として,共感を目的とした対話におけるユーザの選好に対する概念獲得手法について検討を行う.提案手法では,選好に関するデータと選好の属性ごとに類似性に関するデータを用意し,ユーザの選好に対する概念を獲得する.また,観測された少ないデータで選好や類似性に関する推定を行うためのルールを整理した.今後は,提案手法を適用した対話ロボットが行う共感による満足度を高めることができるかどうか,ユーザの対話意欲を喚起するかどうかを検証することが課題となる.},
}
Takashi Minato, Kurima Sakai, Hiroshi Ishiguro, "Design of a robot's conversational capability based on desire and intention", In IoT Enabling Sensing/Network/AI and Photonics Conference 2019 (IoT-SNAP2019) at OPTICS & PHOTONICS International Congress 2019, パシフィコ横浜, 神奈川, pp. 1-6, April, 2019.
Abstract: Numbers of devices surrounding us are connected to the network and have a capability to verbally provide services. Those devices are desired to proactively interact with us since it is difficult for us to set up all the control parameters of the devices. For this sake, designing the desire and intention of the device is a promising approach. This paper focuses on a conversational robot and describes the design of the robot's dialogue control based on its desire and intention.
BibTeX:
@InProceedings{Minato2019,
  author    = {Takashi Minato and Kurima Sakai and Hiroshi Ishiguro},
  booktitle = {IoT Enabling Sensing/Network/AI and Photonics Conference 2019 (IoT-SNAP2019) at OPTICS \& PHOTONICS International Congress 2019},
  title     = {Design of a robot's conversational capability based on desire and intention},
  year      = {2019},
  address   = {パシフィコ横浜, 神奈川},
  day       = {23-25},
  month     = apr,
  pages     = {1-6},
  series    = {IoT-SNAP2-02},
  url       = {https://opicon.jp/ja/conferences/iot},
  abstract  = {Numbers of devices surrounding us are connected to the network and have a capability to verbally provide services. Those devices are desired to proactively interact with us since it is difficult for us to set up all the control parameters of the devices. For this sake, designing the desire and intention of the device is a promising approach. This paper focuses on a conversational robot and describes the design of the robot's dialogue control based on its desire and intention.},
}
Hidenobu Sumioka, Soheil Keshmiri, Hiroshi Ishiguro, "Brain Healthcare through iterated conversations with a teleoperated robot", In Toward Brain Health -The Present and the Future of Brain Data Sharing-, ITU, Geneva, Switzerland, March, 2019.
Abstract: In this presentation, we show how a communication robot helps elderly people maintain their brain health.
BibTeX:
@InProceedings{Sumioka2019a,
  author    = {Hidenobu Sumioka and Soheil Keshmiri and Hiroshi Ishiguro},
  booktitle = {Toward Brain Health -The Present and the Future of Brain Data Sharing-},
  title     = {Brain Healthcare through iterated conversations with a teleoperated robot},
  year      = {2019},
  address   = {ITU, Geneva, Switzerland},
  day       = {20},
  month     = mar,
  abstract  = {In this presentation, we show how a communication robot helps elderly people maintain their brain health.},
}
三方瑠祐, 石井カルロス寿憲, 新谷太健, 石黒浩, "アンドロイドの発話に伴うジェスチャー生成システムのオンライン化の検討", 第52回人工知能学会 AI チャレンジ研究会, 早稲田大学 西早稲田キャンパス, 東京, pp. 33-39, December, 2018.
Abstract: 日常生活における会話では,我々人間は言葉のやりとりだけではなく,視線や声の強弱,身振り手振りといったノンバーバル情報を用いてコミュニケーションを行っている.本研究では,この身振り手振り,いわゆるジェスチャーに注目し,ロボットへの実装を目標にしている.ここで扱うジェスチャーは, ハンドジェスチャーに限定し以降は, 単にジェスチャーとのみ表記する. ジェスチャーは,表現する対象物や動きによって,数種類のカテゴリーに分けることができる.人間同士の雑談対話のマルチモーダルデータを用いて, ジェスチャーに対するラベル付与および手の動きの抽出を行った. そして, ロボットのジェスチャー生成のため, 発話内容, ジェスチャーの機能, 動きの関係性を計算したベイジアンネットワークを作成した. 作成したネットワークを用いて, アンドロイドの発話に伴うジェスチャー生成を行った. この生成方法を, 人間とアンドロイドの1 体1 の自由対話をする状況において, オンライン化への実装を提案し, より汎用性のある手法へとつなげていく.
BibTeX:
@InProceedings{三方瑠祐2018,
  author    = {三方瑠祐 and 石井カルロス寿憲 and 新谷太健 and 石黒浩},
  booktitle = {第52回人工知能学会 AI チャレンジ研究会},
  title     = {アンドロイドの発話に伴うジェスチャー生成システムのオンライン化の検討},
  year      = {2018},
  address   = {早稲田大学 西早稲田キャンパス, 東京},
  day       = {3},
  etitle    = {Consideration of online processing for gesture generation accompanying android's utterance},
  month     = Dec,
  pages     = {33-39},
  series    = {SIG-Challenge-052-7 (12/3)},
  url       = {http://www.osaka-kyoiku.ac.jp/~challeng/SIG-Challenge-052/CFP.html},
  abstract  = {日常生活における会話では,我々人間は言葉のやりとりだけではなく,視線や声の強弱,身振り手振りといったノンバーバル情報を用いてコミュニケーションを行っている.本研究では,この身振り手振り,いわゆるジェスチャーに注目し,ロボットへの実装を目標にしている.ここで扱うジェスチャーは, ハンドジェスチャーに限定し以降は, 単にジェスチャーとのみ表記する. ジェスチャーは,表現する対象物や動きによって,数種類のカテゴリーに分けることができる.人間同士の雑談対話のマルチモーダルデータを用いて, ジェスチャーに対するラベル付与および手の動きの抽出を行った. そして, ロボットのジェスチャー生成のため, 発話内容, ジェスチャーの機能, 動きの関係性を計算したベイジアンネットワークを作成した. 作成したネットワークを用いて, アンドロイドの発話に伴うジェスチャー生成を行った. この生成方法を, 人間とアンドロイドの1 体1 の自由対話をする状況において, オンライン化への実装を提案し, より汎用性のある手法へとつなげていく.},
}
Shuichi Nishio, "Portable android robots for aged citizens: overview and current results", In Dementia & Technology, Seoul, Korea, December, 2018.
Abstract: I introduce our research activities on Telenoid and Bonoid.
BibTeX:
@Inproceedings{Nishio2018a,
  author    = {Shuichi Nishio},
  title     = {Portable android robots for aged citizens: overview and current results},
  booktitle = {Dementia \& Technology},
  year      = {2018},
  address   = {Seoul, Korea},
  month     = Dec,
  day       = {17},
  url       = {http://www.docdocdoc.co.kr/event/event20.html},
  abstract  = {I introduce our research activities on Telenoid and Bonoid.},
}
Masahiro Shiomi, Kodai Shatani, Takashi Minato, Hiroshi Ishiguro, "How should a Robot React before People's Touch?: Modeling a Pre-Touch Reaction Distance for a Robot’s Face", In the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018), Madrid, Spain, October, 2018.
Abstract: This study addresses pre-touch reaction distance effects in human-robot touch interaction with an android named ERICA that has a feminine, human-like appearance. Past studies on human-robot interaction, which enabled social robots to react to being touched by developing several sensing systems and designing reaction behaviors, mainly focused on after-touch situations, i.e., before-touch situations received less attention. In this study, we conducted a data collection to investigate the minimum comfortable distance to another’s touch by observing a dataset of 48 human-human touch interactions, modeled its distance relationships, and implemented a model with our robot. We experimentally investigated the effectiveness of the modeled minimum comfortable distance to being touched with 30 participants. The experiment results showed that they highly evaluated a robot that reacts to being touched based on the modeled minimum comfortable distance.
BibTeX:
@InProceedings{Shiomi2018b,
  author    = {Masahiro Shiomi and Kodai Shatani and Takashi Minato and Hiroshi Ishiguro},
  booktitle = {the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018)},
  title     = {How should a Robot React before People's Touch?: Modeling a Pre-Touch Reaction Distance for a Robot’s Face},
  year      = {2018},
  address   = {Madrid, Spain},
  day       = {1-5},
  month     = oct,
  url       = {https://www.iros2018.org/},
  abstract  = {This study addresses pre-touch reaction distance effects in human-robot touch interaction with an android named ERICA that has a feminine, human-like appearance. Past studies on human-robot interaction, which enabled social robots to react to being touched by developing several sensing systems and designing reaction behaviors, mainly focused on after-touch situations, i.e., before-touch situations received less attention. In this study, we conducted a data collection to investigate the minimum comfortable distance to another’s touch by observing a dataset of 48 human-human touch interactions, modeled its distance relationships, and implemented a model with our robot. We experimentally investigated the effectiveness of the modeled minimum comfortable distance to being touched with 30 participants. The experiment results showed that they highly evaluated a robot that reacts to being touched based on the modeled minimum comfortable distance.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
}
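The paper above models a minimum comfortable distance at which the robot should react before actually being touched. The sketch below only illustrates how such a modeled distance could be used at runtime: a reaction is triggered as soon as a tracked hand enters the comfort radius around the robot's face. The numeric threshold and the callback are placeholders, not the values or behaviors from the paper.

# Sketch: triggering a pre-touch reaction when a hand crosses a modeled distance.
import numpy as np

COMFORT_DISTANCE_M = 0.25    # placeholder threshold from a pre-touch distance model

def check_pretouch(face_pos, hand_pos, react):
    """Call `react(distance)` once the hand enters the comfort radius around the face."""
    distance = float(np.linalg.norm(np.asarray(hand_pos) - np.asarray(face_pos)))
    if distance < COMFORT_DISTANCE_M:
        react(distance)
        return True
    return False

face = (0.0, 0.0, 1.4)       # metres, robot frame (illustrative)
for hand in [(0.6, 0.0, 1.4), (0.4, 0.0, 1.4), (0.2, 0.0, 1.4)]:
    check_pretouch(face, hand, lambda d: print(f"react: hand at {d:.2f} m"))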
Carlos T. Ishi, Daichi Machiyashiki, Ryusuke Mikata, Hiroshi Ishiguro, "A speech-driven hand gesture generation method and evaluation in android robots", In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018), Madrid, Spain, October, 2018.
Abstract: Hand gestures commonly occur in daily dialogue interactions, and have important functions in communication. We first analyzed a multimodal human-human dialogue data and found relations between the occurrence of hand gestures and dialogue act categories. We also conducted clustering analysis on gesture motion data, and associated text information with the gesture motion clusters through gesture function categories. Using the analysis results, we proposed a speech-driven gesture generation method by taking text, prosody, and dialogue act information into account. We then implemented a hand motion control to an android robot, and evaluated the effectiveness of the proposed gesture generation method through subjective experiments. The gesture motions generated by the proposed method were judged to be relatively natural even under the robot hardware constraints.
BibTeX:
@InProceedings{Ishi2018b,
  author    = {Carlos T. Ishi and Daichi Machiyashiki and Ryusuke Mikata and Hiroshi Ishiguro},
  booktitle = {2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018)},
  title     = {A speech-driven hand gesture generation method and evaluation in android robots},
  year      = {2018},
  address   = {Madrid, Spain},
  day       = {1-5},
  month     = Oct,
  url       = {https://www.iros2018.org/},
  abstract  = {Hand gestures commonly occur in daily dialogue interactions, and have important functions in communication. We first analyzed a multimodal human-human dialogue data and found relations between the occurrence of hand gestures and dialogue act categories. We also conducted clustering analysis on gesture motion data, and associated text information with the gesture motion clusters through gesture function categories. Using the analysis results, we proposed a speech-driven gesture generation method by taking text, prosody, and dialogue act information into account. We then implemented a hand motion control to an android robot, and evaluated the effectiveness of the proposed gesture generation method through subjective experiments. The gesture motions generated by the proposed method were judged to be relatively natural even under the robot hardware constraints.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
}
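The abstract above links dialogue-act and text information to gesture function categories and then to clustered hand motions. The toy sketch below mirrors that selection chain with made-up probability tables; the paper learns these relations from multimodal dialogue data, so every name and number here is an illustrative assumption.

# Toy sketch: dialogue act -> gesture function -> motion cluster selection.
import random

P_FUNCTION_GIVEN_ACT = {      # P(gesture function | dialogue act), illustrative
    "question": {"beat": 0.3, "deictic": 0.4, "none": 0.3},
    "statement": {"beat": 0.5, "iconic": 0.3, "none": 0.2},
}
MOTION_CLUSTERS = {           # representative motion ids per gesture function
    "beat": ["beat_small", "beat_large"],
    "deictic": ["point_listener", "point_side"],
    "iconic": ["shape_round", "shape_flat"],
    "none": ["rest"],
}

def choose_gesture(dialogue_act, rng=random):
    probs = P_FUNCTION_GIVEN_ACT.get(dialogue_act, {"none": 1.0})
    function = rng.choices(list(probs), weights=probs.values())[0]
    return function, rng.choice(MOTION_CLUSTERS[function])

random.seed(0)
print(choose_gesture("statement"))   # e.g. ('beat', 'beat_large')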
石井カルロス寿憲, 神田崇行, "暴言発話の韻律および音声特徴の分析", 日本音響学会2018年秋季研究発表会 (ASJ2018 Autumn), vol. 1, no. 4, 大分大学旦野原キャンパス, 大分, pp. 1119-1120, September, 2018.
Abstract: 暴言や低モラルの発話における音響的・韻律的特徴の分析を行った。分析には同じ発話内容を異なった発話態度(読み上げ、暴言、狂乱、冗談)で発声したデータを収集、異なった発話スタイルに関する韻律・声質特徴の違いを示した。
BibTeX:
@InProceedings{石井カルロス寿憲2018b,
  author    = {石井カルロス寿憲 and 神田崇行},
  booktitle = {日本音響学会2018年秋季研究発表会 (ASJ2018 Autumn)},
  title     = {暴言発話の韻律および音声特徴の分析},
  year      = {2018},
  address   = {大分大学旦野原キャンパス, 大分},
  day       = {12-14},
  month     = sep,
  number    = {4},
  pages     = {1119-1120},
  series    = {130},
  url       = {http://www.asj.gr.jp/annualmeeting/pdf/2018autumn_program.pdf},
  volume    = {1},
  abstract  = {暴言や低モラルの発話における音響的・韻律的特徴の分析を行った。分析には同じ発話内容を異なった発話態度(読み上げ、暴言、狂乱、冗談)で発声したデータを収集、異なった発話スタイルに関する韻律・声質特徴の違いを示した。},
}
住岡英信, Soheil Keshmiri, 石黒浩, "抱擁型コミュニケーションメディアが前頭脳血流にもたらす影響に関する情報理論的検討", 第36回 日本ロボット学会学術講演会 (RSJ2018), 中部大学春日井キャンパス, 愛知, September, 2018.
Abstract: 本講演会では、新たな社会基盤としてのロボット技術から、学術的可能性を探究するロボットサイエンスに至るまで、幅広い分野の講演を募集しており、企業、研究所、大学等からの幅広い発表、参加がある.
BibTeX:
@Inproceedings{住岡英信2018a,
  author    = {住岡英信 and Soheil Keshmiri and 石黒浩},
  title     = {抱擁型コミュニケーションメディアが前頭脳血流にもたらす影響に関する情報理論的検討},
  booktitle = {第36回 日本ロボット学会学術講演会 (RSJ2018)},
  year      = {2018},
  address   = {中部大学春日井キャンパス, 愛知},
  month     = Sep,
  day       = {4-7},
  url       = {http://rsj2018.rsj-web.org/},
  abstract  = {本講演会では、新たな社会基盤としてのロボット技術から、学術的可能性を探究するロボットサイエンスに至るまで、幅広い分野の講演を募集しており、企業、研究所、大学等からの幅広い発表、参加がある.},
}
劉超然, 石井カルロス, 石黒浩, "勾配を考慮した多チャンネルNMFによる音源分離の加速における考察", 日本音響学会2018年秋季研究発表会 (ASJ2018 Autumn), vol. 1-P-3, 大分大学旦野原キャンパス, 大分, pp. 253-254, September, 2018.
Abstract: NMFは低ランクマトリックス分解の手法として、画像・自然言語・音声信号処理など、幅広い分野で使われている。本稿では、コスト関数の降下勾配を参考に、多チャンネル音声信号の分離におけるNMFの学習を加速する手法について考察した。
BibTeX:
@InProceedings{劉超然2018,
  author    = {劉超然 and 石井カルロス and 石黒浩},
  booktitle = {日本音響学会2018年秋季研究発表会 (ASJ2018 Autumn)},
  title     = {勾配を考慮した多チャンネルNMFによる音源分離の加速における考察},
  year      = {2018},
  address   = {大分大学旦野原キャンパス, 大分},
  day       = {12-14},
  month     = sep,
  pages     = {253-254},
  url       = {http://www.asj.gr.jp/annualmeeting/pdf/2018autumn_program.pdf},
  volume    = {1-P-3},
  abstract  = {NMFは低ランクマトリックス分解の手法として、画像・自然言語・音声信号処理など、幅広い分野で使われている。本稿では、コスト関数の降下勾配を参考に、多チャンネル音声信号の分離におけるNMFの学習を加速する手法について考察した。},
}
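The abstract above concerns accelerating multichannel NMF for source separation by taking the gradient of the cost function into account. As a baseline reference only, the sketch below runs plain single-matrix NMF with Lee-Seung multiplicative updates on a spectrogram-like matrix; neither the multichannel formulation nor the gradient-informed acceleration from the paper is reproduced.

# Basic NMF with multiplicative updates (Euclidean cost) on a toy spectrogram.
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Factorize V (F x T, nonnegative) into W (F x rank) @ H (rank x T)."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy "spectrogram": two spectral patterns with different activations over time
rng = np.random.default_rng(1)
W_true = np.abs(rng.normal(size=(64, 2)))
H_true = np.abs(rng.normal(size=(2, 100)))
V = W_true @ H_true
W, H = nmf(V, rank=2)
print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))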
Hidenobu Sumioka, "Robotics for elderly society", In Summer school at Osaka University 2018 : Long term care system & scientific tecnology in Japan aging society, Osaka University, Osaka, August, 2018.
Abstract: In this talk, I will introduce several possibilities how social robot help human caregivers in elderly care.
BibTeX:
@Inproceedings{Sumioka2018a,
  author    = {Hidenobu Sumioka},
  title     = {Robotics for elderly society},
  booktitle = {Summer school at Osaka University 2018 : Long term care system \& scientific technology in Japan aging society},
  year      = {2018},
  address   = {Osaka University, Osaka},
  month     = Aug,
  day       = {7},
  abstract  = {In this talk, I will introduce several possibilities for how social robots can help human caregivers in elderly care.},
}
Takashi Minato, "Development of an autonomous android that can naturally talk with people", In World Symposium on Digital Intelligence for Systems and Machines (DISA2018), Technical University of Kosice, Slovakia, pp. 19-21, August, 2018.
Abstract: Our research group has been developing a very humanlike android robot that can talk with people in a humanlike manner, involving not only verbal but also non-verbal behaviors such as gestures, facial expressions, and gaze, while exploring essential mechanisms for generating natural conversation. Humans interact most effectively with other humans; hence, very humanlike androids can be promising communication media to support people's daily lives. Existing spoken dialogue services have mainly focused on task-oriented communication, such as voice search on smartphones and traffic information services, which serve information through natural verbal interaction. However, such dialogue systems have no intention or agency of their own and cannot be conversation partners for casual conversation. A conversation essentially involves mutual understanding of each participant's intentions and opinions; we therefore introduced a hierarchical decision-making model for dialogue generation in our android, based on the android's desires and intentions. Furthermore, humanlike bodily movements are also important for natural conversation, and we have developed a method to automatically generate humanlike motions synchronized with the android's utterances. We have thus studied human-android interaction in both verbal and non-verbal aspects, and this talk will introduce research topics related to those studies.
BibTeX:
@Inproceedings{Minato2018,
  author    = {Takashi Minato},
  title     = {Development of an autonomous android that can naturally talk with people},
  booktitle = {World Symposium on Digital Intelligence for Systems and Machines (DISA2018)},
  year      = {2018},
  pages     = {19-21},
  address   = {Technical University of Kosice, Slovakia},
  month     = Aug,
  day       = {23-25},
  url       = {http://www.disa2018.org},
  abstract  = {Our research group has been developing a very humanlike android robot that can talk with people in a humanlike manner, involving not only verbal but also non-verbal behaviors such as gestures, facial expressions, and gaze, while exploring essential mechanisms for generating natural conversation. Humans interact most effectively with other humans; hence, very humanlike androids can be promising communication media to support people's daily lives. Existing spoken dialogue services have mainly focused on task-oriented communication, such as voice search on smartphones and traffic information services, which serve information through natural verbal interaction. However, such dialogue systems have no intention or agency of their own and cannot be conversation partners for casual conversation. A conversation essentially involves mutual understanding of each participant's intentions and opinions; we therefore introduced a hierarchical decision-making model for dialogue generation in our android, based on the android's desires and intentions. Furthermore, humanlike bodily movements are also important for natural conversation, and we have developed a method to automatically generate humanlike motions synchronized with the android's utterances. We have thus studied human-android interaction in both verbal and non-verbal aspects, and this talk will introduce research topics related to those studies.},
}
内田貴久, 港隆史, 中村泰, 石黒浩, "無限関係モデルを用いた対話における概念獲得手法に関する検討―主観的意見のやり取りを行う自律対話アンドロイド―", 人工知能学会 言語・音声理解と対話処理研究会(SLUD)第83回研究会, 関西学院大学梅田キャンパス, 大阪, pp. 13-18, August, 2018.
Abstract: The purpose of this study is to propose a method to estimate users' subjective concepts (e.g., preferences and interests) through non-task-oriented dialogue like casual conversation. Estimating the subjective concepts enables the dialogue system to express a deep understanding of the users. Previous methods of concept estimation need a large amount of learning data; therefore, the users need to have cumbersome interactions, which might decrease their motivation to talk. On the other hand, people can estimate many things about their partner from little information in conversations. In this paper, we propose a dialogue system that can quickly estimate the subjective concept of the users by referring to other people's subjective concepts that are acquired in advance of the dialogue.
BibTeX:
@Inproceedings{内田貴久2018d,
  author    = {内田貴久 and 港隆史 and 中村泰 and 石黒浩},
  title     = {無限関係モデルを用いた対話における概念獲得手法に関する検討―主観的意見のやり取りを行う自律対話アンドロイド―},
  booktitle = {人工知能学会 言語・音声理解と対話処理研究会(SLUD)第83回研究会},
  year      = {2018},
  series    = {SIG-SLUD-B801},
  pages     = {13-18},
  address   = {関西学院大学梅田キャンパス, 大阪},
  month     = Aug,
  day       = {29},
  url       = {https://jsai-slud.github.io/sig-slud/},
  etitle    = {A Study on Concept Acquisition Method in Dialogue Using Infinite Relational Model -An Autonomous Conversational Android Which Exchanges Subjective Opinion with Users-},
  abstract  = {The purpose of this study is to propose a method to estimate users' subjective concepts (e.g., preferences and interests) through non-task-oriented dialogue like casual conversation. Estimating the subjective concepts enables the dialogue system to express a deep understanding of the users. Previous methods of concept estimation need a large amount of learning data; therefore, the users need to have cumbersome interactions, which might decrease their motivation to talk. On the other hand, people can estimate many things about their partner from little information in conversations. In this paper, we propose a dialogue system that can quickly estimate the subjective concept of the users by referring to other people's subjective concepts that are acquired in advance of the dialogue.},
}
内田貴久, 港隆史, 石黒浩, "対話アンドロイドの主観的意見の不自然さを軽減する対話戦略", 2018年度 人工知能学会全国大会(第32回)(JSAI2018), vol. 4L1-01, 城山観光ホテル, 鹿児島, pp. 1-4, June, 2018.
Abstract: The goal of this research is to construct a conversational android that evokes users' motivation to talk in non-task-oriented dialogue like chatting. It has been said that stating subjective opinions is effective for motivating people to talk; however, users feel it to be unnatural when a conversational android states its subjective opinions. We hypothesized that lacking the background information as to why and how the android has the subjective opinions leads to the sense of unnaturalness because the users cannot accept its subjective opinions without such information. The experimental results showed that stating the background followed by the subjective opinion was significantly more natural than the opposite case; whereas, the naturalness was not influenced by the order if the speaker is a human. These results suggest that sharing background information in advance is an effective strategy for conversational androids to naturally state their subjective opinions.
BibTeX:
@Inproceedings{内田貴久2018b,
  author    = {内田貴久 and 港隆史 and 石黒浩},
  title     = {対話アンドロイドの主観的意見の不自然さを軽減する対話戦略},
  booktitle = {2018年度 人工知能学会全国大会(第32回)(JSAI2018)},
  year      = {2018},
  volume    = {4L1-01},
  pages     = {1-4},
  address   = {城山観光ホテル, 鹿児島},
  month     = Jun,
  day       = {5-8},
  url       = {https://www.ai-gakkai.or.jp/jsai2018/},
  etitle    = {Dialogue Strategy to Reduce Unnaturalness of Subjective Opinions of Conversational Androids},
  abstract  = {The goal of this research is to construct a conversational android that evokes users' motivation to talk in non-task-oriented dialogue like chatting. It has been said that stating subjective opinions is effective for motivating people to talk; however, users feel it to be unnatural when a conversational android states its subjective opinions. We hypothesized that lacking the background information as to why and how the android has the subjective opinions leads to the sense of unnaturalness because the users cannot accept its subjective opinions without such information. The experimental results showed that stating the background followed by the subjective opinion was significantly more natural than the opposite case; whereas, the naturalness was not influenced by the order if the speaker is a human. These results suggest that sharing background information in advance is an effective strategy for conversational androids to naturally state their subjective opinions.},
}
内田貴久, 港隆史, 石黒浩, "対話アンドロイドに価値観を帰属させる必要性", 情報処理学会 第80回全国大会(JPSJ), vol. 6ZA-02, 早稲田大学 西早稲田キャンパス, 東京, pp. 4-271/4-272, March, 2018.
Abstract: 本研究の目的は,雑談のような非タスク指向型対話においてユーザの対話意欲を引き出す対話ロボットの構築である.対話が盛り上がる時には主観的意見のやり取りが増加することが報告されている.しかし,人間はロボットに対して価値(良し悪し)に関わる主観的な経験を帰属しにくいことが明らかにされている.本研究では,対話アンドロイドへの価値観の帰属させやすさの異なる話題が,ユーザのアンドロイドに対する対話意欲に与える影響を検証した.その結果,価値観を帰属させにくい話題についてアンドロイドが対話する場合,ユーザはそのアンドロイドの主観的意見に対話意欲を刺激されるほどの意味を感じない可能性が示唆された.
BibTeX:
@Inproceedings{内田貴久2018a,
  author    = {内田貴久 and 港隆史 and 石黒浩},
  title     = {対話アンドロイドに価値観を帰属させる必要性},
  booktitle = {情報処理学会 第80回全国大会(JPSJ)},
  year      = {2018},
  volume    = {6ZA-02},
  pages     = {4-271/4-272},
  address   = {早稲田大学 西早稲田キャンパス, 東京},
  month     = Mar,
  day       = {13-15},
  url       = {https://www.ipsj.or.jp/event/taikai/80/index.html},
  etitle    = {A necessity to attribute values to conversational androids},
  abstract  = {本研究の目的は,雑談のような非タスク指向型対話においてユーザの対話意欲を引き出す対話ロボットの構築である.対話が盛り上がる時には主観的意見のやり取りが増加することが報告されている.しかし,人間はロボットに対して価値(良し悪し)に関わる主観的な経験を帰属しにくいことが明らかにされている.本研究では,対話アンドロイドへの価値観の帰属させやすさの異なる話題が,ユーザのアンドロイドに対する対話意欲に与える影響を検証した.その結果,価値観を帰属させにくい話題についてアンドロイドが対話する場合,ユーザはそのアンドロイドの主観的意見に対話意欲を刺激されるほどの意味を感じない可能性が示唆された.},
}
石井カルロス寿憲, 三方瑠祐, 石黒浩, "対話音声に伴う手振りの分析と分類の検討", 日本音響学会2018年春季研究発表会 (ASJ2018 Spring), vol. 3-6-2, 日本工業大学宮代キャンパス, 埼玉, pp. 1277-1278, March, 2018.
Abstract: 人らしい動作をロボットに表現させることを目指し、マルチモーダル対話データを収集し、対話音声に伴う手振りを分析した。ジェスチャーの動きや機能に関する分類の検討と、発話機能との関連性について報告する。
BibTeX:
@Inproceedings{石井カルロス寿憲2018,
  author    = {石井カルロス寿憲 and 三方瑠祐 and 石黒浩},
  title     = {対話音声に伴う手振りの分析と分類の検討},
  booktitle = {日本音響学会2018年春季研究発表会 (ASJ2018 Spring)},
  year      = {2018},
  volume    = {3-6-2},
  pages     = {1277-1278},
  address   = {日本工業大学宮代キャンパス, 埼玉},
  month     = Mar,
  day       = {13-15},
  url       = {http://www.asj.gr.jp/annualmeeting/asj2018springCFP_J.html},
  abstract  = {人らしい動作をロボットに表現させることを目指し、マルチモーダル対話データを収集し、対話音声に伴う手振りを分析した。ジェスチャーの動きや機能に関する分類の検討と、発話機能との関連性について報告する。},
}
Xiqian Zheng, Dylan F. Glas, Hiroshi Ishiguro, "Robot Social Memory System: Memory-Based Interaction Strategies for a Social Robot", In The 1st International Symposium on Systems Intelligence Division, A&H Hall, Osaka, January, 2018.
Abstract: Osaka University's Open and Transdisciplinary Research Initiatives (OTRI) is a new research institution that started in April 2017, when the "Cognitive Neuroscience Robotics Division (CNR)" of the Institute for Academic Initiatives (IAI) shifted to the "Systems Intelligence Division (SID)" of the OTRI. This event was held as its kick-off symposium.
BibTeX:
@Inproceedings{Zheng2018,
  author    = {Xiqian Zheng and Dylan F. Glas and Hiroshi Ishiguro},
  title     = {Robot Social Memory System: Memory-Based Interaction Strategies for a Social Robot},
  booktitle = {The 1st International Symposium on Systems Intelligence Division},
  year      = {2018},
  address   = {A\&H Hall, Osaka},
  month     = Jan,
  day       = {20-21},
  url       = {http://sid-osaka-u.org/2017/12/08/the-1st-international-symposium-on-systems-intelligence-division/},
  abstract  = {Osaka University's Open and Transdisciplinary Research Initiatives (OTRI) is a new research institution that started in April 2017, when the "Cognitive Neuroscience Robotics Division (CNR)" of the Institute for Academic Initiatives (IAI) shifted to the "Systems Intelligence Division (SID)" of the OTRI. This event was held as its kick-off symposium.},
}
Malcolm Doering, Dylan F. Glas, Hiroshi Ishiguro, "Modeling Interaction Structure for Robot Imitation Learning of Human Social Behavior", In The 1st International Symposium on Systems Intelligence Division, A&H Hall, Osaka, January, 2018.
Abstract: We present a learning-by-imitation technique that learns social robot interaction behaviors from natural human-human interaction data and requires minimum input from a designer. In particular, we focus on the problems of responding to ambiguous human actions and interpretability of the learned behaviors. To solve these problems, we introduce a novel topic clustering algorithm based on action co-occurrence frequencies to discover the topics of conversation in the training data and incorporate them into a rule learning system. The system learns human-readable rules that dictate which action the robot should take in response to a human action, given the current topic of conversation. We demonstrated our technique in a travel agent scenario where the robot learns to play the role of the travel agent. Our proposed technique outperformed several baseline techniques in qualitative and quantitative evaluations. The results showed that the proposed system responded more accurately to ambiguous questions and participants found that the proposed system was easier to understand, provided more information, and required less effort to interact with.
BibTeX:
@Inproceedings{Doering2018,
  author    = {Malcolm Doering and Dylan F. Glas and Hiroshi Ishiguro},
  title     = {Modeling Interaction Structure for Robot Imitation Learning of Human Social Behavior},
  booktitle = {The 1st International Symposium on Systems Intelligence Division},
  year      = {2018},
  address   = {A\&H Hall, Osaka},
  month     = Jan,
  day       = {21-22},
  url       = {http://sid-osaka-u.org/2017/12/08/the-1st-international-symposium-on-systems-intelligence-division/},
  abstract  = {We present a learning-by-imitation technique that learns social robot interaction behaviors from natural human-human interaction data and requires minimum input from a designer. In particular, we focus on the problems of responding to ambiguous human actions and interpretability of the learned behaviors. To solve these problems, we introduce a novel topic clustering algorithm based on action co-occurrence frequencies to discover the topics of conversation in the training data and incorporate them into a rule learning system. The system learns human-readable rules that dictate which action the robot should take in response to a human action, given the current topic of conversation. We demonstrated our technique in a travel agent scenario where the robot learns to play the role of the travel agent. Our proposed technique outperformed several baseline techniques in qualitative and quantitative evaluations. The results showed that the proposed system responded more accurately to ambiguous questions and participants found that the proposed system was easier to understand, provided more information, and required less effort to interact with.},
  file      = {Doering2018.pdf:pdf/Doering2018.pdf:PDF},
}
住岡英信, "密着HRIの可能性", 第2回精神疾患とインタラクティブシステム研究会, 仙台, 宮城, December, 2017.
Abstract: 研究会において成果の発表を行い、ASDを中心とした精神疾患とロボットに代表されるインタラクティブシステムとの接点について,ロボット工学だけでなく、心理学、認知科学的な視点も含め議論する。
BibTeX:
@Inproceedings{住岡英信2017d,
  author    = {住岡英信},
  title     = {密着HRIの可能性},
  booktitle = {第2回精神疾患とインタラクティブシステム研究会},
  year      = {2017},
  address   = {仙台, 宮城},
  month     = Dec,
  day       = {15},
  url       = {https://www.ei.tohoku.ac.jp/xkozima/event/1712mental.html},
  abstract  = {研究会において成果の発表を行い、ASDを中心とした精神疾患とロボットに代表されるインタラクティブシステムとの接点について,ロボット工学だけでなく、心理学、認知科学的な視点も含め議論する。},
}
町屋敷大地, 石井カルロス寿憲, 劉超然, 石黒浩, "アンドロイドの動作生成に向けた自然対話中のジェスチャの認識および分類に関する検討", 第49回人工知能学会 AI チャレンジ研究会 ロボット聴覚, 慶応義塾大学 矢上キャンパス, 神奈川, pp. 47-52, November, 2017.
Abstract: ロボットの動作生成を目指して, 人の対話と同時に起こるジェスチャーの分類とそれらのジェスチャー中の手の位置や, 発話との関係を調査した.ジェスチャーの分類はアノテーターによってラベルづけされ,k-means 法によってジェスチャーの手の動きのクラスタを生成した. またWordnetからジェスチャーとともに現れる発話の上位概念を取得した. これらの取得したデータの関わりを今後も調べていき, ロボットの動作生成へとつなげていく.
BibTeX:
@Inproceedings{町屋敷大地2017,
  author    = {町屋敷大地 and 石井カルロス寿憲 and 劉超然 and 石黒浩},
  title     = {アンドロイドの動作生成に向けた自然対話中のジェスチャの認識および分類に関する検討},
  booktitle = {第49回人工知能学会 AI チャレンジ研究会 ロボット聴覚},
  year      = {2017},
  pages     = {47-52},
  address   = {慶応義塾大学 矢上キャンパス, 神奈川},
  month     = Nov,
  day       = {25},
  url       = {http://www.osaka-kyoiku.ac.jp/~challeng/SIG-Challenge-049/program.html},
  abstract  = {ロボットの動作生成を目指して, 人の対話と同時に起こるジェスチャーの分類とそれらのジェスチャー中の手の位置や, 発話との関係を調査した.ジェスチャーの分類はアノテーターによってラベルづけされ,k-means 法によってジェスチャーの手の動きのクラスタを生成した. またWordnetからジェスチャーとともに現れる発話の上位概念を取得した. これらの取得したデータの関わりを今後も調べていき, ロボットの動作生成へとつなげていく.},
}
Jani Even, Carlos T. Ishi, Hiroshi Ishiguro, "DNN Based Pitch Estimation Using Microphone Array", In 第49回人工知能学会 AI チャレンジ研究会, 慶応義塾大学 矢上キャンパス, 神奈川, pp. 43-46, November, 2017.
Abstract: This paper presents some preliminary experiments on pitch classification of distant speech recorded with a microphone array. The pitch classification is performed by a deep neural network. Using the microphone array to perform beamforming is beneficial to the pitch classification. However, it requires a larger amount of data for training the network. The network seems to be robust to data mismatch, as pre-training with close speech data improved the results for distant speech.
BibTeX:
@Inproceedings{Even2017b,
  author    = {Jani Even and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {DNN Based Pitch Estimation Using Microphone Array},
  booktitle = {第49回人工知能学会 AI チャレンジ研究会},
  year      = {2017},
  pages     = {43-46},
  address   = {慶応義塾大学 矢上キャンパス, 神奈川},
  month     = Nov,
  day       = {25},
  url       = {http://www.osaka-kyoiku.ac.jp/~challeng/SIG-Challenge-049/program.html},
  abstract  = {This paper presents some preliminary experiments on pitch classification of distant speech recorded with a microphone array. The pitch classification is performed by a deep neural network. Using the microphone array to perform beamforming is beneficial to the pitch classification. However, it requires a larger amount of data for training the network. The network seems to be robust to data mismatch, as pre-training with close speech data improved the results for distant speech.},
}
塩見昌裕, 港隆史, 石黒浩, "接触行為に対するロボットの反応時間がもたらす印象変化", 第35回日本ロボット学会学術講演会(RSJ2017), 東洋大学川越キャンパス, 埼玉, September, 2017.
Abstract: 本稿では,人がロボットに触れた際の反応動作と,そのタイミングがどのように印象を変化させるかを検証することで,2秒ルールが接触行為に対する反応時間にも適用されるかを検証する.
BibTeX:
@Inproceedings{塩見昌裕2017,
  author    = {塩見昌裕 and 港隆史 and 石黒浩},
  title     = {接触行為に対するロボットの反応時間がもたらす印象変化},
  booktitle = {第35回日本ロボット学会学術講演会(RSJ2017)},
  year      = {2017},
  address   = {東洋大学川越キャンパス, 埼玉},
  month     = Sep,
  day       = {11-14},
  url       = {http://rsj2017.rsj-web.org/},
  abstract  = {本稿では,人がロボットに触れた際の反応動作と,そのタイミングがどのように印象を変化させるかを検証することで,2秒ルールが接触行為に対する反応時間にも適用されるかを検証する.},
  file      = {塩見昌裕2017.pdf:pdf/塩見昌裕2017.pdf:PDF},
}
Carlos T. Ishi, Takashi Minato, Hiroshi Ishiguro, "Motion analysis in vocalized surprise expressions and motion generation in android robots", In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, Canada, September, 2017.
Abstract: Surprise expressions often occur in dialogue interactions, and they are often accompanied by verbal interjectional utterances. We are dealing with the challenge of generating natural human-like motions during speech in android robots that have a highly human-like appearance. In this study, we focus on the analysis and motion generation of vocalized surprise expression. We first analyze facial, head and body motions during vocalized surprise appearing in human-human dialogue interactions. Analysis results indicate differences in the motion types for different types of surprise expression as well as different degrees of surprise expression. Consequently, we propose motion-generation methods based on the analysis results and evaluate the different modalities (eyebrows/eyelids, head and body torso) and different motion control levels for the proposed method. This work is carried out through subjective experiments. Evaluation results indicate the importance of each modality in the perception of surprise degree, naturalness, and the spontaneous vs. intentional expression of surprise.
BibTeX:
@InProceedings{Ishi2017a,
  author    = {Carlos T. Ishi and Takashi Minato and Hiroshi Ishiguro},
  booktitle = {2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017)},
  title     = {Motion analysis in vocalized surprise expressions and motion generation in android robots},
  year      = {2017},
  address   = {Vancouver, Canada},
  day       = {24-28},
  month     = Sep,
  url       = {http://www.iros2017.org/},
  abstract  = {Surprise expressions often occur in dialogue interactions, and they are often accompanied by verbal interjectional utterances. We are dealing with the challenge of generating natural human-like motions during speech in android robots that have a highly human-like appearance. In this study, we focus on the analysis and motion generation of vocalized surprise expression. We first analyze facial, head and body motions during vocalized surprise appearing in human-human dialogue interactions. Analysis results indicate differences in the motion types for different types of surprise expression as well as different degrees of surprise expression. Consequently, we propose motion-generation methods based on the analysis results and evaluate the different modalities (eyebrows/eyelids, head and body torso) and different motion control levels for the proposed method. This work is carried out through subjective experiments. Evaluation results indicate the importance of each modality in the perception of surprise degree, naturalness, and the spontaneous vs. intentional expression of surprise.},
  comment   = {(also accepted and published in IEEE Robotics and Automation Letters (RA-L))},
  file      = {Ishi2017a.pdf:pdf/Ishi2017a.pdf:PDF},
}
Jani Even, Carlos T. Ishi, Hiroshi Ishiguro, "Effect of Utterance Synchronized Gaze Pattern on Response Time during Human-Robot Interaction.", In 日本音響学会2017年秋季研究発表会, vol. 3-P-20, 愛媛大学城北キャンパス, 愛媛, pp. 373-374, September, 2017.
Abstract: This paper describes an experiment where the gaze pattern of a robot is modulated during speech production in order to influence the response time of the person interacting with the robot.
BibTeX:
@Inproceedings{Even2017a,
  author    = {Jani Even and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {Effect of Utterance Synchronized Gaze Pattern on Response Time during Human-Robot Interaction.},
  booktitle = {日本音響学会2017年秋季研究発表会},
  year      = {2017},
  volume    = {3-P-20},
  pages     = {373-374},
  address   = {愛媛大学城北キャンパス, 愛媛},
  month     = Sep,
  day       = {25-27},
  url       = {http://www.asj.gr.jp/annualmeeting/index.html},
  abstract  = {This paper describes an experiment where the gaze pattern of a robot is modulated during speech production in order to influence the response time of the person interacting with the robot.},
  file      = {Even2017a.pdf:pdf/Even2017a.pdf:PDF},
}
住岡英信, "対人接触は存在するか", 精神疾患とインタラクティブシステム研究会, 下呂温泉望川館, 岐阜, July, 2017.
Abstract: 本発表では、対人接触による様々な効果について紹介するとともに人工システムとの接触効果との対応を議論することで、どういった要因によって対人接触効果が起こるのか議論する。
BibTeX:
@Inproceedings{住岡英信2017,
  author    = {住岡英信},
  title     = {対人接触は存在するか},
  booktitle = {精神疾患とインタラクティブシステム研究会},
  year      = {2017},
  address   = {下呂温泉望川館, 岐阜},
  month     = Jul,
  day       = {21-22},
  abstract  = {本発表では、対人接触による様々な効果について紹介するとともに人工システムとの接触効果との対応を議論することで、どういった要因によって対人接触効果が起こるのか議論する。},
}
住岡英信, "抱擁型ロボットによる擬似的スキンシップが自閉スペクトラム症児にもたらす可能性", 第113回日本精神神経学会学術総会, 名古屋国際会議場, 愛知, June, 2017.
Abstract: 本発表では、これまで我々が開発してきた存在感メディアハグビーとその効果について紹介するとともに、自閉症児に対してもたらす可能性を議論する
BibTeX:
@Inproceedings{住岡英信2017a,
  author    = {住岡英信},
  title     = {抱擁型ロボットによる擬似的スキンシップが自閉スペクトラム症児にもたらす可能性},
  booktitle = {第113回日本精神神経学会学術総会},
  year      = {2017},
  address   = {名古屋国際会議場, 愛知},
  month     = Jun,
  day       = {22-24},
  url       = {https://www.jspn.or.jp/modules/meeting/index.php?content_id=110},
  abstract  = {本発表では、これまで我々が開発してきた存在感メディアハグビーとその効果について紹介するとともに、自閉症児に対してもたらす可能性を議論する},
  file      = {住岡英信2017a.pdf:pdf/住岡英信2017a.pdf:PDF},
}
Hidenobu Sumioka, "Brain and soft body in Human-Robot interaction", In The Human Brain Project Symposium on Building Bodies for Brains & Brains for Bodies, Geneva, Switzerland, June, 2017.
Abstract: This is a one-day symposium in the field of neurorobotics with the goal of improving robot behavior by exploiting ideas from neuroscience and investigating brain function using real physical robots or simulations thereof. Contributions to this workshop will focus on (but are not limited to) the relation between neural systems - artificial or biological - and soft-material robotic platforms, in particular the “control" of such systems by capitalizing on their intrinsic dynamical characteristics like stiffness, viscosity and compliance.
BibTeX:
@Inproceedings{Sumioka2017,
  author    = {Hidenobu Sumioka},
  title     = {Brain and soft body in Human-Robot interaction},
  booktitle = {The Human Brain Project Symposium on Building Bodies for Brains \& Brains for Bodies},
  year      = {2017},
  address   = {Geneva, Switzerland},
  month     = Jun,
  day       = {16},
  abstract  = {This is a one-day symposium in the field of neurorobotics with the goal of improving robot behavior by exploiting ideas from neuroscience and investigating brain function using real physical robots or simulations thereof. Contributions to this workshop will focus on (but are not limited to) the relation between neural systems - artificial or biological - and soft-material robotic platforms, in particular the “control" of such systems by capitalizing on their intrinsic dynamical characteristics like stiffness, viscosity and compliance.},
  file      = {Sumioka2017.pdf:pdf/Sumioka2017.pdf:PDF},
}
Jani Even, Carlos T. Ishi, Hiroshi Ishiguro, "Automatic labelling for DNN pitch classification", In 日本音響学会2017年春季研究発表会 (ASJ2017 Spring), vol. 1-P-32, 明治大学生田キャンパス, 神奈川, pp. 595-596, march, 2017.
Abstract: This paper presents a framework for gathering audio data and training a deep neural network for pitch classification. The goal is to obtain a large amount of labeled data to train the network. A throat microphone is used alongside usual microphones while recording the training set. The throat microphone signal is not contaminated by the background noise. Consequently, a conventional pitch estimation algorithm gives a satisfactory pitch estimate. That pitch estimate is used as a label to train the network to classify the pitch directly from the usual microphones. Preliminary experiments show that the proposed automatic labelling produces enough data to train the network.
BibTeX:
@Inproceedings{Even2017,
  author    = {Jani Even and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {Automatic labelling for DNN pitch classification},
  booktitle = {日本音響学会2017年春季研究発表会 (ASJ2017 Spring)},
  year      = {2017},
  volume    = {1-P-32},
  pages     = {595-596},
  address   = {明治大学生田キャンパス, 神奈川},
  month     = March,
  day       = {15},
  url       = {http://www.asj.gr.jp/annualmeeting/index.html},
  abstract  = {This paper presents a framework for gathering audio data and training a deep neural network for pitch classification. The goal is to obtain a large amount of labeled data to train the network. A throat microphone is used alongside usual microphones while recording the training set. The throat microphone signal is not contaminated by the background noise. Consequently, a conventional pitch estimation algorithm gives a satisfactory pitch estimate. That pitch estimate is used as a label to train the network to classify the pitch directly from the usual microphones. Preliminary experiments show that the proposed automatic labelling produces enough data to train the network.},
  file      = {Even2017.pdf:pdf/Even2017.pdf:PDF},
}
石井カルロス寿憲, 港隆史, 石黒浩, "驚き発話に伴う表情および動作の分析", 日本音響学会2017年春季研究発表会 (ASJ2017 Spring), vol. 2-P-28, 明治大学生田キャンパス, 神奈川, pp. 343-344, March, 2017.
Abstract: 人らしい動作をロボットに表現させることを目指し、人が驚きを表現する際の表情や身体動作のタイミングを分析した。音声による驚き表現の度合と動作の出現度に関連がみられた。
BibTeX:
@Inproceedings{石井カルロス寿憲2017,
  author    = {石井カルロス寿憲 and 港隆史 and 石黒浩},
  title     = {驚き発話に伴う表情および動作の分析},
  booktitle = {日本音響学会2017年春季研究発表会 (ASJ2017 Spring)},
  year      = {2017},
  volume    = {2-P-28},
  pages     = {343-344},
  address   = {明治大学生田キャンパス, 神奈川},
  month     = Mar,
  day       = {16},
  url       = {http://www.asj.gr.jp/annualmeeting/asj2017springCFP_J.html},
  abstract  = {人らしい動作をロボットに表現させることを目指し、人が驚きを表現する際の表情や身体動作のタイミングを分析した。音声による驚き表現の度合と動作の出現度に関連がみられた。},
}
石井カルロス寿憲, Jani Even, 萩田紀博, "呼び込み音声の韻律特徴の分析", 日本音響学会2017年春季研究発表会 (ASJ2017 Spring), vol. 1-Q-37, 明治大学生田キャンパス, 神奈川, pp. 315-316, March, 2017.
Abstract: 人並みに呼び込みができるロボットの開発を目指し、人が実際にどのように呼び込みを行っているかを分析した。人の数、雑音のレベルなど、複数の条件で、呼び込みを行った際に、韻律特徴がどのように変化するかを分析した。
BibTeX:
@Inproceedings{石井カルロス寿憲2017a,
  author    = {石井カルロス寿憲 and Jani Even and 萩田紀博},
  title     = {呼び込み音声の韻律特徴の分析},
  booktitle = {日本音響学会2017年春季研究発表会 (ASJ2017 Spring)},
  year      = {2017},
  volume    = {1-Q-37},
  pages     = {315-316},
  address   = {明治大学生田キャンパス, 神奈川},
  month     = Mar,
  day       = {15},
  url       = {http://www.asj.gr.jp/annualmeeting/asj2017springCFP_J.html},
  abstract  = {人並みに呼び込みができるロボットの開発を目指し、人が実際にどのように呼び込みを行っているかを分析した。人の数、雑音のレベルなど、複数の条件で、呼び込みを行った際に、韻律特徴がどのように変化するかを分析した。},
}
劉超然, 石井カルロス寿憲, 石黒浩, "会話ロボットのための談話機能推定", 日本音響学会2017年春季研究発表会 (ASJ2017 Spring), vol. 2-P-8, 明治大学生田キャンパス, 神奈川, pp. 153-154, March, 2017.
Abstract: 話者の頷きなどの頭部動作と談話機能の相関関係が報告されてきた。本稿では、この相関を利用し、発話音声からロボットの発話動作を生成する為、言語情報を用いた談話機能の推定モデルを提案・評価した。
BibTeX:
@Inproceedings{劉超然2017,
  author    = {劉超然 and 石井カルロス寿憲 and 石黒浩},
  title     = {会話ロボットのための談話機能推定},
  booktitle = {日本音響学会2017年春季研究発表会 (ASJ2017 Spring)},
  year      = {2017},
  volume    = {2-P-8},
  pages     = {153-154},
  address   = {明治大学生田キャンパス, 神奈川},
  month     = Mar,
  day       = {16},
  url       = {http://www.asj.gr.jp/annualmeeting/index.html},
  abstract  = {話者の頷きなどの頭部動作と談話機能の相関関係が報告されてきた。本稿では、この相関を利用し、発話音声からロボットの発話動作を生成する為、言語情報を用いた談話機能の推定モデルを提案・評価した。},
  file      = {劉超然2017.pdf:pdf/劉超然2017.pdf:PDF},
}
Jani Even, Carlos T. Ishi, Hiroshi Ishiguro, "Using utterance timing to generate gaze pattern", In 第46回 人工知能学会 AIチャレンジ研究会(SIG-Challenge 2016), vol. SIG-Challenge-046-09, 慶応義塾大学 日吉キャンパス 來往舎, 神奈川, pp. 50-55, November, 2016.
Abstract: This paper presents a method for generating the gaze pattern of a robot while it is talking. The goal is to prevent the robot's conversational partner from interrupting the robot at inappropriate moments. The proposed approach has two steps: First, the robot's utterances are split into meaningful parts. Then, for each of these parts, the robot performs or avoids eye contact with the partner. The generated gaze pattern indicates to the conversational partner whether or not the robot has finished talking. To measure the efficiency of the approach, we propose to use speech overlap during conversations and average response time. Preliminary results showed that setting a gaze pattern for a robot with a very human-like appearance is not straightforward, as we did not find satisfying parameters.
BibTeX:
@Inproceedings{Even2016,
  author    = {Jani Even and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {Using utterance timing to generate gaze pattern},
  booktitle = {第46回 人工知能学会 AIチャレンジ研究会(SIG-Challenge 2016)},
  year      = {2016},
  volume    = {SIG-Challenge-046-09},
  pages     = {50-55},
  address   = {慶応義塾大学 日吉キャンパス 來往舎, 神奈川},
  month     = Nov,
  day       = {9},
  url       = {http://www.osaka-kyoiku.ac.jp/~challeng/SIG-Challenge-046/program.html},
  abstract  = {This paper presents a method for generating the gaze pattern of a robot while it is talking. The goal is to prevent the robot's conversational partner from interrupting the robot at inappropriate moments. The proposed approach has two steps: First, the robot's utterances are split into meaningful parts. Then, for each of these parts, the robot performs or avoids eye contact with the partner. The generated gaze pattern indicates to the conversational partner whether or not the robot has finished talking. To measure the efficiency of the approach, we propose to use speech overlap during conversations and average response time. Preliminary results showed that setting a gaze pattern for a robot with a very human-like appearance is not straightforward, as we did not find satisfying parameters.},
}
劉超然, 石井カルロス寿憲, 石黒浩, "言語情報を用いた談話機能推定及びロボット頭部動作生成への応用", 人工知能学会 合同研究会2016, vol. SIG-Challenge-046-07, 慶応義塾大学 日吉キャンパス 來往舎, 神奈川, pp. 37-42, November, 2016.
Abstract: コミュニケーション中の頭部動作は話者・聴者双方において,会話を円滑化する役割を果たしている. 遠隔操作ロボットを介した会話では,操作者側の環境と遠隔地が一致しないなどの原因で,操作者の頭部動作をロボットにマッピングするのは不十分である. 本稿では,発話機能を架け橋とし,発話音声から頷きを生成するモデルを提案・評価した. 話者の音声発話はまず自動音声認識システムによりテキストに変換され,複数の分類器の分類結果投票によって発話機能を推定した. 各発話機能クラスにおける頷きの生起確率に従い,頷き動作パターンの分布から動作特徴を選出し,生成した動作コマンドを音声と合わせてロボットに送る. 評価者実験では,提案手法により生成した動作をアンドロイドロボット ERICA で再生し,自然さ・人間らしさを評価した.
BibTeX:
@Inproceedings{劉超然2016a,
  author    = {劉超然 and 石井カルロス寿憲 and 石黒浩},
  title     = {言語情報を用いた談話機能推定及びロボット頭部動作生成への応用},
  booktitle = {人工知能学会 合同研究会2016},
  year      = {2016},
  volume    = {SIG-Challenge-046-07},
  pages     = {37-42},
  address   = {慶応義塾大学 日吉キャンパス 來往舎, 神奈川},
  month     = Nov,
  url       = {https://www.ai-gakkai.or.jp/sigconf/},
  etitle    = {Dialog utterance classification and nod generation for humanoid robot},
  abstract  = {コミュニケーション中の頭部動作は話者・聴者双方において,会話を円滑化する役割を果たしている. 遠隔操作ロボットを介した会話では,操作者側の環境と遠隔地が一致しないなどの原因で,操作者の頭部動作をロボットにマッピングするのは不十分である. 本稿では,発話機能を架け橋とし,発話音声から頷きを生成するモデルを提案・評価した. 話者の音声発話はまず自動音声認識システムによりテキストに変換され,複数の分類器の分類結果投票によって発話機能を推定した. 各発話機能クラスにおける頷きの生起確率に従い,頷き動作パターンの分布から動作特徴を選出し,生成した動作コマンドを音声と合わせてロボットに送る. 評価者実験では,提案手法により生成した動作をアンドロイドロボット ERICA で再生し,自然さ・人間らしさを評価した.},
  file      = {劉超然2016a.pdf:pdf/劉超然2016a.pdf:PDF},
}
桑村海光, 西尾修一, "非言語情報と相互状態推定に基づく自動対話生成モデル", 第78回 人工知能学会 言語・音声理解と対話処理研究会(第7回対話システムシンポジウム)(SIG-SLUD-B505), 早稲田大学, 東京, pp. 13-18, October, 2016.
Abstract: Recently, there has been research on the development of automatic dialogue systems. However, most such systems are based on textual corpora. Face-to-face communication involves the transfer of nonverbal information. This information enables us to perceive a more detailed mental and emotional state of the conversation partner, such as whether they are enjoying the dialogue or not. The interlocutor's and our own internal states are used to form a dialogue strategy. For example, if the interlocutor somehow expresses doubt about what they are saying and we are willing to express our point of view over theirs, we can interrupt them and start to talk. This is not possible in a text-based instant messaging environment. We express our internal state, such as joy and excitement, estimate the interlocutor's mental and emotional state from nonverbal information, and use it in forming the dialogue strategy. In this paper, we present an automatic dialogue system which focuses on this nonverbal information and the estimation of the conversation partner's state of mind and emotions.
BibTeX:
@Inproceedings{桑村海光2016a,
  author    = {桑村海光 and 西尾修一},
  title     = {非言語情報と相互状態推定に基づく自動対話生成モデル},
  booktitle = {第78回 人工知能学会 言語・音声理解と対話処理研究会(第7回対話システムシンポジウム)(SIG-SLUD-B505)},
  year      = {2016},
  pages     = {13-18},
  address   = {早稲田大学, 東京},
  month     = Oct,
  url       = {http://www.lai.kyutech.ac.jp/sig-slud/},
  etitle    = {Automatic Dialogue Based on Nonverbal Information and Estimation of Mutual Emotional and Mental States},
  abstract  = {Recently, there has been research on the development of automatic dialogue systems. However, most such systems are based on textual corpora. Face-to-face communication involves the transfer of nonverbal information. This information enables us to perceive a more detailed mental and emotional state of the conversation partner, such as whether they are enjoying the dialogue or not. The interlocutor's and our own internal states are used to form a dialogue strategy. For example, if the interlocutor somehow expresses doubt about what they are saying and we are willing to express our point of view over theirs, we can interrupt them and start to talk. This is not possible in a text-based instant messaging environment. We express our internal state, such as joy and excitement, estimate the interlocutor's mental and emotional state from nonverbal information, and use it in forming the dialogue strategy. In this paper, we present an automatic dialogue system which focuses on this nonverbal information and the estimation of the conversation partner's state of mind and emotions.},
  file      = {桑村海光2016a.pdf:pdf/桑村海光2016a.pdf:PDF},
}
桑村海光, 西尾修一, 佐藤眞一, "認知症高齢者を対象としたロボットによる対話支援", 2016年度人工知能学会全国大会(第30回)(JSAI2016), 北九州国際会議場, 福岡, pp. 2H3-NFC-03a-5, June, 2016.
Abstract: In this study, we compare robot-mediated and face-to-face communication with three residents with Alzheimer's disease (AD). The results show that two of the three residents with moderate AD had a positive impression of a teleoperated robot called Telenoid, and the other resident, with severe dementia, used gestures and physical contact to interact with the robot. From these results, we discuss the possibilities of using a robot as a tool for seniors to encourage communication.
BibTeX:
@Inproceedings{桑村海光2016,
  author    = {桑村海光 and 西尾修一 and 佐藤眞一},
  title     = {認知症高齢者を対象としたロボットによる対話支援},
  booktitle = {2016年度人工知能学会全国大会(第30回)(JSAI2016)},
  year      = {2016},
  pages     = {2H3-NFC-03a-5},
  address   = {北九州国際会議場, 福岡},
  month     = Jun,
  url       = {http://www.ai-gakkai.or.jp/jsai2016/},
  etitle    = {Communication Support Using Robot for Senior with Alzheimer's Disease},
  abstract  = {In this study, we compare robot-mediated and face-to-face communication with three residents with Alzheimer's disease (AD). The results show that two of the three residents with moderate AD had a positive impression of a teleoperated robot called Telenoid, and the other resident, with severe dementia, used gestures and physical contact to interact with the robot. From these results, we discuss the possibilities of using a robot as a tool for seniors to encourage communication.},
  file      = {桑村海光2016.pdf:pdf/桑村海光2016.pdf:PDF},
}
車谷広大, Christian Penaloza, 西尾修一, "アンドロイドBMI操作時のエラー関連陰性電位の検出", 2016年度人工知能学会全国大会(第30回)(JSAI2016), 北九州市(福岡), pp. 1I5-3, June, 2016.
Abstract: エラー関連陰性電位(ERN) は人が何かの失敗を検出した際に脳に生じる事象関連電位である。ロボットの発話や行動の誤りなどをERNにより検出できれば、修正や学習に利用できる。本研究ではアンドロイドの脳波による操作時のERN利用を目的として、機械学習によるオンライン検出を試みたのでその結果を報告する。
BibTeX:
@Inproceedings{車谷広大2016,
  author    = {車谷広大 and Christian Penaloza and 西尾修一},
  title     = {アンドロイドBMI操作時のエラー関連陰性電位の検出},
  booktitle = {2016年度人工知能学会全国大会(第30回)(JSAI2016)},
  year      = {2016},
  pages     = {1I5-3},
  address   = {北九州市(福岡)},
  month     = Jun,
  url       = {http://www.ai-gakkai.or.jp/jsai2016/},
  abstract  = {エラー関連陰性電位(ERN) は人が何かの失敗を検出した際に脳に生じる事象関連電位である。ロボットの発話や行動の誤りなどをERNにより検出できれば、修正や学習に利用できる。本研究ではアンドロイドの脳波による操作時のERN利用を目的として、機械学習によるオンライン検出を試みたのでその結果を報告する。},
  file      = {車谷広大2016.pdf:pdf/車谷広大2016.pdf:PDF},
}
内田貴久, 港隆史, 石黒浩, "対話意欲を喚起する価値観肯定・否定割合に基づく自律対話ロボットの対話戦略", 2016年度人工知能学会全国大会(第30回)(JSAI2016), 北九州市(福岡), pp. 1I5-1, June, 2016.
Abstract: ロボットによる対話サービスなど対話そのものを目的とする場合,ユーザの対話意欲を喚起するような,相互の価値観を理解し合う対話が望まれる.そこで本研究では,自律対話ロボットの価値観に依拠した対話戦略を構築する.情報理論の観点から,価値観の相互理解に重要な自他価値観の相違を顕著に知覚する上で有用な肯定・否定発言の割合に関する仮説を導き,対話内容に依存せずその割合が対話意欲に効果的であることを検証する
BibTeX:
@Inproceedings{内田貴久2016,
  author    = {内田貴久 and 港隆史 and 石黒浩},
  title     = {対話意欲を喚起する価値観肯定・否定割合に基づく自律対話ロボットの対話戦略},
  booktitle = {2016年度人工知能学会全国大会(第30回)(JSAI2016)},
  year      = {2016},
  pages     = {1I5-1},
  address   = {北九州市(福岡)},
  month     = Jun,
  url       = {http://www.ai-gakkai.or.jp/jsai2016/},
  abstract  = {ロボットによる対話サービスなど対話そのものを目的とする場合,ユーザの対話意欲を喚起するような,相互の価値観を理解し合う対話が望まれる.そこで本研究では,自律対話ロボットの価値観に依拠した対話戦略を構築する.情報理論の観点から,価値観の相互理解に重要な自他価値観の相違を顕著に知覚する上で有用な肯定・否定発言の割合に関する仮説を導き,対話内容に依存せずその割合が対話意欲に効果的であることを検証する},
  file      = {内田貴久2016.pdf:pdf/内田貴久2016.pdf:PDF},
}
大久保正隆, 西尾修一, 石黒浩, "腹部運動で操作可能な仮想肢への身体感覚の拡張", 2016年度人工知能学会全国大会(第30回)(JSAI2016), 北九州市(福岡), pp. 1G3-1, June, 2016.
Abstract: 両手に物を持っていて鍵をとりたいときなど、腕がもう一本あると役に立つ。私たちは人間の身体を拡張し、追加肢を追加できるかという問題に取り組み、本研究では腹部運動によって操作される仮想肢を追加し、自分の一部と感じるかを調べた。その結果、操作性が高い場合には自分の身体の一部と感じやすく、また自分にボールがぶつかったとより強く感じることが分かった。
BibTeX:
@Inproceedings{大久保正隆2016,
  author    = {大久保正隆 and 西尾修一 and 石黒浩},
  title     = {腹部運動で操作可能な仮想肢への身体感覚の拡張},
  booktitle = {2016年度人工知能学会全国大会(第30回)(JSAI2016)},
  year      = {2016},
  pages     = {1G3-1},
  address   = {北九州市(福岡)},
  month     = Jun,
  url       = {http://www.ai-gakkai.or.jp/jsai2016/},
  abstract  = {両手に物を持っていて鍵をとりたいときなど、腕がもう一本あると役に立つ。私たちは人間の身体を拡張し、追加肢を追加できるかという問題に取り組み、本研究では腹部運動によって操作される仮想肢を追加し、自分の一部と感じるかを調べた。その結果、操作性が高い場合には自分の身体の一部と感じやすく、また自分にボールがぶつかったとより強く感じることが分かった。},
  file      = {大久保正隆2016.pdf:pdf/大久保正隆2016.pdf:PDF},
}
石井カルロス寿憲, 劉超然, Jani Even, "音環境知能技術を活用した聴覚支援システムの利用効果における予備的評価", 日本音響学会2016年春季研究発表会, 桐蔭横浜大学, 神奈川, pp. 1469-1470, March, 2016.
Abstract: 音環境知能技術を活用し、環境内の音を取捨選択でき、選択された音に対する空間的感覚を再構築できる聴覚支援システムのプロトタイプを開発した。本発表では、開発したシステムの利用効果について予備的評価の結果を報告する
BibTeX:
@Inproceedings{石井カルロス寿憲2016,
  author    = {石井カルロス寿憲 and 劉超然 and Jani Even},
  title     = {音環境知能技術を活用した聴覚支援システムの利用効果における予備的評価},
  booktitle = {日本音響学会2016年春季研究発表会},
  year      = {2016},
  pages     = {1469-1470},
  address   = {桐蔭横浜大学, 神奈川},
  month     = Mar,
  url       = {http://www.asj.gr.jp/annualmeeting/index.html},
  abstract  = {音環境知能技術を活用し、環境内の音を取捨選択でき、選択された音に対する空間的感覚を再構築できる聴覚支援システムのプロトタイプを開発した。本発表では、開発したシステムの利用効果について予備的評価の結果を報告する},
}
劉超然, 石井カルロス寿憲, 石黒浩, "言語・韻律情報を用いた話者交替推定の検討", 日本音響学会2016年春季研究発表会, 桐蔭横浜大学, 神奈川, pp. 3-4, March, 2016.
Abstract: 自然会話中話者交替は常に行われる。我々は人対ロボットの円滑な対話インタラクションを目指している。本稿では人間同士の自然会話を分析し、言語及び韻律情報を用いて、SVM並びにニューラルネットベースの発話終了(話者交替)の推定を試みた。
BibTeX:
@Inproceedings{劉超然2016,
  author    = {劉超然 and 石井カルロス寿憲 and 石黒浩},
  title     = {言語・韻律情報を用いた話者交替推定の検討},
  booktitle = {日本音響学会2016年春季研究発表会},
  year      = {2016},
  pages     = {3-4},
  address   = {桐蔭横浜大学, 神奈川},
  month     = Mar,
  url       = {http://www.asj.gr.jp/annualmeeting/index.html},
  abstract  = {自然会話中話者交替は常に行われる。我々は人対ロボットの円滑な対話インタラクションを目指している。本稿では人間同士の自然会話を分析し、言語及び韻律情報を用いて、SVM並びにニューラルネットベースの発話終了(話者交替)の推定を試みた。},
  file      = {劉超然2016.pdf:pdf/劉超然2016.pdf:PDF},
}
Jani Even, Carlos T. Ishi, Hiroshi Ishiguro, "Using Sensor Network for Android gaze control", In 第43回 人工知能学会 AIチャレンジ研究会, 慶応義塾大学 日吉キャンパス 來往舎, 神奈川, November, 2015.
Abstract: This paper presents the approach developed for controlling the gaze of an android robot. A sensor network composed of RGB-D cameras and microphone arrays is in charge of tracking the person interacting with the android and determining the speech activity. The information provided by the sensor network makes it possible for the robot to establish eye contact with the person. A subjective evaluation of the performance was made by subjects who interacted with the android robot.
BibTeX:
@Inproceedings{Even2015a,
  author    = {Jani Even and Carlos T. Ishi and Hiroshi Ishiguro},
  title     = {Using Sensor Network for Android gaze control},
  booktitle = {第43回 人工知能学会 AIチャレンジ研究会},
  year      = {2015},
  address   = {慶応義塾大学 日吉キャンパス 來往舎, 神奈川},
  month     = Nov,
  abstract  = {This paper presents the approach developed for controlling the gaze of an android robot. A sensor network composed of RGB-D cameras and microphone arrays is in charge of tracking the person interacting with the android and determining the speech activity. The information provided by the sensor network makes it possible for the robot to establish eye contact with the person. A subjective evaluation of the performance was made by subjects who interacted with the android robot.},
  file      = {Even2015a.pdf:pdf/Even2015a.pdf:PDF},
}
石井カルロス寿憲, 劉超然, エヴァンヤニ, "音環境知能技術を活用した聴覚支援システムのプロトタイプの開発", 第43回 人工知能学会 AIチャレンジ研究会, 慶応大学日吉キャンパス 来往舎, 神奈川, November, 2015.
Abstract: 音環境知能技術を活用し、環境内の音を取捨選択でき、選択された音に対する空間的感覚を再構築できる聴覚支援システムのプロトタイプの開発について進捗を報告する。
BibTeX:
@Inproceedings{石井カルロス寿憲2015e,
  author    = {石井カルロス寿憲 and 劉超然 and エヴァンヤニ},
  title     = {音環境知能技術を活用した聴覚支援システムのプロトタイプの開発},
  booktitle = {第43回 人工知能学会 AIチャレンジ研究会},
  year      = {2015},
  address   = {慶応大学日吉キャンパス 来往舎, 神奈川},
  month     = NOV,
  url       = {http://winnie.kuis.kyoto-u.ac.jp/sig-challenge/},
  abstract  = {音環境知能技術を活用し、環境内の音を取捨選択でき、選択された音に対する空間的感覚を再構築できる聴覚支援システムのプロトタイプの開発について進捗を報告する。},
  file      = {石井カルロス寿憲2015e.pdf:pdf/石井カルロス寿憲2015e.pdf:PDF},
}
境くりま, 港隆史, 石井カルロス寿憲, 石黒浩, "身体的拘束に基づく音声駆動体幹動作生成システム", 第43回 人工知能学会 AIチャレンジ研究会, 慶応大学日吉キャンパス 来往舎, 神奈川, November, 2015.
Abstract: 人間が発話している際の身体的拘束に基づいて,ロボットの発話に同期した体幹動作(頷きなど)を自動で生成するシステムを提案する.
BibTeX:
@Inproceedings{境くりま2015,
  author    = {境くりま and 港隆史 and 石井カルロス寿憲 and 石黒浩},
  title     = {身体的拘束に基づく音声駆動体幹動作生成システム},
  booktitle = {第43回 人工知能学会 AIチャレンジ研究会},
  year      = {2015},
  address   = {慶応大学日吉キャンパス 来往舎, 神奈川},
  month     = NOV,
  url       = {http://winnie.kuis.kyoto-u.ac.jp/sig-challenge/},
  abstract  = {人間が発話している際の身体的拘束に基づいて,ロボットの発話に同期した体幹動作(頷きなど)を自動で生成するシステムを提案する.},
  file      = {境くりま2015.pdf:pdf/境くりま2015.pdf:PDF},
}
石井カルロス寿憲, エヴァンイアニ, モラレスサイキルイスヨウイチ, 渡辺敦志, "複数のマイクロホンアレイの連携による音環境知能技術の研究開発", In ICTイノベーションフォーラム2015, 幕張メッセ, 千葉, October, 2015.
Abstract: 平成24~26年度に実施した総務省SCOPEのプロジェクト「複数のマイクロホンアレイの連携による音環境知能技術の研究開発」における成果を報告する。 「複数の固定・移動型マイクアレイとLRF 群の連携・協調において、従来の音源定位・分離及び分類の技術を発展させ、環境内の音源の空間的及び音響的特性を20cm の位置精度かつ100ms の時間分解能で表現した音環境地図の生成技術を開発する。本技術によって得られる音環境の事前知識を用いて、施設内の場所や時間帯に応じた雑音推定に役立てる。本技術は、聴覚障碍者のための音の可視化、高齢者のための知的な補聴器、音のズーム機能、防犯用の異常音検知など、幅広い応用性を持つ。」
BibTeX:
@Inproceedings{石井カルロス寿憲2015c,
  author    = {石井カルロス寿憲 and エヴァンイアニ and モラレスサイキルイスヨウイチ and 渡辺敦志},
  title     = {複数のマイクロホンアレイの連携による音環境知能技術の研究開発},
  booktitle = {ICTイノベーションフォーラム2015},
  year      = {2015},
  address   = {幕張メッセ, 千葉},
  month     = OCT,
  abstract  = {平成24~26年度に実施した総務省SCOPEのプロジェクト「複数のマイクロホンアレイの連携による音環境知能技術の研究開発」における成果を報告する。 「複数の固定・移動型マイクアレイとLRF 群の連携・協調において、従来の音源定位・分離及び分類の技術を発展させ、環境内の音源の空間的及び音響的特性を20cm の位置精度かつ100ms の時間分解能で表現した音環境地図の生成技術を開発する。本技術によって得られる音環境の事前知識を用いて、施設内の場所や時間帯に応じた雑音推定に役立てる。本技術は、聴覚障碍者のための音の可視化、高齢者のための知的な補聴器、音のズーム機能、防犯用の異常音検知など、幅広い応用性を持つ。」},
  file      = {石井カルロス寿憲2015c.pdf:pdf/石井カルロス寿憲2015c.pdf:PDF},
}
石井カルロス寿憲, 港隆史, 石黒浩, "笑い声に伴うアンドロイドロボットの動作生成の検討", 第33回日本ロボット学会学術講演会, 東京電機大学, 東京, September, 2015.
BibTeX:
@Inproceedings{石井カルロス寿憲2015b,
  author    = {石井カルロス寿憲 and 港隆史 and 石黒浩},
  title     = {笑い声に伴うアンドロイドロボットの動作生成の検討},
  booktitle = {第33回日本ロボット学会学術講演会},
  year      = {2015},
  address   = {東京電機大学, 東京},
  month     = SEP,
  url       = {http://rsj2015.rsj-web.org/index.html},
  file      = {carlos-rsj2015-v5p.pdf:pdf/carlos-rsj2015-v5p.pdf:PDF},
}
石井カルロス寿憲, 波多野博顕, 石黒浩, "笑いの種類とそれに伴う表情および身体動作の分析", 日本音響学会 2015年秋季研究発表会, 会津大学, 福島, pp. 1327-1328(2-1-11), September, 2015.
Abstract: 対面対話データにおける笑いイベントに焦点を当て、笑い方の種類やパラ言語的機能と、それに伴う表情や身体動作との関連を分析した。
BibTeX:
@Inproceedings{石井カルロス寿憲2015a,
  author    = {石井カルロス寿憲 and 波多野博顕 and 石黒浩},
  title     = {笑いの種類とそれに伴う表情および身体動作の分析},
  booktitle = {日本音響学会 2015年秋季研究発表会},
  year      = {2015},
  pages     = {1327-1328(2-1-11)},
  address   = {会津大学, 福島},
  month     = SEP,
  url       = {http://www.asj.gr.jp/annualmeeting/index.html},
  abstract  = {対面対話データにおける笑いイベントに焦点を当て、笑い方の種類やパラ言語的機能と、それに伴う表情や身体動作との関連を分析した。},
  file      = {石井カルロス寿憲2015a.pdf:pdf/石井カルロス寿憲2015a.pdf:PDF},
}
船山智, 港隆史, 石井カルロス寿憲, 石黒浩, "遠隔操作型アンドロイドの笑い動作の付加効果", 情報処理学会関西支部大会, 大阪大学中之島センター, 大阪, pp. 1-7, September, 2015.
Abstract: 遠隔操作型アンドロイドは,操作者の声や動きを実体によって相手に伝えることができる一方で,自由度の制約上,動作表現が乏しくなる問題がある.本論文では感情表現に着目し,操作者が笑った時に,操作者の笑い動作と異なる代替動作を付加することで表現を補えることを示した.
BibTeX:
@Inproceedings{船山智2015,
  author    = {船山智 and 港隆史 and 石井カルロス寿憲 and 石黒浩},
  title     = {遠隔操作型アンドロイドの笑い動作の付加効果},
  booktitle = {情報処理学会関西支部大会},
  year      = {2015},
  series    = {C-07},
  pages     = {1-7},
  address   = {大阪大学中之島センター, 大阪},
  month     = SEP,
  abstract  = {遠隔操作型アンドロイドは,操作者の声や動きを実体によって相手に伝えることができる一方で,自由度の制約上,動作表現が乏しくなる問題がある.本論文では感情表現に着目し,操作者が笑った時に,操作者の笑い動作と異なる代替動作を付加することで表現を補えることを示した.},
  file      = {船山智2015.pdf:pdf/船山智2015.pdf:PDF},
}
陣内寛大, 住岡英信, 港隆史, 石黒浩, "人型携帯電話が対人関係構築にもたらす効果", 第33回日本ロボット学会 学術講演会, 東京電機大学, 東京, September, 2015.
Abstract: 近年、コミュニケーションメディアの多様化によって、我々は様々なメディアを用いて人と繋がっており、対人関係構築の様子や、結果として構築される対人関係も多様化したと言える。そのような中で、メディアを通して構築される対人関係の希薄さが指摘され、より実際に対面している状況に近いコミュニケーションを実現するために、人の存在を伝達する、「存在感メディア」の研究が盛んに行われている。しかし、そういったメディアを使用して対人関係を構築していく際に、既存のメディアに比べて親密な関係を構築できるのかということに関する長期的な検討は行われていない。そこで、本研究では、初めて会った他人同士に、存在感メディアあるいは携帯電話を用いて約一ヶ月間定期的に同じ相手と通話を行ってもらい、存在感メディアを通した対話が携帯電話に比べてどういった効果をもたらすのかを調査した。 結果、存在感メディアを用いて長期的に通話を行うことで、携帯電話を用いて通話を行った場合と比べて、自己開示が促進され、結果としてより親密で良好な対人関係が構築されることが明らかとなった。また、対話中の無意識的な行動を分析することで、存在感メディアを用いることが、対話中における大きなジェスチャーや、存在感メディアを頻繁に触るといった行動を引き起こすことが分かり、本研究で使用した存在感メディアの持つ、「人のような実体を呈示することで通話相手の存在を伝達すること」や「人の肌のような柔らかさ」や「親密なコミュニケーション状況」といった効果が、自己開示を促した原因であることが示唆された。
BibTeX:
@Inproceedings{陣内寛大2015,
  author    = {陣内寛大 and 住岡英信 and 港隆史 and 石黒浩},
  title     = {人型携帯電話が対人関係構築にもたらす効果},
  booktitle = {第33回日本ロボット学会 学術講演会},
  year      = {2015},
  address   = {東京電機大学, 東京},
  month     = SEP,
  abstract  = {近年、コミュニケーションメディアの多様化によって、我々は様々なメディアを用いて人と繋がっており、対人関係構築の様子や、結果として構築される対人関係も多様化したと言える。そのような中で、メディアを通して構築される対人関係の希薄さが指摘され、より実際に対面している状況に近いコミュニケーションを実現するために、人の存在を伝達する、「存在感メディア」の研究が盛んに行われている。しかし、そういったメディアを使用して対人関係を構築していく際に、既存のメディアに比べて親密な関係を構築できるのかということに関する長期的な検討は行われていない。そこで、本研究では、初めて会った他人同士に、存在感メディアあるいは携帯電話を用いて約一ヶ月間定期的に同じ相手と通話を行ってもらい、存在感メディアを通した対話が携帯電話に比べてどういった効果をもたらすのかを調査した。 結果、存在感メディアを用いて長期的に通話を行うことで、携帯電話を用いて通話を行った場合と比べて、自己開示が促進され、結果としてより親密で良好な対人関係が構築されることが明らかとなった。また、対話中の無意識的な行動を分析することで、存在感メディアを用いることが、対話中における大きなジェスチャーや、存在感メディアを頻繁に触るといった行動を引き起こすことが分かり、本研究で使用した存在感メディアの持つ、「人のような実体を呈示することで通話相手の存在を伝達すること」や「人の肌のような柔らかさ」や「親密なコミュニケーション状況」といった効果が、自己開示を促した原因であることが示唆された。},
  file      = {陣内寛大2015.pdf:pdf/陣内寛大2015.pdf:PDF},
}
波多野博顕, 石井カルロス寿憲, 石黒浩, "相槌の「はい」における丁寧度と音響特徴の関係について", 日本音響学会2015年秋季研究発表会, 会津大学. 福島県, pp. 303-304 (1-Q-38), September, 2015.
Abstract: 見かけの異なるロボットとの対話音声から得られた音声データのうち,本研究では相槌「はい」に注目し,その丁寧度と音響特徴の関係について分析を行なった。
BibTeX:
@Inproceedings{波多野博顕2015a,
  author    = {波多野博顕 and 石井カルロス寿憲 and 石黒浩},
  title     = {相槌の「はい」における丁寧度と音響特徴の関係について},
  booktitle = {日本音響学会2015年秋季研究発表会},
  year      = {2015},
  pages     = {303-304 (1-Q-38)},
  address   = {会津大学. 福島県},
  month     = SEP,
  url       = {http://www.asj.gr.jp/annualmeeting/index.html},
  abstract  = {見かけの異なるロボットとの対話音声から得られた音声データのうち,本研究では相槌「はい」に注目し,その丁寧度と音響特徴の関係について分析を行なった。},
  file      = {波多野博顕2015a.pdf:pdf/波多野博顕2015a.pdf:PDF},
}
Carlos T. Ishi, Takashi Minato, Hiroshi Ishiguro, "Investigation of motion generation in android robots during laughing speech", In International Workshop on Speech Robotics, Dresden, Germany, September, 2015.
Abstract: In the present work, we focused on motion generation during laughing speech. We analyzed how humans behave during laughing speech, and proposed a method for motion generation in our android robot, based on the main trends from the analysis results. The proposed method for laughter motion generation was evaluated through subjective experiments.
BibTeX:
@Inproceedings{Ishi2015c,
  author    = {Carlos T. Ishi and Takashi Minato and Hiroshi Ishiguro},
  title     = {Investigation of motion generation in android robots during laughing speech},
  booktitle = {International Workshop on Speech Robotics},
  year      = {2015},
  address   = {Dresden, Germany},
  month     = SEP,
  url       = {https://register-tubs.de/interspeech},
  abstract  = {In the present work, we focused on motion generation during laughing speech. We analyzed how humans behave during laughing speech, and proposed a method for motion generation in our android robot, based on the main trends from the analysis results. The proposed method for laughter motion generation was evaluated through subjective experiments.},
  file      = {Ishi2015c.pdf:pdf/Ishi2015c.pdf:PDF},
}
住岡英信, 石黒浩, "人のミニマムデザインが乳幼児にもたらす効果", 第4回発達神経科学学会, 大阪大学会館, September, 2015.
BibTeX:
@Inproceedings{住岡英信2015,
  author    = {住岡英信 and 石黒浩},
  title     = {人のミニマムデザインが乳幼児にもたらす効果},
  booktitle = {第4回発達神経科学学会},
  year      = {2015},
  address   = {大阪大学会館},
  month     = SEP,
  file      = {住岡英信2015.pdf:pdf/住岡英信2015.pdf:PDF},
}
Jani Even, Jonas Furrer Michael, Carlos Toshinori Ishi, Norihiro Hagita, "In situ automated impulse response measurement with a mobile robot", In 日本音響学会 2015年春季研究発表会, 中央大学後楽園キャンパス(東京都文京区), March, 2015.
Abstract: This paper presents a framework for measuring the impulse responses from different positions for a microphone array using a mobile robot. The automated measurement method makes it possible to estimate the impulse response at a large number of positions. Moreover, this approach enables the impulse responses to be measured in the environment where the system is to be used. The effectiveness of the proposed approach is demonstrated by using it to set a beamforming system in an experiment room.
BibTeX:
@Inproceedings{Jani2015,
  author    = {Jani Even and Jonas Furrer Michael and Carlos Toshinori Ishi and Norihiro Hagita},
  title     = {In situ automated impulse response measurement with a mobile robot},
  booktitle = {日本音響学会 2015年春季研究発表会},
  year      = {2015},
  address   = {中央大学後楽園キャンパス(東京都文京区)},
  month     = Mar,
  abstract  = {This paper presents a framework for measuring the impulse responses from different positions for a microphone array using a mobile robot. The automated measurement method makes it possible to estimate the impulse response at a large number of positions. Moreover, this approach enables the impulse responses to be measured in the environment where the system is to be used. The effectiveness of the proposed approach is demonstrated by using it to set a beamforming system in an experiment room.},
  file      = {Even2015.pdf:pdf/Even2015.pdf:PDF},
}
石井カルロス寿憲, Jani Even, 萩田紀博, "音環境知能を利用した家庭内音の識別", 日本音響学会 2015年春季研究発表会, 中央大学後楽園キャンパス(東京都文京区), March, 2015.
Abstract: 本研究では,マイクロホンアレイ技術による音源位置推定と人位置情報を組み合わせて,時空間的情報を利用した家庭内の音イベントの識別を検討した。
BibTeX:
@Inproceedings{石井カルロス寿憲2015,
  author    = {石井カルロス寿憲 and Jani Even and 萩田紀博},
  title     = {音環境知能を利用した家庭内音の識別},
  booktitle = {日本音響学会 2015年春季研究発表会},
  year      = {2015},
  address   = {中央大学後楽園キャンパス(東京都文京区)},
  month     = Mar,
  abstract  = {本研究では,マイクロホンアレイ技術による音源位置推定と人位置情報を組み合わせて,時空間的情報を利用した家庭内の音イベントの識別を検討した。},
  file      = {Ishi2015.pdf:pdf/Ishi2015.pdf:PDF},
}
劉超然, 石井カルロス寿憲, 石黒浩, 萩田紀博, "臨場感の伝わる遠隔操作システムのデザイン ~マイクロホンアレイ処理を用いた音環境の再構築~", In 第41回 人工知能学会 AIチャレンジ研究会, 慶應義塾大学日吉キャンパス 来住舎(東京), pp. 26-32, November, 2014.
Abstract: 本稿では遠隔地にあるロボットの周囲の音環境をマイクロフォンアレイ処理によって定位・分離し,ヴァーチャル位置にレンダリングするシステムを提案した。
BibTeX:
@Inproceedings{劉超然2014,
  author    = {劉超然 and 石井カルロス寿憲 and 石黒浩 and 萩田紀博},
  title     = {臨場感の伝わる遠隔操作システムのデザイン ~マイクロホンアレイ処理を用いた音環境の再構築~},
  booktitle = {第41回 人工知能学会 AIチャレンジ研究会},
  year      = {2014},
  pages     = {26-32},
  address   = {慶應義塾大学日吉キャンパス 来住舎(東京)},
  month     = Nov,
  abstract  = {本稿では遠隔地にあるロボットの周囲の音環境をマイクロフォンアレイ処理によって定位・分離し,ヴァーチャル位置にレンダリングするシステムを提案した。},
  file      = {劉超然2014.pdf:pdf/劉超然2014.pdf:PDF},
}
中西惇也, 住岡英信, 境くりま, 中道大介, 桑村海光, 石黒浩, "聞く力を引き出すHuman-robot Intimate Interaction", 第32回日本ロボット学会学術講演会, 九州産業大学(福岡), pp. RSJ2014AC3P1-07, September, 2014.
Abstract: 本研究では人の存在を感じさせる存在感対話メディア「ハグビー」によって構築される話者との親密な関係が会話への集中力を高め,児童の未熟な聞く能力を補助あるいは向上させることを提案した.実際に未就学児童への読み聞かせ場面にハグビーを導入し,振る舞いを観察した結果,ハグビーを用いることで児童が話に集中することができることが示唆された.これは存在感対話メディアによって授業中の立ち歩きや私語などによる学級全体の授業が成り立たなくなるという小1プロブレムを回避できる可能性を示している.
BibTeX:
@Inproceedings{中西惇也2014,
  author    = {中西惇也 and 住岡英信 and 境くりま and 中道大介 and 桑村海光 and 石黒浩},
  title     = {聞く力を引き出すHuman-robot Intimate Interaction},
  booktitle = {第32回日本ロボット学会学術講演会},
  year      = {2014},
  pages     = {RSJ2014AC3P1-07},
  address   = {九州産業大学(福岡)},
  month     = Sep,
  abstract  = {本研究では人の存在を感じさせる存在感対話メディア「ハグビー」によって構築される話者との親密な関係が会話への集中力を高め,児童の未熟な聞く能力を補助あるいは向上させることを提案した.実際に未就学児童への読み聞かせ場面にハグビーを導入し,振る舞いを観察した結果,ハグビーを用いることで児童が話に集中することができることが示唆された.これは存在感対話メディアによって授業中の立ち歩きや私語などによる学級全体の授業が成り立たなくなるという小1プロブレムを回避できる可能性を示している.},
  file      = {中西惇也2014a.pdf:pdf/中西惇也2014a.pdf:PDF},
  journal   = {第32回日本ロボット学会学術講演会 (RSJ2014)},
}
大久保正隆, 西尾修一, 石黒浩, "遠隔操作ロボットへの身体感覚転移における実体の有無と見かけの影響", 第32回日本ロボット学会学術講演会, 九州産業大学(福岡), pp. RSJ2014AC1B2-02, September, 2014.
Abstract: 人と似た外観を有する遠隔操作型アンドロイド・ロボットを操作していると、 アンドロイドを自分の身体の一部のように感じることがある。アンドロイドの 動きと操作者の動きの同期性が高いとき、身体感覚の転移が生じる。本研究で は、この操作対象への身体感覚転移がアンドロイド以外のロボットや、実体をもたない操作対象へも生じるかを調べた。
BibTeX:
@Inproceedings{大久保正隆2014,
  author    = {大久保正隆 and 西尾修一 and 石黒浩},
  title     = {遠隔操作ロボットへの身体感覚転移における実体の有無と見かけの影響},
  booktitle = {第32回日本ロボット学会学術講演会},
  year      = {2014},
  pages     = {RSJ2014AC1B2-02},
  address   = {九州産業大学(福岡)},
  month     = Sep,
  abstract  = {人と似た外観を有する遠隔操作型アンドロイド・ロボットを操作していると、アンドロイドを自分の身体の一部のように感じることがある。アンドロイドの動きと操作者の動きの同期性が高いとき、身体感覚の転移が生じる。本研究では、この操作対象への身体感覚転移がアンドロイド以外のロボットや、実体をもたない操作対象へも生じるかを調べた。},
  file      = {大久保正隆2014a.pdf:pdf/大久保正隆2014a.pdf:PDF},
  funding   = {萌芽},
  journal   = {第32回 日本ロボット学会学術講演会(RSJ2014)},
}
境くりま, 石井カルロス寿憲, 港隆史, 石黒浩, "発話者の音声に対応する動作生成と遠隔操作ロボットへの動作の付加効果", 第39回人工知能学会 AI チャレンジ研究会, 京都大学, 京都, pp. 7-13, March, 2014.
Abstract: 本論文では,遠隔操作対話ロボットの頭部動作を操作者の音声情報のみから自動生成するシステムを提案する.遠隔対話では発話音声と一致した頭部動作の表現が必要となるため,発話の意味(相槌や発話の保持などの談話機能)を言語情報と韻律情報を用いてリアルタイムで推定し,推定した談話機能に基づき頭部動作を生成する.提案システムには推定誤りが含まれ,対話に適さない動作が生成される場合がある.そのため,提案システムを用いた対話時の動作の印象を被験者実験により評価した.主観評価から,提案システムによる動作を付加することで,ロボットの動作がより対話に適したものになることが示された.
BibTeX:
@Inproceedings{境くりま2014,
  author    = {境くりま and 石井カルロス寿憲 and 港隆史 and 石黒浩},
  title     = {発話者の音声に対応する動作生成と遠隔操作ロボットへの動作の付加効果},
  booktitle = {第39回人工知能学会 AI チャレンジ研究会},
  year      = {2014},
  pages     = {7-13},
  address   = {京都大学, 京都},
  month     = Mar,
  day       = {18},
  url       = {http://www.ai-gakkai.or.jp/sig-challenge-39/},
  etitle    = {Online speech-driven head motion generation system and evaluation on a tele-operated robot},
  abstract  = {本論文では,遠隔操作対話ロボットの頭部動作を操作者の音声情報のみから自動生成するシステムを提案する.遠隔対話では発話音声と一致した頭部動作の表現が必要となるため,発話の意味(相槌や発話の保持などの談話機能)を言語情報と韻律情報を用いてリアルタイムで推定し,推定した談話機能に基づき頭部動作を生成する.提案システムには推定誤りが含まれ,対話に適さない動作が生成される場合がある.そのため,提案システムを用いた対話時の動作の印象を被験者実験により評価した.主観評価から,提案システムによる動作を付加することで,ロボットの動作がより対話に適したものになることが示された.},
  file      = {境くりま2014.pdf:pdf/境くりま2014.pdf:PDF},
}
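A deliberately simplified illustration of the idea in the entry above (estimate the discourse function of each utterance from prosody and map it to a head motion command). The thresholds, the linear F0-slope test, and the three command labels are assumptions rather than the authors' classifier, and the linguistic-information stream is omitted.

import numpy as np

def head_motion_from_prosody(f0_tail_hz, pause_ms):
    """Guess a head-motion command from the F0 contour (Hz, 0 = unvoiced) at the
    end of an utterance chunk and the length of the following pause."""
    voiced = f0_tail_hz[f0_tail_hz > 0]
    if len(voiced) < 2:
        return "keep_still"
    slope = np.polyfit(np.arange(len(voiced)), voiced, 1)[0]   # Hz per frame
    if pause_ms > 400 and slope < -1.0:
        return "nod"            # falling pitch + long pause: likely utterance end / backchannel point
    if pause_ms > 400 and slope > 1.0:
        return "tilt"           # rising pitch: question-like, tilt the head instead
    return "keep_still"         # short pause: the speaker is probably holding the turn

# Example: a falling contour followed by a 600 ms pause triggers a nod.
print(head_motion_from_prosody(np.array([180, 175, 0, 168, 160, 152]), pause_ms=600))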
Ryuji Yamazaki, Marco Nørskov, "Self-alteration in HRI", Poster presentation at International Conference : Going Beyond the Laboratory - Ethical and Societal Challenges for Robotics, Hanse Wissenschaftskolleg (HWK) - Institute for Advanced Study, Delmenhorst, Germany, February, 2014.
BibTeX:
@Inproceedings{Yamazaki2014,
  author    = {Ryuji Yamazaki and Marco N{\o}rskov},
  title     = {Self-alteration in HRI},
  booktitle = {International Conference : Going Beyond the Laboratory - Ethical and Societal Challenges for Robotics},
  year      = {2014},
  address   = {Hanse Wissenschaftskolleg (HWK) - Institute for Advanced Study, Delmenhorst, Germany},
  month     = Feb,
  day       = {13-15},
  file      = {Yamazaki2014.pdf:pdf/Yamazaki2014.pdf:PDF},
}
桑村海光, 山崎竜二, 西尾修一, 石黒浩, "テレノイドによる高齢者支援 ~ 特別養護老人ホームへの導入の経過報告 ~", 電子情報通信学会技術研究報告, 福祉情報工学研究会, no. WIT2013-47, 霧島観光ホテル, 鹿児島, pp. 23-28, October, 2013.
Abstract: 近年、介護を支援するロボットや機器の導入が注目されており、新しい技術を導入することで、高齢者の自立性を高め、介護福祉士の負担を軽減することが期待されている。しかし、新しい機器を導入するためには、施設を運営するスタッフや現場で働く介護福祉士の理解が必要で、特に実際に用いる介護福祉士には使い方、気を付ける点を熟知する必要がある。また、入居している高齢者にも新しい機器に対して不安を感じないように慣れる必要がある。本論文では、遠隔操作型アンドロイド「テレノイド」を施設に導入した事例を通して、新しい機器の導入時の注意事項、そしてその際の改善案について述べる。
BibTeX:
@Inproceedings{桑村海光2013a,
  author    = {桑村海光 and 山崎竜二 and 西尾修一 and 石黒浩},
  title     = {テレノイドによる高齢者支援 ~ 特別養護老人ホームへの導入の経過報告 ~},
  booktitle = {電子情報通信学会技術研究報告},
  year      = {2013},
  number    = {WIT2013-47},
  pages     = {23-28},
  address   = {霧島観光ホテル, 鹿児島},
  month     = Oct,
  publisher = {福祉情報工学研究会},
  day       = {25-27},
  url       = {http://www.ieice.org/ken/paper/20131026wBGW/},
  etitle    = {Elderly Support using Telenoid : A case report introducing to the care facility},
  abstract  = {近年、介護を支援するロボットや機器の導入が注目されており、新しい技術を導入することで、高齢者の自立性を高め、介護福祉士の負担を軽減することが期待されている。しかし、新しい機器を導入するためには、施設を運営するスタッフや現場で働く介護福祉士の理解が必要で、特に実際に用いる介護福祉士には使い方、気を付ける点を熟知する必要がある。また、入居している高齢者にも新しい機器に対して不安を感じないように慣れる必要がある。本論文では、遠隔操作型アンドロイド「テレノイド」を施設に導入した事例を通して、新しい機器の導入時の注意事項、そしてその際の改善案について述べる。},
  eabstract = {With the progress of the aging society, numbers of robots and other new devices for elderly care are intensively studied and developed. However, when introducing such new devices, we need to be careful, especially in the elderly facilities. Care staffs have to obtain deep understanding of their usages and effects. Besides, efforts have to be paid so that elderlies can gradually become familiar with the devices. In this paper, we describe the case study of introducing teleoperated android "Telenoid" to the elderly facilities and discuss issues that need attention on introducing new technology to the elderly facilities.},
  file      = {桑村海光2013a.pdf:pdf/桑村海光2013a.pdf:PDF},
  keywords  = {介護ロボット, 山崎竜二, 西尾修一, 石黒浩},
}
中道大介, 西尾修一, 石黒浩, "遠隔操作型アンドロイド「テレノイド」の遠隔操作とその訓練", 情報処理学会関西支部大会, 大阪大学中之島センター, 大阪, pp. G-01, September, 2013.
Abstract: 遠隔操作型アンドロイドの操作には慣れる時間を要する操作者も多く,効果的な訓練方法の確立が必要である. 本研究では,馴化訓練による操作感覚への影響を,アンドロイドへの身体感覚の転移を指標として検証した.
BibTeX:
@Inproceedings{中道大介2013,
  author          = {中道大介 and 西尾修一 and 石黒浩},
  title           = {遠隔操作型アンドロイド「テレノイド」の遠隔操作とその訓練},
  booktitle       = {情報処理学会関西支部大会},
  year            = {2013},
  pages           = {G-01},
  address         = {大阪大学中之島センター, 大阪},
  month           = Sep,
  day             = {25},
  url             = {https://ipsj.ixsq.nii.ac.jp/ej/?action=pages_view_main&active_action=repository_view_main_item_detail&item_id=96872&item_no=1&page_id=13&block_id=8},
  etitle          = {Teleoperation of Telenoid and its training},
  abstract        = {遠隔操作型アンドロイドの操作には慣れる時間を要する操作者も多く,効果的な訓練方法の確立が必要である. 本研究では,馴化訓練による操作感覚への影響を,アンドロイドへの身体感覚の転移を指標として検証した.},
  file            = {中道大介2013.pdf:pdf/中道大介2013.pdf:PDF},
}
中道大介, 住岡英信, 西尾修一, 石黒浩, "操作訓練による遠隔操作型アンドロイドへの身体感覚転移の度合いの向上", 第18回日本バーチャルリアリティ学会大会, グランフロント大阪, 大阪, pp. 331-334, September, 2013.
Abstract: 遠隔操作型アンドロイド「テレノイド」は操作者の身体動作と同期し,操作者の存在感を遠隔地に伝達できるロボットである.操作者がその操作に慣れ,テレノイドを自らの分身と感じて操作できるようになれば,遠隔コミュニケーションはより豊かになると考えられる.そこで本稿では,操作訓練とその効果を,身体感覚転移の度合いを指標として検証した.身体感覚転移とは,操作時にロボットが自らの身体であると錯覚する現象である.この検証の結果,テレノイドの後頭部を見て首の動作の同期を確認する訓練によって身体感覚転移の度合いが向上することが分かった.
BibTeX:
@Inproceedings{中道大介2013a,
  author          = {中道大介 and 住岡英信 and 西尾修一 and 石黒浩},
  title           = {操作訓練による遠隔操作型アンドロイドへの身体感覚転移の度合いの向上},
  booktitle       = {第18回日本バーチャルリアリティ学会大会},
  year            = {2013},
  pages           = {331-334},
  address         = {グランフロント大阪, 大阪},
  month           = Sep,
  day             = {18-20},
  url             = {http://conference.vrsj.org/ac2013/program/144/},
  etitle          = {Enhancement of body ownership transfer to teleoperated android through training},
  abstract        = {遠隔操作型アンドロイド「テレノイド」は操作者の身体動作と同期し,操作者の存在感を遠隔地に伝達できるロボットである.操作者がその操作に慣れ,テレノイドを自らの分身と感じて操作できるようになれば,遠隔コミュニケーションはより豊かになると考えられる.そこで本稿では,操作訓練とその効果を,身体感覚転移の度合いを指標として検証した.身体感覚転移とは,操作時にロボットが自らの身体であると錯覚する現象である.この検証の結果,テレノイドの後頭部を見て首の動作の同期を確認する訓練によって身体感覚転移の度合いが向上することが分かった.},
  file            = {中道大介2013a.pdf:pdf/中道大介2013a.pdf:PDF},
  keywords        = {遠隔操作; 訓練; 身体認知; 遠隔存在感},
}
住岡英信, 幸田健介, 西尾修一, 港隆史, 石黒浩, "土偶の変遷に基づくコミュニケーションメディアのミニマルデザインの検討", 情報処理学会関西支部大会, 大阪大学中之島センター, 大阪, pp. C-08, September, 2013.
Abstract: コミュニケーションアバターの外観は遠隔地の話者を感じるために大きな役割を果たす.本研究では,アバターを対話者と感じるための最小要素を人型表現への取り組みである土偶の発展から考察し,心理評価実験により検討する.
BibTeX:
@Inproceedings{住岡英信2013,
  author          = {住岡英信 and 幸田健介 and 西尾修一 and 港隆史 and 石黒浩},
  title           = {土偶の変遷に基づくコミュニケーションメディアのミニマルデザインの検討},
  booktitle       = {情報処理学会関西支部大会},
  year            = {2013},
  pages           = {C-08},
  address         = {大阪大学中之島センター, 大阪},
  month           = Sep,
  day             = {25},
  url             = {https://ipsj.ixsq.nii.ac.jp/ej/?action=pages_view_main&active_action=repository_view_main_item_detail&item_id=96828&item_no=1&page_id=13&block_id=8},
  etitle          = {Design considerations from chronological development of Dogu},
  abstract        = {コミュニケーションアバターの外観は遠隔地の話者を感じるために大きな役割を果たす.本研究では,アバターを対話者と感じるための最小要素を人型表現への取り組みである土偶の発展から考察し,心理評価実験により検討する.},
  file            = {住岡英信2013.pdf:pdf/住岡英信2013.pdf:PDF},
}
Ryuji Yamazaki, Shuichi Nishio, Kaiko Kuwamura, "Identity Construction of the Hybrid of Robot and Human", In 22nd IEEE International Symposium on Robot and Human Interactive Communication, Workshop on Enhancement/Training of Social Robotics Teleoperation and its Applications, Gyeongju, Korea, August, 2013.
BibTeX:
@Inproceedings{Yamazaki2013,
  author    = {Ryuji Yamazaki and Shuichi Nishio and Kaiko Kuwamura},
  title     = {Identity Construction of the Hybrid of Robot and Human},
  booktitle = {22nd IEEE International Symposium on Robot and Human Interactive Communication, Workshop on Enhancement/Training of Social Robotics Teleoperation and its Applications},
  year      = {2013},
  address   = {Gyeongju, Korea},
  month     = Aug,
  day       = {26-29},
}
Astrid M. von der Pütten, Christian Becker-Asano, Kohei Ogawa, Shuichi Nishio, Hiroshi Ishiguro, "Exploration and Analysis of People's Nonverbal Behavior Towards an Android", In the annual meeting of the International Communication Association, Phoenix, USA, May, 2012.
BibTeX:
@Inproceedings{Putten2012,
  author    = {Astrid M. von der P\"{u}tten and Christian Becker-Asano and Kohei Ogawa and Shuichi Nishio and Hiroshi Ishiguro},
  title     = {Exploration and Analysis of People's Nonverbal Behavior Towards an Android},
  booktitle = {the annual meeting of the International Communication Association},
  year      = {2012},
  address   = {Phoenix, USA},
  month     = May,
}
石井カルロス寿憲, 劉超然, 石黒浩, 萩田紀博, "フォルマントによる口唇動作生成の試み", 日本音響学会2012年春季研究発表会, 神奈川大学, 神奈川, pp. 373-374, March, 2012.
Abstract: 遠隔ロボットの口唇動作を遠隔者の声から自動的に生成することを背景に、フォルマント空間の写像変換による口唇動作生成手法を提案した。
BibTeX:
@Inproceedings{石井カルロス寿憲2012a,
  author          = {石井カルロス寿憲 and 劉超然 and 石黒浩 and 萩田紀博},
  title           = {フォルマントによる口唇動作生成の試み},
  booktitle       = {日本音響学会2012年春季研究発表会},
  year            = {2012},
  series          = {2-11-17},
  pages           = {373--374},
  address         = {神奈川大学, 神奈川},
  month           = Mar,
  day             = {13-15},
  etitle          = {Trials on a formant-based lip motion generation},
  abstract        = {遠隔ロボットの口唇動作を遠隔者の声から自動的に生成することを背景に、フォルマント空間の写像変換による口唇動作生成手法を提案した。},
  file            = {石井カルロス寿憲2012a.pdf:pdf/石井カルロス寿憲2012a.pdf:PDF},
}
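A minimal sketch of the general formant-to-lip mapping named in the entry above; the paper's actual transform of the formant space is not reproduced here. The rough intuition that F1 tracks mouth opening and F2 tracks lip spreading, together with the frequency ranges and linear scaling below, are illustrative assumptions.

import numpy as np

F1_RANGE = (250.0, 850.0)       # Hz: roughly close vowels /i, u/ up to open /a/
F2_RANGE = (700.0, 2500.0)      # Hz: roughly back/rounded /o, u/ up to front /i/

def formants_to_lip_command(f1_hz, f2_hz):
    """Map measured formants to (opening, spreading) actuator commands in [0, 1]."""
    opening = np.clip((f1_hz - F1_RANGE[0]) / (F1_RANGE[1] - F1_RANGE[0]), 0.0, 1.0)
    spreading = np.clip((f2_hz - F2_RANGE[0]) / (F2_RANGE[1] - F2_RANGE[0]), 0.0, 1.0)
    return float(opening), float(spreading)

print(formants_to_lip_command(750, 1250))   # /a/-like vowel: wide opening, middling spread
print(formants_to_lip_command(300, 2300))   # /i/-like vowel: nearly closed, spread lips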
山本知幸, 平田雅之, 池田尊司, 西尾修一, 松下光次郎, Maryam Alimardani, 石黒浩, "認知脳ロボティクスにおけるBMI・情動研究", 日本ロボット学会学術講演会, 東京, pp. AC1A2-2, September, 2011.
Abstract: 本稿では、BMI(Brain Machine Interface)を用いた上記3領域の融合研究に関して紹介する。
BibTeX:
@Inproceedings{山本知幸2011,
  author    = {山本知幸 and 平田雅之 and 池田尊司 and 西尾修一 and 松下光次郎 and Maryam Alimardani and 石黒浩},
  title     = {認知脳ロボティクスにおける{BMI}・情動研究},
  booktitle = {日本ロボット学会学術講演会},
  year      = {2011},
  pages     = {{AC1A}2-2},
  address   = {東京},
  month     = Sep,
  abstract  = {本稿では、{BMI}(Brain Machine Interface)を用いた上記3領域の融合研究に関して紹介する。},
  file      = {山本知幸2011.pdf:山本知幸2011.pdf:PDF},
}
港隆史, 西尾修一, 小川浩平, 石黒浩, "携帯型遠隔操作アンドロイド「エルフォイド」の研究開発", 日本ロボット学会学術講演会, 東京, pp. RSJ2011AC3O2-4, September, 2011.
Abstract: 本研究では遠隔操作型アンドロイドを携帯電話サイズに小型化することにより,何時でも何処でも誰でも自身の存在を遠隔地に伝達することができる新たなコミュニケーションメディアの実現を目指し,最初のプロトタイプとして「エルフォイドP1」を開発した.エルフォイドは人間の見かけのミニマルデザインを採用することで,誰もがエルフォイドを通して存在感を伝えることができるようにデザインされている.通信手段として携帯電話機能を有しており,携帯電話と同様,時間と場所を選ばずコミュニケーションが可能である.また外装には,人間の皮膚のような柔らかい素材を用いており,その触感からも人間を連想させるようになっている.本報告では開発したエルフォイドのプロトタイプを紹介するとともに,携帯型遠隔操作アンドロイド実現に向けた研究課題について述べる.
BibTeX:
@Inproceedings{港隆史2011,
  author    = {港隆史 and 西尾修一 and 小川浩平 and 石黒浩},
  title     = {携帯型遠隔操作アンドロイド「エルフォイド」の研究開発},
  booktitle = {日本ロボット学会学術講演会},
  year      = {2011},
  pages     = {{RSJ2011AC}3O2-4},
  address   = {東京},
  month     = Sep,
  day       = {7-9},
  abstract  = {本研究では遠隔操作型アンドロイドを携帯電話サイズに小型化することにより,何時でも何処でも誰でも自身の存在を遠隔地に伝達することができる新たなコミュニケーションメディアの実現を目指し,最初のプロトタイプとして「エルフォイド{P}1」を開発した.エルフォイドは人間の見かけのミニマルデザインを採用することで,誰もがエルフォイドを通して存在感を伝えることができるようにデザインされている.通信手段として携帯電話機能を有しており,携帯電話と同様,時間と場所を選ばずコミュニケーションが可能である.また外装には,人間の皮膚のような柔らかい素材を用いており,その触感からも人間を連想させるようになっている.本報告では開発したエルフォイドのプロトタイプを紹介するとともに,携帯型遠隔操作アンドロイド実現に向けた研究課題について述べる.},
  file      = {港隆史2011.pdf:港隆史2011.pdf:PDF},
}
Carlos T. Ishi, Chaoran Liu, Hiroshi Ishiguro, Norihiro Hagita, "Tele-operating the lip motion of humanoid robots from the operator's voice", In 第29回日本ロボット学会学術講演会, 芝浦工業大学豊洲キャンパス, 東京, pp. C1J3-6, September, 2011.
BibTeX:
@Inproceedings{Ishi2011,
  author          = {Carlos T. Ishi and Chaoran Liu and Hiroshi Ishiguro and Norihiro Hagita},
  title           = {Tele-operating the lip motion of humanoid robots from the operator's voice},
  booktitle       = {第29回日本ロボット学会学術講演会},
  year            = {2011},
  pages           = {C1J3-6},
  address         = {芝浦工業大学豊洲キャンパス, 東京},
  month           = Sep,
  day             = {7-9},
  file            = {Ishi2011.pdf:pdf/Ishi2011.pdf:PDF},
}
Astrid M. von der Pütten, Nicole C. Krämer, Christian Becker-Asano, Hiroshi Ishiguro, "An android in the field. How people react towards Geminoid HI-1 in a real world scenario", In the 7th Conference of the Media Psychology Division of the German Psychological Society, Jacobs University, Bremen, Germany, August, 2011.
BibTeX:
@Inproceedings{Putten2011a,
  author    = {Astrid M. von der P\"{u}tten and Nicole C. Kr\"{a}mer and Christian Becker-Asano and Hiroshi Ishiguro},
  title     = {An android in the field. How people react towards Geminoid HI-1 in a real world scenario},
  booktitle = {the 7th Conference of the Media Psychology Division of the German Psychological Society},
  year      = {2011},
  address   = {Jacobs University, Bremen, Germany},
  month     = Aug,
  day       = {10-11},
}
山崎竜二, 西尾修一, 小川浩平, 石黒浩, 幸田健介, 松村耕平, 藤波努, 寺井紀裕, "遠隔操作ロボットの福祉教育への適用", 人工知能学会全国大会, 岩手県(盛岡市), pp. 1A2-NFC1b-10, June, 2011.
Abstract: The old sense of community among Japanese has been weakening and the social isolation of elderly people is becoming a major issue in the acceleration of demographic aging. For facilitating the intergenerational communication as a welfare education, we started our research project to immerse a teleoperated humanoid robot, Telenoid, in an elementary school and to see the reaction of children. Also, we conducted a telenoid testing for the elderly with dementia to see their reaction. We discuss how the robot can be accepted in the intergenerational communication for creating a children-centric community for dementia care that is based on our embodiment as human beings.
BibTeX:
@Inproceedings{山崎竜二2011,
  author          = {山崎竜二 and 西尾修一 and 小川浩平 and 石黒浩 and 幸田健介 and 松村耕平 and 藤波努 and 寺井紀裕},
  title           = {遠隔操作ロボットの福祉教育への適用},
  booktitle       = {人工知能学会全国大会},
  year            = {2011},
  pages           = {1A2-NFC1b-10},
  address         = {岩手県(盛岡市)},
  month           = Jun,
  day             = {1-3},
  url             = {https://kaigi.org/jsai/webprogram/2011/pdf/412.pdf},
  etitle          = {The Application of a Teleoperated Robot to the Welfare Education in an Elementary School},
  abstract        = {The old sense of community among Japanese has been weakening and the social isolation of elderly people is becoming a major issue in the acceleration of demographic aging. For facilitating the intergenerational communication as a welfare education, we started our research project to immerse a teleoperated humanoid robot, Telenoid, in an elementary school and to see the reaction of children. Also, we conducted a telenoid testing for the elderly with dementia to see their reaction. We discuss how the robot can be accepted in the intergenerational communication for creating a children-centric community for dementia care that is based on our embodiment as human beings.},
  eabstract       = {The old sense of community among Japanese has been weakening and the social isolation of elderly people is becoming a major issue in the acceleration of demographic aging. For facilitating the intergenerational communication as a welfare education, we started our research project to immerse a teleoperated humanoid robot, Telenoid, in an elementary school and to see the reaction of children. Also, we conducted a telenoid testing for the elderly with dementia to see their reaction. We discuss how the robot can be accepted in the intergenerational communication for creating a children-centric community for dementia care that is based on our embodiment as human beings.},
  file            = {山崎竜二2011a.pdf:山崎竜二2011a.pdf:PDF},
  organization    = {人工知能学会},
}
劉超然, 石井カルロス寿憲, 石黒浩, "人とロボットの対話インタラクションにおける頭部動作効果の考察", 人工知能学会全国大会, 岩手県(盛岡市), pp. 3B1-OS22c-5, June, 2011.
Abstract: This paper proposes a model for generating head tilting and nodding based on rules inferred from analyzing the relationship between head motion and dialogue acts, and evaluates the model using two types of humanoid robot (one very human-like android, "Geminoid F", and one typical humanoid robot, "Robovie R2"). Subjective scores show that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only or directly mapping people's original motions. We also find that an upwards motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place.
BibTeX:
@Inproceedings{劉超然2011,
  author          = {劉超然 and 石井カルロス寿憲 and 石黒浩},
  title           = {人とロボットの対話インタラクションにおける頭部動作効果の考察},
  booktitle       = {人工知能学会全国大会},
  year            = {2011},
  pages           = {3B1-OS22c-5},
  address         = {岩手県(盛岡市)},
  month           = Jun,
  day             = {1-3},
  url             = {https://kaigi.org/jsai/webprogram/2011/pdf/421.pdf},
  etitle          = {Effects of Head Motion during Human-Robot Conversation Interaction},
  abstract        = {This paper proposes a model for generating head tilting and nodding based on rules inferred from analyzing the relationship between head motion and dialogue acts, and evaluates the model using two types of humanoid robot (one very human-like android, ``Geminoid F'', and one typical humanoid robot, ``Robovie R2''). Subjective scores show that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only or directly mapping people's original motions. We also find that an upwards motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place.},
  eabstract       = {This paper proposes a model for generating head tilting and nodding based on rules inferred from analyzing the relationship between head motion and dialogue acts, and evaluates the model using two types of humanoid robot (one very human-like android, ``Geminoid F'', and one typical humanoid robot, ``Robovie R2''). Subjective scores show that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only or directly mapping people's original motions. We also find that an upwards motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place.},
  file            = {劉超然2011.pdf:劉超然2011.pdf:PDF},
  organization    = {人工知能学会},
}
Panikos Heracleous, Norihiro Hagita, "A visual mode for communication in the deaf society", In Spring Meeting of Acoustical Society of Japan, Waseda University, Tokyo, Japan, pp. 57-60, March, 2011.
Abstract: In this article, automatic recognition of Cued Speech in French based on hidden Markov models (HMMs) is presented. Cued Speech is a visual mode, which uses hand shapes in different positions and in combination with lip-patterns of speech makes all the sounds of spoken language clearly understandable to deaf and hearing-impaired people. The aim of Cued Speech is to overcome the problems of lip-reading and thus enable deaf children and adults to understand full spoken language. In this study, lip shape component is fused with hand component using multi-stream HMM decision fusion to realize Cued Speech recognition, and continuous phoneme recognition experiments using data from a normal-hearing and a deaf cuer were conducted. In the case of the normal-hearing cuer, the obtained phoneme correct was 87.3%, and in the case of the deaf cuer 84.3%. The current study also includes the description of Cued Speech in Japanese.
BibTeX:
@Inproceedings{Heracleous2011d,
  author          = {Panikos Heracleous and Norihiro Hagita},
  title           = {A visual mode for communication in the deaf society},
  booktitle       = {Spring Meeting of Acoustical Society of Japan},
  year            = {2011},
  series          = {2-5-6},
  pages           = {57--60},
  address         = {Waseda University, Tokyo, Japan},
  month           = Mar,
  abstract        = {In this article, automatic recognition of Cued Speech in French based on hidden Markov models ({HMM}s) is presented. Cued Speech is a visual mode, which uses hand shapes in different positions and in combination with lip-patterns of speech makes all the sounds of spoken language clearly understandable to deaf and hearing-impaired people. The aim of Cued Speech is to overcome the problems of lip-reading and thus enable deaf children and adults to understand full spoken language. In this study, lip shape component is fused with hand component using multi-stream HMM decision fusion to realize Cued Speech recognition, and continuous phoneme recognition experiments using data from a normal-hearing and a deaf cuer were conducted. In the case of the normal-hearing cuer, the obtained phoneme correct was 87.3%, and in the case of the deaf cuer 84.3%. The current study also includes the description of Cued Speech in Japanese.},
  file            = {Heracleous2011d.pdf:Heracleous2011d.pdf:PDF},
}
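A toy sketch of the multi-stream decision fusion mentioned in the abstract above: each phoneme model scores the lip-shape stream and the hand stream separately, and the two log-likelihoods are combined with stream weights before picking the best phoneme. The diagonal-Gaussian scorers, the weight value, and the two-phoneme model set are illustrative assumptions rather than the paper's HMMs.

import numpy as np

def stream_loglik(obs, mean, var):
    """Log-likelihood of one observation under a diagonal Gaussian."""
    return float(-0.5 * np.sum(np.log(2 * np.pi * var) + (obs - mean) ** 2 / var))

def fused_score(lip_obs, hand_obs, model, w_lip=0.6):
    """Weighted multi-stream fusion; model holds (mean, var) per stream."""
    return (w_lip * stream_loglik(lip_obs, *model["lip"])
            + (1.0 - w_lip) * stream_loglik(hand_obs, *model["hand"]))

models = {   # toy per-phoneme stream models (lip features are 2-D, hand feature is 1-D)
    "/p/": {"lip": (np.array([0.1, 0.9]), np.array([0.05, 0.05])),
            "hand": (np.array([0.2]), np.array([0.1]))},
    "/a/": {"lip": (np.array([0.8, 0.2]), np.array([0.05, 0.05])),
            "hand": (np.array([0.7]), np.array([0.1]))},
}
lip, hand = np.array([0.75, 0.25]), np.array([0.65])
print(max(models, key=lambda ph: fused_score(lip, hand, models[ph])))   # -> /a/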
岡本恵里奈, 西尾修一, 石黒浩, "遠隔操作型アンドロイドによる存在感の伝達に必須な要素の検証", ヒューマンインタフェース学会研究会, vol. 12, no. 10, Nara, Japan, pp. 13-20, November, 2010.
Abstract: 対面して会話をする際,人は外見や声,動き,話の内容など,さまざまな要素を伝達しているが,これまで,それらの要素を個別に操作する(人の外見のみを変える,動作のみを別の人のものにする,といった実験をする)ことはできなかった。しかし,ジェミノイド(人間をモデルに開発されたロボット)という,いわば心身の分離を実現できる新しい遠隔コミュニケーションツールを用いることで,この検証がある程度可能になった。そこで,本稿では,ジェミノイドを拡張し,外見と声を操作者本人とは異なる人のそれに置き換え,本人らしさがどの程度伝達しうるのか,外見や声が対人認知にどのような影響を及ぼしているのかを,4人の被験者のジェミノイドを介したコミュニケーション実験によって検証した。その結果,対話相手の個性が認識できたのは37.0%で,「友人同士の場合の方が正解率が高い」という仮説に反して,友人が含まれる場合はさらに低くなることが明らかになった。
BibTeX:
@Inproceedings{岡本恵里奈2010,
  author          = {岡本恵里奈 and 西尾修一 and 石黒浩},
  title           = {遠隔操作型アンドロイドによる存在感の伝達に必須な要素の検証},
  booktitle       = {ヒューマンインタフェース学会研究会},
  year            = {2010},
  volume          = {12},
  number          = {10},
  pages           = {13--20},
  address         = {Nara, Japan},
  month           = Nov,
  abstract        = {対面して会話をする際,人は外見や声,動き,話の内容など,さまざまな要素を伝達しているが,これまで,それらの要素を個別に操作する(人の外見のみを変える,動作のみを別の人のものにする,といった実験をする)ことはできなかった。しかし,ジェミノイド(人間をモデルに開発されたロボット)という,いわば心身の分離を実現できる新しい遠隔コミュニケーションツールを用いることで,この検証がある程度可能になった。そこで,本稿では,ジェミノイドを拡張し,外見と声を操作者本人とは異なる人のそれに置き換え,本人らしさがどの程度伝達しうるのか,外見や声が対人認知にどのような影響を及ぼしているのかを,4人の被験者のジェミノイドを介したコミュニケーション実験によって検証した。その結果,対話相手の個性が認識できたのは37.0%で,「友人同士の場合の方が正解率が高い」という仮説に反して,友人が含まれる場合はさらに低くなることが明らかになった。},
  file            = {岡本恵里奈2010.pdf:岡本恵里奈2010.pdf:PDF},
}
山森崇義, 小川浩平, 西尾修一, 石黒浩, "ロボットの見かけや動きが目が合う条件に及ぼす影響", 電子情報通信学会技術研究報告, ヒューマンコミュニケーション基礎研究会, vol. HCS2008-60, 島根, pp. 13-18, March, 2009.
Abstract: 近年,見かけが人間に酷似したロボット,アンドロイドの研究がおこなわれている.一方で,視線,とりわけ相手と目を合わせるという動作(アイコンタクト)は,我々の社会生活にとって重要な役割を果たしている.近年の研究から,ロボットや対話メディアとのコミュニケーションにおいても視線の重要性が報告されており,目を合わせる機能の必要性が増えつつある.しかし,視線知覚そのものに関する実験的な研究はそれほど多くはないため,人と目が合うための条件や,条件に影響を与える要因が明らかではない.そこで,人と目を合わせる機能が最も必要とされるアンドロイドや他の対話メディアを用いて,人と目が合うための条件を検証する実験をおこなった.実験により,人と目が合うための条件に影響を与える要因を特定する.人と目が合うための条件に影響を与える要因として,目の形状や対象の見た目,視線の動き方が影響することが明らかになった.
BibTeX:
@Inproceedings{山森崇義2009,
  author          = {山森崇義 and 小川浩平 and 西尾修一 and 石黒浩},
  title           = {ロボットの見かけや動きが目が合う条件に及ぼす影響},
  booktitle       = {電子情報通信学会技術研究報告},
  year            = {2009},
  volume          = {HCS2008-60},
  pages           = {13--18},
  address         = {島根},
  month           = Mar,
  publisher       = {ヒューマンコミュニケーション基礎研究会},
  url             = {http://ci.nii.ac.jp/naid/110007325133},
  etitle          = {Effects to the condition for eye contact by differences of appearance and movement},
  abstract        = {近年,見かけが人間に酷似したロボット,アンドロイドの研究がおこなわれている.一方で,視線,とりわけ相手と目を合わせるという動作(アイコンタクト)は,我々の社会生活にとって重要な役割を果たしている.近年の研究から,ロボットや対話メディアとのコミュニケーションにおいても視線の重要性が報告されており,目を合わせる機能の必要性が増えつつある.しかし,視線知覚そのものに関する実験的な研究はそれほど多くはないため,人と目が合うための条件や,条件に影響を与える要因が明らかではない.そこで,人と目を合わせる機能が最も必要とされるアンドロイドや他の対話メディアを用いて,人と目が合うための条件を検証する実験をおこなった.実験により,人と目が合うための条件に影響を与える要因を特定する.人と目が合うための条件に影響を与える要因として,目の形状や対象の見た目,視線の動き方が影響することが明らかになった.},
  file            = {山森崇義2009.pdf:山森崇義2009.pdf:PDF},
  issn            = {09135685},
  journal         = {電子情報通信学会技術研究報告},
}
渡辺哲矢, 小川浩平, 西尾修一, 石黒浩, "遠隔操作型アンドロイドとの同調感により誘起される身体感覚の延長", 電子情報通信学会技術研究報告, ヒューマンコミュニケーション基礎研究会, vol. HCS2008-61, 島根, pp. 19-24, March, 2009.
Abstract: 遠隔操作型アンドロイドを操作する際,視覚フィードバックしかないのにも関わらず,ロボットの体に触られると自分の体に触られたように感じることがある.類似の現象として,視覚刺激に同期して触覚刺激を与えると身体感覚の延長が生ずる「Rubber Hand Illusion」が知られているが,視覚刺激のみでの研究事例は少ない.そこで我々はアンドロイドの遠隔操作時の同調性を制御した被験者実験を行い,実際に錯覚が生じているのか,またはどのような条件がそろえば,そのような錯覚が生じるか検証した.その結果同調感を高めると,視覚刺激のみでも身体感覚の延長が引き起こされることがわかった.
BibTeX:
@Inproceedings{渡辺哲矢2009,
  author    = {渡辺哲矢 and 小川浩平 and 西尾修一 and 石黒浩},
  title     = {遠隔操作型アンドロイドとの同調感により誘起される身体感覚の延長},
  booktitle = {電子情報通信学会技術研究報告},
  year      = {2009},
  volume    = {HCS2008-61},
  pages     = {19--24},
  address   = {島根},
  month     = Mar,
  publisher = {ヒューマンコミュニケーション基礎研究会},
  url       = {http://ci.nii.ac.jp/naid/110007325132},
  etitle    = {Body image extension induced by synchronization with teleoperated android robot},
  abstract  = {遠隔操作型アンドロイドを操作する際,視覚フィードバックしかないのにも関わらず,ロボットの体に触られると自分の体に触られたように感じることがある.類似の現象として,視覚刺激に同期して触覚刺激を与えると身体感覚の延長が生ずる「Rubber Hand Illusion」が知られているが,視覚刺激のみでの研究事例は少ない.そこで我々はアンドロイドの遠隔操作時の同調性を制御した被験者実験を行い,実際に錯覚が生じているのか,またはどのような条件がそろえば,そのような錯覚が生じるか検証した.その結果同調感を高めると,視覚刺激のみでも身体感覚の延長が引き起こされることがわかった.},
  eabstract = {It is known that teleoperators of android robots occasionally experience bodily image extension. Even though only visual feedback is provided, when others touch the robotic body, the operator feels as if s/he had been touched. A similar phenomenon named "Rubber Hand Illusion" is known which is said to reflect a three-way interaction among vision, touch and proprioception. In this research, we examined whether similar interaction occurs when replacing tactile sensation with synchronization of operating an android robot. The result showed that as the degree of synchronization rose, the participants began feeling the robotic body as part of their own body.},
  file      = {渡辺哲矢2009.pdf:渡辺哲矢2009.pdf:PDF},
  journal   = {電子情報通信学会技術研究報告},
}
山森崇義, 小川浩平, 西尾修一, 石黒浩, "対象の持つ身体性の違いが被注視感に及ぼす影響", 情報処理学会関西支部大会, 京都, pp. 255-258, October, 2008.
Abstract: 対象の自然さと視線知覚の関係を検証した実験について報告した。具体的には,先行研究で得られた知見から「対象の自然さと被注視感には相関関係がある」と「対象の自然さと視線の判断のしやすさには相関関係がある」という2つの仮説を設定し,観察対象,視線角度,視線動作を要因としてその検証を行ったこと,実験の結果,横軸を視線角度,縦軸を目が合う割合および見られる割合とした場合,グラフが山型になることが分かり,このことから,自然さが上がると目が合う割合と見られる割合が上がると言え,仮説の1つが支持されたこと,一方で,自然さと判断のしやすさに関しては,傾向としては見られるものの有意な相関関係は伺えなかったこと-等を述べた。
BibTeX:
@Inproceedings{山森崇義2008,
  author          = {山森崇義 and 小川浩平 and 西尾修一 and 石黒浩},
  title           = {対象の持つ身体性の違いが被注視感に及ぼす影響},
  booktitle       = {情報処理学会関西支部大会},
  year            = {2008},
  pages           = {255--258},
  address         = {京都},
  month           = Oct,
  etitle          = {Effects to feeling of being looked by differences of objects' embodiment},
  abstract        = {対象の自然さと視線知覚の関係を検証した実験について報告した。具体的には,先行研究で得られた知見から「対象の自然さと被注視感には相関関係がある」と「対象の自然さと視線の判断のしやすさには相関関係がある」という2つの仮説を設定し,観察対象,視線角度,視線動作を要因としてその検証を行ったこと,実験の結果,横軸を視線角度,縦軸を目が合う割合および見られる割合とした場合,グラフが山型になることが分かり,このことから,自然さが上がると目が合う割合と見られる割合が上がると言え,仮説の1つが支持されたこと,一方で,自然さと判断のしやすさに関しては,傾向としては見られるものの有意な相関関係は伺えなかったこと-等を述べた。},
  file            = {山森崇義2008.pdf:山森崇義2008.pdf:PDF},
}
山森崇義, 坂本大介, 西尾修一, 石黒浩, "アンドロイドとのアイコンタクトの成立条件の検証", 情報処理学会関西支部大会, 大阪, pp. 71-74, October, 2007.
BibTeX:
@Inproceedings{山森崇義2007,
  author          = {山森崇義 and 坂本大介 and 西尾修一 and 石黒浩},
  title           = {アンドロイドとのアイコンタクトの成立条件の検証},
  booktitle       = {情報処理学会関西支部大会},
  year            = {2007},
  pages           = {71--74},
  address         = {大阪},
  month           = Oct,
  file            = {山森崇義2007.pdf:山森崇義2007.pdf:PDF},
}
坂本大介, 神田崇行, 小野哲雄, 石黒浩, 萩田紀博, "遠隔操作型アンドロイド・ロボットシステムの開発と評価", 電子情報通信学会ネットワークロボット時限研究会, 京都, pp. 21-26, November, 2006.
Abstract: 本稿では非常に人間に近い外見を持ったロボットであるGeminoid HI-1を使用した実験について述べる.この実験の結果から、本ロボットが人間に近い存在感を持ち,人間らしく自然であるが,不気味であるという結果が得られた。
BibTeX:
@Inproceedings{坂本大介2006a,
  author          = {坂本大介 and 神田崇行 and 小野哲雄 and 石黒浩 and 萩田紀博},
  title           = {遠隔操作型アンドロイド・ロボットシステムの開発と評価},
  booktitle       = {電子情報通信学会ネットワークロボット時限研究会},
  year            = {2006},
  series          = {NR-TG-2-11},
  pages           = {21--26},
  address         = {京都},
  month           = Nov,
  etitle          = {Development and Evaluation of Tele-operated Android Robot system},
  abstract        = {本稿では非常に人間に近い外見を持ったロボットであるGeminoid HI-1を使用した実験について述べる.この実験の結果から、本ロボットが人間に近い存在感を持ち,人間らしく自然であるが,不気味であるという結果が得られた。},
  eabstract       = {We developed a tele-communication system that uses a human-like android robot called Geminoid HI-1. We conducted an experiment to verify the usefulness of this system. From the experimental result, we confirmed that our system has strong presence and human-likeness.},
  file            = {坂本大介2006a.pdf:坂本大介2006a.pdf:PDF},
  keywords        = {アンドロイド・ロボット;ヒューマノイド・ロボット;遠隔コミュニケーション;遠隔存在感; Android; Humanoid Robot; Tele-communication; Tele-presence},
  organization    = {電子情報通信学会 情報・ システムソサイエティ},
}
坂本大介, 神田崇行, 小野哲雄, 石黒浩, 萩田紀博, "遠隔存在感メディアとしてのアンドロイド・ロボットの可能性", 情報処理学会関西支部大会, 大阪, pp. 127-130, October, 2006.
Abstract: 本研究では人間の存在感を伝達するために遠隔操作型アンドロイド・ロボットシステムを開発した.本システムでは非常に人に近い外見を持つアンドロイド・ロボットであるGeminoid HI-1を使用する.本システムを使用した実験の結果,Geminoid HI-1を通して伝わる人間の存在感はビデオ会議システムを使用した場合の人間の存在感を上回ったことが確認された.さらに,被験者はビデオ会議システムと同程度に本システムにおいて人間らしく自然な会話ができたことが確認された.本稿ではこれらのシステムと実験について述べたあと,遠隔操作型アンドロイド・ロボットシステムによる遠隔存在感の実現についての議論を行う.
BibTeX:
@Inproceedings{坂本大介2006,
  author    = {坂本大介 and 神田崇行 and 小野哲雄 and 石黒浩 and 萩田紀博},
  title     = {遠隔存在感メディアとしてのアンドロイド・ロボットの可能性},
  booktitle = {情報処理学会関西支部大会},
  year      = {2006},
  pages     = {127--130},
  address   = {大阪},
  month     = Oct,
  abstract  = {本研究では人間の存在感を伝達するために遠隔操作型アンドロイド・ロボットシステムを開発した.本システムでは非常に人に近い外見を持つアンドロイド・ロボットであるGeminoid HI-1を使用する.本システムを使用した実験の結果,Geminoid HI-1を通して伝わる人間の存在感はビデオ会議システムを使用した場合の人間の存在感を上回ったことが確認された.さらに,被験者はビデオ会議システムと同程度に本システムにおいて人間らしく自然な会話ができたことが確認された.本稿ではこれらのシステムと実験について述べたあと,遠隔操作型アンドロイド・ロボットシステムによる遠隔存在感の実現についての議論を行う.},
  issn      = {03875806},
}