Yoichi Matsuyama
Associate Research Professor, Waseda University
Verified email at pcl.cs.waseda.ac.jp - Homepage
Title · Cited by · Year
Socially-aware animated intelligent personal assistant agent
Y Matsuyama, A Bhardwaj, R Zhao, O Romeo, S Akoju, J Cassell
Proceedings of the 17th annual meeting of the special interest group on …, 2016
104 · 2016
Four-participant group conversation: A facilitation robot controlling engagement density as the fourth participant
Y Matsuyama, I Akiba, S Fujie, T Kobayashi
Computer Speech & Language 33 (1), 1-24, 2015
88 · 2015
A model of social explanations for a conversational movie recommendation system
F Pecune, S Murali, V Tsai, Y Matsuyama, J Cassell
Proceedings of the 7th international conference on human-agent interaction …, 2019
65 · 2019
Framework of Communication Activation Robot Participating in Multiparty Conversation.
Y Matsuyama, H Taniyama, S Fujie, T Kobayashi
AAAI fall symposium: dialog with robots, 68-73, 2010
41 · 2010
Field trial analysis of socially aware robot assistant
F Pecune, J Chen, Y Matsuyama, J Cassell
Proceedings of the 17th international conference on autonomous agents and …, 2018
39 · 2018
Conversation robot participating in and activating a group communication.
S Fujie, Y Matsuyama, H Taniyama, T Kobayashi
InterSpeech, 264-267, 2009
35 · 2009
A user simulator architecture for socially-aware conversational agents
A Jain, F Pecune, Y Matsuyama, J Cassell
Proceedings of the 18th international conference on intelligent virtual …, 2018
27 · 2018
Initiating and maintaining collaborations and facilitating understanding in interdisciplinary group research
SJ Beck, AL Meinecke, Y Matsuyama, CC Lee
Small Group Research 48 (5), 532-543, 2017
22 · 2017
Automatic expressive opinion sentence generation for enjoyable conversational systems
Y Matsuyama, A Saito, S Fujie, T Kobayashi
IEEE/ACM Transactions on Audio, Speech, and Language Processing 23 (2), 313-326, 2014
21 · 2014
System design of group communication activator: an entertainment task for elderly care
Y Matsuyama, H Taniyama, S Fujie, T Kobayashi
Proceedings of the 4th ACM/IEEE international conference on Human robot …, 2009
19 · 2009
Analysis of multimodal features for speaking proficiency scoring in an interview dialogue
M Saeki, Y Matsuyama, S Kobashikawa, T Ogawa, T Kobayashi
2021 IEEE Spoken Language Technology Workshop (SLT), 629-635, 2021
14 · 2021
Psychological evaluation of a group communication activation robot in a party game
Y Matsuyama, S Fujie, H Taniyama, T Kobayashi
Interspeech 2010, 3046-3049, 2010
13 · 2010
A conversation robot that participates in and activates communication among people [in Japanese]
S Fujie, Y Matsuyama, H Taniyama, T Kobayashi
IEICE Transactions A 95 (1), 37-45, 2012
12 · 2012
Designing communication activation system in group communication
Y Matsuyama, H Taniyama, S Fujie, T Kobayashi
Humanoids 2008-8th IEEE-RAS International Conference on Humanoid Robots, 629-634, 2008
12 · 2008
IEICE Technical Report
ST KIT, Y Ota, T Tajiri, J Tatebayashi, S Iwamoto, Y Arakawa, ...
Workshop Date 118 (128), 1988
12* · 1988
Refinement of utterance fluency feature extraction and automated scoring of L2 oral fluency with dialogic features
R Matsuura, S Suzuki, M Saeki, T Ogawa, Y Matsuyama
2022 Asia-Pacific signal and information processing association annual …, 2022
11 · 2022
Socially-Conditioned Task Reasoning for a Virtual Tutoring Agent.
Z Zhao, M Madaio, F Pecune, Y Matsuyama, J Cassell
Grantee Submission, 2018
10 · 2018
SCHEMA: multi-party interaction-oriented humanoid robot
Y Matsuyama, K Hosoya, H Taniyama, H Tsuboi, S Fujie, T Kobayashi
ACM SIGGRAPH ASIA 2009 Art Gallery & Emerging Technologies: Adaptation, 82-82, 2009
10 · 2009
Multimodal turn-taking model using visual cues for end-of-utterance prediction in spoken dialogue systems
F Kurata, M Saeki, S Fujie, Y Matsuyama
Proc. Interspeech, 2658-2662, 2023
8 · 2023
A WOZ study for an incremental proficiency scoring interview agent eliciting ratable samples
M Saeki, W Demkow, T Kobayashi, Y Matsuyama
Conversational AI for Natural Human-Centric Interaction: 12th International …, 2022
8 · 2022
Articles 1–20