Qiaozi Gao
Amazon Alexa AI
Verified email at amazon.com
Title / Cited by / Year
Recent advances in natural language inference: A survey of benchmarks, resources, and approaches
S Storks, Q Gao, JY Chai
arXiv preprint arXiv:1904.01172, 2019
Cited by 110, 2019
Language to Action: Towards Interactive Task Learning with Physical Agents.
JY Chai, Q Gao, L She, S Yang, S Saba-Sadiya, G Xu
IJCAI 7, 2-9, 2018
Cited by 106, 2018
Commonsense reasoning for natural language understanding: A survey of benchmarks, resources, and approaches
S Storks, Q Gao, JY Chai
arXiv preprint arXiv:1904.01172, 1-60, 2019
Cited by 87, 2019
Embodied BERT: A transformer model for embodied, language-guided visual task completion
A Suglia, Q Gao, J Thomason, G Thattai, G Sukhatme
arXiv preprint arXiv:2108.04927, 2021
Cited by 78, 2021
DialFRED: Dialogue-enabled agents for embodied instruction following
X Gao, Q Gao, R Gong, K Lin, G Thattai, GS Sukhatme
IEEE Robotics and Automation Letters 7 (4), 10049-10056, 2022
Cited by 71, 2022
Physical causality of action verbs in grounded language understanding
Q Gao, M Doering, S Yang, J Chai
Proceedings of the 54th Annual Meeting of the Association for Computational …, 2016
Cited by 54, 2016
Grounded semantic role labeling
S Yang, Q Gao, C Liu, C Xiong, SC Zhu, J Chai
Proceedings of the 2016 Conference of the North American Chapter of the …, 2016
Cited by 53, 2016
What action causes this? towards naive physical action-effect prediction
Q Gao, S Yang, J Chai, L Vanderwende
Proceedings of the 56th Annual Meeting of the Association for Computational …, 2018
Cited by 36, 2018
Towards large-scale interpretable knowledge graph reasoning for dialogue systems
YL Tuan, S Beygi, M Fazel-Zarandi, Q Gao, A Cervone, WY Wang
arXiv preprint arXiv:2203.10610, 2022
Cited by 28, 2022
Tiered reasoning for intuitive physics: Toward verifiable commonsense language understanding
S Storks, Q Gao, Y Zhang, J Chai
arXiv preprint arXiv:2109.04947, 2021
Cited by 28, 2021
Alexa Arena: A user-centric interactive platform for embodied AI
Q Gao, G Thattai, S Shakiah, X Gao, S Pansare, V Sharma, G Sukhatme, ...
Advances in Neural Information Processing Systems 36, 2024
Cited by 26, 2024
GROUNDHOG: Grounding large language models to holistic segmentation
Y Zhang, Z Ma, X Gao, S Shakiah, Q Gao, J Chai
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2024
Cited by 20, 2024
Luminous: Indoor scene generation for embodied AI challenges
Y Zhao, K Lin, Z Jia, Q Gao, G Thattai, J Thomason, GS Sukhatme
arXiv preprint arXiv:2111.05527, 2021
Cited by 16, 2021
Commonsense justification for action explanation
S Yang, Q Gao, S Saba-Sadiya, J Chai
Proceedings of the 2018 Conference on Empirical Methods in Natural Language …, 2018
Cited by 16, 2018
Learning to act with affordance-aware multimodal neural SLAM
Z Jia, K Lin, Y Zhao, Q Gao, G Thattai, GS Sukhatme
2022 IEEE/RSJ International Conference on Intelligent Robots and Systems …, 2022
Cited by 15, 2022
Are we there yet? Learning to localize in embodied instruction following
S Storks, Q Gao, G Thattai, G Tur
arXiv preprint arXiv:2101.03431, 2021
Cited by 10, 2021
Interactive teaching for conversational AI
Q Ping, F Niu, G Thattai, J Chengottusseriyil, Q Gao, A Reganti, ...
arXiv preprint arXiv:2012.00958, 2020
Cited by 10, 2020
LEMMA: Learning language-conditioned multi-robot manipulation
R Gong, X Gao, Q Gao, S Shakiah, G Thattai, GS Sukhatme
IEEE Robotics and Automation Letters, 2023
Cited by 9, 2023
Inter-functional analysis of high-throughput phenotype data by non-parametric clustering and its application to photosynthesis
Q Gao, E Ostendorf, JA Cruz, R Jin, DM Kramer, J Chen
Bioinformatics 32 (1), 67-76, 2016
Cited by 9, 2016
Mastering robot manipulation with multimodal prompts through pretraining and multi-task fine-tuning
J Li, Q Gao, M Johnston, X Gao, X He, S Shakiah, H Shi, R Ghanadan, ...
arXiv preprint arXiv:2310.09676, 2023
Cited by 7, 2023
Articles 1–20