Chongyang Tao
WizardLM: Empowering Large Language Models to Follow Complex Instructions
C Xu*, Q Sun*, K Zheng*, X Geng, P Zhao, J Feng, C Tao, D Jiang
Proc. ICLR, 2023
Cited by 725*
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Z Luo, C Xu, P Zhao, Q Sun, X Geng, W Hu, C Tao, J Ma, Q Lin, D Jiang
Proc. ICLR, 2023
Cited by 466
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
H Luo, Q Sun, C Xu, P Zhao, J Lou, C Tao, X Geng, Q Lin, S Chen, ...
arXiv preprint arXiv:2308.09583, 2023
Cited by 271
RUBER: An Unsupervised Method for Automatic Evaluation of Open-Domain Dialog Systems
C Tao, L Mou, D Zhao, R Yan
Proc. AAAI, 722-729, 2018
Cited by 256
Knowledge-Grounded Dialogue Generation with Pre-trained Language Models
X Zhao, W Wu, C Xu, C Tao, D Zhao, R Yan
Proc. EMNLP, 2020
Cited by 217
Overcoming Catastrophic Forgetting for Continual Learning via Model Adaptation
W Hu*, Z Lin*, B Liu*, C Tao, Z Tao, J Ma, D Zhao, R Yan
Proc. ICLR, 2018
Cited by 183
Get The Point of My Utterance! Learning Towards Effective Responses with Multi-Head Attention Mechanism.
C Tao, S Gao, M Shang, W Wu, D Zhao, R Yan
Proc. IJCAI, 4418-4424, 2018
Cited by 159
Multi-Representation Fusion Network for Multi-Turn Response Selection in Retrieval-Based Chatbots
C Tao, W Wu, C Xu, W Hu, D Zhao, R Yan
Proc. WSDM, 267-275, 2019
Cited by 156
One Time of Interaction May Not be Enough: Go Deep With an Interaction-over-interaction Network for Response Selection in Dialogues
C Tao, W Wu, C Xu, W Hu, D Zhao, R Yan
Proc. ACL, 2019
Cited by 139
Low-Resource Knowledge-Grounded Dialogue Generation
X Zhao, W Wu, C Tao, C Xu, D Zhao, R Yan
Proc. ICLR, 2020
Cited by 112
PromDA: Prompt-based Data Augmentation for Low-Resource NLU Tasks
Y Wang, C Xu, Q Sun, H Hu, C Tao, X Geng, D Jiang
Proc. ACL, 2022
Cited by 85
A survey on knowledge distillation of large language models
X Xu, M Li, C Tao#, T Shen, R Cheng, J Li, C Xu, D Tao, T Zhou
arXiv preprint arXiv:2402.13116, 2024
Cited by 82
Zero-Resource Knowledge-Grounded Dialogue Generation
L Li, C Xu, W Wu, Y Zhao, X Zhao, C Tao
Proc. NeurIPS, 2020
Cited by 76
Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues
R Xu, C Tao, D Jiang, X Zhao, D Zhao, R Yan
Proc. AAAI, 2021
Cited by 68
Multi-Granularity Structural Knowledge Distillation for Language Model Compression
C Liu, C Tao, J Feng, D Zhao
Proc. ACL, 1001-1011, 2022
Cited by 50
MPC-BERT: A Pre-Trained Language Model for Multi-Party Conversation Understanding
JC Gu, C Tao, ZH Ling, C Xu, X Geng, D Jiang
Proc. ACL, 2021
Cited by 49
A Document-grounded Matching Network for Response Selection in Retrieval-based Chatbots
X Zhao*, C Tao*, W Wu, C Xu, D Zhao, R Yan
Proc. IJCAI, 2019
Cited by 45
Iterative Document Representation Learning Towards Summarization with Polishing
X Chen, S Gao, C Tao, Y Song, D Zhao, R Yan
Proc. EMNLP, 2018
Cited by 44
Neural Response Generation with Meta-Words
C Xu, W Wu, C Tao, H Hu, M Schuerman, Y Wang
Proc. ACL, 2019
Cited by 41
Leveraging Large Language Models for NLG Evaluation: Advances and Challenges
Z Li*, X Xu*, T Shen, C Xu, JC Gu, Y Lai, C Tao#, S Ma
Proc. EMNLP, 2024
Cited by 40*
Articles 1–20