Tianyi Tang
Qwen Team, Alibaba Group & Renmin University of China
Verified email at alibaba-inc.com - Homepage
Title · Cited by · Year
A survey of large language models
WX Zhao, K Zhou, J Li, T Tang, X Wang, Y Hou, Y Min, B Zhang, J Zhang, ...
arXiv preprint arXiv:2303.18223, 2023
Cited by 4562* · 2023
Qwen2.5 Technical Report
A Yang, B Yang, B Zhang, B Hui, B Zheng, B Yu, C Li, D Liu, F Huang, ...
arXiv preprint arXiv:2412.15115, 2024
Cited by 1224* · 2024
A survey of pretrained language models based text generation
J Li, T Tang, WX Zhao, JY Nie, JR Wen
arXiv preprint arXiv:2201.05273, 2022
Cited by 468* · 2022
Not all languages are created equal in LLMs: Improving multilingual capability by cross-lingual-thought prompting
H Huang, T Tang, D Zhang, WX Zhao, T Song, Y Xia, F Wei
arXiv preprint arXiv:2305.07004, 2023
Cited by 112 · 2023
BAMBOO: A comprehensive benchmark for evaluating long text modeling capacities of large language models
Z Dong, T Tang, J Li, WX Zhao, JR Wen
arXiv preprint arXiv:2309.13345, 2023
Cited by 61 · 2023
A survey on long text modeling with transformers
Z Dong, T Tang, L Li, WX Zhao
arXiv preprint arXiv:2302.14502, 2023
Cited by 59 · 2023
Language-specific neurons: The key to multilingual capabilities in large language models
T Tang, W Luo, H Huang, D Zhang, X Wang, X Zhao, F Wei, JR Wen
arXiv preprint arXiv:2402.16438, 2024
Cited by 55 · 2024
Few-shot knowledge graph-to-text generation with pretrained language models
J Li, T Tang, WX Zhao, Z Wei, NJ Yuan, JR Wen
arXiv preprint arXiv:2106.01623, 2021
Cited by 55 · 2021
Learning to transfer prompts for text generation
J Li, T Tang, JY Nie, JR Wen, WX Zhao
arXiv preprint arXiv:2205.01543, 2022
Cited by 43 · 2022
TextBox 2.0: A text generation library with pre-trained language models
T Tang, J Li, Z Chen, Y Hu, Z Yu, W Dai, Z Dong, X Cheng, Y Wang, ...
arXiv preprint arXiv:2212.13005, 2022
Cited by 40* · 2022
MVP: Multi-task supervised pre-training for natural language generation
T Tang, J Li, WX Zhao, JR Wen
arXiv preprint arXiv:2206.12131, 2022
Cited by 36 · 2022
Context-tuning: Learning contextualized prompts for natural language generation
T Tang, J Li, WX Zhao, JR Wen
arXiv preprint arXiv:2201.08670, 2022
Cited by 32 · 2022
Beyond imitation: Leveraging fine-grained quality signals for alignment
G Guo, R Zhao, T Tang, WX Zhao, JR Wen
arXiv preprint arXiv:2311.04072, 2023
Cited by 21 · 2023
The web can be your oyster for improving large language models
J Li, T Tang, WX Zhao, J Wang, JY Nie, JR Wen
arXiv preprint arXiv:2305.10998, 2023
Cited by 21* · 2023
ELMER: A non-autoregressive pre-trained language model for efficient and effective text generation
J Li, T Tang, WX Zhao, JY Nie, JR Wen
arXiv preprint arXiv:2210.13304, 2022
Cited by 18 · 2022
Not all metrics are guilty: Improving NLG evaluation by diversifying references
T Tang, H Lu, YE Jiang, H Huang, D Zhang, WX Zhao, T Kocmi, F Wei
arXiv preprint arXiv:2305.15067, 2023
Cited by 16* · 2023
Learning to imagine: Visually-augmented natural language generation
T Tang, Y Chen, Y Du, J Li, WX Zhao, JR Wen
arXiv preprint arXiv:2305.16944, 2023
Cited by 14 · 2023
Zero-shot visual question answering with language model feedback
Y Du, J Li, T Tang, WX Zhao, JR Wen
arXiv preprint arXiv:2305.17006, 2023
Cited by 9 · 2023
ElitePLM: An empirical study on general language ability evaluation of pretrained language models
J Li, T Tang, Z Gong, L Yang, Z Yu, Z Chen, J Wang, WX Zhao, JR Wen
arXiv preprint arXiv:2205.01523, 2022
Cited by 6 · 2022
Towards effective ancient Chinese translation: dataset, model, and evaluation
G Guo, J Yang, F Lu, J Qin, T Tang, WX Zhao
CCF International Conference on Natural Language Processing and Chinese …, 2023
Cited by 5 · 2023
Articles 1–20