A survey of large language models. WX Zhao, K Zhou, J Li, T Tang, X Wang, Y Hou, Y Min, B Zhang, J Zhang, et al. arXiv preprint arXiv:2303.18223, 2023. | 4562* | 2023 |
Pre-trained language models for text generation: A survey. J Li, T Tang, WX Zhao, JY Nie, JR Wen. ACM Computing Surveys 56 (9), 1-39, 2024. | 468 | 2024 |
HaluEval: A large-scale hallucination evaluation benchmark for large language models. J Li, X Cheng, WX Zhao, JY Nie, JR Wen. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023. | 414 | 2023 |
A survey of vision-language pre-trained models. Y Du, Z Liu, J Li, WX Zhao. arXiv preprint arXiv:2202.10936, 2022. | 232 | 2022 |
WenLan: Bridging vision and language by large-scale multi-modal pre-training. Y Huo, M Zhang, G Liu, H Lu, Y Gao, G Yang, J Wen, H Zhang, B Xu, et al. arXiv preprint arXiv:2103.06561, 2021. | 147 | 2021 |
The dawn after the dark: An empirical study on factuality hallucination in large language models. J Li, J Chen, R Ren, X Cheng, WX Zhao, JY Nie, JR Wen. arXiv preprint arXiv:2401.03205, 2024. | 79 | 2024 |
BAMBOO: A comprehensive benchmark for evaluating long text modeling capacities of large language models. Z Dong, T Tang, J Li, WX Zhao, JR Wen. arXiv preprint arXiv:2309.13345, 2023. | 61 | 2023 |
A survey on long text modeling with transformers. Z Dong, T Tang, L Li, WX Zhao. arXiv preprint arXiv:2302.14502, 2023. | 59 | 2023 |
Few-shot knowledge graph-to-text generation with pretrained language models. J Li, T Tang, WX Zhao, Z Wei, NJ Yuan, JR Wen. Findings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL), 2021. | 55 | 2021 |
Mining implicit entity preference from user-item interaction data for knowledge graph completion via adversarial learning. G He, J Li, WX Zhao, P Liu, JR Wen. Proceedings of The Web Conference 2020, 740-751, 2020. | 46 | 2020 |
Learning to transfer prompts for text generation. J Li, T Tang, JY Nie, JR Wen, WX Zhao. NAACL 2022. | 43 | 2022 |
TextBox 2.0: A text generation library with pre-trained language models. T Tang, J Li, Z Chen, Y Hu, Z Yu, W Dai, Z Dong, X Cheng, Y Wang, et al. arXiv preprint arXiv:2212.13005, 2022. | 40* | 2022 |
Knowledge-enhanced personalized review generation with capsule graph neural network. J Li, S Li, WX Zhao, G He, Z Wei, NJ Yuan, JR Wen. Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM), 2020. | 40 | 2020 |
Generating long and informative reviews with aspect-aware coarse-to-fine decoding. J Li, WX Zhao, JR Wen, Y Song. The 57th Annual Meeting of the Association for Computational Linguistics (ACL), 2019. | 39 | 2019 |
MVP: Multi-task supervised pre-training for natural language generation. T Tang, J Li, WX Zhao, JR Wen. arXiv preprint arXiv:2206.12131, 2022. | 36 | 2022 |
Context-tuning: Learning contextualized prompts for natural language generation. T Tang, J Li, WX Zhao, JR Wen. arXiv preprint arXiv:2201.08670, 2022. | 32 | 2022 |
Knowledge-based review generation by coherence enhanced text planning. J Li, WX Zhao, Z Wei, NJ Yuan, JR Wen. The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2021. | 29 | 2021 |
REAR: A relevance-aware retrieval-augmented framework for open-domain question answering. Y Wang, R Ren, J Li, WX Zhao, J Liu, JR Wen. arXiv preprint arXiv:2402.17497, 2024. | 21 | 2024 |
The web can be your oyster for improving large language models. J Li, T Tang, WX Zhao, J Wang, JY Nie, JR Wen. arXiv preprint arXiv:2305.10998, 2023. | 21* | 2023 |
ELMER: A non-autoregressive pre-trained language model for efficient and effective text generation. J Li, T Tang, WX Zhao, JY Nie, JR Wen. arXiv preprint arXiv:2210.13304, 2022. | 18 | 2022 |