| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| Retrieve Only When It Needs: Adaptive Retrieval Augmentation for Hallucination Mitigation in Large Language Models | H Ding, L Pang, Z Wei, H Shen, X Cheng | arXiv preprint arXiv:2402.10612 | 37 | 2024 |
| Everything is Editable: Extend Knowledge Editing to Unstructured Data in Large Language Models | J Deng, Z Wei, L Pang, H Ding, H Shen, X Cheng | arXiv preprint arXiv:2405.15349 | 12* | 2024 |
| MLaKE: Multilingual Knowledge Editing Benchmark for Large Language Models | Z Wei, J Deng, L Pang, H Ding, H Shen, X Cheng | arXiv preprint arXiv:2404.04990 | 11 | 2024 |
| Stable Knowledge Editing in Large Language Models | Z Wei, L Pang, H Ding, J Deng, H Shen, X Cheng | arXiv preprint arXiv:2402.13048 | 10 | 2024 |
| When to Trust LLMs: Aligning Confidence with Response Quality | S Tao, L Yao, H Ding, Y Xie, Q Cao, F Sun, J Gao, H Shen, B Ding | arXiv preprint arXiv:2404.17287 | 9 | 2024 |
| MacLaSa: Multi-Aspect Controllable Text Generation via Efficient Sampling from Compact Latent Space | H Ding, L Pang, Z Wei, H Shen, X Cheng, TS Chua | arXiv preprint arXiv:2305.12785 | 8 | 2023 |
| Revisiting Robust RAG: Do We Still Need Complex Robust Training in the Era of Powerful LLMs? | H Ding, S Tao, L Pang, Z Wei, L Chen, K Xu, H Shen, X Cheng | arXiv preprint arXiv:2502.11400 | | 2025 |
| ToolCoder: A Systematic Code-Empowered Tool Learning Framework for Large Language Models | H Ding, S Tao, L Pang, Z Wei, J Gao, B Ding, H Shen, X Cheng | arXiv preprint arXiv:2502.11404 | | 2025 |