Hanxing Ding
Title
Cited by
Year
Retrieve Only When It Needs: Adaptive Retrieval Augmentation for Hallucination Mitigation in Large Language Models
H Ding, L Pang, Z Wei, H Shen, X Cheng
arXiv preprint arXiv:2402.10612, 2024
37 · 2024
Everything is Editable: Extend Knowledge Editing to Unstructured Data in Large Language Models
J Deng, Z Wei, L Pang, H Ding, H Shen, X Cheng
arXiv preprint arXiv:2405.15349, 2024
12* · 2024
MLaKE: Multilingual Knowledge Editing Benchmark for Large Language Models
Z Wei, J Deng, L Pang, H Ding, H Shen, X Cheng
arXiv preprint arXiv:2404.04990, 2024
11 · 2024
Stable Knowledge Editing in Large Language Models
Z Wei, L Pang, H Ding, J Deng, H Shen, X Cheng
arXiv preprint arXiv:2402.13048, 2024
10 · 2024
When to Trust LLMs: Aligning Confidence with Response Quality
S Tao, L Yao, H Ding, Y Xie, Q Cao, F Sun, J Gao, H Shen, B Ding
arXiv preprint arXiv:2404.17287, 2024
9 · 2024
MacLaSa: Multi-Aspect Controllable Text Generation via Efficient Sampling from Compact Latent Space
H Ding, L Pang, Z Wei, H Shen, X Cheng, TS Chua
arXiv preprint arXiv:2305.12785, 2023
8 · 2023
Revisiting Robust RAG: Do We Still Need Complex Robust Training in the Era of Powerful LLMs?
H Ding, S Tao, L Pang, Z Wei, L Chen, K Xu, H Shen, X Cheng
arXiv preprint arXiv:2502.11400, 2025
2025
ToolCoder: A Systematic Code-Empowered Tool Learning Framework for Large Language Models
H Ding, S Tao, L Pang, Z Wei, J Gao, B Ding, H Shen, X Cheng
arXiv preprint arXiv:2502.11404, 2025
2025