| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| InternLM2 Technical Report | Z Cai, M Cao, H Chen, K Chen, K Chen, X Chen, X Chen, Z Chen, Z Chen, ... | arXiv preprint arXiv:2403.17297 | 457* | 2024 |
| Redeem myself: Purifying backdoors in deep learning models using self attention distillation | X Gong, Y Chen, W Yang, Q Wang, Y Gu, H Huang, C Shen | 2023 IEEE Symposium on Security and Privacy (SP), 755-772 | 22 | 2023 |
| Lagent: a lightweight open-source framework that allows users to efficiently build large language model (LLM)-based agents | LD Team | https://github.com/InternLM/lagent | 6* | 2023 |
| ANAH: Analytical Annotation of Hallucinations in Large Language Models | Z Ji*, Y Gu*, W Zhang, C Lyu, D Lin, K Chen | Proceedings of the 62nd Annual Meeting of the Association for Computational … | 5 | 2024 |
| ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models | Y Gu*, Z Ji*, W Zhang, C Lyu, D Lin, K Chen | Advances in Neural Information Processing Systems 37, 60012-60039 | 3 | 2024 |
| Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning | C Lyu*, S Gao*, Y Gu*, W Zhang*, J Gao, K Liu, Z Wang, S Li, Q Zhao, ... | arXiv preprint arXiv:2502.06781 | 1 | 2025 |
| BackCache: Mitigating contention-based cache timing attacks by hiding cache line evictions | Q Wang, X Zhang, H Wang, Y Gu, M Tang | arXiv preprint arXiv:2304.10268 | 1 | 2023 |
| Mask-DPO: Generalizable Fine-grained Factuality Alignment of LLMs | Y Gu, W Zhang, C Lyu, D Lin, K Chen | The Thirteenth International Conference on Learning Representations | | 2025 |
| One more set: Mitigating conflict-based cache side-channel attacks by extending cache set | Y Gu, M Tang, Q Wang, H Wang, H Ding | Journal of Systems Architecture 144, 102997 | | 2023 |