Runxin Xu
DeepSeek AI | Peking University
Verified email at stu.pku.edu.cn
Title · Cited by · Year
Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning
DG DeepSeek-AI, D Yang, H Zhang, J Song, R Zhang, R Xu, Q Zhu, S Ma, ...
arXiv preprint arXiv:2501.12948, 2025
365 · 2025
Deepseekmath: Pushing the limits of mathematical reasoning in open language models
Z Shao, P Wang, Q Zhu, R Xu, J Song, X Bi, H Zhang, M Zhang, YK Li, ...
arXiv preprint arXiv:2402.03300, 2024
349* · 2024
Double graph based reasoning for document-level relation extraction
S Zeng, R Xu, B Chang, L Li
arXiv preprint arXiv:2009.13752, 2020
271 · 2020
Deepseek llm: Scaling open-source language models with longtermism
X Bi, D Chen, G Chen, S Chen, D Dai, C Deng, H Ding, K Dong, Q Du, ...
arXiv preprint arXiv:2401.02954, 2024
256 · 2024
Math-shepherd: Verify and reinforce llms step-by-step without human annotations
P Wang, L Li, Z Shao, RX Xu, D Dai, Y Li, D Chen, Y Wu, Z Sui
arXiv preprint arXiv:2312.08935, 2023
232* · 2023
Deepseek-v3 technical report
A Liu, B Feng, B Xue, B Wang, B Wu, C Lu, C Zhao, C Deng, C Zhang, ...
arXiv preprint arXiv:2412.19437, 2024
226 · 2024
Deepseekmoe: Towards ultimate expert specialization in mixture-of-experts language models
D Dai, C Deng, C Zhao, RX Xu, H Gao, D Chen, J Li, W Zeng, X Yu, Y Wu, ...
arXiv preprint arXiv:2401.06066, 2024
190 · 2024
Raise a child in large language model: Towards effective and generalizable fine-tuning
R Xu, F Luo, Z Zhang, C Tan, B Chang, S Huang, F Huang
arXiv preprint arXiv:2109.05687, 2021
190 · 2021
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
DeepSeek-AI
164 · 2024
Deepseek-coder-v2: Breaking the barrier of closed-source models in code intelligence
Q Zhu, D Guo, Z Shao, D Yang, P Wang, R Xu, Y Wu, Y Li, H Gao, S Ma, ...
arXiv preprint arXiv:2406.11931, 2024
147 · 2024
Document-level event extraction via heterogeneous graph-based interaction model with a tracker
R Xu, T Liu, L Li, B Chang
arXiv preprint arXiv:2105.14924, 2021
120 · 2021
Multimodal arxiv: A dataset for improving scientific comprehension of large vision-language models
L Li, Y Wang, R Xu, P Wang, X Feng, L Kong, Q Liu
arXiv preprint arXiv:2403.00231, 2024
83 · 2024
A two-stream AMR-enhanced model for document-level event argument extraction
R Xu, P Wang, T Liu, S Zeng, B Chang, Z Sui
arXiv preprint arXiv:2205.00241, 2022
64 · 2022
An enhanced span-based decomposition method for few-shot sequence labeling
P Wang, R Xu, T Liu, Q Zhou, Y Cao, B Chang, Z Sui
arXiv preprint arXiv:2109.13023, 2021
57 · 2021
Omni-math: A universal olympiad level mathematic benchmark for large language models
B Gao, F Song, Z Yang, Z Cai, Y Miao, Q Dong, L Li, C Ma, L Chen, R Xu, ...
arXiv preprint arXiv:2410.07985, 2024
43* · 2024
Making pre-trained language models end-to-end few-shot learners with contrastive prompt tuning
Z Xu, C Wang, M Qiu, F Luo, R Xu, S Huang, J Huang
Proceedings of the sixteenth ACM international conference on web search and …, 2023
31 · 2023
From dense to sparse: Contrastive pruning for better pre-trained language model compression
R Xu, F Luo, C Wang, B Chang, J Huang, S Huang, F Huang
Proceedings of the AAAI Conference on Artificial Intelligence 36 (10), 11547 …, 2022
27 · 2022
Behind the scenes: An exploration of trigger biases problem in few-shot event classification
P Wang, R Xu, T Liu, D Dai, B Chang, Z Sui
Proceedings of the 30th ACM International Conference on Information …, 2021
17 · 2021
ATP: AMRize then parse! enhancing AMR parsing with PseudoAMRs
L Chen, P Wang, R Xu, T Liu, Z Sui, B Chang
arXiv preprint arXiv:2204.08875, 2022
16 · 2022
Llm critics help catch bugs in mathematics: Towards a better mathematical verifier with natural language feedback
B Gao, Z Cai, R Xu, P Wang, C Zheng, R Lin, K Lu, D Liu, C Zhou, W Xiao, ...
arXiv preprint arXiv:2406.14024, 2024
13* · 2024
Articles 1–20