Xin Jiang
Title / Cited by / Year
TinyBERT: Distilling BERT for natural language understanding
X Jiao, Y Yin, L Shang, X Jiang, X Chen, L Li, F Wang, Q Liu
arXiv preprint arXiv:1909.10351, 2019
Cited by 1913 (2019)
ERNIE: Enhanced Language Representation with Informative Entities
Z Zhang, X Han, Z Liu, X Jiang, M Sun, Q Liu
arXiv preprint arXiv:1905.07129, 2019
Cited by 1744 (2019)
FILIP: Fine-grained Interactive Language-Image Pre-Training
L Yao, R Huang, L Hou, G Lu, M Niu, H Xu, X Liang, Z Li, X Jiang, C Xu
arXiv preprint arXiv:2111.07783, 2021
Cited by 548 (2021)
DynaBERT: Dynamic BERT with adaptive width and depth
L Hou, Z Huang, L Shang, X Jiang, X Chen, Q Liu
Advances in Neural Information Processing Systems 33, 2020
Cited by 311 (2020)
Paraphrase generation with deep reinforcement learning
Z Li, X Jiang, L Shang, H Li
arXiv preprint arXiv:1711.00279, 2017
Cited by 267 (2017)
Neural generative question answering
J Yin, X Jiang, Z Lu, L Shang, H Li, X Li
arXiv preprint arXiv:1512.01337, 2015
Cited by 264 (2015)
Aligning Large Language Models with Human: A Survey
Y Wang, W Zhong, L Li, F Mi, X Zeng, W Huang, L Shang, X Jiang, Q Liu
arXiv preprint arXiv:2307.12966, 2023
Cited by 247 (2023)
BinaryBERT: Pushing the Limit of BERT Quantization
H Bai, W Zhang, L Hou, L Shang, J Jin, X Jiang, Q Liu, M Lyu, I King
arXiv preprint arXiv:2012.15701, 2020
Cited by 226 (2020)
PanGu-α: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation
W Zeng, X Ren, T Su, H Wang, Y Liao, Z Wang, X Jiang, ZZ Yang, K Wang, ...
arXiv preprint arXiv:2104.12369, 2021
Cited by 225 (2021)
A ranking approach to keyphrase extraction
X Jiang, Y Hu, H Li
Proceedings of the 32nd international ACM SIGIR conference on Research and …, 2009
Cited by 213 (2009)
TernaryBERT: Distillation-aware Ultra-low Bit BERT
W Zhang, L Hou, Y Yin, L Shang, X Chen, X Jiang, Q Liu
arXiv preprint arXiv:2009.12812, 2020
Cited by 204 (2020)
Affective neural response generation
N Asghar, P Poupart, J Hoey, X Jiang, L Mou
European Conference on Information Retrieval, 154-166, 2018
Cited by 193 (2018)
Integrating Graph Contextualized Knowledge into Pre-trained Language Models
B He, D Zhou, J Xiao, Q Liu, NJ Yuan, T Xu
arXiv preprint arXiv:1912.00147, 2019
Cited by 165 (2019)
On position embeddings in BERT
B Wang, L Shang, C Lioma, X Jiang, H Yang, Q Liu, JG Simonsen
International Conference on Learning Representations, 2020
Cited by 145 (2020)
NEZHA: Neural Contextualized Representation for Chinese Language Understanding
J Wei, X Ren, X Li, W Huang, Y Liao, Y Wang, J Lin, X Jiang, X Chen, ...
arXiv preprint arXiv:1909.00204, 2019
Cited by 133 (2019)
SynCoBERT: Syntax-Guided Multi-Modal Contrastive Pre-Training for Code Representation
X Wang, Y Wang, F Mi, P Zhou, Y Wan, X Liu, L Li, H Wu, J Liu, ...
Cited by 120 (2021)
Generate & Rank: A Multi-task Framework for Math Word Problems
J Shen, Y Yin, L Li, L Shang, X Jiang, M Zhang, Q Liu
arXiv preprint arXiv:2109.03034, 2021
Cited by 112 (2021)
Wukong: A 100 million large-scale Chinese cross-modal pre-training benchmark
J Gu, X Meng, G Lu, L Hou, M Niu, X Liang, L Yao, R Huang, W Zhang, ...
Advances in Neural Information Processing Systems 35, 26418-26431, 2022
Cited by 106 (2022)
Decomposable Neural Paraphrase Generation
Z Li, X Jiang, L Shang, Q Liu
arXiv preprint arXiv:1906.09741, 2019
Cited by 99 (2019)
Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation
W Dai, L Hou, L Shang, X Jiang, Q Liu, P Fung
arXiv preprint arXiv:2203.06386, 2022
Cited by 95 (2022)