Tianyu Gao
Verified email at princeton.edu - Homepage
Title
Cited by
Year
SimCSE: Simple contrastive learning of sentence embeddings
T Gao, X Yao, D Chen
arXiv preprint arXiv:2104.08821, 2021
Cited by 3250 · 2021
Making pre-trained language models better few-shot learners
T Gao, A Fisch, D Chen
arXiv preprint arXiv:2012.15723, 2020
Cited by 1855 · 2020
KEPLER: A unified model for knowledge embedding and pre-trained language representation
X Wang, T Gao, Z Zhu, Z Zhang, Z Liu, J Li, J Tang
Transactions of the Association for Computational Linguistics 9, 176-194, 2021
Cited by 716 · 2021
Hybrid attention-based prototypical networks for noisy few-shot relation classification
T Gao, X Han, Z Liu, M Sun
Proceedings of the AAAI conference on artificial intelligence 33 (01), 6407-6414, 2019
Cited by 411 · 2019
FewRel 2.0: Towards More Challenging Few-Shot Relation Classification
T Gao, X Han, H Zhu, Z Liu, P Li, M Sun, J Zhou
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
Cited by 283 · 2019
Enabling large language models to generate text with citations
T Gao, H Yen, J Yu, D Chen
arXiv preprint arXiv:2305.14627, 2023
Cited by 208 · 2023
Learning from context or names? an empirical study on neural relation extraction
H Peng, T Gao, X Han, Y Lin, P Li, Z Liu, M Sun, J Zhou
arXiv preprint arXiv:2010.01923, 2020
Cited by 206 · 2020
OpenNRE: An Open and Extensible Toolkit for Neural Relation Extraction
X Han, T Gao, Y Yao, D Ye, Z Liu, M Sun
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
Cited by 189 · 2019
More data, more relations, more context and more openness: A review and outlook for relation extraction
X Han, T Gao, Y Lin, H Peng, Y Yang, C Xiao, Z Liu, P Li, M Sun, J Zhou
arXiv preprint arXiv:2004.03186, 2020
Cited by 169 · 2020
Sheared LLaMA: Accelerating language model pre-training via structured pruning
M Xia, T Gao, Z Zeng, D Chen
arXiv preprint arXiv:2310.06694, 2023
Cited by 163 · 2023
Should you mask 15% in masked language modeling?
A Wettig, T Gao, Z Zhong, D Chen
arXiv preprint arXiv:2202.08005, 2022
Cited by 156 · 2022
Fine-tuning language models with just forward passes
S Malladi, T Gao, E Nichani, A Damian, JD Lee, D Chen, S Arora
Advances in Neural Information Processing Systems 36, 53038-53075, 2023
Cited by 140 · 2023
Few-shot relation extraction via bayesian meta-learning on relation graphs
M Qu, T Gao, LP Xhonneux, J Tang
International conference on machine learning, 7867-7876, 2020
Cited by 135 · 2020
Continual relation learning via episodic memory activation and reconsolidation
X Han, Y Dai, T Gao, Y Lin, Z Liu, P Li, M Sun, J Zhou
Proceedings of the 58th Annual Meeting of the Association for Computational …, 2020
Cited by 100 · 2020
Evaluating large language models at evaluating instruction following
Z Zeng, J Yu, T Gao, Y Meng, T Goyal, D Chen
arXiv preprint arXiv:2310.07641, 2023
Cited by 98 · 2023
Neural snowball for few-shot relation learning
T Gao, X Han, R Xie, Z Liu, F Lin, L Lin, M Sun
Proceedings of the AAAI conference on artificial intelligence 34 (05), 7772-7779, 2020
Cited by 85 · 2020
What In-Context Learning “Learns” In-Context: Disentangling Task Recognition and Task Learning
J Pan, T Gao, H Chen, D Chen
Findings of the Association for Computational Linguistics: ACL 2023, 2023
Cited by 79 · 2023
Recovering private text in federated learning of language models
S Gupta, Y Huang, Z Zhong, T Gao, K Li, D Chen
Advances in neural information processing systems 35, 8130-8143, 2022
Cited by 77 · 2022
Meta-information guided meta-learning for few-shot relation classification
B Dong, Y Yao, R Xie, T Gao, X Han, Z Liu, F Lin, L Lin, M Sun
Proceedings of the 28th international conference on computational …, 2020
Cited by 46 · 2020
The cringe loss: Learning what language not to model
L Adolphs, T Gao, J Xu, K Shuster, S Sukhbaatar, J Weston
arXiv preprint arXiv:2211.05826, 2022
Cited by 33 · 2022
Articles 1–20