Sheng Shen
Verified email at berkeley.edu - Homepage
Title · Cited by · Year
Multitask prompted training enables zero-shot task generalization
V Sanh, A Webson, C Raffel, SH Bach, L Sutawika, Z Alyafeai, A Chaffin, ...
ICLR 2022, 2021
Cited by 1675 · 2021
BLOOM: A 176B-parameter open-access multilingual language model
T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
Cited by 1604 · 2023
The Llama 3 herd of models
A Dubey, A Jauhri, A Pandey, A Kadian, A Al-Dahle, A Letman, A Mathur, ...
arXiv preprint arXiv:2407.21783, 2024
Cited by 1109 · 2024
Crosslingual generalization through multitask finetuning
N Muennighoff, T Wang, L Sutawika, A Roberts, S Biderman, TL Scao, ...
ACL 2023, 2022
Cited by 610 · 2022
Q-BERT: Hessian based ultra low precision quantization of BERT
S Shen, Z Dong, J Ye, L Ma, Z Yao, A Gholami, MW Mahoney, K Keutzer
AAAI 2020, 2019
Cited by 590 · 2019
How Much Can CLIP Benefit Vision-and-Language Tasks?
S Shen*, LH Li*, H Tan, M Bansal, A Rohrbach, KW Chang, Z Yao, ...
ICLR 2022, 2021
Cited by 423 · 2021
AgentBench: Evaluating LLMs as agents
X Liu, H Yu, H Zhang, Y Xu, X Lei, H Lai, Y Gu, H Ding, K Men, K Yang, ...
arXiv preprint arXiv:2308.03688, 2023
Cited by 322* · 2023
Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
Z Li*, E Wallace*, S Shen*, K Lin*, K Keutzer, D Klein, JE Gonzalez
ICML 2020, 2020
Cited by 311 · 2020
ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning
Z Yao, A Gholami, S Shen, K Keutzer, MW Mahoney
AAAI 2021, 2020
Cited by 273 · 2020
LLaVA-NeXT: Improved reasoning, OCR, and world knowledge
H Liu, C Li, Y Li, B Li, Y Zhang, S Shen, YJ Lee
Cited by 201 · 2024
Aligning large multimodal models with factually augmented rlhf
Z Sun*, S Shen*, S Cao*, H Liu, C Li, Y Shen, C Gan, LY Gui, YX Wang, ...
arXiv preprint arXiv:2309.14525, 2023
Cited by 175 · 2023
Learned token pruning for transformers
S Kim*, S Shen*, D Thorsley, A Gholami, W Kwon, J Hassoun, K Keutzer
KDD 2022, 2021
Cited by 138 · 2021
Poisoning Language Models During Instruction Tuning
A Wan*, E Wallace*, S Shen, D Klein
ICML 2023, 2023
Cited by 137 · 2023
SqueezeLLM: Dense-and-Sparse Quantization
S Kim*, C Hooper*, A Gholami*, Z Dong, X Li, S Shen, MW Mahoney, ...
arXiv preprint arXiv:2306.07629, 2023
Cited by 135 · 2023
An annotated dataset of literary entities
D Bamman, S Popat, S Shen
NAACL 2019, 2019
Cited by 108 · 2019
What Language Model to Train if You Have One Million GPU Hours?
T Le Scao, T Wang, D Hesslow, L Saulnier, S Bekman, MS Bari, ...
EMNLP 2022, 2022
Cited by 104 · 2022
PowerNorm: Rethinking batch normalization in transformers
S Shen, Z Yao, A Gholami, M Mahoney, K Keutzer
ICML 2020, 2020
Cited by 89 · 2020
Ermes: Emoji-Powered Representation Learning for Cross-Lingual Sentiment Classification
Z Chen*, S Shen*, Z Hu, X Lu, Q Mei, X Liu
WWW 2019, 2018
Cited by 89* · 2018
RAFT: Adapting language model to domain specific RAG
T Zhang, SG Patil, N Jain, S Shen, M Zaharia, I Stoica, JE Gonzalez
arXiv preprint arXiv:2403.10131, 2024
Cited by 88 · 2024
K-LITE: Learning transferable visual models with external knowledge
S Shen, C Li, X Hu, Y Xie, J Yang, P Zhang, A Rohrbach, Z Gan, L Wang, ...
NeurIPS 2022, 2022
Cited by 88 · 2022
Articles 1–20