Tianle Cai
Verified email at princeton.edu - Homepage
Title · Cited by · Year
Do Transformers Really Perform Badly for Graph Representation?
C Ying, T Cai, S Luo, S Zheng, G Ke, D He, Y Shen, TY Liu
NeurIPS 2021, arXiv preprint arXiv:2106.05234, 2021
Cited by 1288 · 2021
GraphNorm: A principled approach to accelerating graph neural network training
T Cai, S Luo, K Xu, D He, T Liu, L Wang
ICML 2021, arXiv preprint arXiv:2009.03294, 2020
Cited by 189 · 2020
Adversarially robust generalization just requires more unlabeled data
R Zhai, T Cai, D He, C Dan, K He, J Hopcroft, L Wang
arXiv preprint arXiv:1906.00555, 2019
Cited by 171 · 2019
Medusa: Simple LLM inference acceleration framework with multiple decoding heads
T Cai, Y Li, Z Geng, H Peng, JD Lee, D Chen, T Dao
arXiv preprint arXiv:2401.10774, 2024
Cited by 150* · 2024
Convergence of adversarial training in overparametrized neural networks
R Gao, T Cai, H Li, CJ Hsieh, L Wang, JD Lee
NeurIPS 2019 Spotlight, arXiv preprint arXiv:1906.07916, 13029-13040, 2019
Cited by 147 · 2019
Large language models as tool makers
T Cai, X Wang, T Ma, X Chen, D Zhou
ICLR 2024, arXiv preprint arXiv:2305.17126, 2023
Cited by 137 · 2023
Towards a Theoretical Framework of Out-of-Distribution Generalization
H Ye, C Xie, T Cai, R Li, Z Li, L Wang
NeurIPS 2021, arXiv preprint arXiv:2106.04496, 2021
Cited by 112 · 2021
What Makes Convolutional Models Great on Long Sequence Modeling?
Y Li, T Cai, Y Zhang, D Chen, D Dey
ICLR 2023, arXiv preprint arXiv:2210.09298, 2022
Cited by 97 · 2022
Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot
J Su, Y Chen, T Cai, T Wu, R Gao, L Wang, JD Lee
NeurIPS 2020, arXiv preprint arXiv:2009.11094, 2020
Cited by 88 · 2020
Towards Certifying ℓ∞ Robustness using Neural Networks with ℓ∞-dist Neurons
B Zhang, T Cai, Z Lu, D He, L Wang
ICML 2021, arXiv preprint arXiv:2102.05363, 12368-12379, 2021
Cited by 68* · 2021
Gram-Gauss-Newton Method: Learning Overparameterized Neural Networks for Regression Problems
T Cai, R Gao, J Hou, S Chen, D Wang, D He, Z Zhang, L Wang
NeurIPS 2019 Beyond First Order Methods in ML Workshop, arXiv preprint arXiv …, 2019
Cited by 67 · 2019
Locally differentially private (contextual) bandits learning
K Zheng, T Cai, W Huang, Z Li, L Wang
NeurIPS 2020, arXiv preprint arXiv:2006.00701, 2020
Cited by 61 · 2020
REST: Retrieval-based speculative decoding
Z He, Z Zhong, T Cai, JD Lee, D He
NAACL 2024, arXiv preprint arXiv:2311.08252, 2023
Cited by 59 · 2023
A Theory of Label Propagation for Subpopulation Shift
T Cai, R Gao, JD Lee, Q Lei
ICML 2021, arXiv preprint arXiv:2102.11203, 2021
Cited by 54 · 2021
Stable, Fast and Accurate: Kernelized Attention with Relative Positional Encoding
S Luo, S Li, T Cai, D He, D Peng, S Zheng, G Ke, L Wang, TY Liu
NeurIPS 2021, arXiv preprint arXiv:2106.12566, 2021
Cited by 48 · 2021
SnapKV: LLM knows what you are looking for before generation
Y Li, Y Huang, B Yang, B Venkitesh, A Locatelli, H Ye, T Cai, P Lewis, ...
NeurIPS 2024, arXiv preprint arXiv:2404.14469, 2024
Cited by 37 · 2024
DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models
M Li, T Cai, J Cao, Q Zhang, H Cai, J Bai, Y Jia, MY Liu, K Li, S Han
CVPR 2024, arXiv preprint arXiv:2402.19481, 2024
Cited by 24 · 2024
Defective Convolutional Networks
T Luo, T Cai, M Zhang, S Chen, D He, L Wang
arXiv preprint arXiv:1911.08432, 2019
Cited by 24* · 2019
Reward collapse in aligning large language models
Z Song, T Cai, JD Lee, WJ Su
arXiv preprint arXiv:2305.17608, 2023
Cited by 23 · 2023
JetMoE: Reaching Llama2 performance with 0.1M dollars
Y Shen, Z Guo, T Cai, Z Qin
arXiv preprint arXiv:2404.07413, 2024
Cited by 22 · 2024
Articles 1–20