Yunzhen Feng
Verified email at nyu.edu
Title
Cited by
Year
A Tale of Tails: Model Collapse as a Change of Scaling Laws
E Dohmatob, Y Feng, P Yang, F Charton, J Kempe
Proceedings of the International Conference on Machine Learning (ICML), 2024
Cited by 34 · 2024
Enhancing Certified Robustness of Smoothed Classifiers via Weighted Model Ensembling
C Liu, Y Feng, R Wang, B Dong
ICML 2021 Workshop on Adversarial Machine Learning, 2021
Cited by 19* · 2020
Model Collapse Demystified: The Case of Regression
E Dohmatob, Y Feng, J Kempe
Advances in Neural Information Processing Systems (NeurIPS), 2024
Cited by 18 · 2024
Embarrassingly Simple Dataset Distillation
Y Feng, SR Vedantam, J Kempe
The Twelfth International Conference on Learning Representations, 2023
Cited by 12 · 2023
Transferred Discrepancy: Quantifying the Difference Between Representations
Y Feng, R Zhai, D He, L Wang, B Dong
arXiv preprint arXiv:2007.12446, 2020
Cited by 12 · 2020
Do Efficient Transformers Really Save Computation?
K Yang, J Ackermann, Z He, G Feng, B Zhang, Y Feng, Q Ye, D He, ...
Proceedings of the International Conference on Machine Learning (ICML), 2024
Cited by 11 · 2024
Beyond Model Collapse: Scaling Up with Synthesized Data Requires Reinforcement
Y Feng, E Dohmatob, P Yang, F Charton, J Kempe
arXiv preprint arXiv:2406.07515, 2024
Cited by 7 · 2024
Strong Model Collapse
E Dohmatob, Y Feng, A Subramonian, J Kempe
arXiv preprint arXiv:2410.04840, 2024
Cited by 2 · 2024
Attacking Bayes: Are Bayesian Neural Networks Inherently Robust?
Y Feng, TGJ Rudner, N Tsilivis, J Kempe
Transactions on Machine Learning Research (TMLR), 2023
Cited by 1* · 2023
Articles 1–9