Fanny Yang
Title
Cited by
Year
Adversarial training can hurt generalization
A Raghunathan, SM Xie, F Yang, JC Duchi, P Liang
arXiv preprint arXiv:1906.06032, 2019
277 · 2019
Understanding and mitigating the tradeoff between robustness and accuracy
A Raghunathan, SM Xie, F Yang, J Duchi, P Liang
arXiv preprint arXiv:2002.10716, 2020
260 · 2020
Regularized learning for domain adaptation under label shifts
K Azizzadenesheli, A Liu, F Yang, A Anandkumar
arXiv preprint arXiv:1903.09734, 2019
228 · 2019
Early stopping for kernel boosting algorithms: A general analysis with localized complexities
Y Wei, F Yang, MJ Wainwright
Advances in Neural Information Processing Systems 30, 2017
92 · 2017
A framework for Multi-A(rmed)/B(andit) testing with online FDR control
F Yang, A Ramdas, KG Jamieson, MJ Wainwright
Advances in Neural Information Processing Systems 30, 2017
76 · 2017
Online control of the false discovery rate with decaying memory
A Ramdas, F Yang, MJ Wainwright, MI Jordan
Advances in Neural Information Processing Systems 30, 2017
74 · 2017
Statistical and computational guarantees for the Baum-Welch algorithm
F Yang, S Balakrishnan, MJ Wainwright
Journal of Machine Learning Research 18 (125), 1-53, 2017
71 · 2017
Invariance-inducing regularization using worst-case transformations suffices to boost accuracy and spatial robustness
F Yang, Z Wang, C Heinze-Deml
Advances in Neural Information Processing Systems 32, 2019
44 · 2019
Tight bounds for minimum ℓ1-norm interpolation of noisy data
G Wang, K Donhauser, F Yang
International Conference on Artificial Intelligence and Statistics, 10572-10602, 2022
38 · 2022
Phaseless signal recovery in infinite dimensional spaces using structured modulations
V Pohl, F Yang, H Boche
Journal of Fourier Analysis and Applications 20, 1212-1233, 2014
38 · 2014
How unfair is private learning?
A Sanyal, Y Hu, F Yang
Uncertainty in Artificial Intelligence, 1738-1748, 2022
28 · 2022
How rotational invariance of common kernels prevents generalization in high dimensions
K Donhauser, M Wu, F Yang
International Conference on Machine Learning, 2804-2814, 2021
27 · 2021
Fast rates for noisy interpolation require rethinking the effect of inductive bias
K Donhauser, N Ruggeri, S Stojanovic, F Yang
International Conference on Machine Learning, 5397-5428, 2022
25 · 2022
Self-supervised reinforcement learning with independently controllable subgoals
A Zadaianchuk, G Martius, F Yang
Conference on Robot Learning, 384-394, 2022
22 · 2022
Phase retrieval via structured modulations in Paley-Wiener spaces
F Yang, V Pohl, H Boche
arXiv preprint arXiv:1302.4258, 2013
22 · 2013
Phase retrieval from low-rate samples
V Pohl, H Boche, F Yang
Sampling Theory in Signal and Image Processing 14, 71-99, 2015
21 · 2015
Semi-supervised novelty detection using ensembles with regularized disagreement
A Tifrea, E Stavarache, F Yang
Uncertainty in Artificial Intelligence, 1939-1948, 2022
16 · 2022
Why adversarial training can hurt robust accuracy
J Clarysse, J Hörrmann, F Yang
arXiv preprint arXiv:2203.02006, 2022
16 · 2022
Margin-based sampling in high dimensions: When being active is less efficient than staying passive
A Tifrea, J Clarysse, F Yang
International Conference on Machine Learning, 34222-34262, 2023
14* · 2023
Interpolation can hurt robust generalization even when there is no noise
K Donhauser, A Tifrea, M Aerni, R Heckel, F Yang
Advances in Neural Information Processing Systems 34, 23465-23477, 2021
14 · 2021
Articles 1–20