Gilad Yehudai
Postdoctoral Associate, New York University
Verified email at weizmann.ac.il

Title · Cited by · Year
Proving the lottery ticket hypothesis: Pruning is all you need
E Malach, G Yehudai, S Shalev-Shwartz, O Shamir
International Conference on Machine Learning, 6682-6691, 2020
Cited by 331 · 2020
On the power and limitations of random features for understanding neural networks
G Yehudai, O Shamir
Advances in Neural Information Processing Systems, 2019
Cited by 220 · 2019
Reconstructing training data from trained neural networks
N Haim, G Vardi, G Yehudai, O Shamir, M Irani
Advances in Neural Information Processing Systems 35, 22911-22924, 2022
Cited by 164 · 2022
From Local Structures to Size Generalization in Graph Neural Networks
G Yehudai, E Fetaya, E Meirom, G Chechik, H Maron
arXiv preprint arXiv:2010.08853, 2020
Cited by 150 · 2020
Learning a single neuron with gradient methods
G Yehudai, O Shamir
Conference on Learning Theory, 3756-3786, 2020
Cited by 87 · 2020
The effects of mild over-parameterization on the optimization landscape of shallow ReLU neural networks
IM Safran, G Yehudai, O Shamir
Conference on Learning Theory, 3889-3934, 2021
Cited by 40 · 2021
On the optimal memorization power of ReLU neural networks
G Vardi, G Yehudai, O Shamir
arXiv preprint arXiv:2110.03187, 2021
Cited by 31 · 2021
Gradient methods provably converge to non-robust networks
G Vardi, G Yehudai, O Shamir
Advances in Neural Information Processing Systems 35, 20921-20932, 2022
Cited by 29 · 2022
From tempered to benign overfitting in ReLU neural networks
G Kornowski, G Yehudai, O Shamir
Advances in Neural Information Processing Systems 36, 58011-58046, 2023
Cited by 26 · 2023
Learning a single neuron with bias using gradient descent
G Vardi, G Yehudai, O Shamir
Advances in Neural Information Processing Systems 34, 28690-28700, 2021
Cited by 24 · 2021
The connection between approximation, depth separation and learnability in neural networks
E Malach, G Yehudai, S Shalev-Shwartz, O Shamir
Conference on Learning Theory, 3265-3295, 2021
Cited by 22 · 2021
Deconstructing data reconstruction: Multiclass, weight decay and general losses
G Buzaglo, N Haim, G Yehudai, G Vardi, Y Oz, Y Nikankin, M Irani
Advances in Neural Information Processing Systems 36, 51515-51535, 2023
Cited by 20 · 2023
Width is less important than depth in ReLU neural networks
G Vardi, G Yehudai, O Shamir
Conference on Learning Theory, 1249-1281, 2022
Cited by 19 · 2022
When Can Transformers Count to n?
G Yehudai, H Kaplan, A Ghandeharioun, M Geva, A Globerson
arXiv preprint arXiv:2407.15160, 2024
Cited by 9 · 2024
Generating collection rules based on security rules
NA Arbel, L Lazar, G Yehudai
US Patent 11,330,016, 2022
Cited by 9 · 2022
On size generalization in graph neural networks
G Yehudai, E Fetaya, E Meirom, G Chechik, H Maron
Cited by 6 · 2020
Adversarial examples exist in two-layer ReLU networks for low dimensional linear subspaces
O Melamed, G Yehudai, G Vardi
Advances in Neural Information Processing Systems 36, 5028-5049, 2023
Cited by 5* · 2023
Reconstructing training data from real world models trained with transfer learning
Y Oz, G Yehudai, G Vardi, I Antebi, M Irani, N Haim
arXiv preprint arXiv:2407.15845, 2024
Cited by 3 · 2024
On the benefits of rank in attention layers
N Amsel, G Yehudai, J Bruna
arXiv preprint arXiv:2407.16153, 2024
Cited by 2 · 2024
Reconstructing Training Data from Multiclass Neural Networks
G Buzaglo, N Haim, G Yehudai, G Vardi, M Irani
arXiv preprint arXiv:2305.03350, 2023
Cited by 1 · 2023
Articles 1–20