Avi Schwarzschild
Verified email at cmu.edu - Homepage
Title
Cited by
Year
Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses
M Goldblum, D Tsipras, C Xie, X Chen, A Schwarzschild, D Song, ...
IEEE Transactions on Pattern Analysis and Machine Intelligence 45 (2), 1563-1580, 2022
Cited by 341* · 2022
Saint: Improved neural networks for tabular data via row attention and contrastive pre-training
G Somepalli, M Goldblum, A Schwarzschild, CB Bruss, T Goldstein
arXiv preprint arXiv:2106.01342, 2021
Cited by 330* · 2021
Baseline defenses for adversarial attacks against aligned language models
N Jain, A Schwarzschild, Y Wen, G Somepalli, J Kirchenbauer, P Chiang, ...
arXiv preprint arXiv:2309.00614, 2023
Cited by 251* · 2023
Universal guidance for diffusion models
A Bansal, HM Chu, A Schwarzschild, S Sengupta, M Goldblum, J Geiping, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 196* · 2023
Just how toxic is data poisoning? a unified benchmark for backdoor and data poisoning attacks
A Schwarzschild, M Goldblum, A Gupta, JP Dickerson, T Goldstein
International Conference on Machine Learning (ICML) 2021, 2020
Cited by 185 · 2020
A Cookbook of Self-Supervised Learning (2023)
R Balestriero, M Ibrahim, V Sobal, A Morcos, S Shekhar, T Goldstein, ...
arXiv preprint arXiv:2304.12210, 2023
Cited by 100* · 2023
Tofu: A task of fictitious unlearning for llms
P Maini, Z Feng, A Schwarzschild, ZC Lipton, JZ Kolter
arXiv preprint arXiv:2401.06121, 2024
Cited by 92* · 2024
Transfer learning with deep tabular models
R Levin, V Cherepanova, A Schwarzschild, A Bansal, CB Bruss, ...
arXiv preprint arXiv:2206.15306, 2022
Cited by 72* · 2022
Can you learn an algorithm? generalizing from easy to hard problems with recurrent networks
A Schwarzschild, E Borgnia, A Gupta, F Huang, U Vishkin, M Goldblum, ...
Advances in Neural Information Processing Systems 34, 6695-6706, 2021
Cited by 72 · 2021
Neftune: Noisy embeddings improve instruction finetuning
N Jain, P Chiang, Y Wen, J Kirchenbauer, HM Chu, G Somepalli, ...
arXiv preprint arXiv:2310.05914, 2023
Cited by 63* · 2023
Truth or backpropaganda? An empirical investigation of deep learning theory
M Goldblum, J Geiping, A Schwarzschild, M Moeller, T Goldstein
International Conference on Learning Representations (ICLR) 2020, 2019
Cited by 44* · 2019
Spotting llms with binoculars: Zero-shot detection of machine-generated text
A Hans, A Schwarzschild, V Cherepanova, H Kazemi, A Saha, ...
arXiv preprint arXiv:2401.12070, 2024
Cited by 42* · 2024
End-to-end Algorithm Synthesis with Recurrent Networks: Logical Extrapolation Without Overthinking
A Bansal, A Schwarzschild, E Borgnia, Z Emam, F Huang, M Goldblum, ...
36th Conference on Neural Information Processing Systems (NeurIPS 2022), 2022
Cited by 41* · 2022
Adversarial attacks on machine learning systems for high-frequency trading
M Goldblum, A Schwarzschild, A Patel, T Goldstein
Proceedings of the Second ACM International Conference on AI in Finance, 1-9, 2021
Cited by 34* · 2021
Transformers Can Do Arithmetic with the Right Embeddings
S McLeish, A Bansal, A Stein, N Jain, J Kirchenbauer, BR Bartoldson, ...
arXiv preprint arXiv:2405.17399, 2024
Cited by 18* · 2024
Rethinking llm memorization through the lens of adversarial compression
A Schwarzschild, Z Feng, P Maini, ZC Lipton, JZ Kolter
arXiv preprint arXiv:2404.15146, 2024
Cited by 18 · 2024
Saint: Improved neural networks for tabular data via row attention and contrastive pre-training
G Somepalli, M Goldblum, A Schwarzschild, CB Bruss, T Goldstein
arXiv preprint arXiv:2106.01342, 2021
Cited by 12 · 2021
The Uncanny Similarity of Recurrence and Depth
A Schwarzschild, A Gupta, M Goldblum, T Goldstein
International Conference on Learning Representations (ICLR) 2022, 2022
Cited by 10 · 2022
Datasets for studying generalization from easy to hard examples
A Schwarzschild, E Borgnia, A Gupta, A Bansal, Z Emam, F Huang, ...
arXiv preprint arXiv:2108.06011, 2021
Cited by 10 · 2021
MetaBalance: high-performance neural networks for class-imbalanced data
A Bansal, M Goldblum, V Cherepanova, A Schwarzschild, CB Bruss, ...
arXiv preprint arXiv:2106.09643, 2021
Cited by 9 · 2021
Articles 1–20