Thomas Unterthiner
Google DeepMind
Verified email at pm.me
Title · Cited by · Year
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
A Dosovitskiy, L Beyer, A Kolesnikov, D Weissenborn, X Zhai, ...
International Conference on Learning Representations (ICLR), 2021
Cited by 49033 · 2021
GANs trained by a two time-scale update rule converge to a local Nash equilibrium
M Heusel, H Ramsauer, T Unterthiner, B Nessler, S Hochreiter
Advances in Neural Information Processing Systems, 6626-6637, 2017
Cited by 14181 · 2017
Fast and accurate deep network learning by exponential linear units (ELUs)
DA Clevert, T Unterthiner, S Hochreiter
International Conference on Learning Representations (ICLR), 2016
Cited by 7595 · 2016
Self-normalizing neural networks
G Klambauer, T Unterthiner, A Mayr, S Hochreiter
Advances in Neural Information Processing Systems (NeurIPS), 2017
Cited by 3482 · 2017
MLP-Mixer: An All-MLP Architecture for Vision
I Tolstikhin, N Houlsby, A Kolesnikov, L Beyer, X Zhai, T Unterthiner, ...
Advances in Neural Information Processing Systems (NeurIPS), 2021
Cited by 2808 · 2021
Do Vision Transformers See Like Convolutional Neural Networks?
M Raghu, T Unterthiner, S Kornblith, C Zhang, A Dosovitskiy
Advances in Neural Information Processing Systems (NeurIPS), 2021
Cited by 1084 · 2021
DeepTox: toxicity prediction using deep learning
A Mayr, G Klambauer, T Unterthiner, S Hochreiter
Frontiers in Environmental Science 3, 80, 2016
Cited by 1017 · 2016
Object-Centric Learning with Slot Attention
F Locatello, D Weissenborn, T Unterthiner, A Mahendran, G Heigold, ...
Advances in Neural Information Processing Systems (NeurIPS), 2020
Cited by 819 · 2020
Large-scale comparison of machine learning methods for drug target prediction on ChEMBL
A Mayr, G Klambauer, T Unterthiner, M Steijaert, JK Wegner, ...
Chemical Science 9 (24), 5441-5451, 2018
Cited by 545 · 2018
Towards accurate generative models of video: A new metric & challenges
T Unterthiner, S Van Steenkiste, K Kurach, R Marinier, M Michalski, ...
arXiv preprint arXiv:1812.01717, 2018
Cited by 504 · 2018
Understanding Robustness of Transformers for Image Classification
S Bhojanapalli, A Chakrabarti, D Glasner, D Li, T Unterthiner, A Veit
International Conference on Computer Vision (ICCV), 2021
Cited by 449 · 2021
Fréchet ChemNet distance: a metric for generative models for molecules in drug discovery
K Preuer, P Renz, T Unterthiner, S Hochreiter, G Klambauer
Journal of Chemical Information and Modeling 58 (9), 1736-1741, 2018
Cited by 370 · 2018
Speeding up Semantic Segmentation for Autonomous Driving
M Treml, J Arjona-Medina, T Unterthiner, R Durgesh, F Friedmann, ...
Workshop on Machine Learning for Intelligent Transportation Systems (NIPS 2016), 2016
Cited by 352 · 2016
RUDDER: Return decomposition for delayed rewards
JA Arjona-Medina, M Gillhofer, M Widrich, T Unterthiner, J Brandstetter, ...
Advances in Neural Information Processing Systems (NeurIPS), 2018
Cited by 259 · 2018
Deep Learning as an Opportunity in Virtual Screening
T Unterthiner, A Mayr, G Klambauer, M Steijaert, J Wegner, ...
Deep Learning and Representation Learning Workshop (NIPS 2014), 2014
Cited by 248 · 2014
Interpretable deep learning in drug discovery
K Preuer, G Klambauer, F Rippmann, S Hochreiter, T Unterthiner
Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 331-345, 2019
Cited by 138 · 2019
Toxicity prediction using deep learning
T Unterthiner, A Mayr, G Klambauer, S Hochreiter
arXiv preprint arXiv:1503.01445, 2015
Cited by 127 · 2015
FVD: A new metric for video generation
T Unterthiner, S van Steenkiste, K Kurach, R Marinier, M Michalski, ...
ICLR Workshop on Deep Generative Models for Highly Structured Data, 2019
Cited by 126 · 2019
Using transcriptomics to guide lead optimization in drug discovery projects: Lessons learned from the QSTAR project
B Verbist, G Klambauer, L Vervoort, W Talloen, Z Shkedy, O Thas, ...
Drug Discovery Today 20 (5), 505-513, 2015
Cited by 110 · 2015
Differentiable Patch Selection for Image Recognition
JB Cordonnier, A Mahendran, A Dosovitskiy, D Weissenborn, J Uszkoreit, ...
Computer Vision and Pattern Recognition (CVPR), 2351-2360, 2021
Cited by 97 · 2021
Articles 1–20