Eric Nalisnick
Assistant Professor, Johns Hopkins University
Verified email at jhu.edu - Homepage
Title · Cited by · Year
Normalizing flows for probabilistic modeling and inference
G Papamakarios, E Nalisnick, DJ Rezende, S Mohamed, ...
Journal of Machine Learning Research 22 (57), 1-64, 2021
Cited by 2023 · 2021
Do deep generative models know what they don't know?
E Nalisnick, A Matsukawa, YW Teh, D Gorur, B Lakshminarayanan
arXiv preprint arXiv:1810.09136, 2018
Cited by 877 · 2018
Detecting out-of-distribution inputs to deep generative models using typicality
E Nalisnick, A Matsukawa, YW Teh, B Lakshminarayanan
arXiv preprint arXiv:1906.02994, 2019
Cited by 216 · 2019
Improving document ranking with dual word embeddings
E Nalisnick, B Mitra, N Craswell, R Caruana
Proceedings of the 25th international conference companion on world wide web …, 2016
Cited by 211 · 2016
Stick-breaking variational autoencoders
E Nalisnick, P Smyth
arXiv preprint arXiv:1605.06197, 2016
Cited by 206 · 2016
A dual embedding space model for document ranking
B Mitra, E Nalisnick, N Craswell, R Caruana
arXiv preprint arXiv:1602.01137, 2016
Cited by 184 · 2016
Bayesian batch active learning as sparse subset approximation
R Pinsler, J Gordon, E Nalisnick, JM Hernández-Lobato
Advances in neural information processing systems 32, 2019
Cited by 147 · 2019
Bayesian deep learning via subnetwork inference
E Daxberger, E Nalisnick, JU Allingham, J Antorán, ...
International Conference on Machine Learning, 2510-2521, 2021
Cited by 132* · 2021
Character-to-character sentiment analysis in Shakespeare’s plays
ET Nalisnick, HS Baird
Proceedings of the 51st Annual Meeting of the Association for Computational …, 2013
Cited by 125 · 2013
Approximate inference for deep latent Gaussian mixtures
E Nalisnick, L Hertel, P Smyth
NIPS Workshop on Bayesian Deep Learning 2, 131, 2016
Cited by 115 · 2016
Hybrid models with deep and invertible features
E Nalisnick, A Matsukawa, YW Teh, D Gorur, B Lakshminarayanan
International Conference on Machine Learning, 4723-4732, 2019
Cited by 100 · 2019
Do Bayesian neural networks need to be fully stochastic?
M Sharma, S Farquhar, E Nalisnick, T Rainforth
International Conference on Artificial Intelligence and Statistics, 7694-7722, 2023
Cited by 60 · 2023
Calibrated learning to defer with one-vs-all classifiers
R Verma, E Nalisnick
International Conference on Machine Learning, 22184-22202, 2022
Cited by 54 · 2022
Dropout as a structured shrinkage prior
E Nalisnick, JM Hernández-Lobato, P Smyth
International Conference on Machine Learning, 4712-4722, 2019
Cited by 48 · 2019
Extracting sentiment networks from Shakespeare's plays
ET Nalisnick, HS Baird
2013 12th International Conference on Document Analysis and Recognition, 758-762, 2013
Cited by 48 · 2013
Learning to defer to multiple experts: Consistent surrogate losses, confidence calibration, and conformal ensembles
R Verma, D Barrejón, E Nalisnick
International Conference on Artificial Intelligence and Statistics, 11415-11434, 2023
Cited by 39 · 2023
On priors for Bayesian neural networks
ET Nalisnick
University of California, Irvine, 2018
Cited by 39 · 2018
Adapting the linearised Laplace model evidence for modern deep learning
J Antorán, D Janz, JU Allingham, E Daxberger, RR Barbano, E Nalisnick, ...
International Conference on Machine Learning, 796-821, 2022
Cited by 36 · 2022
A scale mixture perspective of multiplicative noise in neural networks
E Nalisnick, A Anandkumar, P Smyth
arXiv preprint arXiv:1506.03208, 2015
Cited by 27 · 2015
Hate speech criteria: A modular approach to task-specific hate speech definitions
U Khurana, I Vermeulen, E Nalisnick, M Van Noorloos, A Fokkens
arXiv preprint arXiv:2206.15455, 2022
Cited by 26 · 2022
Articles 1–20