Dzmitry Bahdanau
ServiceNow Research
Verified email at servicenow.com - Homepage
Title · Cited by · Year
Neural machine translation by jointly learning to align and translate
D Bahdanau
arXiv preprint arXiv:1409.0473, 2014
Cited by 36742 · 2014
Learning phrase representations using RNN encoder-decoder for statistical machine translation
K Cho
arXiv preprint arXiv:1406.1078, 2014
Cited by 32233 · 2014
On the Properties of Neural Machine Translation: Encoder-decoder Approaches
K Cho
arXiv preprint arXiv:1409.1259, 2014
Cited by 9472 · 2014
Attention-based models for speech recognition
JK Chorowski, D Bahdanau, D Serdyuk, K Cho, Y Bengio
Advances in neural information processing systems 28, 2015
Cited by 3337 · 2015
End-to-end attention-based large vocabulary speech recognition
D Bahdanau, J Chorowski, D Serdyuk, P Brakel, Y Bengio
2016 IEEE international conference on acoustics, speech and signal …, 2016
Cited by 1524 · 2016
Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv 2014
K Cho, B Van Merrienboer, C Gulcehre, D Bahdanau, F Bougares, ...
arXiv preprint arXiv:1406.1078, 2020
Cited by 1327 · 2020
Theano: A Python framework for fast computation of mathematical expressions
R Al-Rfou, G Alain, A Almahairi, C Angermueller, D Bahdanau, N Ballas, ...
arXiv e-prints, arXiv: 1605.02688, 2016
Cited by 1104* · 2016
Neural machine translation by jointly learning to align and translate. arXiv 2014
D Bahdanau, K Cho, Y Bengio
arXiv preprint arXiv:1409.0473, 2014
Cited by 864 · 2014
An actor-critic algorithm for sequence prediction
D Bahdanau, P Brakel, K Xu, A Goyal, R Lowe, J Pineau, A Courville, ...
arXiv preprint arXiv:1607.07086, 2016
Cited by 734 · 2016
StarCoder: may the source be with you!
R Li, LB Allal, Y Zi, N Muennighoff, D Kocetkov, C Mou, M Marone, C Akiki, ...
arXiv preprint arXiv:2305.06161, 2023
Cited by 684 · 2023
End-to-end continuous speech recognition using attention-based recurrent NN: First results
J Chorowski, D Bahdanau, K Cho, Y Bengio
arXiv preprint arXiv:1412.1602, 2014
Cited by 609 · 2014
BabyAI: First Steps Towards Grounded Language Learning With a Human In the Loop
M Chevalier-Boisvert, D Bahdanau, S Lahlou, L Willems, C Saharia, ...
arXiv preprint arXiv:1810.08272, 2018
Cited by 409* · 2018
PICARD: Parsing incrementally for constrained auto-regressive decoding from language models
T Scholak, N Schucher, D Bahdanau
arXiv preprint arXiv:2109.05093, 2021
Cited by 339 · 2021
The Stack: 3 TB of permissively licensed source code
D Kocetkov, R Li, LB Allal, J Li, C Mou, CM Ferrandis, Y Jernite, M Mitchell, ...
arXiv preprint arXiv:2211.15533, 2022
Cited by 234 · 2022
Blocks and fuel: Frameworks for deep learning
B Van Merriënboer, D Bahdanau, V Dumoulin, D Serdyuk, ...
arXiv preprint arXiv:1506.00619, 2015
Cited by 206 · 2015
SantaCoder: don't reach for the stars!
LB Allal, R Li, D Kocetkov, C Mou, C Akiki, CM Ferrandis, N Muennighoff, ...
arXiv preprint arXiv:2301.03988, 2023
Cited by 201 · 2023
Learning to understand goal specifications by modelling reward
D Bahdanau, F Hill, J Leike, E Hughes, A Hosseini, P Kohli, ...
arXiv preprint arXiv:1806.01946, 2018
Cited by 200* · 2018
Sequence Tutor: Conservative fine-tuning of sequence generation models with KL-control
N Jaques, S Gu, D Bahdanau, JM Hernández-Lobato, RE Turner, D Eck
International Conference on Machine Learning, 1645-1654, 2017
Cited by 191 · 2017
Systematic generalization: what is required and can it be learned?
D Bahdanau, S Murty, M Noukhovitch, TH Nguyen, H de Vries, A Courville
arXiv preprint arXiv:1811.12889, 2018
Cited by 190 · 2018
Evaluating the text-to-SQL capabilities of large language models
N Rajkumar, R Li, D Bahdanau
arXiv preprint arXiv:2204.00498, 2022
Cited by 121 · 2022
Articles 1–20