Improving transformer-based end-to-end speech recognition with connectionist temporal classification and language model integration T Nakatani Proc. INTERSPEECH 2019, 1408-1412, 2019 | 270 | 2019 |
The NTT CHiME-3 system: Advances in speech enhancement and recognition for mobile multi-microphone devices T Yoshioka, N Ito, M Delcroix, A Ogawa, K Kinoshita, M Fujimoto, C Yu, ... 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU …, 2015 | 269 | 2015 |
Single channel target speaker extraction and recognition with speaker beam M Delcroix, K Zmolikova, K Kinoshita, A Ogawa, T Nakatani 2018 IEEE International Conference on Acoustics, Speech and Signal …, 2018 | 214 | 2018 |
Linear prediction-based dereverberation with advanced speech enhancement and recognition technologies for the REVERB challenge M Delcroix, T Yoshioka, A Ogawa, Y Kubo, M Fujimoto, N Ito, K Kinoshita, ... Reverb workshop, 2014 | 128 | 2014 |
Speaker-aware neural network based beamformer for speaker extraction in speech mixtures K Žmolíková, M Delcroix, K Kinoshita, T Higuchi, A Ogawa, T Nakatani Proc. Interspeech 2017, 2655-2659, 2017 | 123 | 2017 |
Low-latency real-time meeting recognition and understanding using distant microphones and omni-directional camera T Hori, S Araki, T Yoshioka, M Fujimoto, S Watanabe, T Oba, A Ogawa, ... IEEE Transactions on Audio, Speech, and Language Processing 20 (2), 499-513, 2011 | 106 | 2011 |
Error detection and accuracy estimation in automatic speech recognition using deep bidirectional recurrent neural networks A Ogawa, T Hori Speech Communication 89, 70-83, 2017 | 92 | 2017 |
Semi-supervised end-to-end speech recognition S Karita, S Watanabe, T Iwata, A Ogawa, M Delcroix Interspeech 2018, 2-6, 2018 | 79 | 2018 |
Strategies for distant speech recognition in reverberant environments M Delcroix, T Yoshioka, A Ogawa, Y Kubo, M Fujimoto, N Ito, K Kinoshita, ... EURASIP Journal on Advances in Signal Processing 2015, 1-15, 2015 | 76 | 2015 |
Multimodal SpeakerBeam: Single channel target speech extraction with audio-visual speaker clues T Ochiai, M Delcroix, K Kinoshita, A Ogawa, T Nakatani INTERSPEECH 2019, 2718-2722, 2019 | 60 | 2019 |
Learning speaker representation for neural network based multichannel speaker extraction K Žmolíková, M Delcroix, K Kinoshita, T Higuchi, A Ogawa, T Nakatani 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 8-15, 2017 | 55 | 2017 |
Semi-supervised end-to-end speech recognition using text-to-speech and autoencoders S Karita, S Watanabe, T Iwata, M Delcroix, A Ogawa, T Nakatani ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019 | 53 | 2019 |
Context adaptive deep neural networks for fast acoustic model adaptation in noisy conditions M Delcroix, K Kinoshita, C Yu, A Ogawa, T Yoshioka, T Nakatani 2016 IEEE International Conference on Acoustics, Speech and Signal …, 2016 | 51 | 2016 |
Auxiliary feature based adaptation of end-to-end ASR systems M Delcroix, S Watanabe, A Ogawa, S Karita, T Nakatani Interspeech 2018, 2444-2448, 2018 | 47 | 2018 |
Text-informed speech enhancement with deep neural networks K Kinoshita, M Delcroix, A Ogawa, T Nakatani INTERSPEECH 2015, 1760-1764, 2015 | 46 | 2015 |
Spatial correlation model based observation vector clustering and MVDR beamforming for meeting recognition S Araki, M Okada, T Higuchi, A Ogawa, T Nakatani 2016 IEEE International Conference on Acoustics, Speech and Signal …, 2016 | 40 | 2016 |
Balancing acoustic and linguistic probabilities A Ogawa, K Takeda, F Itakura Proceedings of the 1998 IEEE International Conference on Acoustics, Speech …, 1998 | 40 | 1998 |
Speech recognition in the presence of highly non-stationary noise based on spatial, spectral and temporal speech/noise modeling combined with dynamic variance adaptation M Delcroix, K Kinoshita, T Nakatani, S Araki, A Ogawa, T Hori, ... Proc. 1st Int. Workshop on Machine Listening in Multisource Environments …, 2011 | 38 | 2011 |
ASR error detection and recognition rate estimation using deep bidirectional recurrent neural networks A Ogawa, T Hori 2015 IEEE International Conference on Acoustics, Speech and Signal …, 2015 | 34 | 2015 |
Frame-level phoneme-invariant speaker embedding for text-independent speaker recognition on extremely short utterances N Tawara, A Ogawa, T Iwata, M Delcroix, T Ogawa ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020 | 32 | 2020 |