Albert Zeyer
Human Language Technology and Pattern Recognition Group, RWTH Aachen University
Verified email at cs.rwth-aachen.de - Homepage
Title
Cited by
Year
Improved Training of End-to-end Attention Models for Speech Recognition
A Zeyer, K Irie, R Schlüter, H Ney
Proc. Interspeech 2018, 7-11, 2018
Cited by 299 · 2018
RWTH ASR Systems for LibriSpeech: Hybrid vs Attention--w/o Data Augmentation
C Lüscher, E Beck, K Irie, M Kitza, W Michel, A Zeyer, R Schlüter, H Ney
arXiv preprint arXiv:1905.03072, 2019
Cited by 296 · 2019
A comparison of Transformer and LSTM encoder decoder models for ASR
A Zeyer, P Bahar, K Irie, R Schlüter, H Ney
IEEE Automatic Speech Recognition and Understanding Workshop, Sentosa, Singapore, 2019
Cited by 218 · 2019
A comprehensive study of deep bidirectional LSTM RNNs for acoustic modeling in speech recognition
A Zeyer, P Doetsch, P Voigtlaender, R Schlüter, H Ney
2017 IEEE international conference on acoustics, speech and signal …, 2017
Cited by 217 · 2017
Language modeling with deep transformers
K Irie, A Zeyer, R Schlüter, H Ney
arXiv preprint arXiv:1905.04226, 2019
Cited by 200 · 2019
RETURNN as a generic flexible neural toolkit with application to translation and speech recognition
A Zeyer, T Alkhouli, H Ney
arXiv preprint arXiv:1805.05225, 2018
Cited by 87 · 2018
RETURNN: The RWTH extensible training framework for universal recurrent neural networks
P Doetsch, A Zeyer, P Voigtlaender, I Kulikov, R Schlüter, H Ney
2017 IEEE International Conference on Acoustics, Speech and Signal …, 2017
Cited by 81 · 2017
Generating synthetic audio data for attention-based speech recognition systems
N Rossenbach, A Zeyer, R Schlüter, H Ney
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by 80 · 2020
Towards Online-Recognition with Deep Bidirectional LSTM Acoustic Models
A Zeyer, R Schlüter, H Ney
Interspeech, 3424-3428, 2016
Cited by 62 · 2016
A new training pipeline for an improved neural transducer
A Zeyer, A Merboldt, R Schlüter, H Ney
arXiv preprint arXiv:2005.09319, 2020
Cited by 55 · 2020
On using specaugment for end-to-end speech translation
P Bahar, A Zeyer, R Schlüter, H Ney
arXiv preprint arXiv:1911.08876, 2019
Cited by 53 · 2019
CTC in the Context of Generalized Full-Sum HMM Training
A Zeyer, E Beck, R Schlüter, H Ney
INTERSPEECH, 944-948, 2017
Cited by 52 · 2017
The RWTH/UPB/FORTH system combination for the 4th CHiME challenge evaluation
T Menne
Deutsche Nationalbibliothek, 2016
Cited by 52 · 2016
Training language models for long-span cross-sentence evaluation
K Irie, A Zeyer, R Schlüter, H Ney
2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU …, 2019
Cited by 47 · 2019
Investigating methods to improve language model integration for attention-based encoder-decoder ASR models
M Zeineldeen, A Glushko, W Michel, A Zeyer, R Schlüter, H Ney
arXiv preprint arXiv:2104.05544, 2021
Cited by 42 · 2021
Bidirectional decoder networks for attention-based end-to-end offline handwriting recognition
P Doetsch, A Zeyer, H Ney
2016 15th International Conference on Frontiers in Handwriting Recognition …, 2016
Cited by 41 · 2016
Librispeech transducer model with internal language model prior correction
A Zeyer, A Merboldt, W Michel, R Schlüter, H Ney
arXiv preprint arXiv:2104.03006, 2021
Cited by 29 · 2021
Why does CTC result in peaky behavior?
A Zeyer, R Schlüter, H Ney
arXiv preprint arXiv:2105.14849, 2021
Cited by 25 · 2021
An Analysis of Local Monotonic Attention Variants.
A Merboldt, A Zeyer, R Schlüter, H Ney
Interspeech, 1398-1402, 2019
Cited by 20 · 2019
A comprehensive analysis on attention models
A Zeyer, A Merboldt, R Schlüter, H Ney
Universitätsbibliothek der RWTH Aachen, 2019
Cited by 20 · 2019
Articles 1–20