Temporal pattern attention for multivariate time series forecasting. SY Shih, FK Sun, H Lee. Machine Learning 108, 1421-1441, 2019. Cited by 382.

SUPERB: Speech Processing Universal PERformance Benchmark. S Yang, PH Chi, YS Chuang, CIJ Lai, K Lakhotia, YY Lin, AT Liu, J Shi, et al. arXiv preprint arXiv:2105.01051, 2021. Cited by 265.

Mockingjay: Unsupervised speech representation learning with deep bidirectional transformer encoders. AT Liu, S Yang, PH Chi, P Hsu, H Lee. ICASSP 2020 IEEE International Conference on Acoustics, Speech and …, 2020. Cited by 259.

TERA: Self-supervised learning of transformer encoder representation for speech. AT Liu, SW Li, H Lee. IEEE/ACM Transactions on Audio, Speech, and Language Processing 29, 2351-2366, 2021. Cited by 211.

Audio Word2Vec: Unsupervised learning of audio segment representations using sequence-to-sequence autoencoder. YA Chung, CC Wu, CH Shen, HY Lee, LS Lee. arXiv preprint arXiv:1603.00982, 2016. Cited by 182.

One-shot voice conversion by separating speaker and content representations with instance normalization. J Chou, C Yeh, H Lee. arXiv preprint arXiv:1904.05742, 2019. Cited by 156.

Multi-target voice conversion without parallel data by adversarially learning disentangled audio representations. J Chou, C Yeh, H Lee, L Lee. arXiv preprint arXiv:1804.02812, 2018. Cited by 125.

Spoken content retrieval: Beyond cascading speech recognition with text retrieval. L Lee, J Glass, H Lee, C Chan. IEEE/ACM Transactions on Audio, Speech, and Language Processing 23 (9), 1389-…, 2015. Cited by 111.

Audio ALBERT: A lite BERT for self-supervised learning of audio representation. PH Chi, PH Chung, TH Wu, CC Hsieh, YH Chen, SW Li, H Lee. 2021 IEEE Spoken Language Technology Workshop (SLT), 344-350, 2021. Cited by 105.

Tree Transformer: Integrating tree structures into self-attention. YS Wang, HY Lee, YN Chen. arXiv preprint arXiv:1909.06639, 2019. Cited by 98.

LAMOL: Language modeling for lifelong language learning. FK Sun, CH Ho, HY Lee. arXiv preprint arXiv:1909.03329, 2019. Cited by 98.

Supervised and unsupervised transfer learning for question answering. YA Chung, HY Lee, J Glass. arXiv preprint arXiv:1711.05345, 2017. Cited by 83.

Learning Chinese word representations from glyphs of characters. TR Su, HY Lee. arXiv preprint arXiv:1708.04755, 2017. Cited by 82.

SpeechBERT: Cross-modal pre-trained language model for end-to-end spoken question answering. YS Chuang, CL Liu, HY Lee. 2019. Cited by 78*.

Meta learning for end-to-end low-resource speech recognition. JY Hsu, YJ Chen, H Lee. ICASSP 2020 IEEE International Conference on Acoustics, Speech and …, 2020. Cited by 66.

End-to-end text-to-speech for low-resource languages by cross-lingual transfer learning. T Tu, YJ Chen, C Yeh, HY Lee. arXiv preprint arXiv:1904.06508, 2019. Cited by 66.

Neural attention models for sequence classification: Analysis and application to key term extraction and dialogue act detection. S Shen, H Lee. arXiv preprint arXiv:1604.00077, 2016. Cited by 63.

VQVC+: One-shot voice conversion by vector quantization and U-Net architecture. DY Wu, YH Chen, HY Lee. arXiv preprint arXiv:2006.04154, 2020. Cited by 61.

Learning to encode text as human-readable summaries using generative adversarial networks. YS Wang, HY Lee. arXiv preprint arXiv:1810.02851, 2018. Cited by 59.

Spoken SQuAD: A study of mitigating the impact of speech recognition errors on listening comprehension. CH Li, SL Wu, CL Liu, H Lee. arXiv preprint arXiv:1804.00320, 2018. Cited by 59.