Chao Weng
Tencent AI Lab
Verified email at tencent.com - Homepage
Title
Cited by
Year
Recurrent deep neural networks for robust speech recognition
C Weng, D Yu, S Watanabe, BHF Juang
2014 IEEE International Conference on Acoustics, Speech and Signal …, 2014
140 · 2014
Deep neural networks for single-channel multi-talker speech recognition
C Weng, D Yu, ML Seltzer, J Droppo
IEEE/ACM Transactions on Audio, Speech, and Language Processing 23 (10 …, 2015
78 · 2015
DurIAN: Duration Informed Attention Network for Multimodal Synthesis
C Yu, H Lu, N Hu, M Yu, C Weng, K Xu, P Liu, D Tuo, S Kang, G Lei, D Su, ...
arXiv preprint arXiv:1909.01700, 2019
62 · 2019
Improving Attention Based Sequence-to-Sequence Models for End-to-End English Conversational Speech Recognition.
C Weng, J Cui, G Wang, J Wang, C Yu, D Su, D Yu
Interspeech, 761-765, 2018
49 · 2018
Component fusion: Learning replaceable language model component for end-to-end speech recognition system
C Shan, C Weng, G Wang, D Su, M Luo, D Yu, L Xie
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
44 · 2019
Investigating end-to-end speech recognition for Mandarin-English code-switching
C Shan, C Weng, G Wang, D Su, M Luo, D Yu, L Xie
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
43 · 2019
Single-channel mixed speech recognition using deep neural networks
C Weng, D Yu, ML Seltzer, J Droppo
2014 IEEE International Conference on Acoustics, Speech and Signal …, 2014
42 · 2014
Mixed speech recognition
D Yu, C Weng, ML Seltzer, J Droppo
US Patent 9,390,712, 2016
41 · 2016
Past review, current progress, and challenges ahead on the cocktail party problem
Y Qian, C Weng, X Chang, S Wang, D Yu
Frontiers of Information Technology & Electronic Engineering 19 (1), 40-63, 2018
38 · 2018
Feature space maximum a posteriori linear regression for adaptation of deep neural networks
Z Huang, J Li, SM Siniscalchi, IF Chen, C Weng, CH Lee
Fifteenth Annual Conference of the International Speech Communication …, 2014
27 · 2014
A Multistage Training Framework for Acoustic-to-Word Model.
C Yu, C Zhang, C Weng, J Cui, D Yu
Interspeech, 786-790, 2018
23 · 2018
Beyond cross-entropy: Towards better frame-level objective functions for deep neural network training in automatic speech recognition
Z Huang, J Li, C Weng, CH Lee
Fifteenth Annual Conference of the International Speech Communication …, 2014
23 · 2014
Minimum Bayes risk training of RNN-Transducer for end-to-end speech recognition
C Weng, C Yu, J Cui, C Zhang, D Yu
arXiv preprint arXiv:1911.12487, 2019
19 · 2019
Joint training of complex ratio mask based beamformer and acoustic model for noise robust ASR
Y Xu, C Weng, L Hui, J Liu, M Yu, D Su, D Yu
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
19 · 2019
DurIAN: Duration Informed Attention Network for Speech Synthesis.
C Yu, H Lu, N Hu, M Yu, C Weng, K Xu, P Liu, D Tuo, S Kang, G Lei, D Su, ...
INTERSPEECH, 2027-2031, 2020
17 · 2020
Deep learning vector quantization for acoustic information retrieval
Z Huang, C Weng, K Li, YC Cheng, CH Lee
2014 IEEE international conference on acoustics, speech and signal …, 2014
16 · 2014
Replay and synthetic speech detection with Res2Net architecture
X Li, N Li, C Weng, X Liu, D Su, D Yu, H Meng
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
15 · 2021
PitchNet: Unsupervised singing voice conversion with pitch adversarial network
C Deng, C Yu, H Lu, C Weng, D Yu
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
15 · 2020
Discriminative training using non-uniform criteria for keyword spotting on spontaneous speech
C Weng, BHF Juang
IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2014
14 · 2014
Discriminative Training Using Non-uniform Criteria for Keyword Spotting on Spontaneous Speech
C Weng, BH Juang, D Povey
13th Annual Conference of the International Speech Communication Association …, 2012
14 · 2012
Articles 1–20