Yu Ding
Director of AI R&D Center, Happy Elements, China
Verified email at happyelements.com
Title
Cited by
Year
Flow-guided one-shot talking face generation with a high-resolution audio-visual dataset
Z Zhang, L Li, Y Ding, C Fan
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2021
195 · 2021
Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion
S Wang, L Li, Y Ding, C Fan, X Yu
International Joint Conference on Artificial Intelligence (IJCAI-21), 2021
112 · 2021
Freenet: Multi-identity face reenactment
J Zhang, X Zeng, M Wang, Y Pan, L Liu, Y Liu, Y Ding, C Fan
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2020
105 · 2020
Transformer-based multimodal information fusion for facial expression analysis
W Zhang, F Qiu, S Wang, H Zeng, Z Zhang, R An, B Ma, Y Ding
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
89 · 2022
One-shot talking face generation from single-speaker audio-visual correlation learning
S Wang, L Li, Y Ding, X Yu
Proceedings of the AAAI Conference on Artificial Intelligence 36 (3), 2531-2539, 2022
76 · 2022
Learning a facial expression embedding disentangled from identity
W Zhang, X Ji, K Chen, Y Ding, C Fan
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2021
63 · 2021
Write-a-speaker: Text-based emotional and rhythmic talking-head generation
L Li, S Wang, Z Zhang, Y Ding, Y Zheng, X Yu, C Fan
Proceedings of the AAAI conference on artificial intelligence 35 (3), 1911-1920, 2021
62 · 2021
Laughter animation synthesis
Y Ding, K Prepin, J Huang, C Pelachaud, T Artières
Proceedings of the 2014 international conference on Autonomous agents and …, 2014
62 · 2014
Modeling multimodal behaviors from speech prosody
Y Ding, C Pelachaud, T Artières
International Conference on Intelligent Virtual Agents, 217-228, 2013
41 · 2013
StyleTalk: One-shot Talking Head Generation with Controllable Speaking Styles
Y Ma, S Wang, Z Hu, C Fan, T Lv, Y Ding, Z Deng, X Yu
AAAI 2023, 2023
35 · 2023
Faceswapnet: Landmark guided many-to-many face reenactment
J Zhang, X Zeng, Y Pan, Y Liu, Y Ding, C Fan
arXiv preprint arXiv:1905.11805, 2019
35 · 2019
Prior aided streaming network for multi-task affective recognition at the 2nd ABAW2 competition
W Zhang, Z Guo, K Chen, L Li, Z Zhang, Y Ding
arXiv preprint arXiv:2107.03708, 2021
33 · 2021
Rhythmic body movements of laughter
R Niewiadomski, M Mancini, Y Ding, C Pelachaud, G Volpe
Proceedings of the 16th international conference on multimodal interaction …, 2014
27 · 2014
Speech-driven eyebrow motion synthesis with contextual Markovian models
Y Ding, M Radenen, T Artières, C Pelachaud
2013 IEEE International Conference on Acoustics, Speech and Signal …, 2013
24 · 2013
Implementing and evaluating a laughing virtual character
M Mancini, B Biancardi, F Pecune, G Varni, Y Ding, C Pelachaud, G Volpe, ...
ACM Transactions on Internet Technology (TOIT) 17 (1), 1-22, 2017
22 · 2017
Laughing with a Virtual Agent
F Pecune, M Mancini, B Biancardi, G Varni, Y Ding, C Pelachaud, G Volpe, ...
AAMAS, 1817-1818, 2015
21 · 2015
One-shot voice conversion using star-gan
R Wang, Y Ding, L Li, C Fan
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
20 · 2020
DINet: Deformation Inpainting Network for Realistic Face Visually Dubbing on High Resolution Video
Z Zhang, Z Hu, W Deng, C Fan, T Lv, Y Ding
AAAI 2023, 2023
19 · 2023
Dynamically adjust word representations using unaligned multimodal information
J Guo, J Tang, W Dai, Y Ding, W Kong
Proceedings of the 30th ACM International Conference on Multimedia, 3394-3402, 2022
17 · 2022
A multifaceted study on eye contact based speaker identification in three-party conversations
Y Ding, Y Zhang, M Xiao, Z Deng
Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems …, 2017
17 · 2017
Articles 1–20