Jinming Zhao
Verified email at ruc.edu.cn
Title · Cited by · Year
Multimodal multi-task learning for dimensional and continuous emotion recognition
S Chen, Q Jin, J Zhao, S Wang
Proceedings of the 7th Annual Workshop on Audio/Visual Emotion Challenge, 19-26, 2017
Cited by 164 · 2017
MMGCN: Multimodal fusion via deep graph convolution network for emotion recognition in conversation
J Hu, Y Liu, J Zhao, Q Jin
arXiv preprint arXiv:2107.06779, 2021
Cited by 125 · 2021
WenLan: Bridging vision and language by large-scale multi-modal pre-training
Y Huo, M Zhang, G Liu, H Lu, Y Gao, G Yang, J Wen, H Zhang, B Xu, ...
arXiv preprint arXiv:2103.06561, 2021
Cited by 115 · 2021
Missing modality imagination network for emotion recognition with uncertain missing modalities
J Zhao, R Li, Q Jin
Proceedings of the 59th Annual Meeting of the Association for Computational …, 2021
Cited by 82 · 2021
Multi-modal multi-cultural dimensional continues emotion recognition in dyadic interactions
J Zhao, R Li, S Chen, Q Jin
Proceedings of the 2018 on audio/visual emotion challenge and workshop, 65-72, 2018
Cited by 57 · 2018
MEmoBERT: Pre-training model with prompt-based learning for multimodal emotion recognition
J Zhao, R Li, Q Jin, X Wang, H Li
ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and …, 2022
Cited by 31 · 2022
Adversarial domain adaption for multi-cultural dimensional emotion recognition in dyadic interactions
J Zhao, R Li, J Liang, S Chen, Q Jin
Proceedings of the 9th international on audio/visual emotion challenge and …, 2019
Cited by 29 · 2019
MER 2023: Multi-label learning, modality robustness, and semi-supervised learning
Z Lian, H Sun, L Sun, K Chen, M Xu, K Wang, K Xu, Y He, Y Li, J Zhao, ...
Proceedings of the 31st ACM International Conference on Multimedia, 9610-9614, 2023
Cited by 23 · 2023
M3ED: Multi-modal multi-scene multi-label emotional dialogue database
J Zhao, T Zhang, J Hu, Y Liu, Q Jin, X Wang, H Li
arXiv preprint arXiv:2205.10237, 2022
Cited by 22 · 2022
Multi-modal emotion estimation for in-the-wild videos
L Meng, Y Liu, X Liu, Z Huang, Y Cheng, M Wang, C Liu, Q Jin
arXiv preprint arXiv:2203.13032, 2022
Cited by 21 · 2022
Cross-culture multimodal emotion recognition with adversarial learning
J Liang, S Chen, J Zhao, Q Jin, H Liu, L Lu
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
Cited by 19 · 2019
Emotion recognition with multimodal features and temporal models
S Wang, W Wang, J Zhao, S Chen, Q Jin, S Zhang, Y Qin
Proceedings of the 19th ACM International Conference on Multimodal …, 2017
Cited by 17 · 2017
Video interestingness prediction based on ranking model
S Wang, S Chen, J Zhao, Q Jin
Proceedings of the joint workshop of the 4th workshop on affective social …, 2018
Cited by 15 · 2018
Exploiting modality-invariant feature for robust multimodal emotion recognition with missing modalities
H Zuo, R Liu, J Zhao, G Gao, H Li
ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and …, 2023
Cited by 13 · 2023
Multi-modal fusion for video sentiment analysis
R Li, J Zhao, J Hu, S Guo, Q Jin
Proceedings of the 1st International on Multimodal Sentiment Analysis in …, 2020
Cited by 12 · 2020
Multi-task learning framework for emotion recognition in-the-wild
T Zhang, C Liu, X Liu, Y Liu, L Meng, L Sun, W Jiang, F Zhang, J Zhao, ...
European Conference on Computer Vision, 143-156, 2022
Cited by 10 · 2022
DialogueEIN: Emotion interaction network for dialogue affective analysis
Y Liu, J Zhao, J Hu, R Li, Q Jin
Proceedings of the 29th International Conference on Computational …, 2022
Cited by 9 · 2022
Speech Emotion Recognition via Multi-Level Cross-Modal Distillation.
R Li, J Zhao, Q Jin
Interspeech, 4488-4492, 2021
Cited by 9 · 2021
Speech Emotion Recognition in Dyadic Dialogues with Attentive Interaction Modeling.
J Zhao, S Chen, J Liang, Q Jin
INTERSPEECH, 1671-1675, 2019
Cited by 9 · 2019
Emotion Recognition using Multimodal Features
J Zhao, S Chen, S Wang, Q Jin
2018 First Asian Conference on Affective Computing and Intelligent …, 2018
Cited by 7 · 2018