Linjie Li 李琳婕
Senior Researcher, Microsoft
Verified email at microsoft.com
Title · Cited by · Year
UNITER: Learning UNiversal Image-TExt Representations
YC Chen, L Li, L Yu, AE Kholy, F Ahmed, Z Gan, Y Cheng, J Liu
ECCV 2020, 2020
Cited by 867* · 2020
Relation-aware graph attention network for visual question answering
L Li, Z Gan, Y Cheng, J Liu
ICCV 2019, 2019
Cited by 217 · 2019
Large-Scale Adversarial Training for Vision-and-Language Representation Learning
Z Gan, YC Chen, L Li, C Zhu, Y Cheng, J Liu
NeurIPS 2020, 2020
Cited by 194 · 2020
HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training
L Li, YC Chen, Y Cheng, Z Gan, L Yu, J Liu
EMNLP 2020, 2020
Cited by 169 · 2020
Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling
J Lei, L Li, L Zhou, Z Gan, TL Berg, M Bansal, J Liu
CVPR 2021, 2021
Cited by 144 · 2021
Multi-step reasoning via recurrent dual attention for visual dialog
Z Gan, Y Cheng, AEI Kholy, L Li, J Liu, J Gao
ACL 2019, 2019
Cited by 76 · 2019
Graph Optimal Transport for Cross-Domain Alignment
L Chen, Z Gan, Y Cheng, L Li, L Carin, J Liu
ICML 2020, 2020
Cited by 65 · 2020
Meta Module Network for Compositional Visual Reasoning
W Chen, Z Gan, L Li, Y Cheng, W Wang, J Liu
WACV 2021, 2019
Cited by 40 · 2019
LightningDOT: Pre-training Visual-Semantic Embeddings for Real-Time Image-Text Retrieval
S Sun, YC Chen, L Li, S Wang, Y Fang, J Liu
NAACL 2021, 2021
Cited by 33 · 2021
VALUE: A Multi-Task Benchmark for Video-and-Language Understanding Evaluation
L Li, J Lei, Z Gan, L Yu, YC Chen, R Pillai, Y Cheng, L Zhou, XE Wang, ...
NeurIPS 2021 Data and Benchmark Track, 2021
Cited by 29 · 2021
UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training
M Zhou, L Zhou, S Wang, Y Cheng, L Li, Z Yu, J Liu
CVPR 2021, 2021
Cited by 25 · 2021
Playing Lottery Tickets with Vision and Language
Z Gan, YC Chen, L Li, T Chen, Y Cheng, S Wang, J Liu
AAAI 2022, 2021
Cited by 20 · 2021
Adversarial VQA: A New Benchmark for Evaluating the Robustness of VQA Models
L Li, J Lei, Z Gan, J Liu
ICCV 2021, 2021
Cited by 19 · 2021
A Closer Look at the Robustness of Vision-and-Language Pre-trained Models
L Li, Z Gan, J Liu
arXiv preprint arXiv:2012.08673, 2020
Cited by 19 · 2020
Extracting human face similarity judgments: Pairs or triplets?
L Li, VL Malave, A Song, A Yu
CogSci 2016, 2016
Cited by 15 · 2016
VIOLET: End-to-End Video-Language Transformers with Masked Visual-token Modeling
TJ Fu, L Li, Z Gan, K Lin, WY Wang, L Wang, Z Liu
arXiv preprint arXiv:2111.12681, 2021
Cited by 14 · 2021
Learning to See People like People: Predicting Social Perceptions of Faces.
A Song, L Li, C Atalla, G Cottrell
CogSci 2017, 2017
Cited by 14* · 2017
SwinBERT: End-to-End Transformers with Sparse Attention for Video Captioning
K Lin, L Li, CC Lin, F Ahmed, Z Gan, Z Liu, Y Lu, L Wang
CVPR 2022, 2021
Cited by 7 · 2021
Learning to see faces like humans: modeling the social dimensions of faces
A Song, L Linjie, C Atalla, G Cottrell
Journal of Vision 17 (10), 837-837, 2017
Cited by 7 · 2017
Cross-modal Representation Learning for Zero-shot Action Recognition
CC Lin, K Lin, L Li, L Wang, Z Liu
CVPR 2022, 19978-19988, 2022
Cited by 3 · 2022
Articles 1–20