Triantafyllos Afouras
FAIR, Meta, University of Oxford
Verified email at fb.com - Homepage
Title
Cited by
Year
Counterfactual multi-agent policy gradients
J Foerster, G Farquhar, T Afouras, N Nardelli, S Whiteson
AAAI Conference on Artificial Intelligence, 2017
Cited by 2126 · 2017
Deep audio-visual speech recognition
T Afouras, JS Chung, A Senior, O Vinyals, A Zisserman
IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (12), 8717 …, 2018
Cited by 800 · 2018
Stabilising experience replay for deep multi-agent reinforcement learning
J Foerster, N Nardelli, G Farquhar, T Afouras, PHS Torr, P Kohli, ...
International Conference on Machine Learning (ICML), 2017
Cited by 728 · 2017
The conversation: Deep audio-visual speech enhancement
T Afouras, JS Chung, A Zisserman
INTERSPEECH, 2018
Cited by 396 · 2018
LRS3-TED: a large-scale dataset for visual speech recognition
T Afouras, JS Chung, A Zisserman
arXiv preprint arXiv:1809.00496, 2018
Cited by 394 · 2018
Self-Supervised Learning of Audio-Visual Objects from Video
T Afouras, A Owens, JS Chung, A Zisserman
European Conference on Computer Vision (ECCV), 2020
Cited by 245 · 2020
BSL-1K: Scaling up co-articulated sign language recognition using mouthing cues
S Albanie, G Varol, L Momeni, T Afouras, JS Chung, N Fox, A Zisserman
European Conference on Computer Vision (ECCV), 2020
Cited by 173 · 2020
Localizing visual sounds the hard way
A Vedaldi, H Chen, W Xie, T Afouras, A Nagrani, A Zisserman
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2021
Cited by 169* · 2021
Spot the conversation: speaker diarisation in the wild
JS Chung, J Huh, A Nagrani, T Afouras, A Zisserman
INTERSPEECH, 2020
Cited by 142 · 2020
ASR is all you need: Cross-modal distillation for lip reading
T Afouras, JS Chung, A Zisserman
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by 138 · 2020
Deep lip reading: a comparison of models and an online application
T Afouras, JS Chung, A Zisserman
INTERSPEECH, 2018
Cited by 127 · 2018
My lips are concealed: Audio-visual speech enhancement through obstructions
T Afouras, JS Chung, A Zisserman
arXiv preprint arXiv:1907.04975, 2019
Cited by 94 · 2019
Sub-word Level Lip Reading With Visual Attention
KR Prajwal, T Afouras, A Zisserman
arXiv preprint arXiv:2110.07603, 2021
Cited by 81 · 2021
BBC-Oxford British Sign Language Dataset
S Albanie, G Varol, L Momeni, H Bull, T Afouras, H Chowdhury, N Fox, ...
arXiv preprint arXiv:2111.03635, 2021
Cited by 58 · 2021
Watch, read and lookup: learning to spot signs from multiple supervisors
L Momeni, G Varol, S Albanie, T Afouras, A Zisserman
Proceedings of the Asian Conference on Computer Vision, 2020
Cited by 49 · 2020
Self-supervised object detection from audio-visual correspondence
T Afouras, YM Asano, F Fagan, A Vedaldi, F Metze
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2022
Cited by 48 · 2022
Read and attend: Temporal localisation in sign language videos
G Varol, L Momeni, S Albanie, T Afouras, A Zisserman
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2021
Cited by 48 · 2021
Seeing wake words: Audio-visual keyword spotting
L Momeni, T Afouras, T Stafylakis, S Albanie, A Zisserman
arXiv preprint arXiv:2009.01225, 2020
Cited by 45 · 2020
Ego-Exo4D: Understanding skilled human activity from first- and third-person perspectives
K Grauman, A Westbury, L Torresani, K Kitani, J Malik, T Afouras, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024
Cited by 31 · 2024
Audio-visual synchronisation in the wild
H Chen, W Xie, T Afouras, A Nagrani, A Vedaldi, A Zisserman
arXiv preprint arXiv:2112.04432, 2021
Cited by 31 · 2021