Xisen Jin
Title · Cited by · Year
Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures
W Lei, X Jin, MY Kan, Z Ren, X He, D Yin
Proceedings of the 56th Annual Meeting of the Association for Computational …, 2018
Cited by 370 · 2018
Recurrent event network: Autoregressive structure inference over temporal knowledge graphs
W Jin, M Qu, X Jin, X Ren
arXiv preprint arXiv:1904.05530, 2019
Cited by 304 · 2019
Contextualizing hate speech classifiers with post-hoc explanation
B Kennedy, X Jin, AM Davani, M Dehghani, X Ren
ACL 2020, 2020
Cited by 142 · 2020
Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models
X Jin, Z Wei, J Du, X Xue, X Ren
ICLR 2020, 2019
Cited by 117 · 2019
Gradient-based editing of memory examples for online task-free continual learning
X Jin, A Sadhu, J Du, X Ren
Advances in Neural Information Processing Systems 34, 29193-29205, 2021
Cited by 100* · 2021
Lifelong pretraining: Continually adapting language models to emerging corpora
X Jin, D Zhang, H Zhu, W Xiao, SW Li, X Wei, A Arnold, X Ren
arXiv preprint arXiv:2110.08534, 2021
Cited by 80 · 2021
On transferability of bias mitigation effects in language model fine-tuning
X Jin, F Barbieri, B Kennedy, AM Davani, L Neves, X Ren
arXiv preprint arXiv:2010.12864, 2020
Cited by 61* · 2020
Explicit State Tracking with Semi-Supervision for Neural Dialogue Generation
X Jin, W Lei, Z Ren, H Chen, S Liang, Y Zhao, D Yin
Proceedings of the 27th ACM International Conference on Information and …, 2018
Cited by 55 · 2018
Dataless knowledge fusion by merging weights of language models
X Jin, X Ren, D Preotiuc-Pietro, P Cheng
arXiv preprint arXiv:2212.09849, 2022
Cited by 40 · 2022
Learn continually, generalize rapidly: Lifelong knowledge accumulation for few-shot learning
X Jin, BY Lin, M Rostami, X Ren
arXiv preprint arXiv:2104.08808, 2021
Cited by 33 · 2021
Refining language models with compositional explanations
H Yao, Y Chen, Q Ye, X Jin, X Ren
Advances in Neural Information Processing Systems 34, 8954-8967, 2021
Cited by 30 · 2021
Visually grounded continual learning of compositional phrases
X Jin, J Du, A Sadhu, R Nevatia, X Ren
arXiv preprint arXiv:2005.00785, 2020
Cited by 15* · 2020
Refining neural networks with compositional explanations
H Yao, Y Chen, Q Ye, X Jin, X Ren
arXiv preprint arXiv:2103.10415, 2021
Cited by 10 · 2021
Gradient-based editing of memory examples for online task-free continual learning
X Jin, A Sadhu, J Du, X Ren
arXiv preprint arXiv:2006.15294, 2020
Cited by 9 · 2020
Overcoming catastrophic forgetting in massively multilingual continual learning
GI Winata, L Xie, K Radhakrishnan, S Wu, X Jin, P Cheng, M Kulkarni, ...
arXiv preprint arXiv:2305.16252, 2023
Cited by 4 · 2023
What Will My Model Forget? Forecasting Forgotten Examples in Language Model Refinement
X Jin, X Ren
arXiv preprint arXiv:2402.01865, 2024
2024
Overcoming Catastrophic Forgetting in Massively Multilingual Continual Learning
G Indra Winata, L Xie, K Radhakrishnan, S Wu, X Jin, P Cheng, ...
arXiv e-prints, arXiv: 2305.16252, 2023
2023
Efficient Learning of Less Biased Models with Transfer Learning
X Jin, F Barbieri, L Neves, X Ren
2020