Sho Yokoi
Attention is Not Only a Weight: Analyzing Transformers with Vector Norms
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
Proceedings of the 2020 Conference on Empirical Methods in Natural Language …, 2020
Cited by 235, 2020
Word Rotator's Distance
S Yokoi, R Takahashi, R Akama, J Suzuki, K Inui
arXiv preprint arXiv:2004.15003, 2020
Cited by 69, 2020
Evaluation of Similarity-based Explanations
K Hanawa, S Yokoi, S Hara, K Inui
The International Conference on Learning Representations, 2021
Cited by 59, 2021
Instance-Based Learning of Span Representations: A Case Study through Named Entity Recognition
H Ouchi, J Suzuki, S Kobayashi, S Yokoi, T Kuribayashi, R Konno, K Inui
arXiv preprint arXiv:2004.14514, 2020
Cited by 56, 2020
Incorporating residual and normalization layers into analysis of masked language models
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
arXiv preprint arXiv:2109.07152, 2021
Cited by 41, 2021
Filtering Noisy Dialogue Corpora by Connectivity and Content Relatedness
R Akama, S Yokoi, J Suzuki, K Inui
arXiv preprint arXiv:2004.14008, 2020
Cited by 20, 2020
Efficient Estimation of Influence of a Training Instance
S Kobayashi, S Yokoi, J Suzuki, K Inui
arXiv preprint arXiv:2012.04207, 2020
Cited by 19, 2020
Analyzing feed-forward blocks in transformers through the lens of attention map
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
arXiv preprint arXiv:2302.00456, 2023
Cited by 15*, 2023
Unsupervised Learning of Style-sensitive Word Vectors
R Akama, K Watanabe, S Yokoi, S Kobayashi, K Inui
arXiv preprint arXiv:1805.05581, 2018
Cited by 13, 2018
Unbalanced Optimal Transport for Unbalanced Word Alignment
Y Arase, H Bao, S Yokoi
arXiv preprint arXiv:2306.04116, 2023
Cited by 10, 2023
Norm of word embedding encodes information gain
M Oyama, S Yokoi, H Shimodaira
arXiv preprint arXiv:2212.09663, 2022
Cited by 10, 2022
Transformer Language Models Handle Word Frequency in Prediction Head
G Kobayashi, T Kuribayashi, S Yokoi, K Inui
arXiv preprint arXiv:2305.18294, 2023
Cited by 9, 2023
Modeling Event Salience in Narratives via Barthes' Cardinal Functions
T Otake, S Yokoi, N Inoue, R Takahashi, T Kuribayashi, K Inui
arXiv preprint arXiv:2011.01785, 2020
Cited by 9, 2020
Why is sentence similarity benchmark not predictive of application-oriented task performance?
K Abe, S Yokoi, T Kajiwara, K Inui
Proceedings of the 3rd Workshop on Evaluation and Comparison of NLP Systems …, 2022
Cited by 6, 2022
Pointwise HSIC: A Linear-Time Kernelized Co-occurrence Norm for Sparse Linguistic Expressions
S Yokoi, S Kobayashi, K Fukumizu, J Suzuki, K Inui
Proceedings of the 2018 Conference on Empirical Methods in Natural Language …, 2018
Cited by 6, 2018
Link Prediction in Sparse Networks by Incidence Matrix Factorization
S Yokoi, H Kajino, H Kashima
Journal of Information Processing 25, 477-485, 2017
Cited by 6, 2017
Instance-based neural dependency parsing
H Ouchi, J Suzuki, S Kobayashi, S Yokoi, T Kuribayashi, M Yoshikawa, ...
Transactions of the Association for Computational Linguistics 9, 1493-1507, 2021
Cited by 4, 2021
Improving word mover's distance by leveraging self-attention matrix
H Yamagiwa, S Yokoi, H Shimodaira
arXiv preprint arXiv:2211.06229, 2022
Cited by 3, 2022
Computationally efficient Wasserstein loss for structured labels
A Toyokuni, S Yokoi, H Kashima, M Yamada
arXiv preprint arXiv:2103.00899, 2021
Cited by 3, 2021
Learning Co-Substructures by Kernel Dependence Maximization
S Yokoi, D Mochihashi, R Takahashi, N Okazaki, K Inui
The 26th International Joint Conference on Artificial Intelligence (IJCAI …, 2017
Cited by 3, 2017
Articles 1–20