BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models E Ben Zaken, S Ravfogel, Y Goldberg Proceedings of the 60th Annual Meeting of the Association for Computational …, 2021 | 645 | 2021 |
Null it out: Guarding protected attributes by iterative nullspace projection S Ravfogel, Y Elazar, H Gonen, M Twiton, Y Goldberg Proceedings of the 58th Annual Meeting of the Association for Computational …, 2020 | 297 | 2020 |
Measuring and improving consistency in pretrained language models Y Elazar, N Kassner, S Ravfogel, A Ravichander, E Hovy, H Schütze, ... Transactions of the Association for Computational Linguistics 9, 1012-1031, 2021 | 251 | 2021 |
Amnesic probing: Behavioral explanation with amnesic counterfactuals Y Elazar, S Ravfogel, A Jacovi, Y Goldberg Transactions of the Association for Computational Linguistics 9, 160-175, 2021 | 168 | 2021 |
Contrastive explanations for model interpretability A Jacovi, S Swayamdipta, S Ravfogel, Y Elazar, Y Choi, Y Goldberg arXiv preprint arXiv:2103.01378, 2021 | 86 | 2021 |
Studying the inductive biases of RNNs with synthetic variations of natural languages S Ravfogel, Y Goldberg, T Linzen The 2019 Conference of the North American Chapter of the Association for …, 2019 | 76 | 2019 |
Linear adversarial concept erasure S Ravfogel, M Twiton, Y Goldberg, R Cotterell International Conference on Machine Learning, 18400-18421, 2022 | 64 | 2022 |
Counterfactual interventions reveal the causal effect of relative clause representations on agreement prediction S Ravfogel, G Prasad, T Linzen, Y Goldberg Proceedings of the 25th Conference on Computational Natural Language Learning, 2021 | 47 | 2021 |
It's not Greek to mBERT: inducing word-level translations from multilingual BERT H Gonen, S Ravfogel, Y Elazar, Y Goldberg Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting …, 2020 | 43 | 2020 |
Few-shot fine-tuning vs. in-context learning: A fair comparison and evaluation M Mosbach, T Pimentel, S Ravfogel, D Klakow, Y Elazar arXiv preprint arXiv:2305.16938, 2023 | 40 | 2023 |
Can LSTM learn to capture agreement? The case of Basque S Ravfogel, FM Tyers, Y Goldberg Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and …, 2018 | 39 | 2018 |
LEACE: Perfect linear concept erasure in closed form N Belrose, D Schneider-Joseph, S Ravfogel, R Cotterell, E Raff, ... Advances in Neural Information Processing Systems 36, 2024 | 38 | 2024 |
Measuring causal effects of data statistics on language models' 'factual' predictions Y Elazar, N Kassner, S Ravfogel, A Feder, A Ravichander, M Mosbach, ... arXiv preprint arXiv:2207.14251, 2022 | 33 | 2022 |
Ab antiquo: Neural proto-language reconstruction C Meloni, S Ravfogel, Y Goldberg Proceedings of the 2021 Conference of the North American Chapter of the …, 2019 | 31* | 2019 |
DALLE-2 is seeing double: flaws in word-to-concept mapping in Text2Image models R Rassin, S Ravfogel, Y Goldberg Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting …, 2022 | 28 | 2022 |
Kernelized Concept Erasure S Ravfogel, F Vargas, Y Goldberg, R Cotterell Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022 | 23* | 2022 |
Linguistic binding in diffusion models: Enhancing attribute correspondence through attention map alignment R Rassin, E Hirsch, D Glickman, S Ravfogel, Y Goldberg, G Chechik Advances in Neural Information Processing Systems 36, 2024 | 19 | 2024 |
When BERT forgets how to POS: Amnesic probing of linguistic properties and MLM predictions Y Elazar, S Ravfogel, A Jacovi, Y Goldberg arXiv preprint arXiv:2006.00995, 2020 | 18 | 2020 |
Visual comparison of language model adaptation R Sevastjanova, E Cakmak, S Ravfogel, R Cotterell, M El-Assady IEEE Transactions on Visualization and Computer Graphics 29 (1), 1178-1188, 2022 | 14 | 2022 |