Do Llamas Work in English? On the Latent Language of Multilingual Transformers. C Wendler, V Veselovsky, G Monea, R West. Proceedings of the 62nd Annual Meeting of the Association for Computational …, 2024. Cited by 78.
PaSS: Parallel Speculative Sampling. G Monea, A Joulin, E Grave. arXiv preprint arXiv:2311.13581, 2023. Cited by 34.
A Glitch in the Matrix? Locating and Detecting Language Model Grounding with Fakepedia. G Monea, M Peyrard, M Josifoski, V Chaudhary, J Eisner, E Kıcıman, … arXiv preprint arXiv:2312.02073, 2023. Cited by 9.
How Do Llamas Process Multilingual Text? A Latent Exploration through Activation Patching. C Dumas, V Veselovsky, G Monea, R West, C Wendler. ICML 2024 Workshop on Mechanistic Interpretability, 2024. Cited by 6.
LLMs Are In-Context Reinforcement Learners. G Monea, A Bosselut, K Brantley, Y Artzi. 2024. Cited by 5.
Separating Tongue from Thought: Activation Patching Reveals Language-Agnostic Concept Representations in Transformers. C Dumas, C Wendler, V Veselovsky, G Monea, R West. arXiv preprint arXiv:2411.08745, 2024. Cited by 1.
Controllable Context Sensitivity and the Knob Behind It. J Minder, K Du, N Stoehr, G Monea, C Wendler, R West, R Cotterell. arXiv preprint arXiv:2411.07404, 2024.