Gemini: a family of highly capable multimodal models. Gemini Team: R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, et al. arXiv preprint arXiv:2312.11805, 2023. Cited by 2084.
TAPAS: Weakly Supervised Table Parsing via Pre-training. J Herzig, PK Nowak, T Müller, F Piccinno, JM Eisenschlos. Proceedings of ACL 2020, 2020. Cited by 628.
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. M Reid, N Savinov, D Teplyashin, D Lepikhin, T Lillicrap, J Alayrac, et al. arXiv preprint arXiv:2403.05530, 2024. Cited by 617.
Time-aware language models as temporal knowledge bases. B Dhingra, JR Cole, JM Eisenschlos, D Gillick, J Eisenstein, WW Cohen. Transactions of the Association for Computational Linguistics 10, 257-273, 2022. Cited by 239.
Pix2Struct: Screenshot parsing as pretraining for visual language understanding. K Lee, M Joshi, IR Turc, H Hu, F Liu, JM Eisenschlos, U Khandelwal, et al. International Conference on Machine Learning, 18893-18912, 2023. Cited by 219.
MultiFiT: Efficient multi-lingual language model fine-tuning. JM Eisenschlos, S Ruder, P Czapla, M Kardas, S Gugger, J Howard. Proceedings of EMNLP-IJCNLP 2019, 2019. Cited by 114.
Understanding tables with intermediate pre-training. JM Eisenschlos, S Krichene, T Müller. Findings of EMNLP 2020, 2020. Cited by 110.
Open Domain Question Answering over Tables via Dense Retrieval. J Herzig, T Müller, S Krichene, JM Eisenschlos. Proceedings of NAACL 2021, 2021. Cited by 94.
MATE: Multi-view attention for table transformer efficiency. JM Eisenschlos, M Gor, T Müller, W Cohen. Proceedings of EMNLP 2021, 2021. Cited by 85.
DePlot: One-shot visual language reasoning by plot-to-table translation. F Liu, JM Eisenschlos, F Piccinno, S Krichene, C Pang, K Lee, M Joshi, et al. arXiv preprint arXiv:2212.10505, 2022. Cited by 78.
MatCha: Enhancing visual language pretraining with math reasoning and chart derendering. F Liu, F Piccinno, S Krichene, C Pang, K Lee, M Joshi, Y Altun, N Collier, et al. arXiv preprint arXiv:2212.09662, 2022. Cited by 67.
SoftSort: A Continuous Relaxation for the argsort Operator. S Prillo, JM Eisenschlos. Proceedings of ICML 2020, 2020. Cited by 65.
PaliGemma: A versatile 3B VLM for transfer. L Beyer, A Steiner, AS Pinto, A Kolesnikov, X Wang, D Salz, M Neumann, et al. arXiv preprint arXiv:2407.07726, 2024. Cited by 51.
Chain-of-Table: Evolving tables in the reasoning chain for table understanding. Z Wang, H Zhang, CL Li, JM Eisenschlos, V Perot, Z Wang, L Miculicich, et al. arXiv preprint arXiv:2401.04398, 2024. Cited by 48.
Fool Me Twice: Entailment from Wikipedia Gamification. JM Eisenschlos, B Dhingra, J Bulian, B Börschinger, J Boyd-Graber. Proceedings of NAACL 2021, 2021. Cited by 39.
Selectively answering ambiguous questions. JR Cole, MJQ Zhang, D Gillick, JM Eisenschlos, B Dhingra, J Eisenstein. arXiv preprint arXiv:2305.14613, 2023. Cited by 36.
Table-to-text generation and pre-training with TabT5. E Andrejczuk, JM Eisenschlos, F Piccinno, S Krichene, Y Altun. arXiv preprint arXiv:2210.09162, 2022. Cited by 28.
DoT: An efficient Double Transformer for NLP tasks with tables. S Krichene, T Müller, JM Eisenschlos. Findings of ACL 2021, 2021. Cited by 15.
Universal self-adaptive prompting. X Wan, R Sun, H Nakhost, H Dai, JM Eisenschlos, SO Arik, T Pfister. arXiv preprint arXiv:2305.14926, 2023. Cited by 14.
TAPAS at SemEval-2021 Task 9: Reasoning over tables with intermediate pre-training. T Müller, JM Eisenschlos, S Krichene. Proceedings of SemEval-2021, 2021. Cited by 14.