Mandar Joshi
Verified email at google.com - Homepage
Title · Cited by · Year
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Y Liu, M Ott, N Goyal, J Du, M Joshi, D Chen, O Levy, M Lewis, ...
arXiv preprint arXiv:1907.11692, 2019
Cited by 18251 · 2019
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
M Joshi, E Choi, DS Weld, L Zettlemoyer
Association for Computational Linguistics (ACL), 2017
Cited by 2536 · 2017
SpanBERT: Improving Pre-training by Representing and Predicting Spans
M Joshi, D Chen, Y Liu, DS Weld, L Zettlemoyer, O Levy
Transactions of the Association for Computational Linguistics, 2019
Cited by 2336 · 2019
BERT for Coreference Resolution: Baselines and Analysis
M Joshi, O Levy, DS Weld, L Zettlemoyer
Empirical Methods in Natural Language Processing (EMNLP), 2019
Cited by 439 · 2019
RoBERTa: A robustly optimized BERT pretraining approach, CoRR abs/1907.11692 (2019)
Y Liu, M Ott, N Goyal, J Du, M Joshi, D Chen, O Levy, M Lewis, ...
Cited by 347 · 2019
Pix2Struct: Screenshot parsing as pretraining for visual language understanding
K Lee, M Joshi, IR Turc, H Hu, F Liu, JM Eisenschlos, U Khandelwal, ...
International Conference on Machine Learning, 18893-18912, 2023
Cited by 271 · 2023
PaLI-X: On scaling up a multilingual vision and language model
X Chen, J Djolonga, P Padlewski, B Mustafa, S Changpinyo, J Wu, ...
arXiv preprint arXiv:2305.18565, 2023
Cited by 178 · 2023
CM3: A causal masked multimodal model of the internet
A Aghajanyan, B Huang, C Ross, V Karpukhin, H Xu, N Goyal, D Okhonko, ...
arXiv preprint arXiv:2201.07520, 2022
Cited by 162 · 2022
Improving passage retrieval with zero-shot question generation
DS Sachan, M Lewis, M Joshi, A Aghajanyan, W Yih, J Pineau, ...
arXiv preprint arXiv:2204.07496, 2022
Cited by 150 · 2022
An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction
B Paranjape, M Joshi, J Thickstun, H Hajishirzi, L Zettlemoyer
Empirical Methods in Natural Language Processing (EMNLP), 2020
Cited by 121 · 2020
DePlot: One-shot visual language reasoning by plot-to-table translation
F Liu, JM Eisenschlos, F Piccinno, S Krichene, C Pang, K Lee, M Joshi, ...
arXiv preprint arXiv:2212.10505, 2022
Cited by 94 · 2022
MatCha: Enhancing visual language pretraining with math reasoning and chart derendering
F Liu, F Piccinno, S Krichene, C Pang, K Lee, M Joshi, Y Altun, N Collier, ...
arXiv preprint arXiv:2212.09662, 2022
Cited by 81 · 2022
HTLM: Hyper-text pre-training and prompting of language models
A Aghajanyan, D Okhonko, M Lewis, M Joshi, H Xu, G Ghosh, ...
arXiv preprint arXiv:2107.06955, 2021
Cited by 81 · 2021
A robustly optimized BERT pretraining approach
Y Liu, M Ott, N Goyal, J Du, M Joshi, D Chen, O Levy, M Lewis, ...
arXiv preprint arXiv:1907.11692, 2019
Cited by 74 · 2019
From pixels to ui actions: Learning to follow instructions via graphical user interfaces
P Shaw, M Joshi, J Cohan, J Berant, P Pasupat, H Hu, U Khandelwal, ...
Advances in Neural Information Processing Systems 36, 34354-34370, 2023
Cited by 68 · 2023
pair2vec: Compositional Word-Pair Embeddings for Cross-Sentence Inference
M Joshi, E Choi, O Levy, D Weld, L Zettlemoyer
North American Chapter of ACL (NAACL) 1, 3597–3608, 2019
Cited by 60 · 2019
Open-domain visual entity recognition: Towards recognizing millions of wikipedia entities
H Hu, Y Luan, Y Chen, U Khandelwal, M Joshi, K Lee, K Toutanova, ...
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023
Cited by 57 · 2023
RoBERTa: A robustly optimized BERT pretraining approach (arXiv: 1907.11692). arXiv
Y Liu, M Ott, N Goyal, J Du, M Joshi, D Chen, O Levy, M Lewis, ...
Cited by 51 · 2019
Knowledge Graph and Corpus Driven Segmentation and Answer Inference for Telegraphic Entity-seeking Queries
M Joshi, U Sawant, S Chakrabarti
Empirical Methods in Natural Language Processing (EMNLP), 2014
Cited by 50 · 2014
Cross-document coreference resolution over predicted mentions
A Cattan, A Eirew, G Stanovsky, M Joshi, I Dagan
arXiv preprint arXiv:2106.01210, 2021
Cited by 47 · 2021
Articles 1–20