Phu Mon Htut
AWS AI Labs
Verified email at amazon.com - Homepage
Title / Cited by / Year
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 1143 · 2022
Intermediate-task transfer learning with pretrained models for natural language understanding: When and why does it work?
Y Pruksachatkun, J Phang, H Liu, PM Htut, X Zhang, RY Pang, C Vania, ...
arXiv preprint arXiv:2005.00628, 2020
Cited by 298 · 2020
BBQ: A hand-built bias benchmark for question answering
A Parrish, A Chen, N Nangia, V Padmakumar, J Phang, J Thompson, ...
arXiv preprint arXiv:2110.08193, 2021
Cited by 280 · 2021
Generalized inner loop meta-learning
E Grefenstette, B Amos, D Yarats, PM Htut, A Molchanov, F Meier, D Kiela, ...
arXiv preprint arXiv:1910.01727, 2019
Cited by 175 · 2019
Do attention heads in BERT track syntactic dependencies?
PM Htut, J Phang, S Bordia, SR Bowman
arXiv preprint arXiv:1911.12246, 2019
Cited by 149 · 2019
Investigating BERT’s Knowledge of Language: Five Analysis Methods with NPIs
A Warstadt
arXiv preprint arXiv:1909.02597, 2019
Cited by 134 · 2019
English intermediate-task training improves zero-shot cross-lingual transfer too
J Phang, I Calixto, PM Htut, Y Pruksachatkun, H Liu, C Vania, K Kann, ...
arXiv preprint arXiv:2005.13013, 2020
Cited by 71 · 2020
Training a ranking function for open-domain question answering
PM Htut, SR Bowman, K Cho
arXiv preprint arXiv:1804.04264, 2018
Cited by 60 · 2018
Grammar induction with neural language models: An unusual replication
PM Htut, K Cho, SR Bowman
arXiv preprint arXiv:1808.10000, 2018
Cited by 57 · 2018
jiant 1.2: A software toolkit for research on general-purpose text understanding models
A Wang, IF Tenney, Y Pruksachatkun, K Yu, J Hula, P Xia, R Pappagari, ...
Note: http://jiant.info/. Cited by: footnote 4, 2019
Cited by 54 · 2019
Findings of the IWSLT 2023 evaluation campaign
M Agarwal, S Agarwal, A Anastasopoulos, L Bentivogli, O Bojar, C Borg, ...
Association for Computational Linguistics, 2023
Cited by 46 · 2023
jiant: A software toolkit for research on general-purpose text understanding models
Y Pruksachatkun, P Yeres, H Liu, J Phang, PM Htut, A Wang, I Tenney, ...
arXiv preprint arXiv:2003.02249, 2020
Cited by 40 · 2020
Comparing test sets with item response theory
C Vania, PM Htut, W Huang, D Mungra, RY Pang, J Phang, H Liu, K Cho, ...
arXiv preprint arXiv:2106.00840, 2021
Cited by 34 · 2021
(QA)²: Question Answering with Questionable Assumptions
N Kim, PM Htut, SR Bowman, J Petty
arXiv preprint arXiv:2212.10003, 2022
Cited by 22 · 2022
The unbearable weight of generating artificial errors for grammatical error correction
PM Htut, J Tetreault
arXiv preprint arXiv:1907.08889, 2019
Cited by 16 · 2019
Clustering examples in multi-dataset benchmarks with item response theory
P Rodriguez, PM Htut, JP Lalor, J Sedoc
Proceedings of the Third Workshop on Insights from Negative Results in NLP …, 2022
Cited by 9 · 2022
Inducing constituency trees through neural machine translation
PM Htut, K Cho, SR Bowman
arXiv preprint arXiv:1909.10056, 2019
Cited by 6 · 2019
RAMP: Retrieval and attribute-marking enhanced prompting for attribute-controlled translation
G Sarti, PM Htut, X Niu, B Hsu, A Currey, G Dinu, M Nadejde
arXiv preprint arXiv:2305.17131, 2023
Cited by 5 · 2023
Open Domain Question Answering with Conflicting Contexts
S Liu, Q Ning, K Halder, W Xiao, Z Qi, PM Htut, Y Zhang, NA John, B Min, ...
arXiv preprint arXiv:2410.12311, 2024
2024
(QA)²: Question Answering with Questionable Assumptions
SR Bowman, PM Htut, N Kim
2023