Kristina Toutanova
Google DeepMind
Verified email at google.com - Homepage
Title · Cited by · Year
BERT: Pre-training of deep bidirectional transformers for language understanding
J Devlin, MW Chang, K Lee, K Toutanova
arXiv preprint arXiv:1810.04805, 2018
Cited by 119830 · 2018
Feature-rich part-of-speech tagging with a cyclic dependency network
K Toutanova, D Klein, CD Manning, Y Singer
Proceedings of the 2003 human language technology conference of the north …, 2003
Cited by 4529 · 2003
Natural questions: a benchmark for question answering research
T Kwiatkowski, J Palomaki, O Redfield, M Collins, A Parikh, C Alberti, ...
Transactions of the Association for Computational Linguistics 7, 453-466, 2019
Cited by 2814 · 2019
Enriching the knowledge sources used in a maximum entropy part-of-speech tagger
K Toutanova, CD Manning
2000 Joint SIGDAT conference on Empirical methods in natural language …, 2000
Cited by 1557 · 2000
Observed versus latent features for knowledge base and text inference
K Toutanova, D Chen
Proceedings of the 3rd workshop on continuous vector space models and their …, 2015
Cited by 1192 · 2015
BoolQ: Exploring the surprising difficulty of natural yes/no questions
C Clark, K Lee, MW Chang, T Kwiatkowski, M Collins, K Toutanova
arXiv preprint arXiv:1905.10044, 2019
Cited by 1157 · 2019
Latent retrieval for weakly supervised open domain question answering
K Lee, MW Chang, K Toutanova
arXiv preprint arXiv:1906.00300, 2019
Cited by 1025 · 2019
Representing Text for Joint Embedding of Text and Knowledge Bases
K Toutanova, D Chen, P Pantel, H Poon, P Choudhury, M Gamon
EMNLP, 2015
Cited by 888 · 2015
Well-Read Students Learn Better: On the Importance of Pre-training Compact Models
I Turc, MW Chang, K Lee, K Toutanova
Cited by 699 · 2019
Cross-Sentence N-ary Relation Extraction with Graph LSTMs
N Peng, H Poon, C Quirk, K Toutanova, W Yih
Transactions of the Association for Computational Linguistics 5, 101-115, 2017
Cited by 613 · 2017
Sparse, dense, and attentional representations for text retrieval
Y Luan, J Eisenstein, K Toutanova, M Collins
Transactions of the Association for Computational Linguistics 9, 329-345, 2021
Cited by 403 · 2021
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
J Devlin, MW Chang, K Lee, K Toutanova
Association for Computational Linguistics, 4171-4186, 2019
Cited by 361* · 2019
BERT: Pre-training of deep bidirectional transformers for language understanding
J Devlin, MW Chang, K Lee, K Toutanova
2019 Conference of the North American Chapter of the Association for Computational Linguistics …
Cited by 340 · 2021
Learning discriminative projections for text similarity measures
W Yih, K Toutanova, JC Platt, C Meek
Proceedings of the fifteenth conference on computational natural language …, 2011
Cited by 335 · 2011
BERT: pre-training of deep bidirectional transformers for language understanding. CoRR abs/1810.04805
J Devlin, M Chang, K Lee, K Toutanova
Preprint retrieved from http://arxiv.org/abs/1810.04805, 2018
Cited by 305* · 2018
Zero-shot entity linking by reading entity descriptions
L Logeswaran, MW Chang, K Lee, K Toutanova, J Devlin, H Lee
arXiv preprint arXiv:1906.07348, 2019
Cited by 293 · 2019
Pronunciation modeling for improved spelling correction
K Toutanova, RC Moore
Proceedings of the 40th Annual Meeting of the Association for Computational …, 2002
Cited by 281 · 2002
Extracting parallel sentences from comparable corpora using document level alignment
J Smith, C Quirk, K Toutanova
Human language technologies: The 2010 annual conference of the North …, 2010
Cited by 269 · 2010
LinGO Redwoods: A Rich and Dynamic Treebank for HPSG
S Oepen, D Flickinger, K Toutanova, CD Manning
Research on Language and Computation 2, 575-596, 2004
Cited by 263 · 2004
Semi-supervised part-of-speech tagging
KN Toutanova, ME Johnson
US Patent 8,275,607, 2012
Cited by 253 · 2012
Articles 1–20