Hongyuan Mei
Google DeepMind, TTIC, JHU, UChicago
Verified email at google.com - Homepage
Title
Cited by
Year
The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process
H Mei, J Eisner
arXiv, 2016
Cited by 785 · 2016
What to talk about and how? Selective Generation using LSTMs with Coarse-to-Fine Alignment
H Mei, M Bansal, MR Walter
NAACL, 2016
Cited by 335 · 2016
Listen, attend, and walk: Neural mapping of navigational instructions to action sequences
H Mei, M Bansal, MR Walter
AAAI, 2016
Cited by 288 · 2016
Coherent Dialogue with Attention-based Language Models
H Mei, M Bansal, MR Walter
AAAI, 2017
Cited by 118 · 2017
Imputing missing events in continuous-time event streams
H Mei, G Qin, J Eisner
International Conference on Machine Learning, 4475-4485, 2019
Cited by 59 · 2019
Transformer embeddings of irregularly spaced events and their participants
C Yang, H Mei, J Eisner
arXiv preprint arXiv:2201.00044, 2021
Cited by 58 · 2021
Language models can improve event prediction by few-shot abductive reasoning
X Shi, S Xue, K Wang, F Zhou, J Zhang, J Zhou, C Tan, H Mei
Advances in Neural Information Processing Systems 36, 29532-29557, 2023
Cited by 47 · 2023
Easytpp: Towards open benchmarking temporal point processes
S Xue, X Shi, Z Chu, Y Wang, H Hao, F Zhou, C Jiang, C Pan, JY Zhang, ...
arXiv preprint arXiv:2307.08097, 2023
Cited by 42 · 2023
Hypro: A hybridly normalized probabilistic model for long-horizon prediction of event sequences
S Xue, X Shi, J Zhang, H Mei
Advances in Neural Information Processing Systems 35, 34641-34650, 2022
Cited by 37 · 2022
Statler: State-maintaining language models for embodied reasoning
T Yoneda, J Fang, P Li, H Zhang, T Jiang, S Lin, B Picker, D Yunis, H Mei, ...
2024 IEEE International Conference on Robotics and Automation (ICRA), 15083 …, 2024
Cited by 36 · 2024
Can large language models play text games well? current state-of-the-art and open questions
CF Tsai, X Zhou, SS Liu, J Li, M Yu, H Mei
arXiv preprint arXiv:2304.02868, 2023
Cited by 33 · 2023
Noise-contrastive estimation for multivariate point processes
H Mei, T Wan, J Eisner
Advances in neural information processing systems 33, 5204-5214, 2020
Cited by 27 · 2020
Neural Datalog through time: Informed temporal modeling via logical specification
H Mei, G Qin, M Xu, J Eisner
International Conference on Machine Learning, 6808-6819, 2020
Cited by 26 · 2020
Personalized dynamic treatment regimes in continuous time: a Bayesian approach for optimizing clinical decisions with timing
W Hua, H Mei, S Zohar, M Giral, Y Xu
Bayesian Analysis 17 (3), 849-878, 2022
Cited by 24 · 2022
Hypothesis generation with large language models
Y Zhou, H Liu, T Srivastava, H Mei, C Tan
arXiv preprint arXiv:2404.04326, 2024
Cited by 23 · 2024
Robustness of learning from task instructions
J Gu, H Zhao, H Xu, L Nie, H Mei, W Yin
arXiv preprint arXiv:2212.03813, 2022
Cited by 22 · 2022
Hidden state variability of pretrained language models can guide computation reduction for transfer learning
S Xie, J Qiu, A Pasad, L Du, Q Qu, H Mei
arXiv preprint arXiv:2210.10041, 2022
Cited by 21 · 2022
Tiny-attention adapter: Contexts are more important than the number of parameters
H Zhao, H Tan, H Mei
arXiv preprint arXiv:2211.01979, 2022
Cited by 18 · 2022
Weaverbird: Empowering financial decision-making with large language model, knowledge base, and search engine
S Xue, F Zhou, Y Xu, M Jin, Q Wen, H Hao, Q Dai, C Jiang, H Zhao, S Xie, ...
arXiv preprint arXiv:2308.05361, 2023
Cited by 16 · 2023
Explicit planning helps language models in logical reasoning
H Zhao, K Wang, M Yu, H Mei
arXiv preprint arXiv:2303.15714, 2023
Cited by 16 · 2023
Articles 1–20