Hong Liu
Verified email at stanford.edu - Home page
Title · Cited by · Year
Separate to adapt: Open set domain adaptation via progressive separation
H Liu, Z Cao, M Long, J Wang, Q Yang
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2019
Cited by 342 · 2019
Transferable adversarial training: A general approach to adapting deep classifiers
H Liu, M Long, J Wang, M Jordan
International Conference on Machine Learning, 4013-4022, 2019
Cited by 300 · 2019
Cycle Self-Training for Domain Adaptation
H Liu, J Wang, M Long
Advances in Neural Information Processing Systems 34, 2021
Cited by 200 · 2021
Self-supervised Learning is More Robust to Dataset Imbalance
H Liu, JZ HaoChen, A Gaidon, T Ma
International Conference on Learning Representations, 2022
Cited by 171 · 2022
Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training
H Liu, Z Li, D Hall, P Liang, T Ma
International Conference on Learning Representations, 2024
Cited by 106 · 2024
Learning to Adapt to Evolving Domains
H Liu, M Long, J Wang, Y Wang
Advances in Neural Information Processing Systems 33, 2020
Cited by 73 · 2020
Chain of Thought Empowers Transformers to Solve Inherently Serial Problems
Z Li, H Liu, D Zhou, T Ma
International Conference on Learning Representations, 2024
Cited by 48 · 2024
Same Pre-training Loss, Better Downstream: Implicit Bias Matters for Language Models
H Liu, SM Xie, Z Li, T Ma
International Conference on Machine Learning, 2023
Cited by 37 · 2023
Towards Understanding the Transferability of Deep Representations
H Liu, M Long, J Wang, MI Jordan
arXiv preprint arXiv:1909.12031, 2019
Cited by 32 · 2019
Meta-learning Transferable Representations with a Single Target Domain
H Liu, JZ HaoChen, C Wei, T Ma
arXiv preprint arXiv:2011.01418, 2020
Cited by 7 · 2020
Articles 1–10