Hongyi Wang
Incoming Assistant Professor, Rutgers University
Verified email at andrew.cmu.edu - Homepage
Title · Cited by · Year
Federated Learning with Matched Averaging
H Wang, M Yurochkin, Y Sun, D Papailiopoulos, Y Khazaeni
International Conference on Learning Representations (ICLR), 2020
Cited by 1280 · 2020
Attack of the tails: Yes, you really can backdoor federated learning
H Wang, K Sreenivasan, S Rajput, H Vishwakarma, S Agarwal, J Sohn, ...
Advances in Neural Information Processing Systems 33, 16070-16084, 2020
Cited by 649 · 2020
FedML: A research library and benchmark for federated machine learning
C He, S Li, J So, X Zeng, M Zhang, H Wang, X Wang, P Vepakomma, ...
arXiv preprint arXiv:2007.13518, 2020
Cited by 644* · 2020
ATOMO: Communication-efficient learning via atomic sparsification
H Wang, S Sievert, S Liu, Z Charles, D Papailiopoulos, S Wright
Advances in Neural Information Processing Systems 31, 2018
Cited by 407 · 2018
A field guide to federated optimization
J Wang, Z Charles, Z Xu, G Joshi, HB McMahan, M Al-Shedivat, G Andrew, ...
arXiv preprint arXiv:2107.06917, 2021
Cited by 393 · 2021
DRACO: Byzantine-resilient distributed training via redundant gradients
L Chen, H Wang, Z Charles, D Papailiopoulos
International Conference on Machine Learning, 903-912, 2018
Cited by 300* · 2018
TrustLLM: Trustworthiness in large language models
Y Huang, L Sun, H Wang, S Wu, Q Zhang, Y Li, C Gao, Y Huang, W Lyu, ...
arXiv preprint arXiv:2401.05561, 2024
Cited by 212 · 2024
DETOX: A redundancy-based framework for faster and more robust gradient aggregation
S Rajput, H Wang, Z Charles, D Papailiopoulos
Advances in Neural Information Processing Systems 32, 2019
Cited by 133 · 2019
MPCFormer: Fast, performant and private transformer inference with MPC
D Li, R Shao, H Wang, H Guo, EP Xing, H Zhang
arXiv preprint arXiv:2211.01452, 2022
Cited by 64 · 2022
ErasureHead: Distributed gradient descent without delays using approximate gradient coding
H Wang, Z Charles, D Papailiopoulos
arXiv preprint arXiv:1901.09671, 2019
Cited by 62 · 2019
Pufferfish: Communication-efficient models at no extra cost
H Wang, S Agarwal, D Papailiopoulos
Proceedings of Machine Learning and Systems 3, 365-386, 2021
Cited by 54 · 2021
On the utility of gradient compression in distributed training systems
S Agarwal, H Wang, S Venkataraman, D Papailiopoulos
Proceedings of Machine Learning and Systems 4, 652-672, 2022
Cited by 48 · 2022
LLM360: Towards fully transparent open-source LLMs
Z Liu, A Qiao, W Neiswanger, H Wang, B Tan, T Tao, J Li, Y Wang, S Sun, ...
arXiv preprint arXiv:2312.06550, 2023
Cited by 46 · 2023
Adaptive gradient communication via critical learning regime identification
S Agarwal, H Wang, K Lee, S Venkataraman, D Papailiopoulos
Proceedings of Machine Learning and Systems 3, 55-80, 2021
Cited by 45* · 2021
Rare Gems: Finding Lottery Tickets at Initialization
K Sreenivasan, J Sohn, L Yang, M Grinde, A Nagle, H Wang, K Lee, ...
Advances in Neural Information Processing Systems (NeurIPS), 2022
Cited by 41 · 2022
Efficient federated learning on knowledge graphs via privacy-preserving relation embedding aggregation
K Zhang, Y Wang, H Wang, L Huang, C Yang, X Chen, L Sun
arXiv preprint arXiv:2203.09553, 2022
Cited by 31 · 2022
SlimPajama-DC: Understanding data combinations for LLM training
Z Shen, T Tao, L Ma, W Neiswanger, Z Liu, H Wang, B Tan, J Hestness, ...
arXiv preprint arXiv:2309.10818, 2023
Cited by 30 · 2023
The effect of network width on the performance of large-batch training
L Chen, H Wang, J Zhao, D Papailiopoulos, P Koutris
Advances in Neural Information Processing Systems 31, 2018
Cited by 24 · 2018
Fusing models with complementary expertise
H Wang, FM Polo, Y Sun, S Kundu, E Xing, M Yurochkin
arXiv preprint arXiv:2310.01542, 2023
Cited by 22 · 2023
AMP: Automatically Finding Model Parallel Strategies with Heterogeneity Awareness
D Li, H Wang, E Xing, H Zhang
Advances in Neural Information Processing Systems (NeurIPS), 2022
Cited by 16 · 2022
Articles 1–20