Articles made publicly available: Xiaohan Chen
Available somewhere: 17
Plug-and-play methods provably converge with properly trained denoisers
E Ryu, J Liu, S Wang, X Chen, Z Wang, W Yin
International Conference on Machine Learning, 5546-5557, 2019
Mandates: US National Science Foundation, US Department of Defense, National Natural …
Can we gain more from orthogonality regularizations in training deep networks?
N Bansal, X Chen, Z Wang
Advances in Neural Information Processing Systems 31, 2018
Mandates: US National Science Foundation
Theoretical linear convergence of unfolded ISTA and its practical weights and thresholds
X Chen, J Liu, Z Wang, W Yin
Advances in Neural Information Processing Systems 31, 2018
Mandates: US National Science Foundation, US Department of Defense
Learning to optimize: A primer and a benchmark
T Chen, X Chen, W Chen, H Heaton, J Liu, Z Wang, W Yin
Journal of Machine Learning Research 23 (189), 1-59, 2022
Mandates: US National Science Foundation
ALISTA: Analytic weights are as good as learned weights in LISTA
J Liu, X Chen
International Conference on Learning Representations (ICLR), 2019
Mandates: US National Science Foundation
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
S Liu, T Chen, X Chen, Z Atashgahi, L Yin, H Kou, L Shen, M Pechenizkiy, ...
Advances in Neural Information Processing Systems, 2021
Mandates: Netherlands Organisation for Scientific Research
Federated dynamic sparse training: Computing less, communicating less, yet learning better
S Bibikar, H Vikalo, Z Wang, X Chen
Proceedings of the AAAI Conference on Artificial Intelligence 36 (6), 6080-6088, 2022
Mandates: US National Science Foundation, US Department of Defense
E2-Train: Training state-of-the-art CNNs with over 80% energy savings
Y Wang, Z Jiang, X Chen, P Xu, Y Zhao, Y Lin, Z Wang
Advances in Neural Information Processing Systems 32, 2019
Mandates: US National Science Foundation
Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?
X Ma, G Yuan, X Shen, T Chen, X Chen, X Chen, N Liu, M Qin, S Liu, ...
Advances in Neural Information Processing Systems, 2021
Mandates: US National Science Foundation, US Department of Defense
Deep ensembling with no overhead for either training or testing: The all-round blessings of dynamic sparsity
S Liu, T Chen, Z Atashgahi, X Chen, G Sokar, E Mocanu, M Pechenizkiy, ...
arXiv preprint arXiv:2106.14568, 2021
Mandates: US National Science Foundation
Smartexchange: Trading higher-cost memory storage/access for lower-cost computation
Y Zhao, X Chen, Y Wang, C Li, H You, Y Fu, Y Xie, Z Wang, Y Lin
2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture …, 2020
Mandates: US National Science Foundation
The Elastic Lottery Ticket Hypothesis
X Chen, Y Cheng, S Wang, Z Gan, J Liu, Z Wang
Advances in Neural Information Processing Systems, 2021
Mandates: US National Science Foundation
Uncertainty quantification for deep context-aware mobile activity recognition and unknown context discovery
Z Huo, A PakBin, X Chen, N Hurley, Y Yuan, X Qian, Z Wang, S Huang, ...
International Conference on Artificial Intelligence and Statistics, 3894-3904, 2020
Mandates: US Department of Defense, US National Institutes of Health
Towards constituting mathematical structures for learning to optimize
J Liu, X Chen, Z Wang, W Yin, HQ Cai
International Conference on Machine Learning, 21426-21449, 2023
Mandates: US National Science Foundation
Peek-a-boo: What (more) is disguised in a randomly weighted neural network, and how to find it efficiently
X Chen, J Zhang, Z Wang
International Conference on Learning Representations, 2022
Mandates: US National Science Foundation, US Department of Defense
Model elasticity for hardware heterogeneity in federated learning systems
AJ Farcas, X Chen, Z Wang, R Marculescu
Proceedings of the 1st ACM Workshop on Data Privacy and Federated Learning …, 2022
Mandates: US National Science Foundation
Smartdeal: Remodeling deep network weights for efficient inference and training
X Chen, Y Zhao, Y Wang, P Xu, H You, C Li, Y Fu, Y Lin, Z Wang
IEEE Transactions on Neural Networks and Learning Systems 34 (10), 7099-7113, 2022
Mandates: US National Science Foundation
Publication and funding information is determined automatically by a computer program.