Publicly available articles: Tengyu Ma
Not available anywhere: 1
PIA: parallel architecture with illumination allocator for joint enhancement and detection in low-light
T Ma, L Ma, X Fan, Z Luo, R Liu
Proceedings of the 30th ACM international conference on multimedia, 2070-2078, 2022
Mandates: National Natural Science Foundation of China
Available somewhere: 45
A simple but tough-to-beat baseline for sentence embeddings
S Arora, Y Liang, T Ma
ICLR 2017, 2016
Mandates: US National Science Foundation
Generalization and Equilibrium in Generative Adversarial Nets (GANs)
S Arora, R Ge, Y Liang, T Ma, Y Zhang
ICML 2017; arXiv preprint arXiv:1703.00573, 2017
Mandates: US National Science Foundation, US Department of Defense
A latent variable model approach to pmi-based word embeddings
S Arora, Y Li, Y Liang, T Ma, A Risteski
Transactions of the Association for Computational Linguistics 4, 385-399, 2016
Mandates: US National Science Foundation
Towards explaining the regularization effect of initial large learning rate in training neural networks
Y Li, C Wei, T Ma
Advances in neural information processing systems 32, 2019
Mandates: US National Science Foundation
Finding Approximate Local Minima for Nonconvex Optimization in Linear Time
N Agarwal, Z Allen-Zhu, B Bullins, E Hazan, T Ma
STOC 2017, 2016
Mandates: US National Science Foundation
Provable guarantees for self-supervised deep learning with spectral contrastive loss
JZ HaoChen, C Wei, A Gaidon, T Ma
Advances in neural information processing systems 34, 5000-5011, 2021
Mandates: US National Science Foundation
Linear algebraic structure of word senses, with applications to polysemy
S Arora, Y Li, Y Liang, T Ma, A Risteski
arXiv preprint arXiv:1601.03764, 2016
Mandates: US National Science Foundation
Data selection for language models via importance resampling
SM Xie, S Santurkar, T Ma, PS Liang
Advances in Neural Information Processing Systems 36, 34201-34227, 2023
Mandates: US National Science Foundation, US Department of Defense
The implicit and explicit regularization effects of dropout
C Wei, S Kakade, T Ma
International conference on machine learning, 10181-10192, 2020
Mandates: US National Science Foundation
Label noise SGD provably prefers flat global minimizers
A Damian, T Ma, JD Lee
Advances in Neural Information Processing Systems 34, 27449-27461, 2021
Mandates: US National Science Foundation, US Department of Defense
Polynomial-time tensor decompositions with sum-of-squares
T Ma, J Shi, D Steurer
57th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2016 …, 2016
Mandates: US National Science Foundation
Distributed stochastic variance reduced gradient methods by sampling extra data with replacement
JD Lee, Q Lin, T Ma, T Yang
Journal of Machine Learning Research 18 (122), 1-43, 2017
Mandates: US National Science Foundation
Connect, not collapse: Explaining contrastive learning for unsupervised domain adaptation
K Shen, RM Jones, A Kumar, SM Xie, JZ HaoChen, T Ma, P Liang
International conference on machine learning, 19847-19878, 2022
Mandates: US National Science Foundation, US Department of Defense
Data-dependent sample complexity of deep neural networks via lipschitz augmentation
C Wei, T Ma
Advances in neural information processing systems 32, 2019
Mandates: US National Science Foundation
Why do pretrained language models help in downstream tasks? an analysis of head and prompt tuning
C Wei, SM Xie, T Ma
Advances in Neural Information Processing Systems 34, 16158-16170, 2021
Mandates: US National Science Foundation, US Department of Defense
Shape matters: Understanding the implicit bias of the noise covariance
JZ HaoChen, C Wei, J Lee, T Ma
Conference on Learning Theory, 2315-2357, 2021
Mandates: US National Science Foundation, US Department of Defense
Chain of thought empowers transformers to solve inherently serial problems
Z Li, H Liu, D Zhou, T Ma
arXiv preprint arXiv:2402.12875 1, 2024
Mandates: US National Science Foundation
Statistically meaningful approximation: a case study on approximating turing machines with transformers
C Wei, Y Chen, T Ma
Advances in Neural Information Processing Systems 35, 12071-12083, 2022
Mandates: US National Science Foundation
How sharpness-aware minimization minimizes sharpness?
K Wen, T Ma, Z Li
The eleventh international conference on learning representations, 2023
Mandates: US National Science Foundation
Publication and funding information is determined automatically by a computer program.