Bo Han
Title · Cited by · Year
Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels
B Han, Q Yao, X Yu, G Niu, M Xu, W Hu, IW Tsang, M Sugiyama
NeurIPS 2018, 2018
2528 · 2018
How does Disagreement Help Generalization against Label Corruption?
X Yu, B Han, J Yao, G Niu, IW Tsang, M Sugiyama
ICML 2019, 2019
967 · 2019
Attacks Which Do Not Kill Training Make Adversarial Learning Stronger
J Zhang, X Xu, B Han, G Niu, L Cui, M Sugiyama, M Kankanhalli
ICML 2020, 2020
487 · 2020
Are Anchor Points Really Indispensable in Label-Noise Learning?
X Xia, T Liu, N Wang, B Han, C Gong, G Niu, M Sugiyama
NeurIPS 2019, 2019
437 · 2019
Part-dependent Label Noise: Towards Instance-dependent Label Noise
X Xia, T Liu, B Han, N Wang, M Gong, H Liu, G Niu, D Tao, M Sugiyama
NeurIPS 2020, 2020
343 · 2020
Robust Early-learning: Hindering the Memorization of Noisy Labels
X Xia, T Liu, B Han, C Gong, N Wang, Z Ge, Y Chang
ICLR 2021, 2021
334 · 2021
Geometry-aware Instance-reweighted Adversarial Training
J Zhang, J Zhu, G Niu, B Han, M Sugiyama, M Kankanhalli
ICLR 2021, 2021
322 · 2021
Masking: A New Perspective of Noisy Supervision
B Han, J Yao, G Niu, M Zhou, IW Tsang, Y Zhang, M Sugiyama
NeurIPS 2018, 2018
297 · 2018
Reducing Estimation Error for Transition Matrix in Label-noise Learning
Y Yao, T Liu, B Han, M Gong, J Deng, G Niu, M Sugiyama
NeurIPS 2020, 2020
277* · 2020
Understanding and Improving Early Stopping for Learning with Noisy Labels
Y Bai, E Yang, B Han, Y Yang, J Li, Y Mao, G Niu, T Liu
arXiv preprint arXiv:2106.15853, 2021
257* · 2021
A Survey of Label-noise Representation Learning: Past, Present and Future
B Han, Q Yao, T Liu, G Niu, IW Tsang, JT Kwok, M Sugiyama
arXiv preprint arXiv:2011.04406, 2020
199 · 2020
Learning Causally Invariant Representations for Out-of-distribution Generalization on Graphs
Y Chen, Y Zhang, Y Bian, H Yang, K Ma, B Xie, T Liu, B Han, J Cheng
arXiv preprint arXiv:2202.05441, 2022
190* · 2022
Provably Consistent Partial-Label Learning
L Feng, J Lv, B Han, M Xu, G Niu, X Geng, B An, M Sugiyama
arXiv preprint arXiv:2007.08929, 2020
180 · 2020
SIGUA: Forgetting May Make Learning with Noisy Labels More Robust
B Han, G Niu, X Yu, Q Yao, M Xu, IW Tsang, M Sugiyama
ICML 2020, 2020
169* · 2020
Searching to Exploit Memorization Effect in Learning with Noisy Labels
Q Yao, H Yang, B Han, G Niu, JT Kwok
arXiv preprint arXiv:1911.02377, 2019
157* · 2019
DeepInception: Hypnotize Large Language Model to Be Jailbreaker
X Li, Z Zhou, J Zhu, J Yao, T Liu, B Han
arXiv preprint arXiv:2311.03191, 2023
156 · 2023
Provably End-to-end Label-Noise Learning without Anchor Points
X Li, T Liu, B Han, G Niu, M Sugiyama
arXiv preprint arXiv:2102.02400, 2021
153 · 2021
Is Out-of-distribution Detection Learnable?
Z Fang, S Li, J Lu, J Dong, B Han, F Liu
NeurIPS 2022, 2022
151 · 2022
Sample Selection with Uncertainty of Losses for Learning with Noisy Labels
X Xia, T Liu, B Han, M Gong, J Yu, G Niu, M Sugiyama
arXiv preprint arXiv:2106.00445, 2022
151 · 2022
Confidence Scores Make Instance-dependent Label-noise Learning Possible
A Berthon, B Han, G Niu, T Liu, M Sugiyama
ICML 2021, 2021
127 · 2021
Articles 1–20