Sungjun Cho
Verified email at cs.wisc.edu - Homepage
Title
Cited by
Year
Pure transformers are powerful graph learners
J Kim, D Nguyen, S Min, S Cho, M Lee, H Lee, S Hong
Advances in Neural Information Processing Systems 35, 14582-14595, 2022
217 · 2022
Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers
S Cha, S Cho, D Hwang, H Lee, T Moon, M Lee
arXiv preprint arXiv:2301.11578, 2023
30 · 2023
Rebalancing batch normalization for exemplar-based class-incremental learning
S Cha, S Cho, D Hwang, S Hong, M Lee, T Moon
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
20 · 2023
Equivariant hypergraph neural networks
J Kim, S Oh, S Cho, S Hong
European Conference on Computer Vision, 86-103, 2022
19 · 2022
Learning equi-angular representations for online continual learning
M Seo, H Koh, W Jeung, M Lee, S Kim, H Lee, S Cho, S Choi, H Kim, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024
13 · 2024
Grouping matrix based graph pooling with adaptive number of clusters
SM Ko, S Cho, DW Jeong, S Han, M Lee, H Lee
Proceedings of the AAAI Conference on Artificial Intelligence 37 (7), 8334-8342, 2023
9 · 2023
Practical correlated topic modeling and analysis via the rectified anchor word algorithm
M Lee, S Cho, D Bindel, D Mimno
Proceedings of the 2019 Conference on Empirical Methods in Natural Language …, 2019
8 · 2019
Using spectral characterization to identify healthcare-associated infection (HAI) patients for clinical contact precaution
J Cui, S Cho, M Kamruzzaman, M Bielskas, A Vullikanti, BA Prakash
Scientific Reports 13 (1), 16197, 2023
6 · 2023
Towards robust and cost-efficient knowledge unlearning for large language models
S Cha, S Cho, D Hwang, M Lee
arXiv preprint arXiv:2408.06621, 2024
5 · 2024
Curve your attention: Mixed-curvature transformers for graph representation learning
S Cho, S Cho, S Park, H Lee, H Lee, M Lee
arXiv preprint arXiv:2309.04082, 2023
5 · 2023
Show, Think, and Tell: Thought-Augmented Fine-Tuning of Large Language Models for Video Captioning
B Kim, D Hwang, S Cho, Y Jang, H Lee, M Lee
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024
2 · 2024
Transformers meet stochastic block models: Attention with data-adaptive sparsity and cost
S Cho, S Min, J Kim, M Lee, H Lee, S Hong
Advances in Neural Information Processing Systems 35, 24706-24719, 2022
2 · 2022
3D denoisers are good 2D teachers: Molecular pretraining via denoising and cross-modal distillation
S Cho, DW Jeong, SM Ko, J Kim, S Han, S Hong, H Lee, M Lee
arXiv preprint arXiv:2309.04062, 2023
1 · 2023
Learning processing device and learning processing method for pooling hierarchically structured graph data on basis of grouping matrix, and method for training artificial …
SM Ko, S Cho, D Jeong, S Han, M Lee, H Lee
US Patent App. 18/950,349, 2025
2025
Practical and Reproducible Symbolic Music Generation by Large Language Models with Structural Embeddings
S Rhyu, K Yang, S Cho, J Kim, K Lee, M Lee
arXiv preprint arXiv:2407.19900, 2024
2024
On-the-fly Rectification for Robust Large-Vocabulary Topic Inference
M Lee, S Cho, K Dong, D Bindel, D Mimno
International Conference on Machine Learning (ICML), 2021
2021
Robust and Scalable Spectral Topic Modeling for Large Vocabularies
S Cho
2020
Supplementary Materials for Rebalancing Batch Normalization for Exemplar-based Class-Incremental Learning
S Cha, S Cho, D Hwang, S Hong, M Lee, T Moon
Mixed-Curvature Transformers for Graph Representation Learning
S Cho, S Cho, S Park, H Lee, H Lee, M Lee