Jang Kangwook
FitHuBERT: Going Thinner and Deeper for Knowledge Distillation of Speech Self-Supervised Learning
Y Lee*, K Jang*, J Goo, Y Jung, H Kim
ISCA Interspeech 2022, 3588-3592, 2022
Cited by 55 · Year 2022
Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation
K Jang*, S Kim*, SY Yun, H Kim
ISCA Interspeech 2023, 316-320, 2023
Cited by 5 · Year 2023
Learning Video Temporal Dynamics with Cross-Modal Attention for Robust Audio-Visual Speech Recognition
S Kim*, K Jang*, S Bae, H Kim, SY Yun
IEEE SLT Workshop 2024, 457-464, 2024
Cited by 3 · Year 2024
One-Class Learning with Adaptive Centroid Shift for Audio Deepfake Detection
HM Kim, K Jang, H Kim
ISCA Interspeech 2024, 4853-4857, 2024
Cited by 3 · Year 2024
STaR: Distilling Speech Temporal Relation for Lightweight Speech Self-Supervised Learning Models
K Jang, S Kim, H Kim
IEEE ICASSP 2024, 10721-10725, 2024
Cited by 1 · Year 2024
Improving Cross-Lingual Phonetic Representation of Low-Resource Languages Through Language Similarity Analysis
M Kim, K Jang, H Kim
IEEE ICASSP 2025, arXiv: 2501.06810, 2025
Year 2025
MoHAVE: Mixture of Hierarchical Audio-Visual Experts for Robust Speech Recognition
S Kim, K Jang, S Bae, S Cho, SY Yun
arXiv preprint, 2025
Year 2025
Multi-Task Corrupted Prediction for Learning Robust Audio-Visual Speech Representation
S Kim, S Cho, S Bae, K Jang, SY Yun
ICLR 2025, 2025
Year 2025