Dongsoo Lee
NAVER Cloud
Verified email at navercorp.com
Title · Cited by · Year
A Scalable Multi-TeraOPS Deep Learning Processor Core for AI Training and Inference
B Fleischer, S Shukla, M Ziegler, J Silberman, J Oh, V Srinivasan, J Choi, ...
2018 IEEE Symposium on VLSI Circuits, 35-36, 2018
154 · 2018
LUT-GEMM: Quantized matrix multiplication based on LUTs for efficient inference in large-scale generative language models
G Park, M Kim, S Lee, J Kim, B Kwon, SJ Kwon, B Kim, Y Lee, D Lee
The Twelfth International Conference on Learning Representations, 2023
103 · 2023
High-performance low-energy STT MRAM based on balanced write scheme
D Lee, SK Gupta, K Roy
Proceedings of the 2012 ACM/IEEE International Symposium on Low Power …, 2012
74 · 2012
Memory-efficient fine-tuning of compressed large language models via sub-4-bit integer quantization
J Kim, JH Lee, S Kim, J Park, KM Yoo, SJ Kwon, D Lee
Advances in Neural Information Processing Systems 36, 2024
73 · 2024
DFX: A Low-latency Multi-FPGA Appliance for Accelerating Transformer-based Text Generation
S Hong, S Moon, J Kim, S Lee, M Kim, D Lee, JY Kim
2022 55th IEEE/ACM International Symposium on Microarchitecture (MICRO), 616-630, 2022
56 · 2022
Structured Compression by Weight Encryption for Unstructured Pruning and Quantization
SJ Kwon, D Lee, B Kim, P Kapoor, B Park, GY Wei
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
48 · 2020
Soft-error-resilient FPGAs using built-in 2-D Hamming product code
SP Park, D Lee, K Roy
IEEE Transactions on Very Large Scale Integration (VLSI) Systems 20 (2), 248-256, 2011
47 · 2011
Maximum Likelihood Training of Implicit Nonlinear Diffusion Model
D Kim, B Na, SJ Kwon, D Lee, W Kang, I Moon
Advances in Neural Information Processing Systems 35, 32270-32284, 2022
45 · 2022
A review of on-device fully neural end-to-end automatic speech recognition algorithms
C Kim, D Gowda, D Lee, J Kim, A Kumar, S Kim, A Garg, C Han
2020 54th Asilomar Conference on Signals, Systems, and Computers, 277-283, 2020
39 · 2020
BiQGEMM: matrix multiplication with lookup table for binary-coding-based quantized DNNs
Y Jeon, B Park, SJ Kwon, B Kim, J Yun, D Lee
SC20: International Conference for High Performance Computing, Networking …, 2020
34 · 2020
A scalable multi-TeraOPS core for AI training and inference
S Shukla, B Fleischer, M Ziegler, J Silberman, J Oh, V Srinivasan, J Choi, ...
IEEE Solid-State Circuits Letters 1 (12), 217-220, 2019
34 · 2019
AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models
SJ Kwon, J Kim, J Bae, KM Yoo, JH Kim, B Park, B Kim, JW Ha, N Sung, ...
arXiv preprint arXiv:2210.03858, 2022
33 · 2022
R-MRAM: A ROM-Embedded STT MRAM Cache
D Lee, X Fong, K Roy
IEEE Electron Device Letters 34 (10), 1256-1258, 2013
33 · 2013
Extremely Low Bit Transformer Quantization for On-Device Neural Machine Translation
I Chung, B Kim, Y Choi, SJ Kwon, Y Jeon, B Park, S Kim, D Lee
arXiv preprint arXiv:2009.07453, 2020
29 · 2020
Area efficient ROM-embedded SRAM cache
D Lee, K Roy
IEEE Transactions on Very Large Scale Integration (VLSI) Systems 21 (9 …, 2013
26 · 2013
Viterbi-based efficient test data compression
D Lee, K Roy
IEEE Transactions on Computer-Aided Design of Integrated Circuits and …, 2012
25 · 2012
FlexRound: Learnable rounding based on element-wise division for post-training quantization
JH Lee, J Kim, SJ Kwon, D Lee
International Conference on Machine Learning, 18913-18939, 2023
24 · 2023
Decompression apparatus and control method thereof
D Lee, K Sejung, B Kim, P Kapoor, P Baeseong
US Patent 10,917,121, 2021
24 · 2021
Energy-delay optimization of the STT MRAM write operation under process variations
D Lee, K Roy
IEEE Transactions on Nanotechnology 13 (4), 714-723, 2014
24 · 2014
DeepTwist: Learning Model Compression via Occasional Weight Distortion
D Lee, P Kapoor, B Kim
arXiv preprint arXiv:1810.12823, 2018
23 · 2018