Daehyun Ahn
Squeezebits Inc.
Verified email at squeezebits.com
Title / Cited by / Year
OPTIMUS: OPTImized matrix MUltiplication Structure for Transformer neural network accelerator
J Park, H Yoon, D Ahn, J Choi, JJ Kim
Proceedings of Machine Learning and Systems 2, 363-378, 2020
Cited by 41, 2020
Input-splitting of large neural networks for power-efficient accelerator with resistive crossbar memory array
Y Kim, H Kim, D Ahn, JJ Kim
Proceedings of the International Symposium on Low Power Electronics and …, 2018
Cited by 36, 2018
Viterbi-based pruning for sparse matrix with fixed and high index compression ratio
D Lee, D Ahn, T Kim, PI Chuang, JJ Kim
International Conference on Learning Representations, 2018
Cited by 22, 2018
Double Viterbi: Weight encoding for high compression ratio and fast on-chip reconstruction for deep neural network
D Ahn, D Lee, T Kim, JJ Kim
International Conference on Learning Representations, 2019
Cited by 13, 2019
Temporal Dynamic Quantization for Diffusion Models
J So, J Lee, D Ahn, H Kim, E Park
arXiv preprint arXiv:2306.02316, 2023
Cited by 10, 2023
Balancing computation loads and optimizing input vector loading in LSTM accelerators
J Park, W Yi, D Ahn, J Kung, JJ Kim
IEEE Transactions on Computer-Aided Design of Integrated Circuits and …, 2019
Cited by 10, 2019
V-LSTM: An efficient LSTM accelerator using fixed nonzero-ratio viterbi-based pruning
T Kim, D Ahn, D Lee, JJ Kim
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2023
Cited by 7, 2023
Energy-Efficient In-Memory Binary Neural Network Accelerator Design Based on 8T2C SRAM Cell
H Oh, H Kim, D Ahn, J Park, Y Kim, I Lee, JJ Kim
IEEE Solid-State Circuits Letters 5, 70-73, 2022
Cited by 6, 2022
Maximizing Parallel Activation of Word-Lines in MRAM-Based Binary Neural Network Accelerators
D Ahn, H Oh, H Kim, Y Kim, JJ Kim
IEEE Access 9, 141961-141969, 2021
Cited by 6, 2021
Time-step interleaved weight reuse for LSTM neural network computing
N Park, Y Kim, D Ahn, T Kim, JJ Kim
Proceedings of the ACM/IEEE International Symposium on Low Power Electronics …, 2020
Cited by 6, 2020
Squeezing Large-Scale Diffusion Models for Mobile
J Choi, M Kim, D Ahn, T Kim, Y Kim, D Jo, H Jeon, JJ Kim, H Kim
arXiv preprint arXiv:2307.01193, 2023
Cited by 3, 2023
SPRITE: Sparsity-Aware Neural Processing Unit with Constant Probability of Index-Matching
S Ryu, Y Oh, T Kim, D Ahn, JJ Kim
2021 Design, Automation & Test in Europe Conference & Exhibition (DATE), 663-666, 2021
Cited by 3, 2021
Energy-efficient charge sharing-based 8T2C SRAM in-memory accelerator for binary neural networks in 28nm CMOS
H Oh, H Kim, D Ahn, J Park, Y Kim, I Lee, JJ Kim
2021 IEEE Asian Solid-State Circuits Conference (A-SSCC), 1-3, 2021
Cited by 2, 2021
Searching for Robust Binary Neural Networks via Bimodal Parameter Perturbation
D Ahn, H Kim, T Kim, E Park, JJ Kim
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer …, 2023
Cited by 1, 2023
Workload-Balanced Graph Attention Network Accelerator with Top-K Aggregation Candidates
N Park, D Ahn, JJ Kim
Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided …, 2022
Cited by 1, 2022
QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference
T Kim, J Lee, D Ahn, S Kim, J Choi, M Kim, H Kim
arXiv preprint arXiv:2402.10076, 2024
2024
Leveraging Early-Stage Robustness in Diffusion Models for Efficient and High-Quality Image Synthesis
Y Kim, D Jo, H Jeon, T Kim, D Ahn, H Kim, JJ Kim
Thirty-seventh Conference on Neural Information Processing Systems, 2023
2023
Articles 1–17