Hanlin Tang
Title · Cited by · Year
D²: Decentralized Training over Decentralized Data
H Tang, X Lian, M Yan, C Zhang, J Liu
International Conference on Machine Learning, 4848-4856, 2018
Cited by 372 · 2018
Communication compression for decentralized training
H Tang, S Gan, C Zhang, T Zhang, J Liu
Advances in Neural Information Processing Systems 31, 2018
Cited by 289 · 2018
DoubleSqueeze: Parallel stochastic gradient descent with double-pass error-compensated compression
H Tang, C Yu, X Lian, T Zhang, J Liu
International Conference on Machine Learning, 6155-6165, 2019
Cited by 240 · 2019
Central server free federated learning over single-sided trust social networks
C He, C Tan, H Tang, S Qiu, J Liu
arXiv preprint arXiv:1910.04956, 2019
Cited by 85 · 2019
1-bit Adam: Communication efficient large-scale training with Adam's convergence speed
H Tang, S Gan, AA Awan, S Rajbhandari, C Li, X Lian, J Liu, C Zhang, ...
International Conference on Machine Learning, 10118-10129, 2021
Cited by 70 · 2021
Distributed learning over unreliable networks
C Yu, H Tang, C Renggli, S Kassing, A Singla, D Alistarh, C Zhang, J Liu
International Conference on Machine Learning, 7202-7212, 2019
Cited by 65 · 2019
DeepSqueeze: Decentralization meets error-compensated compression
H Tang, X Lian, S Qiu, L Yuan, C Zhang, T Zhang, J Liu
arXiv preprint arXiv:1907.07346, 2019
Cited by 40 · 2019
1-bit LAMB: Communication efficient large-scale large-batch training with LAMB's convergence speed
C Li, AA Awan, H Tang, S Rajbhandari, Y He
2022 IEEE 29th International Conference on High Performance Computing, Data …, 2022
Cited by 26 · 2022
Decentralized online learning: Take benefits from others' data without sharing your own to track global trend
Y Zhao, C Yu, P Zhao, H Tang, S Qiu, J Liu
arXiv preprint arXiv:1901.10593, 2019
Cited by 26 · 2019
ErrorCompensatedX: Error compensation for variance reduced algorithms
H Tang, Y Li, J Liu, M Yan
Advances in Neural Information Processing Systems 34, 18102-18113, 2021
Cited by 12 · 2021
MKQ-BERT: Quantized BERT with 4-bit weights and activations
H Tang, X Zhang, K Liu, J Zhu, Z Kang
arXiv preprint arXiv:2203.13483, 2022
Cited by 9 · 2022
APMSqueeze: A communication efficient Adam-preconditioned momentum SGD algorithm
H Tang, S Gan, S Rajbhandari, X Lian, J Liu, Y He, C Zhang
arXiv preprint arXiv:2008.11343, 2020
Cited by 6 · 2020
EasyQuant: An efficient data-free quantization algorithm for LLMs
H Tang, Y Sun, D Wu, K Liu, J Zhu, Z Kang
arXiv preprint arXiv:2403.02775, 2024
Cited by 1 · 2024
PASTO: Strategic Parameter Optimization in Recommendation Systems--Probabilistic is Better than Deterministic
W Ding, H Tang, J Feng, L Yuan, S Yang, G Yang, J Zheng, J Wang, Q Su, ...
arXiv preprint arXiv:2108.09076, 2021
2021
Communication Efficient Machine Learning
H Tang
University of Rochester, 2021
2021
Systems/Subsystems
S Rajbhandari, AVN Jalajakumari, H Chun, G Faulkner, K Cameron, ...