Hanlin Tang
Title · Cited by · Year
D²: Decentralized Training over Decentralized Data
H Tang, X Lian, M Yan, C Zhang, J Liu
International Conference on Machine Learning, 4848-4856, 2018
Cited by 256 · 2018
Communication compression for decentralized training
H Tang, S Gan, C Zhang, T Zhang, J Liu
Advances in Neural Information Processing Systems 31, 2018
Cited by 209 · 2018
DoubleSqueeze: Parallel stochastic gradient descent with double-pass error-compensated compression
H Tang, C Yu, X Lian, T Zhang, J Liu
International Conference on Machine Learning, 6155-6165, 2019
Cited by 153 · 2019
Central server free federated learning over single-sided trust social networks
C He, C Tan, H Tang, S Qiu, J Liu
arXiv preprint arXiv:1910.04956, 2019
Cited by 52 · 2019
Distributed learning over unreliable networks
C Yu, H Tang, C Renggli, S Kassing, A Singla, D Alistarh, C Zhang, J Liu
International Conference on Machine Learning, 7202-7212, 2019
Cited by 43 · 2019
DeepSqueeze: Parallel stochastic gradient descent with double-pass error-compensated compression
H Tang, X Lian, S Qiu, L Yuan, C Zhang, T Zhang, J Liu
arXiv preprint arXiv:1907.07346, 2019
Cited by 24 · 2019
Decentralized online learning: Take benefits from others' data without sharing your own to track global trend
Y Zhao, C Yu, P Zhao, H Tang, S Qiu, J Liu
arXiv preprint arXiv:1901.10593, 2019
Cited by 23 · 2019
1-bit Adam: Communication efficient large-scale training with Adam's convergence speed
H Tang, S Gan, AA Awan, S Rajbhandari, C Li, X Lian, J Liu, C Zhang, ...
International Conference on Machine Learning, 10118-10129, 2021
Cited by 21 · 2021
APMSqueeze: A Communication Efficient Adam-Preconditioned Momentum SGD Algorithm
H Tang, S Gan, S Rajbhandari, X Lian, J Liu, Y He, C Zhang
arXiv preprint arXiv:2008.11343, 2020
Cited by 2 · 2020
ErrorCompensatedX: error compensation for variance reduced algorithms
H Tang, Y Li, J Liu, M Yan
2021
Systems/Subsystems
S Rajbhandari, AVN Jalajakumari, H Chun, G Faulkner, K Cameron, ...