Can Karakus
Senior Applied Scientist, Amazon Web Services
Verified email at amazon.com
Title
Cited by
Year
Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification and Local Computations
D Basu, D Data, C Karakus, S Diggavi
Advances in Neural Information Processing Systems, 14668-14679, 2019
Cited by 381 · 2019
Straggler mitigation in distributed optimization through data encoding
C Karakus, Y Sun, S Diggavi, W Yin
Advances in Neural Information Processing Systems, 5434-5442, 2017
Cited by 157 · 2017
Redundancy techniques for straggler mitigation in distributed optimization and learning
C Karakus, Y Sun, S Diggavi, W Yin
Journal of Machine Learning Research 20 (72), 1-47, 2019
Cited by 63 · 2019
Encoded distributed optimization
C Karakus, Y Sun, S Diggavi
2017 IEEE international symposium on information theory (ISIT), 2890-2894, 2017
Cited by 56 · 2017
Shifting network tomography toward a practical goal
D Ghita, C Karakus, K Argyraki, P Thiran
Proceedings of the Seventh COnference on emerging Networking EXperiments and …, 2011
Cited by 55 · 2011
Opportunistic scheduling for full-duplex uplink-downlink networks
C Karakus, S Diggavi
2015 IEEE International Symposium on Information Theory (ISIT), 1019-1023, 2015
Cited by 27 · 2015
Privacy-utility trade-off of linear regression under random projections and additive noise
M Showkatbakhsh, C Karakus, S Diggavi
2018 IEEE International Symposium on Information Theory (ISIT), 186-190, 2018
Cited by 22 · 2018
Enhancing multiuser MIMO through opportunistic D2D cooperation
C Karakus, S Diggavi
IEEE Transactions on Wireless Communications 16 (9), 5616-5629, 2017
Cited by 22 · 2017
Reference signals and link adaptation for massive MIMO
JB Soriaga, PK Vitthaladevuni, C Karakus, JI Tingfang
US Patent 10,505,597, 2019
Cited by 18 · 2019
Herring: Rethinking the parameter server at scale for the cloud
I Thangakrishnan, D Cavdar, C Karakus, P Ghai, Y Selivonchyk, C Pruce
SC20: International Conference for High Performance Computing, Networking …, 2020
Cited by 17 · 2020
Amazon sagemaker model parallelism: A general and flexible framework for large model training
C Karakus, R Huilgol, F Wu, A Subramanian, C Daniel, D Cavdar, T Xu, ...
arXiv preprint arXiv:2111.05972, 2021
Cited by 16 · 2021
Gaussian interference channel with intermittent feedback
C Karakus, IH Wang, S Diggavi
IEEE Transactions on Information Theory 61 (9), 4663-4699, 2015
Cited by 15 · 2015
Rate splitting is approximately optimal for fading Gaussian interference channels
J Sebastian, C Karakus, S Diggavi, IH Wang
2015 53rd Annual Allerton Conference on Communication, Control, and …, 2015
Cited by 12 · 2015
Approximate capacity of fast fading interference channels with no instantaneous CSIT
J Sebastian, C Karakus, S Diggavi
IEEE Transactions on Communications 66 (12), 6015-6027, 2018
Cited by 7 · 2018
Differentially private consensus-based distributed optimization
M Showkatbakhsh, C Karakus, S Diggavi
arXiv preprint arXiv:1903.07792, 2019
Cited by 5 · 2019
Densifying assumed-sparse tensors: Improving memory efficiency and MPI collective performance during tensor accumulation for parallelized training of neural machine translation …
D Cavdar, V Codreanu, C Karakus, JA Lockman, D Podareanu, ...
High Performance Computing: 34th International Conference, ISC High …, 2019
Cited by 4 · 2019
Approximately achieving the feedback interference channel capacity with point-to-point codes
J Sebastian, C Karakus, S Diggavi
2016 IEEE International Symposium on Information Theory (ISIT), 715-719, 2016
Cited by 3 · 2016
Interference channel with intermittent feedback
C Karakus, IH Wang, S Diggavi
2013 IEEE International Symposium on Information Theory, 26-30, 2013
Cited by 3 · 2013
An Achievable Rate Region for Gaussian Interference Channel with Intermittent Feedback
C Karakus, I Wang, S Diggavi
Communication, Control, and Computing (Allerton), 2013 51st Annual Allerton …, 2013
Cited by 2 · 2013
MADA: Meta-Adaptive Optimizers through hyper-gradient Descent
K Ozkara, C Karakus, P Raman, M Hong, S Sabach, B Kveton, V Cevher
arXiv preprint arXiv:2401.08893, 2024
2024
Articles 1–20