Soham De
DeepMind
Verified email at google.com - Homepage
Title
Cited by
Year
High-Performance Large-Scale Image Recognition Without Normalization
A Brock, S De, SL Smith, K Simonyan
International Conference on Machine Learning, 2021
502* · 2021
Adversarial robustness through local linearization
C Qin, J Martens, S Gowal, D Krishnan, K Dvijotham, A Fawzi, S De, ...
Advances in Neural Information Processing Systems, 13847-13856, 2019
309 · 2019
Training quantized nets: A deeper understanding
H Li*, S De*, Z Xu, C Studer, H Samet, T Goldstein
Advances in Neural Information Processing Systems, 5813-5823, 2017
230 · 2017
On the Origin of Implicit Regularization in Stochastic Gradient Descent
SL Smith, B Dherin, DGT Barrett, S De
International Conference on Learning Representations, 2021
173 · 2021
The loosening of American culture over 200 years is associated with a creativity–order trade-off
JC Jackson, M Gelfand, S De, A Fox
Nature human behaviour 3 (3), 244-250, 2019
150* · 2019
Automated inference with adaptive batches
S De, A Yadav, D Jacobs, T Goldstein
Artificial Intelligence and Statistics, 1504-1513, 2017
147* · 2017
Batch normalization biases residual blocks towards the identity function in deep networks
S De, S Smith
Advances in Neural Information Processing Systems 33, 2020
146* · 2020
Unlocking High-Accuracy Differentially Private Image Classification through Scale
S De, L Berrada, J Hayes, SL Smith, B Balle
ICML Workshop on Theory and Practice of Differential Privacy, 2022
142 · 2022
Convergence guarantees for RMSProp and ADAM in non-convex optimization and an empirical comparison to Nesterov acceleration
S De, A Mukherjee, E Ullah
ICML Workshop on Modern Trends in Nonconvex Optimization for Machine Learning, 2018
138* · 2018
Characterizing signal propagation to close the performance gap in unnormalized ResNets
A Brock, S De, SL Smith
International Conference on Learning Representations, 2021
120 · 2021
On the Generalization Benefit of Noise in Stochastic Gradient Descent
S Smith, E Elsen, S De
International Conference on Machine Learning, 9058-9067, 2020
107* · 2020
The impact of neural network overparameterization on gradient confusion and stochastic gradient descent
KA Sankararaman*, S De*, Z Xu, WR Huang, T Goldstein
International Conference on Machine Learning, 8469-8479, 2020
106* · 2020
BYOL works even without batch statistics
PH Richemond, JB Grill, F Altché, C Tallec, F Strub, A Brock, S Smith, ...
NeurIPS Workshop on Self-Supervised Learning: Theory and Practice, 2020
106* · 2020
Resurrecting Recurrent Neural Networks for Long Sequences
A Orvieto, SL Smith, A Gu, A Fernando, C Gulcehre, R Pascanu, S De
arXiv preprint arXiv:2303.06349, 2023
92 · 2023
Understanding norm change: An evolutionary game-theoretic approach
S De, DS Nau, MJ Gelfand
Proceedings of the 16th Conference on Autonomous Agents and MultiAgent …, 2017
71* · 2017
Layer-specific adaptive learning rates for deep networks
B Singh, S De, Y Zhang, T Goldstein, G Taylor
2015 IEEE 14th International Conference on Machine Learning and Applications …, 2015
64 · 2015
Efficient distributed SGD with variance reduction
S De, T Goldstein
2016 IEEE International Conference on Data Mining (ICDM), 2016
56* · 2016
Efficient neural network verification with exactness characterization
KD Dvijotham, R Stanforth, S Gowal, C Qin, S De, P Kohli
Uncertainty in Artificial Intelligence, 497-507, 2019
45 · 2019
The Inevitability of Ethnocentrism Revisited: Ethnocentrism Diminishes As Mobility Increases
S De, MJ Gelfand, D Nau, P Roos
Scientific reports 5, 2015
45 · 2015
An Empirical Study of ADMM for Nonconvex Problems
Z Xu, S De, M Figueiredo, C Studer, T Goldstein
NIPS 2016 Workshop on Nonconvex Optimization for Machine Learning: Theory …, 2016
41 · 2016