Mohammad Gheshlaghi Azar
Research Scientist at DeepMind
Verified email at google.com
Title
Cited by
Year
Rainbow: Combining improvements in deep reinforcement learning
M Hessel, J Modayil, H Van Hasselt, T Schaul, G Ostrovski, W Dabney, ...
Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018
Cited by 901, 2018
Noisy networks for exploration
M Fortunato, MG Azar, B Piot, J Menick, I Osband, A Graves, V Mnih, ...
arXiv preprint arXiv:1706.10295, 2017
Cited by 415, 2017
Minimax regret bounds for reinforcement learning
MG Azar, I Osband, R Munos
International Conference on Machine Learning, 263-272, 2017
Cited by 266, 2017
Bootstrap your own latent: A new approach to self-supervised learning
JB Grill, F Strub, F Altché, C Tallec, PH Richemond, E Buchatskaya, ...
arXiv preprint arXiv:2006.07733, 2020
Cited by 196, 2020
Speedy Q-Learning
MG Azar, M Ghavamzadeh, HJ Kappen, R Munos
Advances in Neural Information Processing Systems, 2411-2419, 2011
Cited by 122*, 2011
Dynamic Policy Programming
M Gheshlaghi Azar, V Gomez, HJ Kappen
Journal of Machine Learning Research 13, 3207-3245, 2012
Cited by 100, 2012
The reactor: A fast and sample-efficient actor-critic agent for reinforcement learning
A Gruslys, W Dabney, MG Azar, B Piot, M Bellemare, R Munos
arXiv preprint arXiv:1704.04651, 2017
Cited by 98*, 2017
Minimax PAC bounds on the sample complexity of reinforcement learning with a generative model
MG Azar, R Munos, HJ Kappen
Machine learning 91 (3), 325-349, 2013
Cited by 98, 2013
Sequential transfer in multi-armed bandit with finite set of models
MG Azar, A Lazaric, E Brunskill
Advances in Neural Information Processing Systems, 2220-2228, 2013
Cited by 65, 2013
On the sample complexity of reinforcement learning with a generative model
MG Azar, R Munos, B Kappen
arXiv preprint arXiv:1206.6461, 2012
Cited by 53, 2012
Stochastic optimization of a locally smooth function under correlated bandit feedback
MG Azar, A Lazaric, E Brunskill
arXiv preprint arXiv:1402.0562, 2014
Cited by 50*, 2014
Observe and look further: Achieving consistent performance on atari
T Pohlen, B Piot, T Hester, MG Azar, D Horgan, D Budden, G Barth-Maron, ...
arXiv preprint arXiv:1805.11593, 2018
Cited by 43, 2018
Dynamic policy programming with function approximation
MG Azar, V Gómez, B Kappen
Proceedings of the Fourteenth International Conference on Artificial …, 2011
Cited by 40, 2011
Regret bounds for reinforcement learning with policy advice
MG Azar, A Lazaric, E Brunskill
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2013
Cited by 31, 2013
Meta-learning of sequential strategies
PA Ortega, JX Wang, M Rowland, T Genewein, Z Kurth-Nelson, ...
arXiv preprint arXiv:1905.03030, 2019
Cited by 29, 2019
Neural predictive belief representations
ZD Guo, MG Azar, B Piot, BA Pires, R Munos
arXiv preprint arXiv:1811.06407, 2018
Cited by 27, 2018
Reinforcement learning with a near optimal rate of convergence
MG Azar, R Munos, M Ghavamzadeh, H Kappen
Cited by 24, 2011
A cryptography-based approach for movement decoding
EL Dyer, MG Azar, MG Perich, HL Fernandes, S Naufel, LE Miller, ...
Nature biomedical engineering 1 (12), 967-976, 2017
Cited by 22, 2017
Observe and look further: Achieving consistent performance on atari
T Pohlen, B Piot, T Hester, MG Azar, D Horgan, D Budden, G Barth-Maron, ...
arXiv preprint arXiv:1805.11593, 2018
Cited by 20, 2018
Hindsight credit assignment
A Harutyunyan, W Dabney, T Mesnard, M Azar, B Piot, N Heess, ...
arXiv preprint arXiv:1912.02503, 2019
Cited by 17, 2019
Articles 1–20