Aldo Pacchiano
Broad Institute of MIT and Harvard
Verified email at broadinstitute.org - Homepage
Title
Cited by
Year
Wasserstein fair classification
R Jiang, A Pacchiano, T Stepleton, H Jiang, S Chiappa
Uncertainty in Artificial Intelligence, 862-872, 2020
159 · 2020
Effective diversity in population based reinforcement learning
J Parker-Holder, A Pacchiano, KM Choromanski, SJ Roberts
Advances in Neural Information Processing Systems 33, 18050-18062, 2020
130 · 2020
ES-MAML: Simple Hessian-free meta learning
X Song, W Gao, Y Yang, K Choromanski, A Pacchiano, Y Tang
arXiv preprint arXiv:1910.01215, 2019
117 · 2019
Model selection in contextual stochastic bandit problems
A Pacchiano, M Phan, Y Abbasi Yadkori, A Rao, J Zimmert, T Lattimore, ...
Advances in Neural Information Processing Systems 33, 10328-10337, 2020
80 · 2020
A general approach to fairness with optimal transport
S Chiappa, R Jiang, T Stepleton, A Pacchiano, H Jiang, J Aslanides
Proceedings of the AAAI Conference on Artificial Intelligence 34 (04), 3633-3640, 2020
57 · 2020
Stochastic bandits with linear constraints
A Pacchiano, M Ghavamzadeh, P Bartlett, H Jiang
International Conference on Artificial Intelligence and Statistics, 2827-2835, 2021
45 · 2021
Ready policy one: World building through active learning
P Ball, J Parker-Holder, A Pacchiano, K Choromanski, S Roberts
International Conference on Machine Learning, 591-601, 2020
45 · 2020
From complexity to simplicity: Adaptive ES-active subspaces for blackbox optimization
KM Choromanski, A Pacchiano, J Parker-Holder, Y Tang, V Sindhwani
Advances in Neural Information Processing Systems 32, 2019
45 · 2019
Regret bound balancing and elimination for model selection in bandits and RL
A Pacchiano, C Dann, C Gentile, P Bartlett
arXiv preprint arXiv:2012.13045, 2020
37 · 2020
On approximate Thompson sampling with Langevin algorithms
E Mazumdar, A Pacchiano, Y Ma, M Jordan, P Bartlett
International Conference on Machine Learning, 6797-6807, 2020
37* · 2020
Provably robust blackbox optimization for reinforcement learning
K Choromanski, A Pacchiano, J Parker-Holder, Y Tang, D Jain, Y Yang, ...
Conference on Robot Learning, 683-696, 2020
37* · 2020
Towards tractable optimism in model-based reinforcement learning
A Pacchiano, P Ball, J Parker-Holder, K Choromanski, S Roberts
Uncertainty in Artificial Intelligence, 1413-1423, 2021
36 · 2021
Tactical optimism and pessimism for deep reinforcement learning
T Moskovitz, J Parker-Holder, A Pacchiano, M Arbel, M Jordan
Advances in Neural Information Processing Systems 34, 12849-12863, 2021
35* · 2021
Learning to score behaviors for guided policy optimization
A Pacchiano, J Parker-Holder, Y Tang, K Choromanski, A Choromanska, ...
International Conference on Machine Learning, 7445-7454, 2020
34* · 2020
Dynamic balancing for model selection in bandits and RL
A Cutkosky, C Dann, A Das, C Gentile, A Pacchiano, M Purohit
International Conference on Machine Learning, 2276-2285, 2021
31 · 2021
Online model selection for reinforcement learning with function approximation
J Lee, A Pacchiano, V Muthukumar, W Kong, E Brunskill
International Conference on Artificial Intelligence and Statistics, 3340-3348, 2021
30 · 2021
Dueling RL: Reinforcement Learning with Trajectory Preferences
A Saha, A Pacchiano, J Lee
International Conference on Artificial Intelligence and Statistics, 6263-6289, 2023
28* · 2023
Geometrically coupled Monte Carlo sampling
M Rowland, KM Choromanski, F Chalus, A Pacchiano, T Sarlos, ...
Advances in Neural Information Processing Systems 31, 2018
28 · 2018
Regret balancing for bandit and RL model selection
Y Abbasi-Yadkori, A Pacchiano, M Phan
arXiv preprint arXiv:2006.05491, 2020
25 · 2020
Ridge rider: Finding diverse solutions by following eigenvectors of the Hessian
J Parker-Holder, L Metz, C Resnick, H Hu, A Lerer, A Letcher, ...
Advances in Neural Information Processing Systems 33, 753-765, 2020
24 · 2020
Articles 1–20