Wu Lin
Vector Institute
Verified email at vectorinstitute.ai - Homepage
Title
Cited by
Year
Fast and scalable bayesian deep learning by weight-perturbation in adam
M Khan, D Nielsen, V Tangkaratt, W Lin, Y Gal, A Srivastava
International conference on machine learning, 2611-2620, 2018
275 · 2018
Conjugate-computation variational inference: Converting variational inference in non-conjugate models to inferences in conjugate models
M Khan, W Lin
Artificial Intelligence and Statistics, 878-887, 2017
144 · 2017
Fast and simple natural-gradient variational inference with mixture of exponential-family approximations
W Lin, ME Khan, M Schmidt
International Conference on Machine Learning, 3992-4002, 2019
59 · 2019
Variational message passing with structured inference networks
W Lin, N Hubacher, ME Khan
arXiv preprint arXiv:1803.05589, 2018
46 · 2018
Faster stochastic variational inference using proximal-gradient methods with general divergence functions
ME Khan, R Babanezhad, W Lin, M Schmidt, M Sugiyama
arXiv preprint arXiv:1511.00146, 2015
44 · 2015
Handling the positive-definite constraint in the Bayesian learning rule
W Lin, M Schmidt, ME Khan
International conference on machine learning, 6116-6126, 2020
29 · 2020
Tractable structured natural-gradient descent using local parameterizations
W Lin, F Nielsen, KM Emtiyaz, M Schmidt
International Conference on Machine Learning, 6680-6691, 2021
27 · 2021
Stein's lemma for the reparameterization trick with exponential family mixtures
W Lin, ME Khan, M Schmidt
arXiv preprint arXiv:1910.13398, 2019
19 · 2019
Variational adaptive-Newton method for explorative learning
ME Khan, W Lin, V Tangkaratt, Z Liu, D Nielsen
arXiv preprint arXiv:1711.05560, 2017
18 · 2017
Convergence of proximal-gradient stochastic variational inference under non-decreasing step-size sequence
ME Khan, R Babanezhad, W Lin, M Schmidt, M Sugiyama
J. Comp. Neurol 319, 359-386, 2015
8 · 2015
Structured second-order methods via natural gradient descent
W Lin, F Nielsen, ME Khan, M Schmidt
arXiv preprint arXiv:2107.10884, 2021
7 · 2021
WaterlooClarke: TREC 2015 Total Recall Track.
H Zhang, W Lin, Y Wang, CLA Clarke, MD Smucker
TREC, 2015
7 · 2015
Simplifying Momentum-based Positive-definite Submanifold Optimization with Applications to Deep Learning
W Lin, V Duruisseaux, M Leok, F Nielsen, ME Khan, M Schmidt
arXiv preprint arXiv:2302.09738, 2023
6* · 2023
Natural-gradient stochastic variational inference for non-conjugate structured variational autoencoder
W Lin, ME Khan, N Hubacher, D Nielsen
International Conference on Machine Learning, 2017
2 · 2017
Can We Remove the Square-Root in Adaptive Gradient Methods? A Second-Order Perspective
W Lin, F Dangel, R Eschenhagen, J Bae, RE Turner, A Makhzani
arXiv preprint arXiv:2402.03496, 2024
2024
Structured Inverse-Free Natural Gradient: Memory-Efficient & Numerically-Stable KFAC for Large Neural Nets
W Lin, F Dangel, R Eschenhagen, K Neklyudov, A Kristiadi, RE Turner, ...
arXiv preprint arXiv:2312.05705, 2023
2023
Computationally efficient geometric methods for optimization and inference in machine learning
W Lin
University of British Columbia, 2023
2023
Practical Structured Riemannian Optimization with Momentum by using Generalized Normal Coordinates
W Lin, V Duruisseaux, M Leok, F Nielsen, ME Khan, M Schmidt
NeurIPS 2022 Workshop on Symmetry and Geometry in Neural Representations, 2022
2022
Introduction to Natural-gradient Descent: Part IV
W Lin, F Nielsen, ME Khan, M Schmidt
2021
Introduction to Natural-gradient Descent: Part I
W Lin, F Nielsen, ME Khan, M Schmidt
https://yorkerlin.github.io/posts/2021/09/Geomopt01/, 2021
2021
Articles 1–20