Aojun Zhou
SenseTime
Verified email at sensetime.com
Title
Cited by
Year
Incremental network quantization: Towards lossless CNNs with low-precision weights
A Zhou, A Yao, Y Guo, L Xu, Y Chen
ICLR2017, 2017
803 · 2017
Adversarial Robustness vs Model Compression, or Both?
S Ye, K Xu, S Liu, H Cheng, JH Lambrechts, H Zhang, A Zhou, K Ma, ...
ICCV2019, 2019
76* · 2019
Deep neural network compression with single and multiple level quantization
Y Xu, Y Wang, A Zhou, W Lin, H Xiong
AAAI2018 32 (1), 2018
72 · 2018
Explicit loss-error-aware quantization for low-bit deep neural networks
A Zhou, A Yao, K Wang, Y Chen
CVPR2018, 9426-9435, 2018
68 · 2018
Incorporating convolution designs into visual transformers
K Yuan, S Guo, Z Liu, A Zhou, F Yu, W Wu
ICCV2021, 2021
45 · 2021
HBONet: Harmonious Bottleneck on Two Orthogonal Dimensions
A Zhou*, D Li*, A Yao (* equal contribution)
ICCV2019, 2019
23* · 2019
Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch
A Zhou, Y Ma, J Zhu, J Liu, Z Zhang, K Yuan, W Sun, H Li
ICLR2021, 2021
21 · 2021
Deeply-supervised knowledge synergy
D Sun, A Yao, A Zhou, H Zhao
CVPR2019, 6997-7006, 2019
19 · 2019
Group Fisher Pruning for Practical Network Compression
L Liu, S Zhang, Z Kuang, A Zhou, JH Xue, X Wang, Y Chen, W Yang, ...
ICML2021, 7021-7032, 2021
9 · 2021
Towards Improving Generalization of Deep Networks via Consistent Normalization
A Zhou*, Y Ma*, Y Li, X Zhang, P Luo
https://arxiv.org/abs/1909.00182, 2019
3 · 2019
Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks
K Yuan, Q Li, D Chen, A Zhou, J Yan
ICCV2021, 2020
1 · 2020
Pyramid Fusion Transformer for Semantic Segmentation
Z Qin, J Liu, X Zhang, M Tian, A Zhou, S Yi, H Li
arXiv preprint arXiv:2201.04019, 2022
2022
DominoSearch: Find layer-wise fine-grained N:M sparse schemes from dense neural networks
W Sun*, A Zhou*, S Stuijk, R Wijnhoven, AO Nelson, H Corporaal
NeurIPS2021 34, 2021
2021
Methods, systems, articles of manufacture and apparatus to train a neural network
A Yao, D Sun, A Zhou, H Zhao, Y Chen
US Patent App. 16/981,018, 2021
2021
Loss-error-aware quantization of a low-bit neural network
A Yao, A Zhou, K Wang, H Zhao, Y Chen
US Patent App. 16/982,441, 2021
2021
Differentiable Dynamic Wirings for Neural Networks
K Yuan, Q Li, S Guo, D Chen, A Zhou, F Yu, Z Liu
Proceedings of the IEEE/CVF International Conference on Computer Vision, 327-336, 2021
2021
Incremental network quantization
A Yao, A Zhou, Y Guo, L Xu, Y Chen
US Patent App. 16/636,799, 2020
2020
Scale Calibrated Training: Improving Generalization of Deep Networks via Scale-Specific Normalization
Z Yu, A Zhou, Y Ma, Y Li, X Zhang, P Luo
arXiv preprint arXiv:1909.00182, 2019
2019
SnapQuant: A Probabilistic and Nested Parameterization for Binary Networks
K Wang, H Zhao, A Yao, A Zhou, D Sun, Y Chen
2018
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
A Zhou, A Yao, Y Guo, L Xu, Y Chen
2017
Articles 1–20