Jung-Woo Ha
Research Fellow @ NAVER AI Lab, Head of Future AI Center @ NAVER, Adj. Prof. @ HKUST
Verified email at navercorp.com
Title · Cited by · Year
StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
Y Choi, M Choi, M Kim, JW Ha, S Kim, J Choo
CVPR 2018, 2018
Cited by 4227 · 2018
StarGAN v2: Diverse Image Synthesis for Multiple Domains
Y Choi, Y Uh, J Yoo, JW Ha
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
Cited by 1629 · 2020
Hadamard product for low-rank bilinear pooling
JH Kim, KW On, J Kim, JW Ha, BT Zhang
ICLR 2017, 2017
Cited by 815 · 2017
Dual attention networks for multimodal reasoning and matching
H Nam, JW Ha, J Kim
CVPR 2017, 2017
Cited by 789 · 2017
Overcoming Catastrophic Forgetting by Incremental Moment Matching
SW Lee, JW Kim, JH Jeon, JW Ha, BT Zhang
NIPS 2017, 2017
Cited by 650 · 2017
Phase-Aware Speech Enhancement with Deep Complex U-Net
HS Choi, J Kim, J Huh, A Kim, JW Ha, K Lee
ICLR 2019, 2019
Cited by 368 · 2019
Multimodal residual learning for visual QA
JH Kim, SW Lee, D Kwak, MO Heo, J Kim, JW Ha, BT Zhang
Advances in Neural Information Processing Systems, 361-369, 2016
Cited by 364 · 2016
Photorealistic Style Transfer via Wavelet Transforms
J Yoo, Y Uh, S Chun, B Kang, JW Ha
arXiv preprint arXiv:1903.09760 (ICCV 2019), 2019
Cited by 355 · 2019
Rainbow Memory: Continual Learning with a Memory of Diverse Samples
J Bang, H Kim, YJ Yoo, JW Ha, J Choi
arXiv preprint arXiv:2103.17230 (CVPR 2021), 2021
Cited by 260 · 2021
KLUE: Korean Language Understanding Evaluation
S Park, J Moon, S Kim, WI Cho, J Han, J Park, C Song, J Kim, Y Song, ...
arXiv preprint arXiv:2105.09680 (NeurIPS 2021 Datasets and Benchmarks Track), 2021
Cited by 218 · 2021
AdamP: Slowing down the weight norm increase in momentum-based optimizers
B Heo, S Chun, SJ Oh, D Han, S Yun, Y Uh, JW Ha
arXiv preprint arXiv:2006.08217 (ICLR 2021), 2021
Cited by 169* · 2021
DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder
X Gu, K Cho, JW Ha, S Kim
arXiv preprint arXiv:1805.12352 (ICLR 2019), 2019
Cited by 162 · 2019
Generating Videos with Dynamics-aware Implicit Generative Adversarial Networks
S Yu, J Tack, S Mo, H Kim, J Kim, JW Ha, J Shin
International Conference on Learning Representations (ICLR 2022), 2022
Cited by 154 · 2022
NSML: Meet the MLaaS platform with a real-world case study
H Kim, M Kim, D Seo, J Kim, H Park, S Park, H Jo, KH Kim, Y Yang, Y Kim, ...
arXiv preprint arXiv:1810.09957, 2018
Cited by 102 · 2018
What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers
B Kim, HS Kim, SW Lee, G Lee, D Kwak, DH Jeon, S Park, S Kim, S Kim, ...
arXiv preprint arXiv:2109.04650 (EMNLP 2021), 2021
Cited by 90 · 2021
Dataset Condensation via Efficient Synthetic-Data Parameterization
JH Kim, J Kim, SJ Oh, S Yun, H Song, J Jeong, JW Ha, HO Song
arXiv preprint arXiv:2205.14959 (ICML 2022), 2022
Cited by 87 · 2022
Reinforcement learning based recommender system using biclustering technique
S Choi, H Ha, U Hwang, C Kim, JW Ha, S Yoon
arXiv preprint arXiv:1801.05532, 2018
Cited by 83 · 2018
Self-supervised Auxiliary Learning with Meta-paths for Heterogeneous Graphs
D Hwang, J Park, S Kwon, KM Kim, JW Ha, HJ Kim
arXiv preprint arXiv:2007.08294 (NeurIPS 2020), 2020
Cited by 78 · 2020
DialogBERT: Discourse-Aware Response Generation via Learning to Recover and Rank Utterances
X Gu, KM Yoo, JW Ha
arXiv preprint arXiv:2012.01775 (AAAI 2021), 2021
Cited by 77 · 2021
NSML: A Machine Learning Platform That Enables You to Focus on Your Models
N Sung, M Kim, H Jo, Y Yang, J Kim, L Lausen, Y Kim, G Lee, D Kwak, ...
arXiv preprint arXiv:1712.05902, 2017
Cited by 76 · 2017
Articles 1–20