Toshihiko Yamasaki
Department of Information and Communication Engineering, The University of Tokyo
Verified email at cvm.t.u-tokyo.ac.jp
Title
Cited by
Year
Sketch-based manga retrieval using manga109 dataset
Y Matsui, K Ito, Y Aramaki, A Fujimoto, T Ogawa, T Yamasaki, K Aizawa
Multimedia tools and applications 76, 21811-21838, 2017
Cited by: 1022, Year: 2017
Joint optimization framework for learning with noisy labels
D Tanaka, D Ikami, T Yamasaki, K Aizawa
Proceedings of the IEEE conference on computer vision and pattern …, 2018
Cited by: 735, Year: 2018
Cross-domain weakly-supervised object detection through progressive domain adaptation
N Inoue, R Furuta, T Yamasaki, K Aizawa
Proceedings of the IEEE conference on computer vision and pattern …, 2018
Cited by: 542, Year: 2018
Efficient retrieval of life log based on context and content
K Aizawa, D Tancharoen, S Kawasaki, T Yamasaki
Proceedings of the the 1st ACM workshop on Continuous archival and retrieval …, 2004
Cited by: 185, Year: 2004
Manga109 dataset and creation of metadata
A Fujimoto, T Ogawa, K Yamamoto, Y Matsui, T Yamasaki, K Aizawa
Proceedings of the 1st international workshop on comics analysis, processing …, 2016
Cited by: 145, Year: 2016
Detecting deepfakes with self-blended images
K Shiohara, T Yamasaki
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by: 144, Year: 2022
Foodlog: Capture, analysis and retrieval of personal food images via web
K Kitamura, T Yamasaki, K Aizawa
Proceedings of the ACM multimedia 2009 workshop on Multimedia for cooking …, 2009
Cited by: 119, Year: 2009
Food log by analyzing food images
K Kitamura, T Yamasaki, K Aizawa
Proceedings of the 16th ACM international conference on Multimedia, 999-1000, 2008
Cited by: 94, Year: 2008
Self-supervised video representation learning using inter-intra contrastive framework
L Tao, X Wang, T Yamasaki
Proceedings of the 28th ACM International Conference on Multimedia, 2193-2201, 2020
Cited by: 93, Year: 2020
Affective audio-visual words and latent topic driving model for realizing movie affective scene classification
G Irie, T Satou, A Kojima, T Yamasaki, K Aizawa
IEEE Transactions on Multimedia 12 (6), 523-535, 2010
Cited by: 93, Year: 2010
Practical experience recording and indexing of life log video
D Tancharoen, T Yamasaki, K Aizawa
Proceedings of the 2nd ACM workshop on Continuous archival and retrieval of …, 2005
Cited by: 92, Year: 2005
Pixelrl: Fully convolutional network with reinforcement learning for image processing
R Furuta, N Inoue, T Yamasaki
IEEE Transactions on Multimedia 22 (7), 1704-1719, 2019
Cited by: 86, Year: 2019
Image-based indoor positioning system: fast image matching using omnidirectional panoramic images
H Kawaji, K Hatada, T Yamasaki, K Aizawa
Proceedings of the 1st ACM international workshop on Multimodal pervasive …, 2010
Cited by: 86, Year: 2010
Mask-SLAM: Robust feature-based monocular SLAM by masking using semantic segmentation
M Kaneko, K Iwami, T Ogawa, T Yamasaki, K Aizawa
Proceedings of the IEEE conference on computer vision and pattern …, 2018
Cited by: 82, Year: 2018
Object detection for comics using manga109 annotations
T Ogawa, A Otsubo, R Narita, Y Matsui, T Yamasaki, K Aizawa
arXiv preprint arXiv:1803.08670, 2018
Cited by: 77, Year: 2018
Analog soft-pattern-matching classifier using floating-gate MOS technology
T Yamasaki, T Shibata
IEEE Transactions on Neural Networks 14 (5), 1257-1265, 2003
Cited by: 76, Year: 2003
Efficient optimization of convolutional neural networks using particle swarm optimization
T Yamasaki, T Honma, K Aizawa
2017 IEEE third international conference on multimedia big data (BigMM), 70-73, 2017
Cited by: 75, Year: 2017
Multi-label fashion image classification with minimal human supervision
N Inoue, E Simo-Serra, T Yamasaki, H Ishikawa
Proceedings of the IEEE international conference on computer vision …, 2017
Cited by: 69, Year: 2017
Evaluation of video summarization for a large number of cameras in ubiquitous home
GC De Silva, T Yamasaki, K Aizawa
Proceedings of the 13th annual ACM international conference on Multimedia …, 2005
Cited by: 65, Year: 2005
Time-varying mesh compression using an extended block matching algorithm
SR Han, T Yamasaki, K Aizawa
IEEE Transactions on Circuits and Systems for Video Technology 17 (11), 1506 …, 2007
Cited by: 63, Year: 2007