Takashi Ishida
石田 隆

I am a Research Scientist in the Imperfect Information Learning Team at RIKEN AIP. I am also a Lecturer at The University of Tokyo, where I co-run the Machine Learning and Statistical Data Analysis Lab (Sugiyama-Yokoya-Ishida Lab). My research interest is machine learning; for example, I have worked on weakly supervised learning (such as complementary-label learning), methods for effectively training neural networks (such as flooding), and estimating the upper bound of prediction performance (such as Bayes error estimation).

I earned my PhD from the University of Tokyo in 2021, advised by Prof. Masashi Sugiyama. During my PhD, I completed an Applied Scientist internship, and I was fortunate to become a PhD Fellow at Google and a Research Fellow at JSPS (DC2). Before that, I spent several years in the finance industry as an Assistant Manager at Sumitomo Mitsui DS Asset Management. I received my MSc from the University of Tokyo in 2017 and my Bachelor of Economics from Keio University in 2013.

Email: ishi at k.u-tokyo dot ac dot jp
Links: Github, X (@tksii), Google Scholar, Japanese page (researchmap)

I am no longer accepting master's and PhD students at the Department of Complexity Science and Engineering or the Department of Computer Science. (I continue to jointly accept master's students with Prof. Sugiyama at the Department of Computer Science.)


Publications

  1. T. Ishida, I. Yamane, N. Charoenphakdee, G. Niu, and M. Sugiyama. Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification. In Proceedings of the Eleventh International Conference on Learning Representations (ICLR2023). [arXiv] [OpenReview] [code] [Fashion-MNIST-H (Papers with Code)] [Video] Selected for oral (notable-top-5%) presentation!

  2. I. Yamane, Y. Chevaleyre, T. Ishida, and F. Yger. Mediated Uncoupled Learning and Validation with Bregman Divergences: Loss Family with Maximal Generality. In Proceedings of the 26th International Conference on Artificial Intelligence and Statistics (AISTATS2023). [paper] [code] [video]

  3. Z. Lu, C. Xu, B. Du, T. Ishida, L. Zhang, and M. Sugiyama. LocalDrop: A Hybrid Regularization for Deep Neural Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.44, No.7, pp.3590-3601, 2022. [paper]

  4. H. Ishiguro, T. Ishida, and M. Sugiyama. Learning from Noisy Complementary Labels with Robust Loss Functions. IEICE Transactions on Information and Systems, Vol.E105-D, No.2, Feb. 2022. [paper]

  5. T. Ishida, I. Yamane, T. Sakai, G. Niu, and M. Sugiyama. Do We Need Zero Training Loss After Achieving Zero Training Error? In Proceedings of the Thirty-seventh International Conference on Machine Learning (ICML2020). [paper] [code] [video]

  6. T. Ishida, G. Niu, A. K. Menon, and M. Sugiyama. Complementary-Label Learning for Arbitrary Losses and Models. In Proceedings of the Thirty-sixth International Conference on Machine Learning (ICML2019). [paper] [poster] [slides] [video] [code]

  7. T. Ishida, G. Niu, and M. Sugiyama. Binary Classification from Positive-Confidence Data. In Advances in Neural Information Processing Systems 31 (NeurIPS2018). [paper] [poster] [slides] [video] [code] [Press Release] [ScienceDaily] [PHYS.ORG] [ASIAN SCIENTISTS] [ISE Magazine] [RIKEN RESEARCH] [Nikkan Kogyo Shimbun] [ITmedia] Selected for spotlight presentation!

  8. T. Ishida, G. Niu, W. Hu, and M. Sugiyama. Learning from Complementary Labels. In Advances in Neural Information Processing Systems 30 (NeurIPS2017). [paper] [Nikkan Kogyo Shimbun]

  9. T. Ishida. Forecasting Nikkei 225 Returns by Using Internet Search Frequency Data. Securities Analysts Journal, Vol.52, No.6, pp.83-93, 2014. Selected as Research Notes.


Books

  1. M. Sugiyama, H. Bao, T. Ishida, N. Lu, T. Sakai, and G. Niu. Machine Learning from Weak Supervision: An Empirical Risk Minimization Approach. Adaptive Computation and Machine Learning series, The MIT Press, 2022. [link]


Awards & Achievements


Courses at UTokyo

Courses at UTokyo Extension