Takashi Ishida
石田 隆

I am an indefinite-term Research Scientist at RIKEN AIP, a Lecturer at The University of Tokyo, and a part-time Research Scientist at Sakana AI.

At UTokyo, I co-run the Machine Learning and Statistical Data Analysis Lab (Sugiyama-Yokoya-Ishida Lab). I am affiliated with the Department of Complexity Science and Engineering, the Department of Computer Science, and the Department of Information Science.

I earned my PhD from the University of Tokyo in 2021, advised by Prof. Masashi Sugiyama. During my PhD, I completed an Applied Scientist internship at Amazon.com and was fortunate to be a Google PhD Fellow and a JSPS Research Fellow (DC2). Before that, I spent several years in the finance industry, working as an Assistant Manager at Sumitomo Mitsui DS Asset Management. I received my MSc from the University of Tokyo in 2017 and my Bachelor of Economics from Keio University in 2013.

Email: ishi at k.u-tokyo dot ac dot jp
Links: GitHub, X (@tksii), Google Scholar, researchmap (Japanese/English)

Papers (peer-reviewed)

  1. W. Wang, T. Ishida, Y.-J. Zhang, G. Niu, M. Sugiyama. Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical. In Proceedings of the 41st International Conference on Machine Learning (ICML2024). [arXiv] [PMLR] [code]

  2. T. Ishida, I. Yamane, N. Charoenphakdee, G. Niu, M. Sugiyama. Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification. In Proceedings of the Eleventh International Conference on Learning Representations (ICLR2023). [arXiv] [OpenReview] [code] [Fashion-MNIST-H (Papers with Code)] [Video] Selected for oral (notable-top-5%) presentation!

  3. I. Yamane, Y. Chevaleyre, T. Ishida, F. Yger. Mediated Uncoupled Learning and Validation with Bregman Divergences: Loss Family with Maximal Generality. In Proceedings of the 26th International Conference on Artificial Intelligence and Statistics (AISTATS2023). [paper] [code] [video]

  4. Z. Lu, C. Xu, B. Du, T. Ishida, L. Zhang, and M. Sugiyama. LocalDrop: A Hybrid Regularization for Deep Neural Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.44, No.7, pp.3590-3601, 2022. [paper]

  5. H. Ishiguro, T. Ishida, and M. Sugiyama. Learning from Noisy Complementary Labels with Robust Loss Functions. IEICE Transactions on Information and Systems, Vol.E105-D, No.2, pp.-, Feb. 2022. [paper]

  6. T. Ishida, I. Yamane, T. Sakai, G. Niu, M. Sugiyama. Do We Need Zero Training Loss After Achieving Zero Training Error? In Proceedings of the Thirty-seventh International Conference on Machine Learning (ICML2020). [paper] [code] [video]

  7. T. Ishida, G. Niu, A. K. Menon, and M. Sugiyama. Complementary-Label Learning for Arbitrary Losses and Models. In Proceedings of the Thirty-sixth International Conference on Machine Learning (ICML2019). [paper] [poster] [slides] [video] [code]

  8. T. Ishida, G. Niu, and M. Sugiyama. Binary Classification from Positive-Confidence Data. In Advances in Neural Information Processing Systems 31 (NeurIPS2018). [paper] [poster] [slides] [video] [code] [Press Release] [ScienceDaily] [PHYS.ORG] [ASIAN SCIENTISTS] [ISE Magazine] [RIKEN RESEARCH] [日刊工業新聞] [ITmedia] Selected for spotlight presentation!

  9. T. Ishida, G. Niu, W. Hu, and M. Sugiyama. Learning from Complementary Labels. In Advances in Neural Information Processing Systems 30 (NeurIPS2017). [paper] [日刊工業新聞]

  10. T. Ishida. Forecasting Nikkei 225 Returns by Using Internet Search Frequency Data. Securities Analysts Journal, Vol.52, No.6, pp.83-93, 2014. Selected as a Research Note.

Books

  1. M. Sugiyama, H. Bao, T. Ishida, N. Lu, T. Sakai, and G. Niu. Machine Learning from Weak Supervision: An Empirical Risk Minimization Approach. Adaptive Computation and Machine Learning series, The MIT Press, 2022. [link]

Grants

Awards & Achievements

Service

Courses at UTokyo

Tiny Pet Projects

Note