Takashi Ishida


I am a Research Scientist at RIKEN AIP, an Associate Professor at The University of Tokyo, and a part-time Research Scientist at Sakana AI.

At UTokyo, I co-run the Machine Learning and Statistical Data Analysis Lab (Sugiyama-Yokoya-Ishida Lab). I belong to the Department of Complexity Science and Engineering, the Department of Computer Science, and the Department of Information Science.

Previously, I was a Lecturer at the University of Tokyo. I earned my PhD from the University of Tokyo, advised by Prof. Masashi Sugiyama. During my PhD, I completed an Applied Scientist internship at Amazon.com and was fortunate to be a Google PhD Fellow and a JSPS Research Fellow (DC2). Before that, I spent several years in the finance industry, working as an Assistant Manager at Sumitomo Mitsui DS Asset Management. I received an MSc from the University of Tokyo and a Bachelor of Economics from Keio University.

Preprints

  1. I. Sugiura, T. Ishida, T. Makino, C. Tazuke, T. Nakagawa, K. Nakago, D. Ha. EDINET-Bench: Evaluating LLMs on Complex Financial Tasks using Japanese Financial Statements. arXiv preprint arXiv:2506.08762, 2025. [arXiv] [blog] [bench code] [tool code] [dataset]

  2. R. Ushio, T. Ishida, M. Sugiyama. Practical estimation of the optimal classification error with soft labels and calibration. arXiv preprint arXiv:2505.20761, 2025. [arXiv] [code]

  3. T. Ishida, T. Lodkaew, I. Yamane. How Can I Publish My LLM Benchmark Without Giving the True Answers Away? arXiv preprint arXiv:2505.18102, 2025. [arXiv] Accepted as an oral presentation at the MemFM @ ICML 2025 workshop!

Papers (peer reviewed)

  1. J. Ackermann, T. Ishida, M. Sugiyama. Off-Policy Corrected Reward Modeling for Reinforcement Learning from Human Feedback. In the Conference on Language Modeling (COLM2025). [to appear]

  2. W. Wang, T. Ishida, Y.-J. Zhang, G. Niu, M. Sugiyama. Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical. In Proceedings of the 41st International Conference on Machine Learning (ICML2024). [arXiv] [PMLR] [code]

  3. T. Ishida, I. Yamane, N. Charoenphakdee, G. Niu, M. Sugiyama. Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification. In Proceedings of the Eleventh International Conference on Learning Representations (ICLR2023). [arXiv] [OpenReview] [code] [Fashion-MNIST-H (Papers with Code)] [Video] [nnabla ディープラーニングチャンネル] Selected for oral (notable-top-5%) presentation!

  4. I. Yamane, Y. Chevaleyre, T. Ishida, F. Yger. Mediated Uncoupled Learning and Validation with Bregman Divergences: Loss Family with Maximal Generality. In Proceedings of the 26th International Conference on Artificial Intelligence and Statistics (AISTATS2023). [paper] [code] [video]

  5. Z. Lu, C. Xu, B. Du, T. Ishida, L. Zhang, and M. Sugiyama. LocalDrop: A hybrid regularization for deep neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.44, No.7, pp.3590-3601, 2022. [paper]

  6. H. Ishiguro, T. Ishida, and M. Sugiyama. Learning from Noisy Complementary Labels with Robust Loss Functions. IEICE Transactions on Information and Systems, Vol.E105-D, No.2, pp.364-376, Feb. 2022. [paper]

  7. T. Ishida, I. Yamane, T. Sakai, G. Niu, M. Sugiyama. Do We Need Zero Training Loss After Achieving Zero Training Error? In Proceedings of the Thirty-seventh International Conference on Machine Learning (ICML2020). [paper] [code] [video]

  8. T. Ishida, G. Niu, A. K. Menon, and M. Sugiyama. Complementary-label learning for arbitrary losses and models. In Proceedings of the Thirty-sixth International Conference on Machine Learning (ICML2019). [paper] [poster] [slides] [video] [code]

  9. T. Ishida, G. Niu, and M. Sugiyama. Binary classification from positive-confidence data. In Advances in Neural Information Processing Systems 31 (NeurIPS2018). [paper] [poster] [slides] [video] [code] [Press Release] [ScienceDaily] [PHYS.ORG] [ASIAN SCIENTISTS] [ISE Magazine] [RIKEN RESEARCH] [日刊工業新聞] [ITmedia] Selected for spotlight presentation!

  10. T. Ishida, G. Niu, W. Hu, and M. Sugiyama. Learning from complementary labels. In Advances in Neural Information Processing Systems 30 (NeurIPS2017). [paper] [日刊工業新聞]

  11. T. Ishida. Forecasting Nikkei 225 Returns By Using Internet Search Frequency Data. Securities Analysts Journal, Vol.52, No.6, pp.83-93, 2014. Selected as Research Notes.

Books

  1. M. Sugiyama, H. Bao, T. Ishida, N. Lu, T. Sakai, and G. Niu. Machine Learning from Weak Supervision: An Empirical Risk Minimization Approach. Adaptive Computation and Machine Learning series, The MIT Press, 2022. [link]

Grants

Awards & Achievements

Service

Courses at UTokyo

Join our group

I welcome motivated students and researchers interested in machine learning, LLMs, and related fields to join our group. Please check my recent publications to get a sense of my research interests. I can supervise or mentor students and collaborate with researchers through the following opportunities:

If you have any questions, feel free to contact me.

Contact & Links

© 2025 Takashi Ishida