Dear all,
This Wednesday (June 12th), we will combine the Reading Group session with a research talk at CISPA by Dr. Michele Caprio (University of Manchester), at 2pm (that is, _30 minutes earlier_ than usual). Everyone is cordially invited to attend. The session will be hosted by Siu-Lun (Alan) Chau, your usual reading group master on the CISPA side.
Title: Imprecise Probabilistic Machine Learning: Being Precise about Imprecision
Location: on Zoom [link [1]]. We also plan to book a room in CISPA's D2 building for colleagues who wish to participate together; the details will be added to the spreadsheet [link [2]] later.
Talk Abstract: This talk is divided into two parts. I will first introduce the field of "Imprecise Probabilistic Machine Learning", from its inception to modern-day research and open problems, including motivations and clarifying examples. In the second part, I will present some recent results on Credal Learning Theory that I derived together with colleagues at Oxford Brookes. Statistical Learning Theory is the foundation of machine learning, providing theoretical bounds for the risk of models learned from a (single) training set, assumed to issue from an unknown probability distribution. In actual deployment, however, the data distribution may (and often does) vary, causing domain adaptation/generalization issues. We laid the foundations for a credal theory of learning, using convex sets of probabilities (credal sets) to model the variability in the data-generating distribution. Such credal sets, we argued, may be inferred from a finite sample of training sets. We derived bounds for the case of finite hypothesis spaces (both with and without the realizability assumption), as well as for infinite model spaces, which directly generalize classical results. This talk is based on the following work: https://doi.org/10.48550/arXiv.2402.00957.
Bio: Michele is a Lecturer (Assistant Professor) in Machine Learning and Artificial Intelligence at The University of Manchester. He obtained his PhD in Statistics from Duke University and worked as a postdoctoral researcher in the Department of Computer and Information Science at the University of Pennsylvania. His general interest is in probabilistic machine learning, and in particular the use of imprecise probabilistic techniques to investigate the theory and methodology of uncertainty quantification in Machine Learning and Artificial Intelligence. Recently, he won the IJAR Young Researcher Award and the IMS New Researcher Award, and he was elected a member of the London Mathematical Society.
We look forward to your participation.
Best Regards,
Kiet Vo
Links:
------
[1] https://cispa-de.zoom-x.de/my/muandet
[2] https://docs.google.com/spreadsheets/d/1vtgEezBqS4d_ACPt-emK2NT52x7nofX9jxgH...
Dear all,
This is a reminder that the reading group takes place today at 2pm; please note that the Zoom link this time is different from the usual one.
Zoom: https://cispa-de.zoom-x.de/my/muandet
We look forward to seeing you there.
Best Regards,
Kiet Vo
On 2024-06-10 11:15, Huynh Quang Kiet Vo wrote:
ml-reading-group@lists.saarland-informatics-campus.de