Dear ML reading group,
For tomorrow's reading group we will have *Alejandro Almódovar*, who is
visiting our lab during the summer, presenting an interesting recent
paper on causal discovery titled "Deriving Causal Order from
Single-Variable Interventions: Guarantees & Algorithm"
<https://arxiv.org/abs/2405.18314>.
As always, the reading group will take place from *14:30* until 16:00
at the latest.
For those physically on the UdS campus, the session will be hosted as
usual in *room 0.01 of the E.1.7 building*. For those attending
remotely, we will set up a Zoom session; you can find the link in the
Google Sheet.
Please let me know if I missed something. Otherwise, I'll see you tomorrow!
Cheers,
Adrián
Dear all,
This Wednesday (June 12th), we will combine the reading group session
with a research talk at CISPA by Dr. Michele Caprio (University of
Manchester) at 2pm (that is, _30 minutes earlier_ than usual). Everyone
is cordially invited to attend. The session will be hosted by Siu-Lun
(Alan) Chau, your usual reading group master on the CISPA side.
Title: Imprecise Probabilistic Machine Learning: Being Precise about
Imprecision
Location: on Zoom [link [1]]. We also plan to book a room in CISPA's D2
building for colleagues who wish to participate together; the details
will be added to the spreadsheet [link [2]] later.
Talk Abstract:
This talk is divided into two parts. I will first introduce the field of
"Imprecise Probabilistic Machine Learning", from its inception to
modern-day research and open problems, including motivations and
clarifying examples. In the second part, I will present some recent
results that I've derived together with colleagues at Oxford Brookes on
Credal Learning Theory. Statistical Learning Theory is the foundation of
machine learning, providing theoretical bounds for the risk of models
learned from a (single) training set, assumed to be drawn from an
unknown probability distribution. In actual deployment, however, the
data
distribution may (and often does) vary, causing domain
adaptation/generalization issues. We laid the foundations for a credal
theory of learning, using convex sets of probabilities (credal sets) to
model the variability in the data-generating distribution. Such credal
sets, we argued, may be inferred from a finite sample of training sets.
We derived bounds for the case of finite hypothesis spaces (both with
and without assuming realizability), as well as for infinite model
spaces; these bounds directly generalize classical results. This talk
is based on the following work:
https://doi.org/10.48550/arXiv.2402.00957.
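
For context, here is a rough sketch of the classical result the talk
generalizes (the standard Hoeffding-plus-union-bound argument; my
notation, not necessarily the paper's): for a finite hypothesis space
\mathcal{H}, a loss bounded in [0, 1], and n i.i.d. samples from a
fixed distribution P, with probability at least 1 - \delta,

\[
  R_P(h) \;\le\; \hat{R}_n(h) + \sqrt{\frac{\ln(|\mathcal{H}|/\delta)}{2n}}
  \qquad \text{for all } h \in \mathcal{H},
\]

where R_P(h) is the true risk under P and \hat{R}_n(h) is the empirical
risk on the sample. As I read the abstract, the credal version replaces
the single unknown P with a credal set \mathcal{P} (inferred from a
finite sample of training sets) and instead bounds the worst-case risk
\sup_{P \in \mathcal{P}} R_P(h); see the linked paper for the precise
statements.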
Bio:
Michele is a Lecturer (Assistant Professor) in Machine Learning and
Artificial Intelligence at The University of Manchester. He obtained his
PhD in Statistics from Duke University, and worked as a postdoctoral
researcher in the Department of Computer and Information Science of the
University of Pennsylvania. His general interest is probabilistic
machine learning, and in particular the use of imprecise probabilistic
techniques to investigate the theory and methodology of uncertainty
quantification in Machine Learning and Artificial Intelligence.
Recently, he won the IJAR Young Researcher Award and the IMS New
Researcher Award, and he was elected a member of the London
Mathematical Society.
We look forward to your participation.
Best Regards,
Kiet Vo
Links:
------
[1] https://cispa-de.zoom-x.de/my/muandet
[2]
https://docs.google.com/spreadsheets/d/1vtgEezBqS4d_ACPt-emK2NT52x7nofX9jxg…