Dear all,
This week's reading group is hosted by CISPA, and we will have Anurag Singh presenting the paper "Invariant Risk Minimization Games" (https://arxiv.org/abs/2002.04692). The reading group will be on Wednesday at 2:30 pm.
As you may know, CISPA has recently moved offices to St. Ingbert, so the organisers have proposed the following solution to keep the reading group running: both udS and CISPA book a room for their members to meet, and the presenter will present online from their respective location.
The link to join virtually can be found here: https://docs.google.com/spreadsheets/d/1vtgEezBqS4d_ACPt-emK2NT52x7nofX9jxg…
Thank you,
Alan
Hello All,
I've decided to change the paper for today's reading group. The proposed one had too much text and it would not be an "enjoyable" experience for the RG! This is the paper I'll be presenting:
https://arxiv.org/abs/2302.02941
Sorry for the last-minute change, but I think it's for the best!
See you later
Best,
Pablo
Dear all,
This week's reading group is hosted by *udS*, and we will have *Pablo Sanchez* talking about the quite recent paper Statistics without Interpretation: A Sober Look at Explainable Machine Learning <https://arxiv.org/abs/2402.02870>. You can find the abstract of the paper below.
As always, the reading group will be on *Wednesday at 2:30 pm in room 0.01 of the E1.7 building*. I am in the process of finding a free room with a working projector, so I will update you if the room changes at some point; for now, we meet in the usual place.
If someone plans to join remotely, please *let me know in advance.*
*Abstract*
In the rapidly growing literature on explanation algorithms, it
often remains unclear what precisely these algorithms are for and
how they should be used. We argue that this is because explanation
algorithms are often mathematically complex but don't admit a clear
interpretation. Unfortunately, complex statistical methods that
don't have a clear interpretation are bound to lead to errors in
interpretation, a fact that has become increasingly apparent in the
literature. In order to move forward, papers on explanation
algorithms should make clear how precisely the output of the
algorithms should be interpreted. They should also clarify what
questions about the function can and cannot be answered given the
explanations. Our argument is based on the distinction between
statistics and their interpretation. It also relies on parallels
between explainable machine learning and applied statistics.
Cheers,
Adrián