Many thanks and enjoy your holiday.
Best,
Krikamol
---
Krikamol Muandet, Dr. rer. nat. | Tenure-Track Faculty
CISPA — Helmholtz Center for Information Security
Stuhlsatzenhaus 5, 66123 Saarbrücken, Germany
+49 681 87083 2558 | muandet(a)cispa.de | https://krikamol.org
> On 11. Dec 2024, at 19:44, Kavya Gupta <kavya.gupta100(a)gmail.com> wrote:
>
> Dear Krikamol,
>
> Sorry for the confusion.
> I thought the link for the decision-making series might be updated with my slides and video as well.
>
> I am on vacation currently.
> But I will remind Nektarios to update the Decision Making series link with the slides.
>
> Thank you.
>
> Best,
> Kavya
>
>
> On Wed, 11 Dec 2024 at 13:22, Krikamol Muandet <muandet(a)cispa.de> wrote:
>> Hi Kavya,
>>
>> I really like your presentation as part of the decision-making series. Would you mind sharing your slides with us? Thanks in advance.
>>
>> Best,
>> Krikamol
>>
>> ---
>>
>> Krikamol Muandet, Dr. rer. nat. | Tenure-Track Faculty
>> CISPA — Helmholtz Center for Information Security
>> Stuhlsatzenhaus 5, 66123 Saarbrücken, Germany
>> +49 681 87083 2558 | muandet(a)cispa.de | https://krikamol.org
Hi Kavya,
I really like your presentation as part of the decision-making series. Would you mind sharing your slides with us? Thanks in advance.
Best,
Krikamol
---
Krikamol Muandet, Dr. rer. nat. | Tenure-Track Faculty
CISPA — Helmholtz Center for Information Security
Stuhlsatzenhaus 5, 66123 Saarbrücken, Germany
+49 681 87083 2558 | muandet(a)cispa.de | https://krikamol.org
Dear all,
I hope this email finds you well.
For our next reading group session on Wednesday, December 11th, we are
excited to welcome our guest speaker, Tamlin Love [1].
The session will take place from 14:30 to 16:00. As always, those
attending remotely can join via Zoom. The meeting link can be found in
the shared Google Sheet.
This will be our last reading group for 2024. I want to take a moment to
thank all of you for your participation and for volunteering to present
in our reading group. I wish you a joyful holiday season and a fantastic
start to 2025!
Speaking of 2025: due to upcoming deadlines, we've decided to pause the
reading group for January. We'll resume as usual in February 2025 with
the Causal Inference series. Good luck with your submissions -- may the
reviewers be ever in your favor!
Below, you'll find the abstract for Tamlin's talk:
Title: Challenges in Robotics and Human-Robot Interaction Domains for
Explainability
Abstract:
"Explainability has been identified as an important tool in human-robot
interaction (HRI) for improving understanding of robots and thus
increasing trust, acceptance, and usability. However, HRI domains pose
challenges to explainability that differ from most typical XAI
scenarios. In this talk, I will discuss the wider challenges facing
explainable HRI, how we have started to tackle generating counterfactual
explanations in these scenarios using causal models, and some specific
challenges that arise from this approach, through the lens of a domestic
home assistance robot for older adults."
Looking forward to seeing you all soon! Let me know if there's anything
I missed.
Cheers,
Nektarios
Links:
------
[1] https://tamlinlove.github.io/
Dear all,
Amin Charusaie (https://charusaie.github.io/) from the Max Planck Institute for Intelligent Systems (MPI-IS) will visit us on December 2-3 and give a talk about his research. You are cordially invited to attend. Please feel free to forward this information to anyone who might be interested.
Title: Optimal Multi-Objective Learn-to-Defer: Possibility, Complexity, and a Post-Processing Framework
Location: Seminar room 1.01, CISPA D2
Date/Time: Tuesday, December 3rd at 10:30am
Abstract:
Learn-to-Defer (L2D) is a paradigm that enables learning algorithms to work not in isolation but as a team with human experts. In this paradigm, we permit the system to defer a subset of its tasks to the expert. Although there are currently systems that follow this paradigm and are designed to optimize the accuracy of the final human-AI team, the general methodology for developing such systems under a set of constraints (e.g., algorithmic fairness, expert intervention budgets, deferral of anomalies) remains largely unexplored. In this presentation, I discuss the complexity of obtaining an optimal deterministic solution to multi-objective L2D, the possibility of learning deferral labels, and how to achieve the optimal randomized solution via a d-dimensional generalization of the fundamental lemma of Neyman and Pearson. I further discuss the implications of this generalization for a variety of multi-objective learning problems beyond L2D. Finally, I present experimental results that demonstrate the effectiveness of the introduced method on a series of L2D datasets.
If you would like to meet him either in person or virtually, please contact me via muandet(a)cispa.de.
Best,
Krikamol
---
Krikamol Muandet, Dr. rer. nat. | Tenure-Track Faculty
CISPA — Helmholtz Center for Information Security
Stuhlsatzenhaus 5, 66123 Saarbrücken, Germany
+49 681 87083 2558 | muandet(a)cispa.de | https://krikamol.org