Program DG-EBF • CVPR 2026

PROGRAM

A tentative schedule is below; talk titles and details will be announced as they are confirmed.

🗓 Date June 4, 2026
Time 1:00 PM – 6:00 PM
📍 Location Colorado Convention Center, Denver (Room 103)

Tentative Schedule (subject to change)

Time Activity
13:00 – 13:05 Opening Remarks
13:05 – 13:35 Talk 1
13:35 – 14:05 Talk 2
14:05 – 14:35 Talk 3
14:35 – 15:05 Talk 4
15:05 – 16:00 Posters + Coffee
16:00 – 16:55 Oral Sessions
16:55 – 17:25 Talk 5
17:25 – 17:55 Talk 6
17:55 – 18:00 Closing Remarks

Invited talks

Details will be announced.

Sara Beery
MIT

Title: TBD

Abstract: TBD

Aditi Raghunathan
Carnegie Mellon University

Title: TBD

Abstract: TBD

Kun Zhang
MBZUAI

Title: Causal representation learning and causal generative AI

Abstract: Causality is a fundamental notion in science, engineering, and even machine learning and AI. Uncovering the causal process underlying observed data naturally helps answer 'why' and 'what-if' questions, informs optimal decisions, and enables adaptive prediction. In many scenarios, observed variables, such as image pixels and questionnaire responses, are reflections of underlying hidden causal variables rather than causal variables themselves. Causal representation learning aims to reveal those hidden causal variables and their relations. In this talk, we show how the modularity property of causal systems makes it possible to recover the underlying causal representations from observational data with identifiability guarantees: under suitable assumptions, the learned representations align with the underlying causal process. We further demonstrate how identifiable causal representation learning can directly benefit generative AI, using image generation/editing, text generation, and extrapolated data generation as illustrative examples.

Zsolt Kira
Georgia Tech

Title: TBD

Abstract: TBD

Abhinav Dhall
Monash University

Title: TBD

Abstract: TBD

M. Saquib Sarfraz
Mercedes-Benz / KIT

Title: Seeing Domain Generalization in Foundation Models: A Nearest-Neighbor View

Abstract: Foundation models and vision-language models have changed the landscape of domain generalization. Trained on large and diverse web-scale data, they often appear robust across datasets, styles, and visual domains. But are they truly domain-general, or have they learned a broader set of correlations? In this talk, I will look at domain generalization through the geometry of learned representations. Using nearest-neighbor structure, clustering, and visualization, I will discuss when foundation-model embeddings organize images by semantic content, and when they remain entangled with domain, style, background, prompt vocabulary, or transformation-specific cues. The central message of the talk is that domain generalization in the foundation-model era is not only a question of better objectives or larger pretraining data. It is also a question of inspection: how can we visualize, cluster, query, and audit learned representation spaces before deploying models in open-world settings? A nearest-neighbor view provides a practical lens for understanding both the strengths and the hidden failure modes of modern domain-general models.