TRUST-XR 2025

About the Workshop

As Extended Reality (XR) technologies become increasingly integrated into everyday domains such as education, healthcare, defense, workforce training, and entertainment, the need for trustworthy, secure, and privacy-aware artificial intelligence (AI) becomes critical. XR systems capture sensitive biometric and behavioral data, rely on opaque AI models, and are vulnerable to adversarial manipulation.

TRUST-XR 2025 will explore cross-disciplinary challenges in securing XR systems, improving AI explainability, protecting immersive user data, and developing fair and ethical XR interfaces powered by conventional and generative AI (e.g., large language models (LLMs)). The workshop will foster discussion and innovation around privacy-preserving pipelines, adversarial resilience, and human-centered trust frameworks.

The Trust XR Workshop is for you if you are:

  • XR and Immersive Systems Researchers: Investigating the role of trust, security, privacy, and ethics in extended reality environments using biometric and multimodal sensing data.
  • AI and Machine Learning Experts: Developing explainable, privacy-preserving, and adversarially robust AI models for XR applications, including those powered by LLMs and generative AI.
  • Human-Computer Interaction (HCI) and Cognitive Security Scholars: Studying user trust, manipulation risks, and perception-aware attacks in AI-driven immersive experiences.
  • Cybersecurity and Privacy Professionals: Focusing on real-time XR system vulnerabilities, privacy leakage, multimodal sensor protection, and runtime-layer defense strategies.
  • Ethics, Policy, and Responsible AI Advocates: Addressing fairness, accountability, and regulatory compliance in intelligent XR systems, including responsible deployment of black-box AI models.
  • Developers and System Architects: Building secure XR platforms and pipelines that balance immersive performance with safety, explainability, and user trust.

Program Overview

TRUST-XR 2025 is a full-day hybrid workshop featuring expert talks, research presentations, and collaborative activities. The agenda includes:

  • Opening Keynote: Exploring the future of trustworthy AI in immersive XR systems
  • Thematic Sessions: Peer-reviewed papers and lightning talks
  • Interactive Activities: Group exploration of:
    • Explainability and robustness in AI-powered XR
    • Ethical handling of biometric data
    • LLM integration in real-time XR systems
    • Privacy, manipulation, and trust frameworks

For inquiries, contact us at trustxr2025@gmail.com.

Location and Date

The TRUST-XR 2025 workshop will be held in conjunction with ISMAR 2025 in October 2025 as a hybrid event in Daejeon, South Korea.

Exact date and venue details will be updated as they become available.

Important Dates

  • Paper Submission Deadline: July 10th, 2025 (23:59 AoE, Thursday)
  • Notification of Acceptance: August 1st, 2025 (23:59 AoE, Friday)
  • Camera-Ready Submission: August 15th, 2025 (23:59 AoE, Friday)
  • Workshop Date: October 12th, 2025 (Held in conjunction with ISMAR 2025)

Organizers

General Chairs

  • Khaza Anuarul Hoque (University of Missouri-Columbia, USA)
  • Brendan David-John (Virginia Tech, USA)

Technical Program Chair

  • Ripan Kumar Kundu (University of Missouri-Columbia, USA)

Website Chair

  • Istiak Ahmed (University of Missouri-Columbia, USA)

Online Session Coordinator

  • Azim Ibragimov (University of Florida, USA)
    Email: a.ibragimov@ufl.edu

Technical Program Committee

  • Xiaokuan Zhang (George Mason University, USA)
  • Maria Gorlatova (Duke University, USA)
  • John Quarles (University of Texas at San Antonio, USA)
  • Barry Giesbrecht (University of California Santa Barbara, USA)
  • Balakrishnan Prabhakaran (University at Albany - State University of New York, USA)
  • Prasad Calyam (University of Missouri-Columbia, USA)
  • Oleg Komogortsev (Texas State University, USA)
  • M. Rasel Mahmud (Kennesaw State University, USA)
  • You-Jin Kim (Texas A&M University, USA)
  • Ibrahim Baggili (Louisiana State University, USA)
  • Aryabrata Basu (University of Arkansas at Little Rock, USA)
  • Mallesham Dasari (Northeastern University, USA)
  • Fatima Muhammad Anwar (University of Massachusetts Amherst, USA)

Topics of Interest

The TRUST-XR 2025 workshop invites contributions related to trustworthy, secure, and privacy-aware AI in XR systems. Topics include, but are not limited to:

  • Explainable and interpretable AI models for XR environments
  • Privacy-preserving learning for multimodal XR data (e.g., eye-tracking, biometrics)
  • Adversarial robustness in AI-driven XR pipelines
  • Cognitive security threats and perceptual manipulation in immersive systems
  • Ethical frameworks and responsible AI for XR applications
  • Generative AI safety in XR content and interaction
  • Trust, transparency, and accountability in intelligent XR systems
  • Secure multimodal pipelines: visual, audio, haptic, biometric
  • Cross-platform XR security (enterprise and consumer devices)
  • Human-centered evaluation for trust, privacy, explainability
  • Real-world case studies of trustworthy XR deployments
  • Bias mitigation in AI models used in XR
  • Security implications of integrating LLMs into XR systems
  • Policy and regulatory perspectives on immersive computing safety

Submission Guidelines

We invite submissions that explore the intersection of AI, XR, privacy, and security. Submissions must follow the IEEE VGTC formatting guidelines and will be peer-reviewed based on relevance, originality, and contribution diversity.

We accept the following types of submissions:

  1. Regular Papers: Up to 8 pages, including references and appendix.
  2. Short Papers: Up to 4 pages, including references.
  3. Position Papers: Up to 2 pages, including references.

At least one author of each accepted contribution published in the IEEE Digital Library must register as an AUTHOR for the FULL conference at the full Member/Non-Member registration rate, regardless of whether he or she is a student.

Papers will be submitted for publication in the IEEE Digital Library.

Formatting

All submissions should use the IEEE VGTC formatting template and be submitted in PDF format. Submissions must be original, unpublished, and not currently under review elsewhere. Papers that fail to comply with formatting or length limits will be desk-rejected.

Review Process

Submissions will be single-blind and reviewed by the workshop organizers and program committee. Each submission will undergo peer review by at least two domain experts. Accepted papers will be assigned oral or lightning talk presentations based on content and format. Acceptance does not restrict future publication: authors may present extended or modified versions of their work as full papers, short papers, or journal articles, provided they include significant new contributions beyond the original workshop submission.

Review decisions are final. Authors of accepted papers are required to submit a revised version that addresses the feedback provided in the reviews.

Submission Site

EasyChair Link

Keynote Speakers

Dr. Chou P. Hung

Keynote 1: Trustworthy Attentional Guidance for Real-Virtual Hybrid Environments

Speaker: Dr. Chou P. Hung, U.S. DEVCOM ARL Army Research Office

Biography: Dr. Chou P. Hung is the Program Manager for Neurophysiology of Cognition at the U.S. DEVCOM ARL Army Research Office and has been a researcher at the DEVCOM Army Research Laboratory since 2015 in the areas of human cognition and bio-inspired novel AI development. Previously, he was an Assistant Professor of neuroscience at Georgetown University and at National Yang-Ming University in Taiwan, where he led research to discover neural circuits and representations underlying visual perception. Dr. Hung’s research interests span from living neurons, circuits, mechanisms, and behaviors underlying real-world and augmented perception and performance, to biological and AI-aided learning and decision-making, to brain-inspired computational principles for novel AIs for complex reasoning. He holds a BS in Biology from Caltech (1996), a PhD in Neuroscience from Yale University (2002), and completed his postdoc at MIT (2005).

Dr. Mar Gonzalez-Franco

Keynote 2: How Can XR Help Build Trust in AI?

Speaker: Dr. Mar Gonzalez-Franco, Google

Biography: Dr. Mar Gonzalez-Franco is a computer scientist and neuroscientist. She is currently a Research Scientist Manager at Google, where she leads the Blended Interactions and Devices Research lab, working on a new generation of immersive technologies and generative AI focused on input interactions and experiences. Her team has brought multimodal and multidevice interactions and unified input vocabularies to the Android XR OS. Before Google, she was a Principal Researcher at Microsoft Research, where she built new features for products such as Xbox, HoloLens, Soundscape, and Teams. Her main tech transfer at Microsoft was Avatars in MS Teams, which is available daily to over 260 million users and won a Time Invention of the Year award in 2022. Her technical work has produced over 40 patents (some pending) and more than 10 open-source projects, including some of the most used avatar libraries (Microsoft Rocketbox, UCF-Google VALID) and immersive AI pipelines and datasets (XR-Objects, Diffseg, PARSE-Ego4D). Apart from her technological contributions, she is a prolific scientist with over 100 publications and regularly serves as a program committee member, chair, and reviewer for top venues (ACM, IEEE, Nature Publishing, Royal Society, AAAS Science) and governments (UN, EU, NSF, NSERC). She received the IEEE VGTC VR New Researcher Award in 2022, was named an NAE early-career engineer, and received the 2025 ACM SIGCHI Special Recognition for pioneering input and interaction guidelines in the Android XR Operating System.

Dr. Sarah Papazoglakis

Keynote 3: Building and Implementing Principles for Responsible Innovation in XR

Speaker: Dr. Sarah Papazoglakis, Meta

Biography: Dr. Sarah Papazoglakis is a Senior Trust Strategist at Meta’s Reality Labs, building trust, privacy, and responsible innovation frameworks for emerging XR technologies and bridging the gap between AI research and consumer product use. Dr. Papazoglakis draws on her humanities PhD to help product and engineering leaders imagine and define positive social impacts of future technologies and scope the requirements needed to build privacy- and trust-by-design into foundational product architectures.

Tentative Program

Session I: Welcome and Keynote

  • 8:30 AM - 9:00 AM: Arrival and Setup
  • 9:00 AM - 9:10 AM: Welcome & Introduction — omitted; Overview of Workshop Themes and Goals
  • 9:10 AM - 9:30 AM: Opening Keynote: "Trustworthy Attentional Guidance for Real-Virtual Hybrid Environments" by Dr. Chou P. Hung

Session II: Paper and Lightning Talks – Part 1

  • 9:30 AM - 10:30 AM: Paper Presentations (TBA)
  • 10:30 AM: Break

Session III: Paper and Lightning Talks – Part 2

  • 11:00 AM - 12:00 PM: Paper Presentations (continued): Short papers
  • 12:00 PM: Lunch Break

Session IV: Paper and Lightning Talks – Part 3

  • 2:00 PM - 2:20 PM: Keynote 2: "How Can XR Help Build Trust in AI?" by Dr. Mar Gonzalez-Franco
  • 2:20 PM - 3:00 PM: Paper Presentations (TBA)
  • 3:00 PM: Break

Session V: Activity II – Future Directions

  • 3:30 PM - 4:45 PM: Collaborative Brainstorming: Future Research Directions for Secure, Ethical, and Trustworthy AI-XR Systems
  • 4:45 PM: Closing Remarks: Summary and Next Steps

© 2025 Trust XR Workshop. All Rights Reserved.