Guided Reality

Series of images showing 3D visuals overlaid on a 3D printer

Large language models (LLMs) have enabled the automatic generation of step-by-step augmented reality (AR) instructions for a wide range of physical tasks. However, existing LLM-based AR guidance often lacks the rich visual augmentations needed to embed instructions into spatial context and support user understanding.

We present Guided Reality, a fully automated AR system that generates embedded and dynamic visual guidance based on step-by-step instructions. Our system integrates LLMs and vision models to: 1) generate multi-step instructions from user queries, 2) identify appropriate types of visual guidance, 3) extract spatial information about key interaction points in the real world, and 4) embed visual guidance in physical space to support task execution. Drawing from a corpus of user manuals, we define five categories of visual guidance and propose a strategy for identifying the appropriate category for the current step. We evaluate the system through a user study (N=16) in which participants completed real-world tasks and explored the system in the wild. Additionally, four instructors shared insights on how Guided Reality could be integrated into their training workflows.
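For readers curious how such a four-stage pipeline might be wired together, the following is a minimal sketch only. The `llm.complete` and `vision_model.locate` interfaces, and the guidance category names, are hypothetical placeholders for illustration; they are not the system's actual APIs or the five categories defined in the paper.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Tuple

# Hypothetical guidance categories for illustration only; the actual five
# categories are derived from the paper's user-manual corpus analysis.
class GuidanceType(Enum):
    HIGHLIGHT = auto()
    ARROW = auto()
    LABEL = auto()
    ANIMATION = auto()
    OTHER = auto()

@dataclass
class Step:
    description: str  # natural-language instruction for one step

@dataclass
class EmbeddedGuidance:
    step: Step
    guidance_type: GuidanceType
    anchor_3d: Tuple[float, float, float]  # key interaction point in world space

def generate_steps(llm, user_query: str) -> List[Step]:
    """Stage 1: ask an LLM for multi-step instructions (assumed one per line)."""
    response = llm.complete(f"Break this task into numbered steps: {user_query}")
    return [Step(line.strip()) for line in response.splitlines() if line.strip()]

def identify_guidance(llm, step: Step) -> GuidanceType:
    """Stage 2: classify which type of visual guidance fits the current step."""
    label = llm.complete(
        f"Choose one of {[g.name for g in GuidanceType]} for: {step.description}"
    ).strip().upper()
    return GuidanceType[label] if label in GuidanceType.__members__ else GuidanceType.OTHER

def locate_interaction_point(vision_model, scene_image, step: Step):
    """Stage 3: ground the step in the scene to extract a 3D interaction point."""
    return vision_model.locate(scene_image, step.description)  # assumed to return (x, y, z)

def build_guidance(llm, vision_model, scene_image, user_query: str) -> List[EmbeddedGuidance]:
    """Stage 4: assemble guidance ready to be embedded and rendered in AR."""
    return [
        EmbeddedGuidance(
            step=step,
            guidance_type=identify_guidance(llm, step),
            anchor_3d=locate_interaction_point(vision_model, scene_image, step),
        )
        for step in generate_steps(llm, user_query)
    ]
```

The sketch assumes a text-only LLM client and a separate vision-grounding model; in practice the stages could share one multimodal model, and the rendering layer would consume each `EmbeddedGuidance` to place the visual in physical space.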

ACME Lab, Programmable Reality Lab

Infographic explaining the Guided Reality system implementation
3D interface instructions overlaid on a 3D printer


Associated Researchers

Additional Researchers

Aditya Gunturu

Publications

Ada Yi Zhao, Aditya Gunturu, Ellen Yi-Luen Do, and Ryo Suzuki. 2025. "Guided Reality". In: ACM Symposium on User Interface Software and Technology (UIST 2025) (Busan, Korea, Sep 28 - Oct 1, 2025)