Cindy Zhao

New Sensing Capabilities Unlock Natural Interaction Paradigms


The Idea

When a device gains a new input modality (hand tracking, eye tracking, spatial awareness), it doesn't just add features — it enables entirely new interaction paradigms that map to existing human skills.

Example: the VR rhythm game Maestro lets you conduct an orchestra. This works because:

  • The gesture vocabulary already exists in the physical world
  • Hand tracking can now detect what humans already know how to do (a sketch follows this list)
  • No need to learn abstract button mappings
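
To make that concrete: a minimal TypeScript sketch of what "detect what humans already know" can look like. It flags a conducting downbeat at the moment a fast downward wrist stroke reverses into upward motion. The Vec3 type, the speed threshold, and the frame loop are assumptions for illustration; this is not Maestro's actual code.

  // Hypothetical downbeat detector; types and thresholds are assumed.
  type Vec3 = { x: number; y: number; z: number };

  const MIN_DOWN_SPEED = 0.6; // m/s the wrist must be falling before a beat counts

  let previousY: number | null = null;
  let previousVy = 0;

  // Call once per tracking frame with the wrist position and timestep (seconds).
  function detectDownbeat(wrist: Vec3, dt: number): boolean {
    if (previousY === null) {
      previousY = wrist.y;
      return false;
    }
    const vy = (wrist.y - previousY) / dt; // vertical velocity
    // The beat lands where a fast downward stroke stops falling and turns up.
    const isBeat = previousVy < -MIN_DOWN_SPEED && vy >= 0;
    previousY = wrist.y;
    previousVy = vy;
    return isBeat;
  }

A production recognizer would smooth the velocity and gate on tracking confidence, but the core is just this reversal test; there is no button mapping to teach.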

The Pattern

Traditional input:

Button press → Action (abstract mapping, must be learned)

Motion-enabled input:

Natural gesture → Action (intuitive mapping, already known)

The best XR experiences don't invent new gestures — they recognize existing ones. Conducting, painting, sculpting, throwing — actions people already have muscle memory for.
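
The contrast shows up directly in code: a button scheme is a lookup table from arbitrary inputs to actions, while a motion scheme is a set of recognizers over tracked hand data. A hypothetical TypeScript sketch; the HandFrame shape, the recognizer interface, and the thresholds are invented for illustration, not any engine's API.

  // Stub actions so the sketch is self-contained.
  const castSpell = () => console.log("spell cast");
  const openMenu = () => console.log("menu opened");

  // Abstract mapping: the player must memorize which button does what.
  const buttonMap: Record<string, () => void> = {
    "a-button": castSpell,
    "b-button": openMenu,
  };

  // Intuitive mapping: recognize an action the player already knows.
  type HandFrame = {
    palmVelocity: { x: number; y: number; z: number }; // m/s; -z is "forward" here (assumed)
    pinching: boolean; // thumb-index pinch currently held
  };

  interface GestureRecognizer {
    name: string;
    matches(frame: HandFrame): boolean; // does this frame complete the gesture?
    action: () => void;
  }

  // "Throw" already lives in muscle memory: a fast forward palm motion
  // while releasing the pinch that was holding the object.
  const throwRecognizer: GestureRecognizer = {
    name: "throw",
    matches: (f) => !f.pinching && f.palmVelocity.z < -2.0,
    action: () => console.log("object thrown"),
  };

The shape of the code is the argument: the button map has to be taught, while each recognizer only detects something the player walked in already knowing.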

The Design Question

Not "what can we build?" but "what human actions can we now detect?"

Each new sensing capability is an unlock:

  • Hand tracking → conducting, sculpting, sign language
  • Eye tracking → attention-based UI (dwell selection; sketched below), natural gaze interaction
  • Spatial awareness → room-scale interaction, physical-digital blending
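
For the eye-tracking unlock, the basic attention-based-UI building block is dwell selection: an element activates once the gaze rests on it long enough. A minimal sketch; the gaze-hit input and the 600 ms threshold are assumptions, not any headset SDK's API.

  const DWELL_MS = 600; // assumed dwell threshold; real systems tune this carefully

  let gazedElement: string | null = null;
  let gazeStart = 0;
  let fired = false;

  // Call every frame with the id of the UI element the gaze ray hits (or null).
  function updateGaze(hit: string | null, nowMs: number): string | null {
    if (hit !== gazedElement) {
      // Gaze moved to a new element (or away): restart the dwell timer.
      gazedElement = hit;
      gazeStart = nowMs;
      fired = false;
      return null;
    }
    if (hit !== null && !fired && nowMs - gazeStart >= DWELL_MS) {
      fired = true; // activate once per fixation, not every frame
      return hit;   // the element selected just by looking at it
    }
    return null;
  }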

Connection to Other Ideas

This parallels Inference-Bridged Workflows:

  • LLMs bridge the gap between intent and execution in data/language tasks
  • Motion tracking bridges the gap between physical intuition and digital interaction

Both reduce the translation layer between what humans naturally do and what computers can act on.
