The work is obfuscated enough to comply with my NDA. Some personal work is shown that is characteristic of work done at Samsung.

The challenge

Make spatial computing work for real humans in an environment with rapidly changing constraints.

At the time, Samsung was figuring out what kinds of products could exist. Many teams were focused on semi-isolated technical problems.

My group focused on finding real value and validating that with prototypes, across interactions, apps, and the operating system.

Our work shaped the trajectory of Samsung’s spatial products, a precursor to Galaxy XR.

Core focus

A new “responsive design” where spatial content adapts to the user’s distance and context.

How to cohesively unite interaction and representation systems like:

  • Mixed user representations: avatar, volumetric video, flat video
  • Mixed content: meshes, flat windows, flat stickers, interactive
  • Mixed and extensible device interaction systems: AR glasses (with varying capabilities), mobile phones

Extensible/flexible designs that adapt to hardware and OS capabilities. “The device” was actually a slew of potential devices, internally and with external partners.

And generally, finding the details that must be solved for true everyday use, not just the surface-level work seen in marketing. What can we only do with spatial computers?

Patents

The work was mysterious & important

Patents hint at the product work I was tackling.

2022 / US20220385617A1 / Granted
System and method for conversation-based notification management

Contextually-aware notification management system for AR glasses.

2021 / US11995776B2 / Granted
Extended reality interaction in synchronous virtual spaces using heterogeneous devices

AR glasses + phones in the same embodied space with minimal user effort.

2021 / US20220229535A1 / Abandoned
Systems and Methods for Manipulating Views and Shared Objects in XR Space

Interactions around embodied "video calls" in AR, from a system that can support headsets and phones.

2020 / US20210407215A1 / Granted
Automatic representation toggling based on depth camera field of view

Dynamically blend between depth video and an avatar in an AR-HMD and depth-capture-enabled phone system, based on whether the user is within the capture volume.
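As a rough sketch of the idea (not the patented implementation), the toggle can be modeled as a view-cone containment test with a feathered edge, so the handoff between depth video and avatar is a crossfade rather than a pop. All function names and thresholds here are hypothetical:

```python
import math

def representation_blend(user_pos, cam_pos, cam_forward, half_fov_deg=30.0,
                         max_range_m=4.0, feather_deg=5.0):
    """Return a blend weight: 1.0 = volumetric depth video, 0.0 = avatar.

    Hypothetical sketch: test whether the user falls inside the depth
    camera's view cone, feathering the edge so the representation
    crossfades instead of popping. cam_forward is a unit vector.
    """
    to_user = [u - c for u, c in zip(user_pos, cam_pos)]
    dist = math.sqrt(sum(v * v for v in to_user))
    if dist == 0 or dist > max_range_m:
        return 0.0  # outside the capture volume entirely
    cos_angle = sum(a * b for a, b in zip(to_user, cam_forward)) / dist
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    inner = half_fov_deg - feather_deg
    if angle <= inner:
        return 1.0  # fully inside: show depth video
    if angle >= half_fov_deg:
        return 0.0  # fully outside: show avatar
    # In the feather band: linear crossfade between the two.
    return (half_fov_deg - angle) / feather_deg
```

A user straight ahead of the camera at 2 m gets weight 1.0; one standing off to the side, or beyond the range limit, gets 0.0.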

2020 / US20210319617A1 / Granted
Electronic device for communicating in augmented reality and method thereof

Interactions for an avatar-based AR chat app.

2018 / US20190279407A1 / Granted
System and method for augmented reality interaction

Creating a priority system to manage relative content importance.

2018 / US20190172262A1 / Granted
System and method for transition boundaries and distance responsive interfaces in augmented and virtual reality

Increased legibility and contextual controls based on the user's distance from the UI, plus an additional system to prevent rapid toggling when the user stands on a boundary line.
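The boundary problem is classic hysteresis: use two thresholds instead of one, so a user standing exactly on the line cannot flicker between states. A minimal sketch with made-up distances (not the shipped logic):

```python
# Hypothetical distance-responsive UI with a hysteresis band.
# Thresholds and state names are illustrative, not from the actual system.

NEAR_ENTER = 0.9  # switch to "near" mode when closer than this (meters)
NEAR_EXIT = 1.1   # switch back to "far" mode only when farther than this

class DistanceResponsiveUI:
    def __init__(self):
        self.mode = "far"

    def update(self, distance_m):
        # The gap between the two thresholds is the hysteresis band:
        # small jitter around the boundary cannot flip the mode.
        if self.mode == "far" and distance_m < NEAR_ENTER:
            self.mode = "near"
        elif self.mode == "near" and distance_m > NEAR_EXIT:
            self.mode = "far"
        return self.mode
```

For example, the distance sequence 2.0, 0.8, 1.0, 1.2 yields far → near → near → far: at 1.0 m the user is between the thresholds, so the UI holds its current state.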

2020 / US20220005215A1 / Granted

Efficient encoding of depth data across devices

In an AR-HMD and depth-capture-enabled phone system, decrease the depth data that needs to be encoded by using dynamic min-max culling.
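In spirit, the culling works like this hypothetical sketch: find the depth band actually occupied each frame and spend the bit budget only there, sending the band along as tiny side information. Function names, the zero-means-no-reading convention, and all values are illustrative:

```python
def encode_depth(depth_mm, bits=8):
    """Quantize a depth frame into its dynamic [min, max] band.

    Instead of encoding the sensor's full range (e.g. 0-10000 mm),
    only the band occupied this frame is quantized, so the same bit
    budget yields much finer depth steps. 0 marks "no reading".
    """
    valid = [d for d in depth_mm if d > 0]
    d_min, d_max = min(valid), max(valid)
    levels = (1 << bits) - 1
    scale = levels / max(d_max - d_min, 1)
    q = [min(levels, max(0, round((d - d_min) * scale))) for d in depth_mm]
    return q, (d_min, d_max)  # the band travels as tiny side info

def decode_depth(q, band, bits=8):
    d_min, d_max = band
    levels = (1 << bits) - 1
    return [v * (d_max - d_min) / levels + d_min for v in q]
```

With 8 bits and a subject occupying a 200 mm band, each quantization step is under 1 mm, versus roughly 40 mm if the full sensor range were encoded.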

2019 / US20200202629A1 / Granted

System and method for head mounted device input

Utilizing head or body movement as a discrete or continuous input. Useful in limited contexts for non-primary input.
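One hypothetical way to split head motion into discrete and continuous input is a velocity threshold: a fast flick fires a one-shot event, while slow sustained motion maps to a continuous value. Names and thresholds below are illustrative, not from the patent:

```python
def classify_head_input(yaw_velocity_deg_s, flick_threshold=120.0, dead_zone=5.0):
    """Sketch of head movement as discrete or continuous input.

    A fast horizontal flick past the threshold fires a one-shot event
    (e.g. dismissing a notification); slow sustained yaw becomes a
    continuous control value (e.g. scrolling); tiny motion is ignored.
    """
    if abs(yaw_velocity_deg_s) >= flick_threshold:
        return ("flick", 1 if yaw_velocity_deg_s > 0 else -1)  # discrete event
    if abs(yaw_velocity_deg_s) <= dead_zone:
        return ("idle", 0.0)  # filter out unintentional micro-motion
    return ("scroll", yaw_velocity_deg_s / flick_threshold)  # continuous -1..1
```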

2019 / US20200320906A1 / Granted

Mobile device with a foldable display and method of providing user interfaces on the foldable display

Novel interactions to manage multiple open apps/windows, especially for foldable devices.

My role

I was a Team Lead and Senior Designer. I worked in and led the XR Design Group (XRDG), a group of designers and engineers who worked to understand what could be created with AR glasses and how.

As an AR/VR Team Lead:

  • Demonstrated vision to executives, partners, and visiting dignitaries through conversations, presentations, and demos.
  • Created and managed the process that let us quickly explore the possibility space, moving from ideas to rapid prototypes, high-production prototypes, and video documentation.
  • Led and supported the hiring process.

As a spatial designer & prototyper:

  • Led investigations across operating system, apps, and interactions.
  • Collaborated on prototypes (Unity/C#) and user studies, de-risking explorations.
  • Bridged design and engineering in a context where device capabilities were often changing.
  • Built internal documentation to catalog our work.
  • Submitted 10+ patents.

Exploration process

Or, how to find diamonds

One of my largest contributions was creating a new rapid iteration process that let us experiment both wide and deep, as part of our collaboration with a group in HQ and with external partners.

It had three goals

  1. Give the team time to explore wide. Narrowing focus too early will trap us in a local optimum.
  2. Give the team time to explore deep. Shallow work risks giant unknowns.
  3. Jointly answer UX, engineering, and product questions. Isolated tracks will never find the true opportunities and constraints.

Process components

Depending on the timeline and problem we were addressing, we could shift time between components.

1. Early research

User, design, tech, and market research to understand opportunities and constraints in a space.

2. Product concept

Renders and high level documentation to sell the idea internally and to guide the team.

3. Scoping

Define our MVP while aligning the team. What can be made real? What should be faked? What needs to be tested?

4. Design & prototyping

Exploration, definition, and then refinement – each stage with mockups and prototypes, in/validated with user studies.

5. Presentation

When sharing our work across the company, we created polished decks and demo videos of our prototypes.

6. Iteration

We learn something new. Requirements or goals change. We pivot to the next thing.


Characteristic work

A mix of Samsung & personal work

With spatial displays and the proper imaging pipeline, sonograms could look like x-rays. (Personal work that is representative of early concepting I would do at Samsung.)

Almost everything shown in these patent images was built by me and the team.

Flexible user representation in a volumetric call based on device capabilities: flat 2D screen, cutout 2D screen, volumetric projection.

Part of the flow for 2D users sharing content in a mixed-device spatial call. We made it feel simple with contextual and scoped interactions.

Async messaging app exploring volumetric content and interactions, like grabbing a friend’s tiny avatar to start a new chat.

Headset users can move around 3D content easily. To support the same for 2D users, we created a “model inspector” view.

In a volumetric call, 2D users can change their 3D viewing perspective.

Unified sharing space for mixed-device calls. Headset user is pointing to shared “wall” while mobile user can see the wall in a volumetric view or as a 2D region on their screen.

High five with tiny avatars in an async messaging system.

Exploration for a memory palace. (Personal)

Look development for various materials. (Personal)

Siteless is a book full of abstract architectural forms. I use it for modeling inspiration. (Personal)

Hardware VR project for independently controlled eyes, like in Pan’s Labyrinth. (Personal)

Video. When a project needed a tent model and I happened to be learning photogrammetry, I captured and processed my tent for use.

Model. The final processed tent.

Tools used

Whatever answers critical questions at the right fidelity to de-risk our next steps.

I might render an idea quickly in Blender or spend a few days tuning an interaction system to feel just right.

Most projects ended with very high fidelity multi-device prototypes and a video showcasing the what and why.

UX design

  • Maquette, Tvori, Tilt Brush, Blocks, Quill (AR/VR design tools)
  • Sketch, Figma, Adobe Creative Suite, Framer Classic, Procreate (2D design)
  • User research (design, facilitate, analyze, largely qualitative)

AR/VR Prototyping, 3D Modeling

  • Blender (modeling, texturing, procedural materials/shaders, 3D VFX compositing)
  • Unity (interactions, MRTK)
  • Depthkit, Meshroom, Polycam, Record3D (volumetric capture, photogrammetry)
  • A-Frame (webVR)

(My current stack →)

Other work

XD Immersive presentation: From 2D to 3D product design. What changes and what stays the same in a spatial context? (~25min)

For other examples of my spatial computing work, you can look at Humane Virtuality and Moral Decisions & Haptics in VR as well as my sporadic YouTube uploads.