Led a team of designer-developers who used prototypes to explore the possibility space of AR glasses across OS, apps, and interactions. Promoted our vision to executives and external partners. Worked flexibly with ever-changing device specs while our findings, in turn, informed those specs.
Team Lead & Senior Designer
February 2017 - May 2021
Samsung Research America, Mobile Platform Solutions Lab
My coworkers in XR Design Group (XRDG)
Make spatial computing work for real humans in an environment with rapidly changing constraints.
At the time, Samsung was figuring out what kinds of products could exist. Many teams were focused on semi-isolated technical problems.
My group focused on finding real value and validating that with prototypes, across interactions, apps, and the operating system.
Our work shaped the trajectory of Samsung’s spatial products, a precursor to Galaxy XR.
A new “responsive design” where spatial content adapts to the user’s distance and context.
How to cohesively unite interaction and representation systems like:
Extensible/flexible designs that adapt to hardware and OS capabilities. “The device” was actually a slew of potential devices, internally and with external partners.
And generally, finding the details that must be solved for true everyday use, not just the surface-level work seen in marketing. What can we do only with spatial computers?
Patents hint at the product work I was tackling.
Contextually-aware notification management system for AR glasses.
AR glasses + phones in the same embodied space with minimal user effort.
Interactions around embodied "video calls" in AR, from a system that can support both headsets and phones.
Dynamically blend between depth video and an avatar in an AR-HMD and depth-capture-enabled phone system, based on whether the user is within the capture volume.
Interactions for an avatar-based AR chat app.
Creating a priority system to manage relative content importance.
Increased legibility and contextual controls based on the user's distance from the UI, with an additional system to prevent rapid toggling when the user stands on a boundary line.
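The anti-toggling behavior is essentially hysteresis around the distance threshold: the "near" and "far" switch points differ, so small movements at the boundary don't flip the UI back and forth. A minimal sketch (all names and values are illustrative, not from the shipped system):

```python
# Hysteresis around a distance threshold so UI state doesn't flicker
# when the user hovers near the boundary. Thresholds are hypothetical.
NEAR_THRESHOLD = 1.0  # meters: switch to "near" mode below this
FAR_THRESHOLD = 1.2   # meters: switch back to "far" mode above this

class DistanceModeSwitcher:
    def __init__(self):
        self.mode = "far"

    def update(self, distance_m: float) -> str:
        if self.mode == "far" and distance_m < NEAR_THRESHOLD:
            self.mode = "near"
        elif self.mode == "near" and distance_m > FAR_THRESHOLD:
            self.mode = "far"
        return self.mode

switcher = DistanceModeSwitcher()
switcher.update(1.5)  # "far"
switcher.update(0.9)  # "near"
switcher.update(1.1)  # still "near": inside the dead band
switcher.update(1.3)  # "far" again
```

The 0.2 m dead band between the two thresholds is what absorbs the jitter of someone standing right on the line.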
Efficient encoding of depth data across devices
In an AR-HMD and depth-capture-enabled phone system, reduce the amount of depth data that needs to be encoded by using dynamic min-max culling.
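One way to read "dynamic min-max culling": instead of quantizing depth over the sensor's full fixed range, measure the actual min/max depth in each frame and spend the bit budget only on that range. A hypothetical sketch of that idea, not the patented implementation:

```python
import numpy as np

def encode_depth(depth_m: np.ndarray, bits: int = 8):
    """Quantize a depth frame using its own per-frame min/max range.
    Depths outside the live range get no code points, so the same bit
    budget yields finer precision. Illustrative only."""
    d_min, d_max = float(depth_m.min()), float(depth_m.max())
    scale = (2**bits - 1) / max(d_max - d_min, 1e-6)
    codes = np.round((depth_m - d_min) * scale).astype(np.uint16)
    return codes, d_min, d_max  # min/max travel with the frame

def decode_depth(codes: np.ndarray, d_min: float, d_max: float, bits: int = 8):
    scale = (d_max - d_min) / (2**bits - 1)
    return codes.astype(np.float32) * scale + d_min

frame = np.array([[1.2, 1.5], [1.8, 2.0]])  # depths in meters
codes, lo, hi = encode_depth(frame)
restored = decode_depth(codes, lo, hi)
```

With an 8-bit budget, quantization error stays within half a step of the per-frame range rather than the sensor's full range.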
System and method for head mounted device input
Utilizing head or body movement as a discrete or continuous input. Useful in limited contexts for non-primary input.
Novel interactions to manage multiple open apps/windows, especially for foldable devices.
I was a Team Lead and Senior Designer. I worked in and led the XR Design Group (XRDG), a group of designers and engineers who worked to understand what could be created with AR glasses and how.
As an AR/VR Team Lead:
As a spatial designer & prototyper:
One of my largest contributions was creating a new rapid iteration process that let us experiment both wide and deep, as part of our partnership with a group in HQ and external partners.
It had three goals:
Depending on the timeline and problem we were addressing, we could shift time between components.
User, design, tech, and market research to understand opportunities and constraints in a space.
Renders and high level documentation to sell the idea internally and to guide the team.
Define our MVP while aligning the team. What can be made real? What should be faked? What needs to be tested?
Exploration, definition, and then refinement – each stage with mockups and prototypes, in/validated with user studies.
When sharing our work across the company, we create polished decks and demo videos of our prototypes.
We learn something new. Requirements or goals change. We pivot to the next thing.
With spatial displays and the proper imaging pipeline, sonograms could look like X-rays. (Personal work that is representative of early concepting I would do at Samsung.)
Almost everything shown in these patent images was built by me and the team.
Flexible user representation in a volumetric call based on device capabilities: flat 2D screen, cutout 2D screen, volumetric projection.
Part of the flow for 2D users sharing content in a mixed-device spatial call. We made it feel simple with contextual and scoped interactions.
Async messaging app exploring volumetric content and interactions, like grabbing a friend's tiny avatar to start a new chat.
Headset users can move around 3D content easily. To support the same for 2D users, we created a "model inspector" view.
In a volumetric call, 2D users can change their 3D viewing perspective.
Unified sharing space for mixed-device calls. Headset user is pointing to shared "wall" while mobile user can see the wall in a volumetric view or as a 2D region on their screen.
High five with tiny avatars in an async messaging system.
Exploration for a memory palace. (Personal)
Look development for various materials. (Personal)
Siteless is a book full of abstract architectural forms. I use it for modeling inspiration. (Personal)
Hardware VR project for independently controlled eyes, like in Pan's Labyrinth. (Personal)
Video. When a project needed a tent model and I happened to be learning photogrammetry, I captured and processed my tent for use.
Model. The final processed tent. View in AR on mobile.
Whatever answers critical questions at the right fidelity to de-risk our next steps.
I might render an idea quickly in Blender or spend a few days tuning an interaction system to feel just right.
Most projects ended with very high fidelity multi-device prototypes and a video showcasing the what and why.
UX design
AR/VR Prototyping, 3D Modeling
XD Immersive presentation: From 2D to 3D product design. What changes and what stays the same in a spatial context? (~25min)
For other examples of my spatial computing work, you can look at Humane Virtuality and Moral Decisions & Haptics in VR as well as my sporadic YouTube uploads.