Whodat · July 2018 – August 2019

High-Performance AR and Vision

Low-level C++ vision and monocular depth research in an AR startup environment.

Role: Deep Learning Engineer

C++ · ORB-SLAM-style vision · Monocular depth · AR

Executive summary

Built performance-sensitive vision primitives and conducted monocular depth research before the team transitioned to Osmo after acquisition.

ORB detector 20% faster than the ORB-SLAM baseline

Problem and constraints

The AR stack needed fast, real-time feature detection, alongside open-ended research into monocular depth estimation.

  • Runtime performance
  • C++ implementation
  • Startup ambiguity
  • Research-to-product translation

Architecture

01 Camera input
02 ORB feature detection
03 Vision tracking primitives
04 Depth estimation research
05 AR product experiments

Decision Theater

Decision fork

Use a standard primitive vs. optimize the core detector

Vision performance depends on low-level primitives that run constantly.

Use existing detector

Pros
  • Lower engineering cost
Cons
  • Baseline performance only

Optimize detector

Pros
  • Better runtime characteristics
Cons
  • Higher implementation effort

Chosen: Optimized C++ detector. Low-level performance work compounds across real-time AR pipelines.

Evaluation and reliability

  • Benchmarked detector speed against ORB-SLAM baseline.

Observability and debugging

  • Performance measurement drove optimization choices.

Reflection

This work gives the portfolio low-level systems depth alongside modern LLM platform work.

This case study uses sanitized architecture and representative examples. It excludes confidential prompts, customer data, proprietary datasets, private implementation details, and internal traces.