Beta · opening soon · American Made · Updated monthly

The CV SDK
no one else
could ship.

Hundreds of keypoints on humans. Behaviors trained in. LOD that auto-scales to camera distance. What the rest of the industry calls a premium add-on, an enterprise tier, or a “future roadmap item,” we ship as standard. One SDK. One install. Beta opens soon.

See what’s in the box
300+
Keypoints / human
2.4MB
TensorRT engine
1.33ms
Inference / frame
6
Platforms · 1 SDK
01 · The premise

The real difference is what’s in the box.

Most computer vision SDKs ship detection. Maybe segmentation. Then they sell you everything else as add-ons, partner integrations, or “talk to enterprise.” We took a different bet: ship the whole stack, in one install, and let our pricing reflect the value of an integrated system instead of a fragmented one.

Most CV SDKs ship

  • Detection (sometimes more)
  • Pose, but only 17–33 keypoints
  • Tracking as a separate library
  • Behavior detection? Build it yourself
  • Depth requires a second camera
  • One-platform-at-a-time deployment
  • Per-call billing or per-device fees
  • AGPL licensing for “free” tiers

What LYNX ships

  • Detection, segmentation, depth, tracking, OCR
  • 300+ keypoints with LOD auto-scaling
  • Multi-object tracking with embedding re-ID
  • Behaviors: walking, running, fallen, concealing, loitering
  • Monocular depth — no second camera
  • iOS, Android, Linux, Windows, macOS, Jetson
  • Annual subscription. Unlimited deployments.
  • Commercial distribution. No per-device fees, ever.
02 · >300 keypoints, level-of-detail

The keypoints scale to how far away you are.

Pose models that ship 17 or 33 keypoints assume the subject fills the frame. That’s not how production cameras work. LYNX runs level-of-detail keypoints: at 50m we give you skeletal anchors, at 5m we give you joints and limbs, at touch distance we give you fingertips, lip corners, eyelid landmarks. You don’t toggle between models. The SDK picks the right resolution per detection, per frame.

LOD · 0
Distant
~28 keypoints · >25m
HEAD · TORSO · LIMBS · BBOX 0.42
LOD · 1
Engagement
~127 keypoints · 5–25m
+ FACE · HANDS · JOINTS · BBOX 0.78
LOD · 2
Foreground
>300 keypoints · <5m
+ FACE MESH · FINGERS · TOES · BBOX 0.96
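The per-detection LOD selection described above can be sketched as a simple distance gate. The thresholds and approximate keypoint counts come from the cards above; the function name and the idea of gating on an estimated distance are illustrative, not the SDK's actual API:

```python
def lod_tier(distance_m: float) -> tuple[int, int]:
    """Map an estimated subject distance to (LOD tier, approx. keypoint count).

    Thresholds mirror the cards above: LOD 0 beyond 25 m, LOD 1 from
    5-25 m, LOD 2 inside 5 m. Purely illustrative -- the real SDK picks
    the tier internally, per detection, per frame.
    """
    if distance_m > 25.0:
        return 0, 28      # skeletal anchors: head, torso, limbs
    if distance_m >= 5.0:
        return 1, 127     # + face, hands, joints
    return 2, 300         # + face mesh, fingers, toes

for d in (50.0, 12.0, 1.5):
    tier, kp = lod_tier(d)
    print(f"{d:>5.1f} m -> LOD {tier} (~{kp} keypoints)")
```

The point of the gate living inside the SDK is that two people in the same frame can resolve at different tiers simultaneously.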
03 · Behaviors, in the SDK

Other SDKs tell you what’s there. LYNX tells you what it’s doing.

A bounding box around a person isn’t a useful signal — it’s the start of a research project. Behaviors are first-class outputs in LYNX. The model returns the activity, the duration, and the confidence, frame-locked to the keypoint stream. Train in more at any time with custom packs, or send us the case we missed.

B.01
Walking
Gait cadence, stride, direction
B.02
Running
Cadence threshold + posture
B.03
Trotting
Quadruped-class gait state
B.04
Limping
Asymmetric gait detection
B.05
Fallen
Vertical-to-horizontal transition
B.06
Concealing
Hand-near-body anomaly
B.07
Loitering
Spatial dwell + behavior gate
B.0n
Custom
Train your own. We help.
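Loitering (B.07) is described above as spatial dwell plus a behavior gate. A toy version of the dwell half looks like this; the class name, threshold, and per-frame interface are illustrative, not SDK API:

```python
from collections import defaultdict

class LoiterGate:
    """Toy sketch of the 'spatial dwell + behavior gate' idea behind B.07.

    Accumulates how long each track ID has been inside a zone and flags it
    once dwell crosses a threshold. Names and numbers are illustrative.
    """
    def __init__(self, dwell_threshold_s: float = 30.0):
        self.dwell_threshold_s = dwell_threshold_s
        self.dwell = defaultdict(float)

    def update(self, track_id: int, in_zone: bool, dt_s: float) -> bool:
        if in_zone:
            self.dwell[track_id] += dt_s
        else:
            self.dwell[track_id] = 0.0   # leaving the zone resets dwell
        return self.dwell[track_id] >= self.dwell_threshold_s

gate = LoiterGate(dwell_threshold_s=2.0)
for _ in range(3):
    assert not gate.update(7, in_zone=True, dt_s=0.5)   # 1.5 s: not yet
print(gate.update(7, in_zone=True, dt_s=0.5))           # 2.0 s: flagged -> True
```

Because behaviors are frame-locked to the keypoint stream, the gate half (is the person browsing, or standing still facing a shelf?) comes for free from the same result.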
04 · The unfair part

Send us the image where it failed. We retrain.

Every other SDK ships a model and walks away. We ship a feedback loop. If LYNX misses something on your data, you submit the frame through the SDK. We render synthetic variations of that scenario and roll the improvement into the next monthly weight update — and you stay grandfathered on whatever rate you signed at.

→ STEP 01
You submit.
One SDK call: lynx.report(frame, expected="forklift"). We get the frame, the metadata, and what should have been detected.
⚙ STEP 02
We render.
Our procedural pipeline generates thousands of physics-accurate synthetic variations: lighting, occlusion, weather, angles. No manual annotation. No data team.
↻ STEP 03
Monthly drop.
Updated weights ship to every beta participant on a monthly cadence. Drop-in replacement, same SDK call, sharper model. Your edge case becomes everyone’s improvement.
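The steps above start with deciding which frames are worth submitting. A minimal sketch of that decision, assuming detections arrive as (class, confidence) pairs — a stand-in for whatever the SDK's result struct actually looks like:

```python
def should_report(detections, expected: str, min_conf: float = 0.5) -> bool:
    """Decide whether a frame is a candidate miss.

    If the class you expected never appears above `min_conf`, the frame is
    worth submitting via lynx.report(frame, expected=...) as described in
    step 01. The list-of-pairs shape here is an assumption for the sketch.
    """
    return not any(cls == expected and conf >= min_conf
                   for cls, conf in detections)

dets = [("person", 0.91), ("pallet", 0.64)]
print(should_report(dets, expected="forklift"))   # forklift missing -> True
print(should_report(dets, expected="person"))     # already detected -> False
```

A low-confidence hit counts as a miss too: a forklift at 0.3 confidence is exactly the kind of frame the retraining loop wants.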
05 · The full inventory

Every capability LYNX ships. All of them.

No marketing pages hiding behind “request a demo.” Here’s the full feature matrix — what’s in the base SDK, what costs more, and what you can pack on top.

| #  | Capability              | What it does                                                                                 | Category    | Tier    |
|----|-------------------------|----------------------------------------------------------------------------------------------|-------------|---------|
| 01 | Detection               | 80 COCO classes baseline. Bounding boxes, confidence, class.                                 | Perception  | In Base |
| 02 | Segmentation            | Pixel-level instance and semantic masks per detection.                                       | Perception  | In Base |
| 03 | Keypoints (LOD)         | 300+ human / 78 animal / 7 object. Auto-scales to camera distance.                           | Pose        | In Base |
| 04 | Monocular Depth         | Per-pixel depth from a single RGB camera. No second sensor.                                  | 3D          | In Base |
| 05 | 3D RGB                  | Object centroids in camera space. Feeds robotics pipelines directly.                         | 3D          | In Base |
| 06 | Multi-Object Tracking   | IoU + embedding tracker. Stable IDs across frames and occlusions.                            | Tracking    | In Base |
| 07 | Behaviors               | Walking, running, trotting, limping, fallen, concealing, loitering.                          | Activity    | In Base |
| 08 | OCR                     | Inline text recognition on any detection — labels, plates, IDs.                              | Recognition | In Base |
| 09 | Multi-Camera Intrinsics | Calibration utilities for stereo and array configurations.                                   | Calibration | In Base |
| 10 | Zone Analytics          | Define regions, get dwell, density, transit per class.                                       | Analytics   | In Base |
| 11 | Line Crossing           | Crossing counts, direction, hysteresis, plus what the box is doing as it crosses.            | Analytics   | In Base |
| 12 | Multi-Stream Manager    | Schedule N streams across one inference engine, batched.                                     | Platform    | In Base |
| 13 | American Made           | Designed, trained, and operated in the United States. No foreign-tech dependencies in the stack. | Origin  | In Base |
| 14 | IR / Thermal            | Same SDK on FLIR-class thermal sensors. Detects through smoke and dark.                      | Sensor      | Add-on  |
| 15 | LiDAR                   | Point-cloud-aware detection. Sensor fusion in one API.                                       | Sensor      | Add-on  |
| 16 | Embedding / Re-ID       | Per-detection feature vectors. Build similarity, dedup, re-identification.                   | Identity    | Add-on  |
| 17 | Agriculture Pack        | Tractors, crops, barns, irrigation. Layered on the 80-class base.                            | Pack        | Pack    |
| 18 | Livestock Pack          | Cows, horses, sheep, pigs. Pose, behavior, individual ID.                                    | Pack        | Pack    |
| 19 | Industrial Pack         | Equipment, workers, vehicles, machinery. Manufacturing floor.                                | Pack        | Pack    |
| 20 | Security / Defense Pack | People, vehicles, perimeter events, restricted zones.                                        | Pack        | Pack    |
| 21 | Logistics Pack          | Trucks, trailers, containers, railcars, yard movements.                                      | Pack        | Pack    |
06 · What would you build?

What will you build?

Every capability above is useful on its own. The interesting outcomes live in the seams — stacks of two or three primitives that produce something the feature matrix doesn’t list, because the value lives in the combination. A few we’ve seen customers build:

Loss prevention

Theft detection

Concealment + Loitering + Zone analytics + Tracking

Flag shoplifting-class events without writing manual rules. Combines hand-near-body anomaly detection with spatial dwell, aisle-level zones, and a stable identity across frames.

Safety

Fall alarm

Pose + Fallen behavior + Tracking

Elder-care floors, hospital wards, construction sites. Vertical-to-horizontal transition + persistent ID so a single fall doesn’t fire ten alarms.
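The "one fall, one alarm" behavior comes from edge-triggering on the persistent track ID. A minimal sketch of that debounce, assuming behavior labels arrive per track per frame (class and method names are illustrative, not SDK API):

```python
class FallAlarm:
    """Edge-triggered alarm: fire once when a track transitions into
    'fallen', not on every frame it stays down. Illustrative sketch of the
    Pose + Fallen behavior + Tracking combination."""
    def __init__(self):
        self.down = set()   # track IDs currently in the fallen state

    def update(self, track_id: int, behavior: str) -> bool:
        if behavior == "fallen":
            if track_id not in self.down:
                self.down.add(track_id)
                return True          # first fallen frame: raise the alarm
            return False             # still down: suppress repeats
        self.down.discard(track_id)  # back upright: re-arm for this track
        return False

alarm = FallAlarm()
frames = ["walking", "fallen", "fallen", "fallen", "walking", "fallen"]
fired = [alarm.update(3, b) for b in frames]
print(fired)   # [False, True, False, False, False, True]
```

Without a stable ID, the same logic is impossible: every frame looks like a new person falling.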

Industrial

Restricted-zone safety

Detection + Zone analytics + Pose + Behavior

Person in a no-go zone is a different alert than person walking-vs-fallen in a no-go zone. Posture + behavior tell you whether to call OSHA or just radio the floor.
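That walking-vs-fallen split in a no-go zone is a two-line composition once zone membership and behavior come from the same result. A sketch with an axis-aligned rectangular zone (the function, zone coordinates, and severity labels are all made up for illustration):

```python
def zone_alert(x, y, behavior, zone=(10.0, 0.0, 20.0, 8.0)):
    """Classify a detection against a no-go zone (xmin, ymin, xmax, ymax,
    in whatever ground-plane units you track in).

    Returns None outside the zone, 'warn' for someone moving through it,
    'critical' for someone down inside it -- the walking-vs-fallen
    distinction described above.
    """
    xmin, ymin, xmax, ymax = zone
    if not (xmin <= x <= xmax and ymin <= y <= ymax):
        return None
    return "critical" if behavior == "fallen" else "warn"

print(zone_alert(15.0, 4.0, "walking"))   # warn
print(zone_alert(15.0, 4.0, "fallen"))    # critical
print(zone_alert(2.0, 4.0, "fallen"))     # None -- outside the zone
```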

Retail analytics

Customer journey

Tracking + Re-ID embeddings + Zone analytics + Line crossing

Path-through-store, dwell-per-aisle, basket-vs-browser. Re-ID embeddings let the same person reappear across cameras without facial recognition.
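Cross-camera re-identification from embeddings reduces to nearest-neighbor matching on feature vectors. A sketch with cosine similarity, assuming per-detection vectors and a gallery of previously seen IDs (names, vector sizes, and the 0.8 threshold are illustrative):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_track(embedding, gallery, threshold=0.8):
    """Return the ID of the most similar known embedding, or None if
    nothing clears the threshold -- the same-person-across-cameras idea,
    without any face data. `gallery` maps IDs to previously seen vectors."""
    best_id, best_sim = None, threshold
    for pid, vec in gallery.items():
        sim = cosine(embedding, vec)
        if sim > best_sim:
            best_id, best_sim = pid, sim
    return best_id

gallery = {"visitor-12": [0.9, 0.1, 0.4], "visitor-31": [0.1, 0.95, 0.2]}
print(match_track([0.88, 0.12, 0.41], gallery))   # visitor-12
print(match_track([0.0, 0.0, 1.0], gallery))      # None: new visitor
```

A below-threshold match becomes a new gallery entry, so the journey graph grows as people move between cameras.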

Sports / fitness

Form analysis

Hundreds of keypoints + Tracking + Behavior

Per-athlete form scoring, gait asymmetry, technique drift over a season. Joint-level keypoints + stable tracking + the behavior layer that knows what walking-vs-running-vs-limping looks like.

Robotics

Manipulation perception

Detection + Segmentation + Monocular depth + 3D RGB

Pick-and-place / grasp planning from a single RGB camera. Object centroids in camera space + pixel-accurate masks; no second sensor, no stereo rig.
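Getting a camera-space centroid from a depth map plus a mask is standard pinhole back-projection. A sketch under assumed intrinsics (the fx/fy/cx/cy defaults here are made up; in practice they come from the SDK's calibration utilities):

```python
def centroid_3d(mask_pixels, depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Back-project masked pixels into camera space and average them.

    `mask_pixels` is an iterable of (u, v) pixel coordinates from the
    instance mask; `depth` maps (u, v) to metric depth. Standard pinhole
    model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    """
    pts = []
    for u, v in mask_pixels:
        z = depth[(u, v)]
        pts.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    n = len(pts)
    return tuple(sum(c) / n for c in zip(*pts))

# Two masked pixels straddling the principal point, both 2 m away:
depth = {(310, 240): 2.0, (330, 240): 2.0}
x, y, z = centroid_3d(depth.keys(), depth)
print(round(x, 3), round(y, 3), round(z, 3))   # 0.0 0.0 2.0
```

With monocular depth supplying Z, the whole grasp target comes from one RGB frame.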

SDK
Every primitive runs in the same process, same memory, same frame. Composition is a matter of how you read the result struct — not a vendor integration project.
07 · Six platforms · one SDK

Install once. Deploy anywhere.

No PyTorch version pinning. No CUDA hell. No ONNX-conversion graveyard. C-first architecture means native bindings in Python, Rust, Java, C#, and more, across six OS targets — same model behavior on every one.

P / 01
iOS
CoreML · Metal
◷ Beta
P / 02
Android
NNAPI · GPU delegate
◷ Beta
P / 03
Linux
CUDA · ONNX · CPU
◷ Beta
P / 04
Windows
CUDA · DirectML
◷ Beta
P / 05
macOS
Metal · Apple Silicon
◷ Beta
P / 06
Jetson
TensorRT · ARM64
◷ Beta
08 · And it actually wins

The model is smaller, faster, and more accurate.

We’re not leading with this — that’s the whole point. The features above are the reason to use LYNX. The numbers below are what happens when you train on physics-accurate synthetic data with pixel-perfect annotations instead of internet-scraped images with crowd-labeled bounding boxes.

Footprint · TensorRT engine
2.4MB

lynx-nano-w. Runs on a Jetson Nano. Fits in firmware. Ships in your APK without a fight.

LYNX
2.4 MB
YOLO11n
~5 MB
YOLOv8n
~5 MB
RT-DETR-L
~120 MB
Accuracy · Apples · M5 / Blaga 2025
44×

A 0.73M-parameter CNN matching a 32M-parameter transformer on its own benchmark. LYNX beats RT-DETR-L at 1/44 the size — and never saw a real image during training.

LYNX (synthetic)
0.778
RT-DETR-L
0.774
YOLO11n
0.563
YOLOv8n
0.561
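The 44× figure follows directly from the parameter counts quoted above:

```python
lynx_params = 0.73e6     # LYNX parameter count, from the comparison above
rtdetr_params = 32e6     # RT-DETR-L parameter count, from the same comparison
print(round(rtdetr_params / lynx_params), "x smaller")   # 44 x smaller
```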
09 · Pricing — at GA

Annual subscription. Unlimited deployments.

No per-device fees. No per-call billing. Deploy to a million devices for the same price as one. Beta participants lock the rate they sign at — and stay grandfathered as the platform grows.

Beta Program
$0
Limited cohort · invite-required
  • Full SDK · all 13 base capabilities
  • Direct line to the team building it
  • Influence the roadmap before GA
  • Locked-in grandfathered rate at GA
Request access
Recommended
Base SDK
$24K/year
$20K with single-payment discount
  • All 13 base capabilities
  • 80 COCO class baseline
  • Unlimited deployments · all 6 platforms
  • Monthly model updates
  • American Made
  • Commercial distribution rights
Reserve this rate
Enterprise
Custom
Volume pricing available
  • Everything in Base SDK
  • Multiple expansion packs included
  • Dedicated Slack + SLA
  • Custom training pipeline
  • Priority on monthly update queue
Talk to sales
Expansion Pack
$12K/yr each

~40 classes per pack: Agriculture, Livestock, Industrial, Security, Logistics. Layer any combination on the base.

Premium Add-On
$12K/yr each

IR / Thermal, LiDAR, Embeddings / Re-ID. Same SDK, more sensors, more identity.

Custom Cross-Pack Model
$5K one-time

Combine any classes you’ve licensed into a single optimized model. Includes one annual retrain.

Grandfathered pricing for the beta cohort: Sign on during beta and your subscription rate stays locked — through every monthly update, every expansion pack, every platform we add after GA. The earlier you commit, the better the rate you lock.

Get early access
to the model that shouldn’t exist yet.

Beta cohort is small and curated. Bring your hardest CV problem — we’ll tell you straight if LYNX is the right tool, and if it is, you get in early and stay grandfathered.

Talk to the team