Hundreds of keypoints on humans. Behaviors trained in. LOD that auto-scales to camera distance. What the rest of the industry calls a premium add-on, an enterprise tier, or a “future roadmap item,” we ship as standard. One SDK. One install. Beta opens soon.
Most computer vision SDKs ship detection. Maybe segmentation. Then they sell you everything else as add-ons, partner integrations, or “talk to enterprise.” We took a different bet: ship the whole stack, in one install, and let our pricing reflect the value of an integrated system instead of a fragmented one.
Pose models that ship 17 or 33 keypoints assume the subject fills the frame. That’s not how production cameras work. LYNX runs level-of-detail keypoints: at 50m we give you skeletal anchors, at 5m we give you joints and limbs, at touch distance we give you fingertips, lip corners, eyelid landmarks. You don’t toggle between models. The SDK picks the right resolution per detection, per frame.
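A minimal sketch of the LOD idea, assuming nothing about the real LYNX API: the keypoint tier is chosen per detection from estimated subject distance, with no model toggling by the caller. The function name and the distance thresholds here are invented for illustration.

```python
# Illustrative stand-in, not the LYNX SDK: pick a keypoint resolution tier
# per detection from camera-to-subject distance.

def keypoint_tier(distance_m: float) -> str:
    """Map estimated subject distance to a keypoint resolution tier."""
    if distance_m > 20:
        return "skeletal"   # coarse anchors: head, torso, hips
    if distance_m > 2:
        return "joints"     # full joints and limbs
    return "fine"           # fingertips, lip corners, eyelid landmarks

# One frame, three detections at different ranges: each gets its own tier.
detections = [{"id": 1, "distance_m": 50.0},
              {"id": 2, "distance_m": 5.0},
              {"id": 3, "distance_m": 0.5}]
tiers = {d["id"]: keypoint_tier(d["distance_m"]) for d in detections}
print(tiers)  # {1: 'skeletal', 2: 'joints', 3: 'fine'}
```

The point is the shape of the interface: resolution is a per-detection, per-frame decision inside the SDK, not a model the caller swaps.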
A bounding box around a person isn’t a useful signal — it’s the start of a research project. Behaviors are first-class outputs in LYNX. The model returns the activity, the duration, and the confidence, frame-locked to the keypoint stream. Train in more behaviors any time with custom packs, or send us the case we missed.
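Hypothetical shape, not LYNX’s documented types: a sketch of what a behavior event frame-locked to the keypoint stream could look like. Every field name here is an assumption for illustration.

```python
# Stand-in type, not the real SDK output: a behavior event carrying activity,
# duration, and confidence, frame-locked to the keypoint stream.
from dataclasses import dataclass

@dataclass
class BehaviorEvent:
    activity: str        # e.g. "loitering", "lifting", "fallen"
    start_frame: int     # frame index in the keypoint stream
    end_frame: int
    confidence: float    # model confidence in [0, 1]
    track_id: int        # stable identity across frames

    @property
    def duration_frames(self) -> int:
        return self.end_frame - self.start_frame + 1

event = BehaviorEvent("loitering", start_frame=120, end_frame=419,
                      confidence=0.91, track_id=7)
print(event.duration_frames)  # 300
```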
Every other SDK ships a model and walks away. We ship a feedback loop. If LYNX misses something on your data, you submit the frame through the SDK. We render synthetic variations of that scenario and roll the improvement into the next monthly weight update — and you stay grandfathered on whatever rate you signed at.
lynx.report(frame, expected="forklift"). We get the frame, the metadata, and what should have been detected.

No marketing pages hiding behind “request a demo.” Here’s the full feature matrix — what’s in the base SDK, what costs more, and what you can pack on top.
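A sketch of the payload a report call like lynx.report(frame, expected="forklift") might assemble. The field names and hashing choice are assumptions, not the documented wire format.

```python
# Stand-in for the report path, not the real SDK: bundle the frame fingerprint,
# deployment metadata, and the label that should have been detected.
import hashlib
import json

def build_report(frame_bytes: bytes, expected: str, camera_id: str) -> str:
    payload = {
        "expected": expected,        # what should have been detected
        "camera_id": camera_id,      # deployment metadata (assumed field)
        "frame_sha256": hashlib.sha256(frame_bytes).hexdigest(),
        "frame_len": len(frame_bytes),
    }
    return json.dumps(payload, sort_keys=True)

report = build_report(b"\x00" * 16, expected="forklift", camera_id="dock-3")
print(report)
```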
Every capability above is useful on its own. The interesting outcomes live in the seams — stacks of two or three primitives that produce something the feature matrix doesn’t list, because the value lives in the combination. A few we’ve seen customers build:
Flag shoplifting-class events without writing manual rules. Combines hand-near-body anomaly with spatial dwell + which aisle + a stable identity across frames.
Elder-care floors, hospital wards, construction sites. Vertical-to-horizontal transition + persistent ID so a single fall doesn’t fire ten alarms.
A person walking through a no-go zone is a different alert than a person fallen in one. Posture + behavior tell you whether to call OSHA or just radio the floor.
Path-through-store, dwell-per-aisle, basket-vs-browser. Re-ID embeddings let the same person reappear across cameras without facial recognition.
Per-athlete form scoring, gait asymmetry, technique drift over a season. Joint-level keypoints + stable tracking + the behavior layer that knows what walking-vs-running-vs-limping looks like.
Pick-and-place / grasp planning from a single RGB camera. Object centroids in camera space + pixel-accurate masks; no second sensor, no stereo rig.
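To make the “stacks of primitives” concrete, here is a minimal sketch of the fall-alert dedup from the list above: one alarm per tracked person per cooldown window, so a single fall doesn’t fire ten times. The event shape, activity label, and cooldown value are all assumptions, not the LYNX API.

```python
# Stand-in logic, not the SDK: dedupe fall alarms using a persistent track ID.
COOLDOWN_FRAMES = 300  # ~10 s at 30 fps (assumed value)

def dedupe_falls(events, cooldown=COOLDOWN_FRAMES):
    """events: frame-ordered (frame_idx, track_id, activity) tuples."""
    last_alarm = {}  # track_id -> frame of the last alarm for that person
    alarms = []
    for frame_idx, track_id, activity in events:
        if activity != "fallen":
            continue
        prev = last_alarm.get(track_id)
        if prev is None or frame_idx - prev >= cooldown:
            alarms.append((frame_idx, track_id))
            last_alarm[track_id] = frame_idx
    return alarms

events = [(100, 7, "fallen"), (101, 7, "fallen"), (150, 7, "fallen"),
          (500, 7, "fallen"), (120, 9, "walking")]
print(dedupe_falls(events))  # [(100, 7), (500, 7)]
```

The primitives do the hard part (posture transition, stable identity); the application layer is a few lines of plumbing like this.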
No PyTorch version pinning. No CUDA hell. No ONNX-conversion graveyard. C-first architecture means native bindings in Python, Rust, Java, C#, and more, across six OS targets — same model behavior on every one.
We’re not leading with this — that’s the whole point. The features above are the reason to use LYNX. The numbers below are what happens when you train on physics-accurate synthetic data with pixel-perfect annotations instead of internet-scraped images with crowd-labeled bounding boxes.
lynx-nano-w. Runs on a Jetson Nano. Fits in firmware. Ships in your APK without a fight.
A 0.73M-parameter CNN matching a 32M-parameter transformer on its own benchmark. LYNX beats RT-DETR-L at 1/44 the size — and never saw a real image during training.
No per-device fees. No per-call billing. Deploy to a million devices for the same price as one. Beta participants lock the rate they sign at — and stay grandfathered as the platform grows.
~40 classes per pack: Agriculture, Livestock, Industrial, Security, Logistics. Layer any combination on the base.
IR / Thermal, LiDAR, Embeddings / Re-ID. Same SDK, more sensors, more identity.
Combine any classes you’ve licensed into a single optimized model. Includes one annual retrain.
Beta cohort is small and curated. Bring your hardest CV problem — we’ll tell you straight if LYNX is the right tool, and if it is, you get in early and stay grandfathered.