
OpenCap Monocular: 3D Human Kinematics and Dynamics From a Single Smartphone

Movement Bioengineering Lab (MoBL), University of Utah, Salt Lake City, UT
*Corresponding author: selim.gilon@utah.edu

Demo

Walking

Squatting

Sit-to-Stand

Walking (with asymmetry)

Input: iPhone video - 45° angle
Output: 3D Human Movement Kinematics
Motion Capture (gold standard)
OpenCap Monocular (ours)

On the left, the video captured by an iPhone; on the right, the 3D kinematics from marker-based motion capture (ground truth) and from OpenCap Monocular.

Project Summary

OpenCap Monocular estimates 3D human kinematics and musculoskeletal dynamics from a single static smartphone video. It is validated against marker-based motion capture for walking, squatting, and sit-to-stand activities and is free and open-source.

Key outputs include joint kinematics and kinetics, and example tasks are documented alongside videos and figures on this page.

Abstract

Quantifying human movement (kinematics) and musculoskeletal forces (kinetics) at scale, such as estimating quadriceps force during a sit-to-stand movement, could transform the prediction, treatment, and monitoring of mobility-related conditions. However, traditional motion analysis requires costly and time-intensive laboratory systems, which limit clinical translation. Scalable, accurate tools for biomechanical assessment are critically needed. We introduce OpenCap Monocular, an algorithm that estimates 3D kinematics and kinetics from a single static smartphone video. The method refines 3D human pose estimates from a monocular pose estimation model from computer vision (WHAM) via optimization, computes the kinematics of a biomechanically constrained skeletal model, and estimates kinetics via physics-based simulation and machine learning. We validated OpenCap Monocular against marker-based motion capture and force plate data for walking, squatting, and sit-to-stand tasks. OpenCap Monocular achieved low kinematic error (4.8° rotational mean absolute error [MAE]; 3.4 cm translational MAE), outperforming a regression-only computer vision baseline (9.3° rotational MAE; 11.0 cm translational MAE). It also estimated ground reaction forces during walking with accuracy comparable to, or better than, that of our prior two-camera OpenCap system. We demonstrate clinically meaningful accuracy in applications related to frailty and knee osteoarthritis, including estimating the knee extension moment during sit-to-stand transitions and the knee adduction moment during walking. OpenCap Monocular is deployed via a smartphone app and secure cloud computing, enabling free, accessible single-smartphone biomechanical assessments. Such accessibility enables large-scale remote studies and, ultimately, routine evaluations of mobility and function in the clinic or at home. Our code is available at github.com/utahmobl/opencap-monocular.
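The rotational and translational errors reported above are standard mean absolute errors between the monocular estimate and the motion capture reference. A minimal sketch of that metric, with hypothetical knee flexion values (not data from the paper):

```python
import numpy as np

def mean_absolute_error(estimate, reference):
    """Mean absolute error between two equally sampled signals."""
    estimate = np.asarray(estimate, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.mean(np.abs(estimate - reference)))

# Hypothetical example: knee flexion angle (degrees) over five video frames.
mocap_deg = [10.0, 25.0, 40.0, 30.0, 15.0]  # marker-based motion capture
mono_deg  = [12.0, 22.0, 44.0, 31.0, 13.0]  # monocular estimate

print(mean_absolute_error(mono_deg, mocap_deg))  # 2.4
```

In the validation, this quantity is averaged across joints and trials, in degrees for rotational coordinates and in centimeters for translational ones.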

Graphical abstract showing the monocular pipeline and outputs

Best Practices for Recording

OpenCap Monocular has been validated on the following activities:

Sit-to-stand
Walking
Squats

It may work on other activities, but these have not been validated. Jumping is not currently supported.

⚠️ Important Guidelines for Best Results:

Camera Position: Place the camera at a 45° angle in front of the subject. If part of the body is not visible for long durations, it will not be tracked well.
Clothing: Avoid baggy clothes and multiple layers.
Lighting: Use normal lighting conditions; avoid harsh shadows and very dim environments.
Environment: Keep only one person in the foreground of the frame. People in the distant background are okay.
Distance: Keep the entire person in frame for the whole movement, with the participant less than 5 meters from the camera.
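The framing guideline above can be checked programmatically. A minimal sketch (a hypothetical helper, not part of OpenCap Monocular) that flags frames where the subject's detected bounding box clips the image border:

```python
def subject_fully_in_frame(bbox, frame_w, frame_h, margin=5):
    """Return True if the person's pixel bounding box (x_min, y_min, x_max, y_max)
    stays at least `margin` pixels away from every edge of a frame_w x frame_h image."""
    x_min, y_min, x_max, y_max = bbox
    return (x_min >= margin and y_min >= margin
            and x_max <= frame_w - margin and y_max <= frame_h - margin)

# Hypothetical 1920x1080 video: the first box is safely inside the frame,
# the second clips the right edge, so that frame would be tracked poorly.
print(subject_fully_in_frame((400, 100, 900, 1000), 1920, 1080))    # True
print(subject_fully_in_frame((1500, 100, 1918, 1000), 1920, 1080))  # False
```

Running a check like this over per-frame person detections is one way to screen recordings before uploading them for processing.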

Interactive Visualizer

We developed an interactive web-based visualizer to explore the 3D kinematics computed by OpenCap Monocular alongside the original video and ground truth data.

You can interact with sample results, compare different methods, and view the movements from any angle.

Get Started

Ready to measure human movement with a single smartphone? Start using OpenCap Monocular today!

OpenCap Monocular is freely available and requires no specialized hardware beyond an iPhone or iPad.