Quickstart
Five minutes from a video file to SMPL parameters, using Google Colab's free T4 GPU. No install, no login, no data ever leaves Colab's sandbox.
Step 1 — open the notebook
Click the button on the landing page or open this link:
Notebook 01 · Video → SMPL (Colab)
Step 2 — enable the GPU
Runtime → Change runtime type → T4 GPU → Save. Colab's free tier hands out T4s on demand; occasionally you'll see a "Busy" status instead. If so, wait 1–2 minutes and try again.
Step 3 — run all cells
Runtime → Run all.
The notebook will:
- Check the GPU and torch install.
- Clone the public GVHMR repo.
- Download model weights (~2 GB, one-time).
- Prompt you to upload a video.
- Run GVHMR extraction (~1–2 minutes for a 10-second video).
- Export `hmr4d_results.pt` and a `smpl_params.json` summary.
- Trigger downloads to your machine.
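In code, the upload and download steps look roughly like this — a sketch assuming the notebook uses Colab's standard `google.colab.files` helpers (it only does anything inside Colab):

```python
# Sketch of the notebook's upload/download cells. Assumes the standard
# google.colab.files helpers; outside Colab the import fails gracefully.
try:
    from google.colab import files
    in_colab = True
except ImportError:
    in_colab = False

if in_colab:
    uploaded = files.upload()                 # opens a browser file picker
    video_path = next(iter(uploaded))         # name of the uploaded clip
    # ... GVHMR extraction runs here ...
    files.download("hmr4d_results.pt")        # triggers a browser download
    files.download("smpl_params.json")
else:
    print("Not running in Colab; skipping upload/download.")
```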
What you get
- `hmr4d_results.pt` — full GVHMR output with SMPL global/incam params, camera, and keypoints.
- `smpl_params.json` — summary with frame count, shapes, dtype.
Feed the `.pt` into Notebook 02 to retarget to Unitree G1, or use Notebook 03 for the full pipeline in one pass.
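If you want to peek at the results before retargeting, both files can be inspected locally — a minimal sketch (the exact keys inside the `.pt` depend on the GVHMR version, so treat the printed names as what's authoritative, not this snippet):

```python
import json
import os

# Minimal sketch: inspect the notebook's exports after downloading them.
# File names match the notebook's output; the keys depend on the GVHMR version.
summary = None
if os.path.exists("smpl_params.json"):
    with open("smpl_params.json") as f:
        summary = json.load(f)            # frame count, shapes, dtype
    print(summary)

if os.path.exists("hmr4d_results.pt"):
    import torch                          # installed by the notebook / locally
    results = torch.load("hmr4d_results.pt", map_location="cpu")
    print(list(results))                  # top-level result groups
```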
Tips for better results
- Single person, full body visible. Both GVHMR and the retargeter assume a single subject.
- Static camera. GVHMR's `-s` flag is enabled by default; camera motion estimation (DPVO) costs extra RAM and isn't needed for most short clips.
- Side profile for gait/locomotion; front view for upper-body motions.
- 10–60 seconds is the sweet spot for free Colab.
- ≥ 720p for reliable person detection; upscale with `ffmpeg` if needed.
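For the upscale tip, a typical `ffmpeg` invocation looks like this (file names are placeholders; `-2` picks an even width that preserves the aspect ratio while the height is scaled to 720):

```shell
filter="scale=-2:720"   # -2 = even width, aspect ratio preserved
# Guarded so pasting this is a no-op if ffmpeg or the clip is missing.
if command -v ffmpeg >/dev/null && [ -f input.mp4 ]; then
  ffmpeg -i input.mp4 -vf "$filter" -c:a copy input_720p.mp4
fi
```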
Troubleshooting
Hit a wall? Start with the troubleshooting page — the top 12 failure modes plus exact copy-paste fixes.
Want to run locally?
If you have an NVIDIA GPU with ≥ 8 GB VRAM, see the local setup guide. A local install takes ~30 minutes the first time and gives you a Gradio UI plus a CLI.