The current generation of equiReelz — store, share, find — solves a real problem: getting your show jumping rounds from the smartphone that filmed them to the people who need to analyse them, in original quality, without manual forwarding. That's the boring necessary base. Anyone analysing video first has to be able to get the video.
The interesting work — the work that makes serious analysis available to riders who don't have a personal biomechanics coach — is what comes next. This post is the public roadmap for the ML and computer-vision features we're building into equiReelz. It is honest about what is shipped, what is in development, and what is research-stage. If you're a current user wondering whether equiReelz is going to keep getting better, this post is for you.
Where we are now (what's already shipped)
The base layer:
- Original-quality 4K upload from any phone, with no compression on the way to the people who need to watch it
- Auto-share rules — videos of horse X go to the right people automatically
- Search by horse, rider, location, date — find any clip in seconds
- Frame-by-frame and slow-motion playback for analysis
- Side-by-side video comparison
- QR code share links for ringside use
- iOS app with on-device AI that auto-detects horse videos in your camera roll
This layer is solid. The next layer is where the analysis becomes computational rather than purely visual.
What's coming next: the v2 ML stack
Each of the following is in active development. We're building incrementally and shipping individual pieces as they reach production quality, rather than waiting for a single big "v2" release. This is the order we're working in.
Auto-zoom
The single most-requested feature from current users. The problem: most riders' competition footage is filmed from a fixed point, such as the long side of the arena, the parents' bench, or the warm-up rail. The horse and rider occupy maybe 10% of the frame. To actually see anything, you have to pinch-zoom and pan by hand, which makes frame-by-frame analysis impractical.
Auto-zoom uses object detection on the horse and rider to follow them through the round automatically: the output is a virtual close-up that tracks the action without manual zooming and panning. The original wide shot stays available; the auto-zoomed version is a generated overlay you can watch instead.
Status: pose estimation pipeline working in the lab; current work is smoothing the virtual camera path to production quality.
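To make that concrete: the core of the feature is turning jittery per-frame bounding boxes into a stable virtual camera path. Here is a minimal sketch, assuming per-frame horse-and-rider boxes from a detector; the EMA smoother and every parameter value are illustrative, not the production pipeline.

```python
from dataclasses import dataclass

@dataclass
class Box:
    cx: float  # box centre x, in pixels
    cy: float  # box centre y, in pixels
    w: float   # box width
    h: float   # box height

def smooth_camera_path(boxes, frame_w, frame_h, zoom_pad=1.8, alpha=0.1):
    """Turn jittery per-frame horse-and-rider boxes into a stable crop path.

    alpha is the EMA factor: lower means a smoother, laggier camera.
    zoom_pad scales the detection box so the crop keeps some context
    around the horse instead of cutting it off at the edges.
    """
    crops = []
    state = None  # smoothed (cx, cy, size)
    for box in boxes:
        size = max(box.w, box.h) * zoom_pad
        target = (box.cx, box.cy, size)
        if state is None:
            state = target
        else:
            state = tuple(alpha * t + (1 - alpha) * s
                          for t, s in zip(target, state))
        cx, cy, size = state
        # Clamp so the crop window never leaves the original frame.
        size = min(size, frame_w, frame_h)
        cx = min(max(cx, size / 2), frame_w - size / 2)
        cy = min(max(cy, size / 2), frame_h - size / 2)
        crops.append((cx - size / 2, cy - size / 2, size, size))
    return crops  # one (x, y, w, h) crop window per frame
```

An exponential moving average is the simplest possible smoother; an offline pipeline can do better with a spline fit or Kalman smoother over the whole round, because unlike a live camera it is allowed to look ahead.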
Rider joint angles and 3D pose
The biomechanics of the rider — hip angle on the approach, shoulder rotation over the fence, leg position on landing — measured automatically from the same video you already uploaded. Useful for working on position issues that are nearly impossible to see in real time, even from the ground.
This builds on 3D human pose estimation models that have improved dramatically in the last 18 months. The technical work is mostly about applying them robustly to the specific geometry of a mounted rider.
Status: 3D pose models evaluated; integrating into the analysis pipeline.
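For a sense of what "measured automatically" means: once a pose model has produced 3D joint positions per frame, a metric like hip angle is plain vector geometry. A hypothetical sketch follows; the keypoint names mirror common pose-model conventions, and none of this is the shipped pipeline.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, formed by 3D points a-b-c."""
    ba = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bc = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def hip_angle(pose):
    """pose: dict of joint name -> (x, y, z).
    Shoulder-hip-knee angle (the rider's hip closure), averaged
    over both sides to reduce single-joint noise."""
    left = joint_angle(pose["left_shoulder"], pose["left_hip"], pose["left_knee"])
    right = joint_angle(pose["right_shoulder"], pose["right_hip"], pose["right_knee"])
    return (left + right) / 2
```

Tracked frame by frame through an approach, a curve of this angle turns "your hip opens too early" from a trainer's impression into something you can see on a plot.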
Fallen pole detection
Smaller feature, immediately useful: automatic detection of when a pole comes down. The video gets tagged with the fence number and the timestamp; you can search for "rounds with rails" or "clean rounds" without watching everything.
Status: object detection model in training.
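The detector is only half the feature; the other half is collapsing noisy per-frame detections into one searchable event per fence. A minimal sketch of that event logic, with illustrative input format and threshold:

```python
def pole_events(frames, fps, min_frames=5):
    """frames: one list of (fence_number, pole_is_down) detections per
    video frame. A pole must read as 'down' for min_frames consecutive
    frames before it counts, which filters out single-frame detector
    glitches. Returns {fence_number: timestamp_in_seconds}."""
    streak = {}   # fence -> current run of consecutive 'down' frames
    events = {}   # fence -> first confirmed timestamp
    for i, detections in enumerate(frames):
        down_now = set()
        for fence, is_down in detections:
            if is_down:
                down_now.add(fence)
                streak[fence] = streak.get(fence, 0) + 1
                if streak[fence] == min_frames and fence not in events:
                    events[fence] = (i - min_frames + 1) / fps
        for fence in streak:
            if fence not in down_now:
                streak[fence] = 0
    return events
```

The event dictionary is what feeds search: a round is "clean" if it comes back empty.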
Horse and rider re-identification
Once you've uploaded a few rounds of a horse, the platform can recognise the same horse in future uploads — even if the metadata wasn't filled in correctly. Solves the "who actually is this horse?" problem at scale, especially for riding schools with 20+ horses or sellers managing inventory.
Built using Siamese networks that map each horse to a feature embedding; new uploads are matched against the embeddings of horses already in your library. The same approach extends to riders.
Status: prototype working on small dataset; needs scale.
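The matching step itself is simple once the Siamese network has produced embeddings. A minimal sketch, assuming one embedding vector per upload and cosine similarity; the threshold value is illustrative:

```python
import numpy as np

def match_horse(query_emb, known_horses, threshold=0.8):
    """query_emb: embedding of the horse in a new upload.
    known_horses: dict of horse name -> stored embedding, produced
    by the same network. Returns the best match above the cosine
    similarity threshold, or None for a horse we have never seen."""
    best_name, best_sim = None, threshold
    q = query_emb / np.linalg.norm(query_emb)
    for name, emb in known_horses.items():
        sim = float(np.dot(q, emb / np.linalg.norm(emb)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name
```

At riding-school scale this linear scan is plenty; a vector index only becomes worth it at many thousands of horses.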
Social media conversion
Take a horizontal competition round, automatically generate the vertical clips for Instagram or TikTok — focused on the jumps, with optional slow-motion and a tracking close-up. For the riders and stables building a public following, this turns "I should post some video this week" into a one-tap export.
Status: pipeline working with current pose estimation; productionising the editorial choices (which jump, how much slow-mo, how to crop).
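Geometrically, the core of the export is a 9:16 window that follows the tracked subject across the original 16:9 frame, reusing the same smoothed camera path as auto-zoom. A minimal sketch of the crop computation; the function and its defaults are illustrative:

```python
def vertical_crop(cx, frame_w, frame_h, out_w=1080, out_h=1920):
    """Compute a 9:16 crop window from a wide frame, centred on the
    tracked subject's horizontal position cx. The full frame height
    is kept; the crop width follows from the target aspect ratio."""
    crop_w = frame_h * out_w / out_h  # e.g. 2160 * 9/16 = 1215 px on 4K
    x = min(max(cx - crop_w / 2, 0), frame_w - crop_w)
    # The window maps straight onto ffmpeg's crop=w:h:x:y filter,
    # followed by scale=1080:1920 for the final export.
    return int(x), 0, int(crop_w), frame_h
```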
Gait detection
Automatic identification of what the horse is doing in any clip: walk, trot, canter, gallop, or a jumping effort. Useful as input to all the other features (you don't want to measure stride length during a halt) but also on its own: search your library for "all the trotting clips" or "all the jumping efforts in 2025".
Approach: temporal action recognition models (LSTM or transformer) consuming pose estimation output frame by frame.
Status: in development.
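A hypothetical skeleton of that approach in PyTorch, assuming flattened per-frame keypoint coordinates as input; the layer sizes, window length, and five-class head are all illustrative:

```python
import torch
import torch.nn as nn

GAITS = ["walk", "trot", "canter", "gallop", "jump"]

class GaitLSTM(nn.Module):
    """Classify a window of per-frame pose keypoints into a gait label.
    Input shape: (batch, frames, features), where features are the
    flattened keypoint coordinates from the pose estimation stage."""
    def __init__(self, n_features=34, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, len(GAITS))

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, frames, hidden)
        return self.head(out[:, -1])   # classify from the last timestep

# e.g. a 2-second window at 30 fps, 17 keypoints x 2 coordinates = 34 features
model = GaitLSTM()
logits = model(torch.randn(1, 60, 34))
print(GAITS[logits.argmax(dim=1).item()])
```

A transformer encoder slots into the same shape of problem; which one wins is an empirical question on our data.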
Why we're shipping this in equiReelz instead of building it as a separate tool
The hard part of any serious analysis is having the source material in the first place. The reason existing biomechanics tools haven't taken off in the equestrian world is not the tools; it's that nobody has the videos, in original quality, organised by horse and rider, ready to feed into anything. You can't run useful pose estimation on a compressed 480p WhatsApp clip.
equiReelz is solving the source-material problem first. Once your library is in place — every round in original quality, properly tagged — applying ML to it is the obvious next step. We get to do for the equestrian world what tools like Hudl did for team sports: not just analysis, but analysis on top of organised footage that would otherwise not exist.
What this means for current users
Two things:
- Your existing library will be ML-ready. Every round you've uploaded in original quality is the input these models need. You don't have to re-film anything; the historical archive becomes the analysis backlog.
- You're the alpha audience. ML features roll out to current users for testing before general release. Real feedback from real riders shapes what ships.
Frequently asked
Does ML processing happen in the cloud or on-device?
Both. The iOS app already does on-device horse detection for the camera roll. The heavier models — pose estimation — run in the cloud after upload, because they need GPUs that phones don't have. Your videos stay private; processing happens within your account.
What if I just want the video storage and don't care about ML?
Fine. The base layer — store, share, find — is and will remain the core of the product. ML features are additive; you can ignore them entirely and still get the value of an organised, shareable video library.
The bottom line
equiReelz today is a video platform that solves the boring necessary problem: getting your rounds organised, shared, and searchable in original quality. What we're building next is that, plus a serious ML stack that turns your library into automatic analysis — auto-zoom, rider biomechanics, fallen pole detection, automatic social media clips, the works.
The version you sign up for today is the foundation: everything we ship next builds on the videos you've already uploaded, and the ML features arrive in the same account.