

XR experiences live or die by their stitching seams. Inspired by Insta360-style capture rigs, we built a real-time panorama fusion stack that outputs natural-looking spheres without the ghosting you get from naïve alpha blending.
What we built
- Adaptive seam finding. We track parallax-heavy regions (such as nearby hands) and steer seams toward low-motion zones, while multiband blending absorbs exposure shifts (a seam-cost sketch follows this list).
- GPU-first pipeline. Camera feeds stream into CUDA kernels tuned for batched projection, so we hold 60 fps on backpack PCs and embedded XR modules alike (a projection-kernel sketch follows this list).
- Auto-calibration. Field crews no longer run checkerboard routines; we estimate lens offsets and white balance on the fly using SLAM-style feature tracking (a white-balance sketch follows this list).
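To make the seam-steering idea concrete, here is a minimal CUDA sketch of how a per-pixel seam cost could combine photometric disagreement between two aligned views with a temporal motion penalty; a downstream seam search would then minimize this cost. The kernel name, weight, and synthetic data are hypothetical simplifications, not our production code.

```cuda
// seam_cost.cu -- illustrative only; names, weights, and data are hypothetical.
#include <cuda_runtime.h>
#include <cstdio>

// Per-pixel seam cost over the overlap of two aligned camera views:
// high where the views disagree (parallax) or where the scene is moving,
// so a downstream seam search is steered toward static, well-agreeing pixels.
__global__ void seamCostKernel(const float* a, const float* b,
                               const float* aPrev,          // previous frame of view A
                               float* cost, int w, int h,
                               float motionWeight)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    int i = y * w + x;
    float photometric = fabsf(a[i] - b[i]);        // disagreement between views
    float motion      = fabsf(a[i] - aPrev[i]);    // temporal change in view A
    cost[i] = photometric + motionWeight * motion;
}

int main()
{
    const int w = 640, h = 480, n = w * h;
    float *a, *b, *aPrev, *cost;
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&aPrev, n * sizeof(float));
    cudaMallocManaged(&cost, n * sizeof(float));

    // Synthetic data: identical static views except a "moving hand" patch in view A.
    for (int i = 0; i < n; ++i) { a[i] = 0.5f; b[i] = 0.5f; aPrev[i] = 0.5f; }
    for (int y = 200; y < 280; ++y)
        for (int x = 300; x < 380; ++x) a[y * w + x] = 0.9f;

    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    seamCostKernel<<<grid, block>>>(a, b, aPrev, cost, w, h, /*motionWeight=*/2.0f);
    cudaDeviceSynchronize();

    printf("cost in static area: %.2f, cost in moving patch: %.2f\n",
           cost[100 * w + 100], cost[240 * w + 340]);

    cudaFree(a); cudaFree(b); cudaFree(aPrev); cudaFree(cost);
    return 0;
}
```

Weighting motion higher than photometric error is one way to express "steer seams toward low-motion zones": a seam that crosses a moving hand pays a large penalty even if the two views happen to agree for a frame.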
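The batched projection step can be pictured as one thread per panorama pixel per camera: turn the equirectangular pixel into a ray, rotate it into that camera's frame, project, and sample. The sketch below assumes a pinhole model with nearest-neighbour sampling for brevity; the real pipeline uses fisheye models, distortion correction, texture sampling, and seam-aware blending. All names are illustrative, and launching mirrors the seam-cost example above.

```cuda
// project_equirect.cu -- simplified sketch; real rigs use fisheye models,
// per-camera distortion, and bilinear/texture sampling.
#include <cuda_runtime.h>
#include <math.h>

struct Camera {
    float R[9];       // world-to-camera rotation, row-major
    float fx, fy;     // focal lengths in pixels
    float cx, cy;     // principal point
    int   width, height;
};

// One thread per output pixel per camera (batched over blockIdx.z).
__global__ void projectToEquirect(const unsigned char* const* camImages,
                                  const Camera* cams, int numCams,
                                  unsigned char* pano, int panoW, int panoH)
{
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    int c  = blockIdx.z;                       // camera index in the batch
    if (px >= panoW || py >= panoH || c >= numCams) return;

    const float kPi = 3.14159265358979f;

    // Equirectangular pixel -> unit ray in world coordinates.
    float lon = (px / (float)panoW) * 2.0f * kPi - kPi;   // [-pi, pi)
    float lat = 0.5f * kPi - (py / (float)panoH) * kPi;   // [pi/2, -pi/2]
    float d[3] = { cosf(lat) * sinf(lon), sinf(lat), cosf(lat) * cosf(lon) };

    // Rotate the ray into this camera's frame.
    const Camera cam = cams[c];
    float rx = cam.R[0]*d[0] + cam.R[1]*d[1] + cam.R[2]*d[2];
    float ry = cam.R[3]*d[0] + cam.R[4]*d[1] + cam.R[5]*d[2];
    float rz = cam.R[6]*d[0] + cam.R[7]*d[1] + cam.R[8]*d[2];
    if (rz <= 0.0f) return;                    // ray points behind this camera

    // Pinhole projection and nearest-neighbour sample.
    int u = (int)(cam.fx * rx / rz + cam.cx);
    int v = (int)(cam.fy * ry / rz + cam.cy);
    if (u < 0 || u >= cam.width || v < 0 || v >= cam.height) return;

    // Last writer wins here; the real pipeline blends across the seam instead.
    pano[py * panoW + px] = camImages[c][v * cam.width + u];
}
```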
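For the photometric half of auto-calibration, one simple approach is to accumulate per-channel intensity sums over SLAM-tracked feature matches in the overlap and derive a gain that brings the second camera into agreement with the first. The sketch below shows only that accumulation; the geometric (lens-offset) refinement is omitted, and the struct and function names are made up for illustration.

```cuda
// wb_gain.cu -- illustrative sketch of on-the-fly white-balance estimation
// from matched features; lens-offset refinement is omitted.
#include <cuda_runtime.h>

struct Match {          // a feature tracked in both cameras (SLAM-style)
    int xA, yA;         // pixel location in camera A
    int xB, yB;         // pixel location in camera B
};

// Accumulate per-channel intensity sums over matched features.
// The host then computes gain[c] = sumA[c] / sumB[c] and applies it to
// camera B, so both views agree photometrically before blending.
__global__ void accumulateChannelSums(const unsigned char* imgA,
                                      const unsigned char* imgB,
                                      int widthA, int widthB,
                                      const Match* matches, int numMatches,
                                      float* sumA, float* sumB)   // 3 floats each (RGB)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numMatches) return;

    Match m = matches[i];
    for (int c = 0; c < 3; ++c) {
        float a = imgA[(m.yA * widthA + m.xA) * 3 + c];
        float b = imgB[(m.yB * widthB + m.xB) * 3 + c];
        atomicAdd(&sumA[c], a);
        atomicAdd(&sumB[c], b);
    }
}
```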
Why it was hard
Perfect seams need both photometric and geometric agreement. Outdoor shoots throw brutal challenges at both: moving clouds, spinning selfie sticks, and spotlight flares at night markets. Hitting real-time budgets meant rewriting parts of the math as LUT lookups and trimming every memory copy we could.
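As an example of the LUT rewrites, per-pixel trigonometry in the lens model can be replaced by a table that is precomputed once per calibration update and read from constant memory on the device. The table size and the distortion polynomial below are placeholders, not our actual lens model.

```cuda
// lens_lut.cu -- sketch of trading per-pixel trig for a lookup table;
// the table size and distortion polynomial are placeholders.
#include <cuda_runtime.h>
#include <math.h>

#define LUT_SIZE 2048
__constant__ float d_angleLUT[LUT_SIZE];   // radius -> ray angle, precomputed on the host

// Host side: fill the table once per calibration update.
void buildAngleLUT(float maxRadius, float k1, float k2)
{
    float lut[LUT_SIZE];
    for (int i = 0; i < LUT_SIZE; ++i) {
        float r = maxRadius * i / (LUT_SIZE - 1);
        // Placeholder model: theta = atan(r), corrected by a small polynomial.
        float theta = atanf(r);
        lut[i] = theta * (1.0f + k1 * theta * theta
                               + k2 * theta * theta * theta * theta);
    }
    cudaMemcpyToSymbol(d_angleLUT, lut, sizeof(lut));
}

// Device side: one indexed read instead of atanf plus a polynomial per pixel.
__device__ float lookupAngle(float radius, float maxRadius)
{
    int idx = (int)(radius / maxRadius * (LUT_SIZE - 1) + 0.5f);
    idx = min(max(idx, 0), LUT_SIZE - 1);
    return d_angleLUT[idx];
}
```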
Where it lands
- VR tourism. Panoramas stay immersive even when visitors peek straight up at neon signs.
- Security centers. Operators get uninterrupted situational awareness as we fuse multiple fisheyes into a single, low-latency console feed.
What’s next
We are prototyping depth-aware blending so virtual objects can hide behind real pillars, plus a lightweight SDK so third-party camera makers can drop our stitcher into their firmware.
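The depth-aware blending we are prototyping boils down to a per-pixel depth test at composite time: the virtual object only wins where it is closer than the real scene, so it slips behind real pillars. A bare-bones sketch, with illustrative names and a dense real-scene depth map assumed to exist:

```cuda
// depth_composite.cu -- sketch of depth-aware compositing; names are illustrative
// and a per-pixel depth map of the real panorama is assumed to be available.
#include <cuda_runtime.h>

__global__ void depthAwareComposite(const unsigned char* panoRGB, const float* panoDepth,
                                    const unsigned char* virtRGB, const float* virtDepth,
                                    unsigned char* outRGB, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    int i = y * w + x;
    // Per-pixel depth test: keep the real pixel when it is nearer than the
    // virtual one (e.g. a pillar standing in front of a rendered object).
    bool virtualWins = virtDepth[i] < panoDepth[i];
    for (int c = 0; c < 3; ++c)
        outRGB[i * 3 + c] = virtualWins ? virtRGB[i * 3 + c] : panoRGB[i * 3 + c];
}
```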



