
Carving 3D Anatomy Out of Noisy Scans


PET/MR overlay reminding us how multi-modal data layers together.

Our collaboration with Duke University’s medical school supplied us with rich—but messy—volumes of CT and MRI scans. The goal: segment complex organs and vasculature in full 3D so surgeons and researchers can study change over time, not just slice by slice.

What we built

  • Multi-modal fusion. We align CT, MRI, and contrast-enhanced volumes through deformable registration, letting the network borrow the sharp edges of CT and the soft-tissue nuance of MRI in one pass (a registration sketch follows this list).
  • Topology-aware loss. Classic Dice scores are not enough when a missed hepatic vein could change a surgical plan, so we penalize topological breaks and reward continuity along vessel trees (see the loss sketch below).
  • Research-ready tooling. Outputs stream straight into volumetric viewers plus Python notebooks so clinicians can measure tumor volume, distance to arteries, and longitudinal growth without switching apps (a sample measurement snippet follows).
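
To make the fusion step concrete, here is a minimal sketch of cross-modality alignment using SimpleITK's B-spline registration with mutual information. The file names, grid size, and optimizer settings are illustrative placeholders, not our production configuration.

```python
# Illustrative only: a minimal CT-to-MRI alignment sketch with SimpleITK.
# Mutual information handles the cross-modality intensity differences;
# file names and the control-point grid are placeholder assumptions.
import SimpleITK as sitk

fixed = sitk.ReadImage("ct_volume.nii.gz", sitk.sitkFloat32)    # CT reference
moving = sitk.ReadImage("mri_volume.nii.gz", sitk.sitkFloat32)  # MRI to deform

# Coarse B-spline control-point grid; finer grids capture more deformation.
transform = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsLBFGSB(numberOfIterations=100)
reg.SetInitialTransform(transform, inPlace=False)
reg.SetShrinkFactorsPerLevel([4, 2, 1])           # multi-resolution pyramid
reg.SetSmoothingSigmasPerLevel([2.0, 1.0, 0.0])

final_transform = reg.Execute(fixed, moving)

# Resample the MRI onto the CT grid so both volumes feed the network in one pass.
aligned_mri = sitk.Resample(moving, fixed, final_transform,
                            sitk.sitkLinear, 0.0, sitk.sitkFloat32)
sitk.WriteImage(aligned_mri, "mri_aligned_to_ct.nii.gz")
```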
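
The continuity term is easiest to see in code. Below is a hedged PyTorch sketch of a soft Dice loss blended with a clDice-style soft-skeleton term, which rewards overlap along the vessel centerline. The exact skeletonization and weighting we use differ, and the tensor shapes here are assumptions.

```python
# Illustrative only: soft Dice plus a clDice-style continuity term that
# penalizes breaks along a differentiable skeleton of the vessel tree.
import torch
import torch.nn.functional as F

def soft_erode(x):
    return -F.max_pool3d(-x, kernel_size=3, stride=1, padding=1)

def soft_dilate(x):
    return F.max_pool3d(x, kernel_size=3, stride=1, padding=1)

def soft_skeleton(x, iterations=10):
    """Differentiable approximation of a morphological skeleton."""
    skel = F.relu(x - soft_dilate(soft_erode(x)))
    for _ in range(iterations):
        x = soft_erode(x)
        skel = skel + F.relu(x - soft_dilate(soft_erode(x))) * (1.0 - skel)
    return skel

def soft_dice(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def topology_aware_loss(pred, target, alpha=0.5, eps=1e-6):
    """pred, target: (B, 1, D, H, W) probabilities / masks in [0, 1]."""
    skel_pred, skel_true = soft_skeleton(pred), soft_skeleton(target)
    # Topological precision/sensitivity: each skeleton must lie inside the other mask.
    tprec = ((skel_pred * target).sum() + eps) / (skel_pred.sum() + eps)
    tsens = ((skel_true * pred).sum() + eps) / (skel_true.sum() + eps)
    cl_dice = 1.0 - 2.0 * tprec * tsens / (tprec + tsens)
    return (1.0 - alpha) * soft_dice(pred, target) + alpha * cl_dice
```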
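
And here is the kind of notebook-side measurement the tooling makes trivial: tumor volume in millilitres straight from a segmentation mask. The file name and label value are assumptions for illustration.

```python
# Illustrative only: tumor volume from a segmentation mask and voxel spacing.
import SimpleITK as sitk
import numpy as np

seg = sitk.ReadImage("tumor_segmentation.nii.gz")
mask = sitk.GetArrayFromImage(seg) == 1          # label 1 = tumor (assumed)
voxel_mm3 = np.prod(seg.GetSpacing())            # spacing is (x, y, z) in mm
print(f"Tumor volume: {mask.sum() * voxel_mm3 / 1000.0:.1f} mL")
```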

Why it was hard

Three-dimensional data is huge. Memory pressure forced us to build chunked training pipelines and lean on mixed precision without blowing up gradients. Aligning modalities also meant obsessive QA, since tiny registration errors cascade into big segmentation mistakes.
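
A stripped-down PyTorch version of that pattern looks roughly like this; `model`, `volume_loader`, and the patch size are placeholder assumptions, and gradients simply accumulate across patches before a single scaled optimizer step.

```python
# Illustrative only: patch-wise ("chunked") training with automatic mixed
# precision and gradient scaling so fp16 gradients do not underflow.
# `model`, `volume_loader`, and `topology_aware_loss` are placeholders.
import torch

scaler = torch.cuda.amp.GradScaler()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def iterate_patches(volume, labels, patch=(128, 128, 128)):
    """Yield non-overlapping sub-volumes so a full scan never has to fit on the GPU."""
    D, H, W = volume.shape[-3:]
    for z in range(0, D, patch[0]):
        for y in range(0, H, patch[1]):
            for x in range(0, W, patch[2]):
                sl = (..., slice(z, z + patch[0]),
                      slice(y, y + patch[1]), slice(x, x + patch[2]))
                yield volume[sl], labels[sl]

for volume, labels in volume_loader:                 # one full scan per step
    optimizer.zero_grad(set_to_none=True)
    for vol_patch, lab_patch in iterate_patches(volume, labels):
        with torch.cuda.amp.autocast():              # fp16 forward pass
            pred = model(vol_patch.cuda(non_blocking=True))
            loss = topology_aware_loss(pred, lab_patch.cuda(non_blocking=True))
        scaler.scale(loss).backward()                # scaled to avoid underflow
    scaler.step(optimizer)                           # unscales, skips step on inf/nan
    scaler.update()
```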

Where it lands

  • Treatment planning. Oncologists can map tumor boundaries precisely before deciding between ablation and resection.
  • Academic studies. Longitudinal datasets finally get high-quality labels, enabling statistically sound findings rather than anecdotal insights.

What’s next

We are pairing the model with active learning loops so radiologists only annotate the most uncertain voxels, and we are exploring federated fine-tunes so hospitals can keep data in-house while sharing model improvements.
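
The selection step in that loop can be as simple as ranking patches by voxel-wise predictive entropy. The sketch below assumes a generic `model` and a fixed patch grid; it is meant to show the idea, not our exact acquisition function.

```python
# Illustrative only: pick the most uncertain regions for radiologist review
# by ranking patches on mean voxel-wise predictive entropy.
import torch

@torch.no_grad()
def voxel_entropy(logits):
    """Entropy of the softmax distribution at every voxel: high = uncertain."""
    probs = torch.softmax(logits, dim=1)                  # (B, C, D, H, W)
    return -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=1)

@torch.no_grad()
def rank_patches_for_annotation(model, volume, patch=64, top_k=5):
    """Return the top-k patch origins whose mean entropy is highest."""
    logits = model(volume)                                # (1, C, D, H, W)
    ent = voxel_entropy(logits)[0]                        # (D, H, W)
    scores = []
    D, H, W = ent.shape
    for z in range(0, D - patch + 1, patch):
        for y in range(0, H - patch + 1, patch):
            for x in range(0, W - patch + 1, patch):
                score = ent[z:z+patch, y:y+patch, x:x+patch].mean().item()
                scores.append((score, (z, y, x)))
    scores.sort(reverse=True)
    return [origin for _, origin in scores[:top_k]]
```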

