Alpha-LFM
On paper: [https://www.nature.com/articles/s41467-025-62471-w]
TLDR / Main Takeaway:
Alpha-LFM couples a light-field add-on for a standard inverted microscope with a physics-assisted, multi-stage deep net (“Alpha-Net”) to deliver high-fidelity 3D super-resolution at hundreds of volumes per second with very low photobleaching. This essentially enables minutes-to-hours live-cell imaging and new measurements of organelle interactions.
The interactive GUI makes it pretty easy to use: just load up main() in MATLAB and select your .tifs/training pairs (in this case the provided lysosome and mitochondria data); network validation did the rest, creating a recon_enhanced folder containing a complete 3D volume. I made this into a .gif in MATLAB, with a side-by-side of the accumulated MIP through the slices (will add these here later).
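In case it's useful, here is a minimal Python sketch of the same side-by-side accumulated-MIP GIF idea (I did mine in MATLAB; the file names and the tifffile/imageio dependencies here are just assumptions for illustration):

```python
# Minimal sketch: side-by-side accumulated-MIP GIF from two 3D .tif stacks.
# Assumes both stacks share the same (Z, Y, X) shape; file names are placeholders.
import numpy as np
import tifffile
import imageio.v2 as imageio

raw = tifffile.imread("raw_stack.tif").astype(np.float32)        # (Z, Y, X)
rec = tifffile.imread("recon_enhanced.tif").astype(np.float32)   # (Z, Y, X)

def to_uint8(img):
    """Normalize a 2D image to 0-255 for GIF frames."""
    img = img - img.min()
    return (255 * img / (img.max() + 1e-12)).astype(np.uint8)

frames = []
mip_raw = np.zeros(raw.shape[1:], dtype=np.float32)
mip_rec = np.zeros(rec.shape[1:], dtype=np.float32)
for z in range(min(raw.shape[0], rec.shape[0])):
    # Accumulate the maximum-intensity projection slice by slice.
    mip_raw = np.maximum(mip_raw, raw[z])
    mip_rec = np.maximum(mip_rec, rec[z])
    frames.append(np.hstack([to_uint8(mip_raw), to_uint8(mip_rec)]))

imageio.mimsave("mip_side_by_side.gif", frames, duration=0.05)
```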
Methods:
- Optics: Light-field microscopy retrofit (microlens array + relay optics) on an inverted scope (also shown on Fourier LFM).
- Three-stage network (Alpha-Net): 1) View-attention denoising → 2) Spatial–angular de-aliasing (uses sub-aperture-shifted light-field projections, SAS-LFP) → 3) VCD 3D reconstruction. Trained end-to-end but decomposed for better fit and speed.
- Physics-embedded data synthesis: From the same 3D “ground truth,” synthesize Noisy LF, Clean LF, and De-aliased LF images to supervise each subtask (a toy noise-synthesis sketch follows this list).
- Decomposed-Progressive Optimization: Independently train the subtasks first, then progressively joint-optimize (denoise → +de-alias → +reconstruct) to suppress hallucinated high-frequency artifacts and reach a “global” optimum (a schematic training sketch also follows this list). This was actually really helpful; other than “seams” in the recovered .tifs, there weren’t any hallucinations/artifacts.
- Adaptive tuning: For unseen structures, rapidly fine-tune using 2D wide-field images as lateral constraints, alternating with a small set of original volumetric constraints to avoid overfitting.
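The paper's forward model is more involved, but as a rough illustration of the physics-embedded supervision idea, here is a toy numpy sketch that turns a clean light-field projection into its noisy training counterpart with a Poisson-Gaussian camera model (the image size, photon count, and read-noise values are made-up placeholders, not the authors' settings):

```python
# Toy sketch of physics-embedded supervision: from one clean light-field
# projection, synthesize a noisy counterpart with a Poisson-Gaussian camera
# model. Photon count and read-noise values are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def synthesize_noisy_lf(clean_lf, photons=50.0, read_noise_sigma=1.5):
    """clean_lf: 2D array in [0, 1]; returns a noisy version for the
    denoising subtask (clean_lf itself supervises the output)."""
    expected_counts = clean_lf * photons                        # scale to photon counts
    shot = rng.poisson(expected_counts).astype(np.float32)      # shot noise
    read = rng.normal(0.0, read_noise_sigma, clean_lf.shape)    # camera read noise
    noisy = (shot + read) / photons                             # back to normalized units
    return noisy.astype(np.float32)

# Each training pair for the denoising stage is (noisy, clean), both derived
# from the same synthetic ground-truth volume's projection.
clean = rng.random((128, 128)).astype(np.float32)               # placeholder LF image
noisy = synthesize_noisy_lf(clean)
```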
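And a minimal PyTorch sketch of the decomposed-progressive schedule as I understand it: pretrain each stage on its own synthetic pair, then jointly fine-tune progressively larger cascades. The one-layer conv “stages” and step counts are stand-ins, not Alpha-Net's actual architecture or recipe:

```python
# Sketch of decomposed-progressive optimization: pretrain each stage alone,
# then jointly fine-tune denoise -> +de-alias -> +reconstruct. The 1-layer
# conv "stages" are placeholders for the real sub-networks.
import torch
import torch.nn as nn

denoise  = nn.Conv2d(1, 1, 3, padding=1)   # stage 1: view-attention denoising
dealias  = nn.Conv2d(1, 1, 3, padding=1)   # stage 2: spatial-angular de-aliasing
reconstr = nn.Conv2d(1, 1, 3, padding=1)   # stage 3: VCD 3D reconstruction
loss_fn = nn.MSELoss()

def train_stage(stage, inputs, targets, steps=100):
    """Independent pretraining of one sub-network on its synthetic pair."""
    opt = torch.optim.Adam(stage.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(stage(inputs), targets)
        loss.backward()
        opt.step()

# Placeholder synthetic supervision (noisy LF, clean LF, de-aliased LF, volume).
noisy, clean, dealiased, volume = (torch.rand(4, 1, 32, 32) for _ in range(4))

# 1) Independent subtask training.
train_stage(denoise, noisy, clean)
train_stage(dealias, clean, dealiased)
train_stage(reconstr, dealiased, volume)

# 2) Progressive joint fine-tuning: grow the cascade one stage at a time.
cascades = [
    nn.Sequential(denoise, dealias),             # denoise + de-alias
    nn.Sequential(denoise, dealias, reconstr),   # full pipeline
]
targets = [dealiased, volume]
for cascade, target in zip(cascades, targets):
    opt = torch.optim.Adam(cascade.parameters(), lr=1e-4)
    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(cascade(noisy), target)
        loss.backward()
        opt.step()
```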
Results:
- Resolves 120-nm line pairs; ~126 ± 6 nm at high light dose, ~135 ± 15 nm at low dose; volumetric rates up to ~333 Hz.
- Photon efficiency / longevity: >40,000 SR volumes with <50% photobleaching; dramatically outlasts 3D-SIM and LSFM (which I’ll discuss some other time) under comparable conditions.
- Biology at a deeper level:
  - Peroxisomes & ER captured in 3D at 100 vps; fast motions (peroxisomes up to ~10 µm/s; ER tubule stretch/formation in <50 ms).
  - Dual-color 5D (mitochondria outer membrane + lysosomes): true 3D contact rates are lower than in 2D projections (e.g., fission 48% in 3D vs 71% in 2D; a toy example of why is at the end of this section); lysosome contact correlates with higher fission/fusion speed (significant in the 3D analysis).
  - Multi-day studies: 60 h of tracking across two cell cycles; automated z-drift correction, 8 FOVs per cycle; tracing of individual mitochondria and fate analysis.
- Alpha-LFM is proof that optics + physics-guided deep learning can genuinely improve the speed–dose–resolution trade-off for live-cell SR. The split-task training (denoise → de-alias → 3D) with matched synthetic supervision curbs hallucinations really well and makes adaptation to new structures practical in real time. Basically: model each degradation, solve them one by one, and combine the fixes. This is plug-and-play and could help a ton of labs.
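The 3D-vs-2D contact point mentioned above is easy to see with a single pair: two organelles can sit nearly on top of each other in the XY projection while being a micron apart in z, so projected movies over-count contacts. A toy sketch (the coordinates and contact threshold are made up):

```python
# Toy illustration of why 2D projections over-count organelle contacts:
# the same pair can "touch" in XY while being far apart in z.
import numpy as np

def in_contact(p, q, threshold):
    """Contact if centroid distance is below a threshold (same units)."""
    return np.linalg.norm(np.asarray(p) - np.asarray(q)) < threshold

mito     = (1.00, 2.00, 0.50)   # (x, y, z) in micrometers, made-up values
lysosome = (1.05, 2.10, 1.60)   # nearly coincident in XY, ~1.1 um away in z
threshold = 0.30                # made-up contact distance

print(in_contact(mito[:2], lysosome[:2], threshold))  # True  (2D projection)
print(in_contact(mito, lysosome, threshold))          # False (full 3D)
```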
SSAI-3D
On paper: [https://www.nature.com/articles/s41467-025-56078-4]
TLDR / Main Takeaway:
SSAI-3D is a system- and sample-agnostic axial deblurring framework that turns anisotropic 3D stacks into isotropic volumes across many microscopes. It does this by fine-tuning a large blind-deblurring model on a synthetic, physics-aware dataset made from the raw data itself (PSF-augmented + noise-robust), so it travels well across domains with minimal retraining. Literally plug and play :o
I tested it on two datasets, both of which showed pretty outstanding results when recovered; side-by-side accumulated-MIP images made this really obvious.
Methods:
- Denoise lateral slices, then blur them to mimic axial blur → supervision without hardware calibration (toy sketch after this list).
- Begin from NAFNet (~100–150M params) trained on natural images; a “surgeon” network picks the most impactful ~10% of layers to fine-tune, leaving the rest frozen (a toy layer-selection sketch also follows this list).
- 2D in, 3D out: Denoise XZ and YZ planes, deblur them with the tuned model, then fuse into a single isotropic stack (also sketched after this list). Typical run: ~30 min of training on an RTX 3080; ~50 s to process a 300×300×300 volume. For reference, it takes my fried ThinkPad about 10 minutes to process a volume of 960×928×350 voxels.
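A rough numpy/scipy sketch of the self-supervision idea as I read it: sharp lateral slices become the targets, and blurring them along one in-plane axis mimics the axial degradation to make the inputs. The Gaussian kernels and widths here are crude stand-ins for the paper's PSF-augmented, noise-robust degradations:

```python
# Rough sketch of SSAI-3D-style synthetic supervision: pair each (denoised)
# lateral slice with a version blurred along one axis to mimic axial blur.
# The Gaussian kernels here are stand-ins for the paper's PSF-aware degradations.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

def make_training_pair(xy_slice, axial_sigma=3.0, denoise_sigma=0.7):
    """xy_slice: 2D lateral slice from the raw stack.
    Returns (degraded_input, target) for fine-tuning the deblurring net."""
    target = gaussian_filter(xy_slice, denoise_sigma)            # crude "denoised" target
    degraded = gaussian_filter1d(target, axial_sigma, axis=0)    # fake axial elongation
    return degraded.astype(np.float32), target.astype(np.float32)

rng = np.random.default_rng(0)
stack = rng.random((64, 256, 256)).astype(np.float32)            # placeholder (Z, Y, X) stack
pairs = [make_training_pair(stack[z]) for z in range(stack.shape[0])]
```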
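On the sparse fine-tuning: a crude PyTorch illustration of unfreezing only the most sensitive ~10% of layers, using mean gradient magnitude on one synthetic pair as a made-up sensitivity proxy; the real “surgeon” network is more principled than this:

```python
# Sketch of sparse fine-tuning: rank layers by a crude sensitivity proxy
# (mean |gradient| on one synthetic pair) and unfreeze only the top ~10%.
# This illustrates the idea, not the paper's "surgeon" network.
import torch
import torch.nn as nn

model = nn.Sequential(*[nn.Conv2d(1, 1, 3, padding=1) for _ in range(20)])  # stand-in for NAFNet
x, y = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)                   # one synthetic pair

# One backward pass to get a per-parameter sensitivity score.
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
scores = {name: p.grad.abs().mean().item() for name, p in model.named_parameters()}

# Keep the top ~10% of parameter tensors trainable; freeze everything else.
keep = set(sorted(scores, key=scores.get, reverse=True)[: max(1, len(scores) // 10)])
for name, p in model.named_parameters():
    p.requires_grad = name in keep

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)   # fine-tune only the selected layers
```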
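And the 2D-in / 3D-out step: slice the volume into XZ and YZ planes, run each through the tuned 2D model, and fuse the two restored volumes. deblur_2d below is a placeholder for the fine-tuned network, and plain averaging is just one simple fusion choice:

```python
# Sketch of 2D-in / 3D-out inference: deblur every XZ and YZ plane with a
# 2D model, then fuse the two restored volumes. deblur_2d is a stand-in
# for the fine-tuned network; averaging is one simple fusion choice.
import numpy as np

def deblur_2d(plane):
    """Placeholder for the fine-tuned 2D deblurring network."""
    return plane

def isotropize(volume):
    """volume: (Z, Y, X) anisotropic stack -> fused near-isotropic stack."""
    from_xz = np.stack([deblur_2d(volume[:, y, :]) for y in range(volume.shape[1])], axis=1)
    from_yz = np.stack([deblur_2d(volume[:, :, x]) for x in range(volume.shape[2])], axis=2)
    return 0.5 * (from_xz + from_yz)

vol = np.random.default_rng(0).random((64, 128, 128)).astype(np.float32)
iso = isotropize(vol)
```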
Results:
- Simulated tests:
- Beads (easy): recovers axial FWHM ≈ 200 nm with low error, more stable than Self-Net.
- Directional strands: SSAI-3D keeps shapes/intensities while baselines degrade.
- Efficiency: sparse fine-tuning cuts training time ~2.5–3.5× vs common baselines.
- Real data across systems: light-sheet, confocal, wide-field, and label-free nonlinear imaging all show clear axial frequency gain and visually crisper z-detail (e.g., dendrites, nucleoli), often with ≤0.5 GPU-hours of tuning. It took me a little longer than that, but I’m working on an old ThinkPad connected to an A10 cloud GPU.
- Hardware-validated fidelity:
- mesoSPIM (ASLM) brain: neuron detection on SSAI-3D reconstructions closely matches the hardware near-isotropic reference.
- Triple-view confocal mitochondria: DNA-puncta volumes from SSAI-3D agree with the multiview reference; baselines lag.
This is a great upgrade for everyday 3D imaging: use your own stack to teach the model what your blur and noise look like, then lightly adapt a big deblurring net. You don’t change hardware or collect paired ground truth; you just adapt what you already have. That’s why it ports across microscopes and tissues. Cool stuff.