Installation

Quick Start (All Platforms)

  1. Install Miniconda (if not already installed):

    • Download from the Anaconda website

    • Follow platform-specific installation instructions

  2. Clone and Install KINTSUGI:

# Clone the repository
git clone https://github.com/smith6jt-cop/KINTSUGI.git
cd KINTSUGI

# Create conda environment (choose your platform)
# Linux:
conda env create -f envs/env-linux.yml

# Windows:
conda env create -f envs/env-windows.yml

# macOS:
conda env create -f envs/env-macos.yml

# Activate the environment
conda activate KINTSUGI

  3. Verify Installation:

kintsugi check

Windows Installation

Option B: Manual Installation

# Open Anaconda Prompt
conda update -n base conda
conda install -n base conda-libmamba-solver
conda config --set solver libmamba

# Clone repository
cd C:\Users\[your username]
git clone https://github.com/smith6jt-cop/KINTSUGI.git
cd KINTSUGI

# Create environment
conda env create -f envs/env-windows.yml
conda activate KINTSUGI

Download Windows Dependencies

Windows requires additional binary dependencies (such as libvips) downloaded from Zenodo.

Note: Java, Maven, and FIJI are no longer required. KINTSUGI now uses pure Python implementations (CuPy/NumPy) for all processing including EDF.

Linux Installation

Option A: Using Installation Script (Recommended)

git clone https://github.com/smith6jt-cop/KINTSUGI.git
cd KINTSUGI
chmod +x scripts/install.sh
./scripts/install.sh

Option B: Manual Installation

# Install system dependencies (Java/Maven no longer required)
sudo apt-get update
sudo apt-get install -y libvips-dev

# Clone and setup
git clone https://github.com/smith6jt-cop/KINTSUGI.git
cd KINTSUGI

conda env create -f envs/env-linux.yml
conda activate KINTSUGI

macOS Installation

# Install system dependencies (Java/Maven no longer required)
brew install vips

# Clone and setup
git clone https://github.com/smith6jt-cop/KINTSUGI.git
cd KINTSUGI

conda env create -f envs/env-macos.yml
conda activate KINTSUGI

GPU Acceleration (Optional)

KINTSUGI supports multi-GPU acceleration for significantly faster processing.

Basic GPU Support (CuPy + PyTorch)

For GPU-accelerated deconvolution, stitching, and denoising:

# Install via kintsugi CLI (recommended)
kintsugi install gpu

# Or manually for CUDA 12.x
pip install cupy-cuda12x torch torchvision

# Or for CUDA 11.x
pip install cupy-cuda11x torch torchvision
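The CuPy wheel must match the major version of the installed CUDA toolkit. The helper below is purely illustrative (the name cupy_package_for is not part of KINTSUGI; kintsugi install gpu makes this choice for you):

```python
def cupy_package_for(cuda_version: str) -> str:
    """Map a CUDA toolkit version string (e.g. "12.4") to the matching
    CuPy wheel name. Hypothetical helper for illustration only."""
    major = int(cuda_version.split(".")[0])
    if major == 12:
        return "cupy-cuda12x"
    if major == 11:
        return "cupy-cuda11x"
    raise ValueError(f"No known CuPy wheel for CUDA {cuda_version}")

print(cupy_package_for("12.4"))  # cupy-cuda12x
print(cupy_package_for("11.8"))  # cupy-cuda11x
```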

Multi-GPU Support

KINTSUGI automatically detects and uses all available discrete (non-integrated) NVIDIA GPUs:

from kintsugi.gpu import get_gpu_manager

# Check available GPUs
gpu = get_gpu_manager()
print(gpu.summary())
# Output: Found 2 GPU(s):
#   [0] NVIDIA A100 (40.0 GB, CC 8.0)
#   [1] NVIDIA A100 (40.0 GB, CC 8.0)

# PyTorch models automatically use DataParallel
from kintsugi.denoise import CAREDenoiser
denoiser = CAREDenoiser()  # Uses all GPUs for inference

# Multi-GPU stitching
from notebooks.Kstitch._translation_computation import get_multi_gpu_accelerator
accelerator = get_multi_gpu_accelerator((1024, 1024))
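Stitching relies on phase correlation: the translation between two overlapping tiles is recovered from the peak of the inverse cross-power spectrum. A minimal CPU sketch in NumPy (KINTSUGI's GPU path runs the equivalent math through CuPy; this standalone version is for illustration only):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (row, col) translation that maps `b` onto `a`
    from the peak of the inverse cross-power spectrum."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12       # normalise: keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts beyond half the image size back to negative offsets
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, a.shape))

rng = np.random.default_rng(0)
tile = rng.random((128, 128))
shifted = np.roll(tile, shift=(5, -9), axis=(0, 1))
print(phase_correlation_shift(shifted, tile))  # (5, -9)
```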

GPU-Accelerated Image Processing

KINTSUGI uses CuPy for GPU-accelerated image processing operations:

| Operation | GPU Acceleration |
|-----------|------------------|
| Illumination Correction | CuPy FFT-based BaSiC algorithm |
| Stitching | CuPy phase correlation |
| Deconvolution | CuPy Richardson-Lucy with FFT |
| Extended Depth of Focus | CuPy variance projection |
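The deconvolution entry refers to the classic Richardson-Lucy iteration. Below is a compact NumPy version of the FFT-based update, a CPU illustration only; KINTSUGI's implementation runs the same arithmetic on CuPy arrays and may differ in details such as initialization and regularization:

```python
import numpy as np

def richardson_lucy(image, psf, iterations=50):
    """FFT-based Richardson-Lucy deconvolution. `psf` must match `image`
    in shape and be centered at index (0, 0); apply np.fft.ifftshift to a
    conventionally centered kernel first."""
    eps = 1e-12
    otf = np.fft.rfft2(psf)                       # optical transfer function
    estimate = np.full(image.shape, image.mean())
    for _ in range(iterations):
        blurred = np.fft.irfft2(np.fft.rfft2(estimate) * otf, s=image.shape)
        ratio = image / (blurred + eps)           # observed / modelled
        # Correlation with the PSF = conjugate multiply in Fourier space
        estimate *= np.fft.irfft2(np.fft.rfft2(ratio) * np.conj(otf), s=image.shape)
    return estimate

# Demo: circularly blur a bright square with a Gaussian PSF, then restore
truth = np.zeros((64, 64))
truth[20:30, 30:40] = 1.0
g = np.exp(-((np.arange(64) - 32) ** 2) / (2 * 2.0 ** 2))
psf = np.fft.ifftshift(np.outer(g, g))
psf /= psf.sum()
blurred = np.fft.irfft2(np.fft.rfft2(truth) * np.fft.rfft2(psf), s=truth.shape)
restored = richardson_lucy(blurred, psf)
print(np.abs(restored - truth).mean(), "<", np.abs(blurred - truth).mean())
```

The restored image should sit measurably closer to the original than the blurred input does.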

# Check GPU availability
from kintsugi.gpu import get_gpu_manager
gpu = get_gpu_manager()
print(gpu.summary())

# GPU-accelerated illumination correction
from kintsugi.kcorrect_gpu import KCorrectGPU
corrector = KCorrectGPU(device_id=0)
flatfield, darkfield = corrector.estimate(images)

# Multi-GPU stitching
from notebooks.Kstitch._translation_computation import get_multi_gpu_accelerator
accelerator = get_multi_gpu_accelerator(tile_shape)
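The estimate call above returns the two correction fields; applying them is then a per-pixel subtract-and-divide. A NumPy sketch of the standard flat-field/dark-field correction (the GPU version is the same arithmetic on CuPy arrays):

```python
import numpy as np

def apply_illumination_correction(raw, flatfield, darkfield):
    """BaSiC-style retrospective correction: subtract the darkfield
    (sensor offset), then divide by the flatfield (shading pattern)."""
    return (raw.astype(np.float64) - darkfield) / flatfield

# Synthetic demo: a uniformly bright scene under uneven illumination
truth = np.full((32, 32), 100.0)
xx = np.arange(32)[np.newaxis, :]                # column coordinate
flatfield = 0.5 + xx / 62.0                      # brighter to the right
darkfield = np.full((32, 32), 10.0)              # constant camera offset
raw = truth * flatfield + darkfield              # what the camera records
corrected = apply_illumination_correction(raw, flatfield, darkfield)
print(np.allclose(corrected, truth))  # True
```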

GPU-Accelerated Single-Cell Analysis

For GPU-accelerated single-cell analysis (clustering, UMAP, spatial analysis with RAPIDS), see the dedicated repository: rapids_singlecell

External Dependencies

| Dependency | Purpose | Installation |
|------------|---------|--------------|
| libvips | High-performance image I/O | conda install libvips (Linux/macOS) or Zenodo (Windows) |
| VALIS | Image registration | Vendored as Kreg (no separate install needed) |
| CuPy | GPU image processing | conda install cupy or pip install cupy-cuda12x |
| PyTorch | Deep learning models (optional) | pip install torch torchvision |

Note: Java, Maven, and FIJI/CLIJ2 are no longer required. KINTSUGI now uses pure Python implementations (CuPy/NumPy) for all processing including Extended Depth of Focus (EDF).
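As a concrete illustration of the pure-Python EDF mentioned in the note, here is a minimal patch-wise variance projection in NumPy. It is illustrative only; KINTSUGI's actual windowing and blending may differ:

```python
import numpy as np

def edf_variance(stack, patch=8):
    """Fuse a z-stack into one image: for each patch x patch tile,
    keep the z-slice whose tile has the highest variance (sharpest)."""
    z, h, w = stack.shape
    out = np.empty((h, w), dtype=stack.dtype)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            tiles = stack[:, i:i + patch, j:j + patch]
            best = np.argmax(tiles.var(axis=(1, 2)))
            out[i:i + patch, j:j + patch] = tiles[best]
    return out

# Demo: slice 0 is in focus on the left half, slice 1 on the right
rng = np.random.default_rng(2)
sharp = rng.random((32, 32))                  # textured, in-focus content
flat = np.full((32, 32), sharp.mean())        # featureless, out-of-focus
left = np.arange(32) < 16                     # column mask, broadcasts
stack = np.stack([np.where(left, sharp, flat),
                  np.where(left, flat, sharp)])
fused = edf_variance(stack)
print(np.allclose(fused, sharp))  # True
```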

HPC Installation (SLURM Clusters)

For HPC environments like UF HiPerGator, KINTSUGI supports distributed batch processing via Snakemake + SLURM.

Environment Setup

# Load conda module (HiPerGator-specific)
module load conda

# Create environment from Linux yml
conda env create -f envs/env-linux.yml
conda activate KINTSUGI

# Install GPU support and Claude Code integration
kintsugi install gpu
kintsugi install claude

Optional Feature Groups

Install additional capabilities as needed:

kintsugi install gpu        # GPU acceleration (CuPy for CUDA)
kintsugi install torch      # PyTorch for deep learning models
kintsugi install bio        # Spatial biology analysis (scanpy, scimap, squidpy)
kintsugi install viz        # Napari visualization
kintsugi install claude     # Claude Code MCP integration
kintsugi install dev        # Development tools (pytest, ruff, black, mypy)
kintsugi install all        # All optional features
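To confirm that a feature group's packages actually landed in the active environment, you can probe their top-level imports. A small hypothetical helper (kintsugi check performs a more thorough version of this):

```python
from importlib.util import find_spec

def missing_packages(packages):
    """Return the packages whose top-level module cannot be found
    in the current environment."""
    return [p for p in packages if find_spec(p) is None]

# Package names per feature group, taken from the list above
BIO = ["scanpy", "scimap", "squidpy"]
DEV = ["pytest", "ruff", "black", "mypy"]
print(missing_packages(["json", "not_a_real_package"]))  # ['not_a_real_package']
```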

HPC-Specific Notes

  • CuPy on login nodes: kintsugi check will report CuPy as unavailable on login nodes (no GPU hardware). This is expected — CuPy works correctly on compute nodes.

  • Cache redirection: SLURM jobs automatically redirect pip/torch/numba caches to /blue/ storage via account-specific scripts. Do not install packages inside jobs.

  • SLURM plugin patch: SLURM >= 24.11 requires a patch to the Snakemake jobstep plugin. See the main README for details.

Updating Existing Projects

When you update the KINTSUGI repository (git pull), project copies of notebooks, modules, and workflow scripts may become stale. See the Updating Existing Projects section in the README for full details.

Key commands:

# Sync notebooks and Python modules to all project folders
python scripts/sync_to_projects.py          # Auto-discovers KINTSUGI_Projects/*/notebooks
python scripts/sync_to_projects.py --dry-run  # Preview changes

# Refresh workflow scripts for a single project
kintsugi workflow config /path/to/project --force

# Bulk-refresh workflow scripts across all projects
for d in /path/to/KINTSUGI_Projects/*/workflow/scripts; do
  cp workflow/scripts/*.py "$d/"
done
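Conceptually, the sync step just mirrors tracked files into every project folder. The sketch below is a simplified, hypothetical version of that logic (file names are illustrative; the real scripts/sync_to_projects.py adds discovery, filtering, and safety checks):

```python
import shutil
import tempfile
from pathlib import Path

def sync_notebooks(source: Path, projects_root: Path, dry_run: bool = False):
    """Mirror every file under `source` into each project's notebooks/
    folder; return the list of destination paths."""
    copied = []
    for project_nb in sorted(projects_root.glob("*/notebooks")):
        for src in source.rglob("*"):
            if not src.is_file():
                continue
            dest = project_nb / src.relative_to(source)
            if not dry_run:
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dest)
            copied.append(dest)
    return copied

# Demo on a throwaway directory tree
root = Path(tempfile.mkdtemp())
(root / "repo_nb").mkdir()
(root / "repo_nb" / "Kstitch.ipynb").write_text("{}")
(root / "projects" / "demo" / "notebooks").mkdir(parents=True)
done = sync_notebooks(root / "repo_nb", root / "projects")
print([p.name for p in done])  # ['Kstitch.ipynb']
```

Passing dry_run=True reports what would be copied without touching any files, mirroring the --dry-run flag above.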

Note: A git post-commit hook automatically runs sync_to_projects.py after each commit. Notebooks use %autoreload 2 — no kernel restart needed after module updates.

Verifying Installation

# Check all dependencies including GPU
kintsugi check

# Or via Python
python -c "
from kintsugi.gpu import get_gpu_manager
print(get_gpu_manager().summary())
"