GraPHFormer: A Multimodal Graph Persistent Homology Transformer for the Analysis of Neural Morphologies
Abstract
Quantitative analysis of neural morphology is central to understanding how circuits develop, compute, and fail. Skeletonized reconstructions of neurons and glia enable systematic study of branching patterns, path lengths, tapering, and spatial organization, with implications for neurodevelopment, learning and memory, and neurodegenerative disease. Current learning pipelines often treat either topology (via persistent homology) or graph structure (via graph neural networks) in isolation. We argue that these views are complementary and introduce \emph{GraPHFormer}, a multimodal architecture that fuses topological and graph representations for cell morphology analysis. Our vision branch operates on a novel three-channel persistence image derived from the morphological tree: an unweighted TMD-style density channel, a branch-length channel (persistence), and a branch-radius channel (mean radius along death-to-leaf paths). In parallel, a graph Transformer processes the original skeleton with geometric and radial attributes. We explore lightweight fusion strategies (late fusion and cross-attention) and train under both supervised and contrastive regimes. We extensively evaluate GraPHFormer on established morphology benchmarks and show that it consistently and significantly outperforms strong topology-only, graph-only, and morphometrics baselines. Beyond accuracy, we demonstrate practical relevance by discriminating neuronal and glial morphologies across cortical areas and species, and by detecting signatures associated with developmental trajectories and degenerative conditions.
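The three-channel persistence image described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a barcode of (birth, death) pairs has already been extracted from the morphological tree (e.g. via the TMD filtration), and that a per-branch mean radius is available. The barcode values, grid size, bandwidth, and the helper `persistence_image` are all hypothetical.

```python
import numpy as np

def persistence_image(pairs, weights=None, grid=32, sigma=1.0, bounds=(0.0, 10.0)):
    """Rasterize (birth, death) pairs into a grid x grid image by summing
    an isotropic Gaussian per pair, optionally scaled by per-pair weights."""
    lo, hi = bounds
    xs = np.linspace(lo, hi, grid)
    img = np.zeros((grid, grid))
    if weights is None:
        weights = np.ones(len(pairs))  # unweighted density channel
    for (b, d), w in zip(pairs, weights):
        gx = np.exp(-((xs - b) ** 2) / (2.0 * sigma ** 2))
        gy = np.exp(-((xs - d) ** 2) / (2.0 * sigma ** 2))
        img += w * np.outer(gy, gx)  # separable 2D Gaussian bump at (b, d)
    return img

# Hypothetical barcode (birth/death radial distances) and per-branch mean radii.
pairs = np.array([(0.0, 8.0), (1.5, 4.0), (2.0, 6.5)])
persistence = pairs[:, 1] - pairs[:, 0]  # branch length proxy (death - birth)
mean_radius = np.array([0.9, 0.4, 0.6])

# Stack the three channels: unweighted TMD-style density,
# branch-length (persistence) weighting, branch-radius weighting.
channels = np.stack([
    persistence_image(pairs),
    persistence_image(pairs, weights=persistence),
    persistence_image(pairs, weights=mean_radius),
])
print(channels.shape)  # (3, 32, 32)
```

The resulting `(3, H, W)` tensor can then be fed to a standard image backbone in the vision branch, exactly as a three-channel RGB image would be.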