Neural Mixture Density Processes
Abstract
The neural process (NP) is a probabilistic meta-learning model that learns distributions over functions via a global latent variable, enabling fast adaptation in few-shot scenarios by leveraging past experience. However, the design of latent-variable structures and conditioning mechanisms in NPs remains underexplored, despite their importance in capturing diverse functional distributions. This paper proposes a new variant of NPs based on mixture density modeling, referred to as the neural mixture density process (NMDP). The NMDP decomposes model parameters into task-agnostic and task-specific components to represent distributions over functions more flexibly, and is trained with the Expectation–Maximization (EM) algorithm to construct expressive functional priors. Compared with existing work, our method offers several advantages: (i) reduced overfitting, since only a small subset of network parameters is updated; (ii) compact task representations via distributions on the simplex; and (iii) a guaranteed improvement of the generative likelihood across iterations. Experimental results show that our method achieves competitive performance with adequate explainability.
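The abstract's improvement guarantee stems from a standard property of EM: each E-step/M-step pair cannot decrease the data likelihood. As a point of reference only, the minimal NumPy sketch below illustrates this generic EM principle on a 1-D Gaussian mixture; it is not the NMDP training procedure, and all names and data here are hypothetical.

```python
# Illustrative sketch only: a minimal EM loop for a 1-D Gaussian mixture,
# showing the generic Expectation-Maximization principle the paper builds on.
# This is NOT the NMDP training procedure; names and data are hypothetical.
import numpy as np

def em_gmm(x, n_components=2, n_iters=50, seed=0):
    """Fit a 1-D Gaussian mixture with EM; returns weights, means, variances."""
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    # Initialize mixture weights (a point in the simplex), means, variances.
    pi = np.full(n_components, 1.0 / n_components)
    mu = rng.choice(x, size=n_components, replace=False)
    var = np.full(n_components, x.var())
    for _ in range(n_iters):
        # E-step: posterior responsibility of each component for each point.
        log_p = (
            np.log(pi)
            - 0.5 * np.log(2 * np.pi * var)
            - 0.5 * (x[:, None] - mu) ** 2 / var
        )  # shape (n, n_components)
        log_norm = np.logaddexp.reduce(log_p, axis=1, keepdims=True)
        resp = np.exp(log_p - log_norm)
        # M-step: closed-form updates; each iteration cannot decrease the
        # likelihood, mirroring the monotone-improvement guarantee of EM.
        nk = resp.sum(axis=0)
        pi = nk / n
        mu = resp.T @ x / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Usage: two well-separated clusters are recovered after a few iterations.
data = np.concatenate([np.random.normal(-2, 0.5, 200),
                       np.random.normal(3, 1.0, 300)])
weights, means, variances = em_gmm(data, n_components=2)
print(weights, means, variances)
```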