Abstract
The aim of distributional semantics is to learn the meanings of words from a corpus of text. The aim of formal semantics is to develop mathematical models of meaning. Functional Distributional Semantics provides a framework for distributional semantics which is interpretable in formal semantic terms, by representing the meaning of a word as a truth-conditional function (a binary classifier). However, the model introduces a large number of latent variables, which makes inference computationally expensive and training slow to converge. In this work, I introduce the Pixie Autoencoder, which augments the generative model of Functional Distributional Semantics with a graph-convolutional neural network to perform amortised variational inference. This allows the model to be trained more effectively, achieving better results on semantic similarity in context, and outperforming BERT, a large pre-trained language model.
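To make the two components named above concrete, the following is a minimal PyTorch sketch of (a) a word's meaning as a truth-conditional function, i.e. a binary classifier over latent entities ("pixies"), and (b) an amortised encoder using a graph convolution over a semantic dependency graph to propose pixies. All class names, dimensions, and the single-layer architecture are illustrative assumptions, not the paper's actual model.

```python
# Illustrative sketch only; not the Pixie Autoencoder's implementation.
import torch
import torch.nn as nn

D = 32  # dimensionality of the pixie space (assumed for illustration)

class TruthConditionalFunction(nn.Module):
    """Meaning of one predicate: maps a pixie x to P(predicate true of x)."""
    def __init__(self, dim: int = D):
        super().__init__()
        self.linear = nn.Linear(dim, 1)

    def forward(self, pixie: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.linear(pixie)).squeeze(-1)

class GraphConvEncoder(nn.Module):
    """Amortised inference network: one graph-convolution layer mixes each
    node's features with its neighbours', then outputs the mean and
    log-variance of a Gaussian over each node's pixie."""
    def __init__(self, in_dim: int, dim: int = D):
        super().__init__()
        self.self_proj = nn.Linear(in_dim, dim)
        self.neigh_proj = nn.Linear(in_dim, dim)
        self.mean = nn.Linear(dim, dim)
        self.log_var = nn.Linear(dim, dim)

    def forward(self, feats: torch.Tensor, adj: torch.Tensor):
        # feats: (nodes, in_dim); adj: (nodes, nodes) graph adjacency
        h = torch.relu(self.self_proj(feats) + adj @ self.neigh_proj(feats))
        return self.mean(h), self.log_var(h)

# Usage: sample pixies from the amortised posterior with the standard VAE
# reparameterisation trick, then score a predicate against each pixie.
encoder = GraphConvEncoder(in_dim=16)
predicate = TruthConditionalFunction()
feats = torch.randn(3, 16)              # toy features for 3 graph nodes
adj = torch.tensor([[0., 1., 0.],       # toy dependency-graph adjacency
                    [1., 0., 1.],
                    [0., 1., 0.]])
mu, log_var = encoder(feats, adj)
pixie = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
print(predicate(pixie))                 # truth probability for each node
```

Because the encoder is shared across all graphs, inference for a new sentence is a single forward pass rather than a per-instance optimisation, which is the efficiency gain that amortised variational inference provides.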


