Abstract
Past work has shown that many aspects of natural language semantics can be modelled using tools from logic and probability theory. I will first discuss how such a model can be trained in practice on various kinds of data, and how its probabilistic logical structure is important for generalisation. Turning to the future, I will take stock of the bigger picture and explain why it is (unfortunately) impossible for such a model to satisfy all the properties we might ideally expect, including computational tractability and logical/probabilistic coherence. I will then sketch a new approach to probabilistic modelling that maintains tractability by relaxing the strict demands of Bayesian inference. This approach has the potential to explain how patterns of language use arise from computationally constrained minds interacting with a computationally demanding world.


