The chapter focuses on problems in higher-level cognition: inferring causal structure from patterns of statistical correlation, learning about categories and hidden properties of objects, and learning the meanings of words. It discusses the basic principles that underlie Bayesian models of cognition, along with several advanced techniques for probabilistic modeling and inference drawn from recent work in computer science and statistics. The first step is to summarize the logic of Bayesian inference based on probabilistic models. Three recent innovations that make it easier to define and use probabilistic models of complex domains are then discussed: graphical models, hierarchical Bayesian models, and Markov chain Monte Carlo. The central ideas behind each of these techniques are illustrated with a detailed cognitive modeling application, drawn from causal learning, property induction, and language modeling, respectively.
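The logic of Bayesian inference mentioned above can be sketched concretely for a discrete hypothesis space: a prior over hypotheses is combined with the likelihood of the observed data under each hypothesis, and the products are normalized to yield a posterior. The hypotheses and the numerical values below are illustrative assumptions, not examples from the chapter.

```python
def posterior(priors, likelihoods):
    """Apply Bayes' rule: P(h|d) is proportional to P(d|h) * P(h),
    normalized over the hypothesis space."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnormalized.values())  # normalizing constant P(d)
    return {h: p / z for h, p in unnormalized.items()}

# Two hypothetical causal hypotheses, equally plausible a priori,
# where the observed data d is more likely under h1 than under h2.
priors = {"h1": 0.5, "h2": 0.5}
likelihoods = {"h1": 0.8, "h2": 0.2}  # P(d | h), assumed values

print(posterior(priors, likelihoods))  # posterior shifts toward h1
```

The same computation underlies the more elaborate models the chapter goes on to describe; graphical models, hierarchical Bayesian models, and Markov chain Monte Carlo address, respectively, how to structure, layer, and approximately compute such posteriors when the hypothesis space is too large to enumerate.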