Experiments on memory, language, and other areas of cognitive science are
increasingly being analyzed using Bayesian methods. This has been
facilitated by the development of probabilistic programming languages such as
Stan, and easily accessible front-end packages such as brms. However, the
utility of Bayesian methods ultimately depends on the relevance of the Bayesian
model, in particular whether or not it accurately captures the structure of the
data and the data analyst's domain expertise. Even with powerful software, the
analyst is responsible for verifying the utility of their model. To accomplish
this, we introduce a principled Bayesian workflow (Betancourt, 2018) to
cognitive science. Using a concrete working example, we describe basic
questions one should ask about the model: prior predictive checks,
computational faithfulness, model sensitivity, and posterior predictive checks.
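As a minimal sketch of the first of these checks, a prior predictive check can be carried out by simulating data from the priors alone and asking whether the simulated observations are a priori plausible. The lognormal likelihood, the sum coding, and the specific prior values below are illustrative assumptions for a reading-time setting, not the model developed in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative priors (assumed values, for demonstration only):
# intercept on the log-millisecond scale, condition effect, residual sd.
n_sims, n_obs = 1000, 100
alpha = rng.normal(6.0, 0.5, size=n_sims)            # prior: mean log reading time
beta = rng.normal(0.0, 0.1, size=n_sims)             # prior: relative-clause effect
sigma = np.abs(rng.normal(0.0, 0.5, size=n_sims))    # prior: residual sd (folded normal)

# Sum-coded condition: -0.5 = subject relative, +0.5 = object relative
x = rng.choice([-0.5, 0.5], size=(n_sims, n_obs))

# Prior predictive reading times (ms) under a lognormal likelihood
rt = rng.lognormal(mean=alpha[:, None] + beta[:, None] * x,
                   sigma=sigma[:, None])

# Check: do simulated reading times fall in a plausible range
# (roughly a few hundred milliseconds per word/region)?
print(np.median(rt))
```

If the bulk of the simulated reading times were implausibly small or large (e.g., microseconds or hours), the priors would be revised before fitting the model to real data; this is the sense in which the check uses domain knowledge to inform prior distributions.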
The running example used to demonstrate the workflow is a dataset of reading
times with a linguistic manipulation contrasting object relative and subject
relative sentences.
This principled Bayesian workflow also demonstrates how to use domain knowledge
to inform prior distributions. It provides guidelines and checks for valid data
analysis, for avoiding the overfitting of complex models to noise, and for
capturing relevant data structure in a probabilistic model. Given the increasing use of Bayesian
methods, we aim to discuss how these methods can be properly employed to obtain
robust answers to scientific questions. All data and code accompanying this
paper are available from https://osf.io/b2vx9/.