Disentangling replicable mechanisms of complex interventions: what to expect and how to avoid fooling ourselves?
Abstract
Background: Guidance on process evaluation recommends analysing mechanisms of impact to understand how complex interventions change behaviour. This is challenging, as mediation analysis has serious pitfalls even in experimental settings. Additionally, logic models often contain constructs from many theories, making the full causal model too complex for statistical testing. This paves the way for analytical flexibility, which science commonly guards against through replication. In the absence of replication (and hence falsification), it is easy to be misled by data. We demonstrate opportunities to mitigate these risks using trial data from Let's Move It, a complex theory-based intervention to promote physical activity (PA).

Methods: 1120 older adolescents participated in the trial. Hypothesised mediators included psychosocial variables (e.g. autonomous motivation, self-regulation). The primary outcome was objectively measured PA (7-day accelerometry). Statistical methods included structural equation modelling.

Findings: Converting the logic model into a statistical causal model was challenging due to the multitude of estimable parameters. Piecemeal evaluation solved this problem but created new ones for the causal interpretation of mediation (e.g. excluding correlated mediators). Overfitting due to model complexity or researcher degrees of freedom could be mitigated by splitting the data into training and testing sets. This "cross-validation" may be the best available alternative to replication, but it requires adequate sample sizes.

Discussion: We need to consider reliable ways of evaluating logic models statistically. When replication is not an option, special care must be taken to separate signal from noise. Complexity science can aid in deciding between misleading and useful goals of mechanism analysis.
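The Findings above point to splitting the data into training and testing halves as a stand-in for replication. The sketch below illustrates that idea for a single hypothesised mediator; it is not the authors' analysis code. The dataframe df and its column names ('group', 'autonomous_motivation', 'mvpa_min') are hypothetical placeholders rather than the Let's Move It variables, and a simple product-of-coefficients estimate stands in for the structural equation models used in the trial.

```python
# Minimal sketch: split-sample check of a single-mediator model.
# Assumes a pandas DataFrame `df` with hypothetical columns:
#   'group'                 - trial arm (0 = control, 1 = intervention)
#   'autonomous_motivation' - candidate psychosocial mediator
#   'mvpa_min'              - accelerometer-measured PA outcome
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.model_selection import train_test_split

def indirect_effect(data: pd.DataFrame) -> float:
    """Product-of-coefficients (a*b) estimate of the indirect effect."""
    # a-path: effect of the intervention on the mediator
    a = smf.ols("autonomous_motivation ~ group", data=data).fit().params["group"]
    # b-path: effect of the mediator on the outcome, adjusting for group
    b = (smf.ols("mvpa_min ~ autonomous_motivation + group", data=data)
            .fit().params["autonomous_motivation"])
    return a * b

# Hold out half the sample so that model choices made on the training half
# can be checked against untouched data (a substitute for replication).
train, test = train_test_split(df, test_size=0.5, random_state=1)
print("training-half indirect effect:", indirect_effect(train))
print("held-out-half indirect effect:", indirect_effect(test))
```

As the abstract notes, the value of such a split depends on having an adequate sample in both halves; with small samples, the held-out estimate can be too noisy to distinguish an overfitted model from a replicable one.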
Published: 2017-12-31
Section: Symposia