Guest Post: Elaine Toomey, Health Behaviour Change Research Group, National University of Ireland, Galway
Sometimes the journey is just as important as, if not more important than, the destination. A motivational quote that perhaps graces one too many t-shirts of travellers seeking to ‘find themselves’, it is never more apt than when thinking about behaviour change intervention research. Changing human behaviour is embedded within much of the research we conduct on a daily basis, particularly within public health and health psychology settings. Interventions to change health behaviours such as smoking and physical activity have huge potential to make a positive impact on both individuals themselves and our overburdened healthcare systems. Accordingly, health behaviour change research is a growing area, with hundreds of behavioural intervention trials estimated to be conducted globally each day. However, a lack of transparency regarding what has actually occurred during these interventions has clouded our ability to accurately interpret this research and make best use of it.
For example, in pharmacological interventions the dose and specific chemical ingredients of a drug are typically highly specified, so study outcomes can usually be accurately attributed to the drug in question. Within behaviour change interventions, however, the ‘active ingredients’ of the intervention are often less well specified, and even when they are, it is often unclear whether these ingredients were delivered or received as intended, creating something of a ‘black box’ of uncertainty (Michie and Abraham, 2004; Grant et al., 2013). A process evaluation is a study that aims to investigate how well an intervention was implemented as intended (i.e. intervention fidelity) and to determine why outcomes occurred (e.g. mechanisms of impact and contextual influences) (Moore et al., 2015). In other words, it examines what happened during the intervention journey in order to better understand the outcomes, i.e. how and why you have arrived at this destination.
Process evaluations and fidelity assessments are crucial for facilitating transparency in the development and reporting of interventions in numerous research fields, including psychology, social science and public health, as they help to increase confidence that changes in study outcomes are due to the influence of the intervention being investigated, and not to differences or variability in how the intervention was implemented (Borrelli, 2011). Knowledge of these aspects also assists in identifying the essential elements of an intervention (Robb et al., 2011) and enhances the ability to accurately replicate effective and successful interventions (Bellg et al., 2004). Intervention fidelity was also highlighted during a recent BITSS workshop in South Asia as an important area to address in order to support intervention replication and reproducibility. It is also important to note that while the examples and much of the literature cited here are based in health psychology and healthcare settings, process evaluations and intervention fidelity are relevant to other fields, such as environmental behaviour change, educational interventions and many more.
Despite its importance, however, fidelity is still often poorly addressed: several reviews of intervention fidelity spanning more than two decades have found that significant gaps remain in how it has been addressed across a wide variety of disciplines (Moncher and Prinz, 1991; Dane and Schneider, 1998; Parham et al., 2007; Naleppa and Cagle, 2010; McArthur et al., 2012; Schinckus et al., 2014; Prowse and Nagel, 2015). Previous research has found that barriers to addressing intervention fidelity include a lack of knowledge about fidelity, a lack of specific guidelines, a lack of editorial requirements to report fidelity, and time, cost and labour constraints (Perepletchikova et al., 2009; Toomey et al., 2016). Less is known about how process evaluations have been used, with fewer reviews in this area, but previous research suggests there is a lack of standardisation around how they are conducted (Grant et al., 2013). Conducting process evaluations and assessing intervention fidelity within behaviour change research is therefore clearly not straightforward or easy. Indeed, most behaviour change interventions involve complex interactions between multiple components (e.g. individuals, providers, behaviours, outcomes), each of which has the potential to influence intervention outcomes separately, making this work difficult.
While the journey towards better understanding of behaviour change interventions is ongoing, a number of recent promising initiatives are moving us closer to our destination. For example, in 2014 a group of international experts and stakeholders developed the Template for Intervention Description and Replication (TIDieR) checklist and guide to improve the reporting and replicability of interventions, which explicitly includes intervention fidelity (Hoffmann et al., 2014). Additionally, in 2015 the UK Medical Research Council (MRC) published comprehensive guidance for conducting process evaluations of complex interventions. BITSS has also been hugely supportive of this research, through the Leamer-Rosenthal Prize awarded in December 2016 for my PhD work in this field. Recent Catalyst funding has also been announced to support training in process evaluations of complex interventions, to be run by the lead author of the MRC guidance, Dr. Graham Moore, and Dr. Rhiannon Evans of the DECIPHER centre. These are all steps in the right direction, but we need to continue to raise awareness of intervention fidelity and process evaluations, to promote their importance, and to provide support for training and upskilling in this area. Otherwise, behaviour change research runs the risk of ending up somewhere with no idea how or why it got there.
Dr. Elaine Toomey is a 2016 recipient of the Leamer-Rosenthal Prize for Open Social Science. These prizes of up to $10,000 recognize social scientists who advance practices of research transparency and the use of reproducible methods. BITSS is accepting nominations for a third round of prizes. More information about the prizes, nomination guidelines, and past recipients can be found here.