Within the BC Public Service, BC BIG acts as a corporate consulting team, which means we work with clients from across all ministries and policy areas in B.C. Rigorous evaluation is a foundational principle of our behavioural insights practice. By conducting a robust evaluation, we provide our ministry clients with a high level of evidence that demonstrates what works and what doesn’t. Designing and executing a robust evaluation is a complex process, which is why we developed the RIDE Model for Behavioural Shift.
During the scoping phase, we work with prospective clients to understand their behavioural challenge and to determine if the problem is best addressed with behavioural insights or through another approach. At this stage, we learn about the behaviour of interest, the context around the challenge, and the prospective client’s readiness to participate in a project.
Once we’ve decided to move forward with a project, we conduct background research to gain a deeper understanding of the problem and to uncover barriers to behaviour change. To gather this information, we conduct field research, which can include surveys, interviews, and site visits with people who use services and frontline staff. We also perform desk research to learn what we can from the academic literature and from other jurisdictions.
After gaining a better understanding of the problem, we work with our clients to co-design innovative solutions that we can later test.
During the Innovate phase, we draw on the EAST Framework, developed by our friends at the Behavioural Insights Team. The EAST Framework distils behavioural insights into four simple categories, which we use to brainstorm ways to encourage positive behaviour change by making the behaviour easy, attractive, social, and timely (see right-hand box).
After designing a new intervention, we test it to see if it works. Our goal is to produce the highest level of evidence possible, which means conducting randomized controlled trials (RCTs), the gold standard in experimentation, when we can. When randomization isn’t possible, we conduct quasi-experiments such as pre/post-tests.
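To illustrate what the randomization step involves, here is a minimal sketch in Python. The participant IDs, function name, and 50/50 split are hypothetical illustrations, not BC BIG's actual tooling; the point is that a seeded random shuffle gives a reproducible, unbiased allocation to treatment and control groups.

```python
import random

def randomize(participant_ids, seed=42):
    """Randomly split participants into treatment and control groups.

    Seeding the generator makes the assignment reproducible, which
    helps when an evaluation needs to be audited later.
    """
    rng = random.Random(seed)
    shuffled = list(participant_ids)       # copy so the input is untouched
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {
        "treatment": shuffled[:midpoint],  # receives the new intervention
        "control": shuffled[midpoint:],    # receives business as usual
    }

# Hypothetical example: six client files to allocate.
groups = randomize(["A1", "A2", "A3", "A4", "A5", "A6"])
print(groups["treatment"], groups["control"])
```

Because only chance determines who lands in each group, any later difference in outcomes can be attributed to the intervention rather than to how participants were selected.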
After running the RCT in the field, we analyze the data we collected using descriptive and inferential statistics and other analytic tools. Through this process, we learn what worked to shift behaviour and what didn’t. The findings help us make evidence-based recommendations to our clients.
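As a sketch of what the inferential step can look like, the standard two-proportion z-test below compares response rates between treatment and control groups. The counts are hypothetical and this is one common test among many, not a description of BC BIG's actual analysis pipeline; it uses only the Python standard library.

```python
import math

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical trial: 480 of 2,000 treated clients responded,
# versus 400 of 2,000 in the control group.
z, p = two_proportion_z_test(480, 2000, 400, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value here would indicate that the observed difference in response rates is unlikely to be due to chance alone; in practice the effect size and its confidence interval matter as much as statistical significance.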
If an intervention is successful and achieves the desired behaviour change, we support our clients in scaling it across their program.
If the intervention doesn’t produce a change, we can go back to the Research phase and incorporate what we learned to test another intervention. For us, learning what doesn’t work is just as important as knowing what does.