Heather Lodge (Public Health England) May 2015
There is something special about the Mawby room at Kellogg College, Oxford. You could be forgiven for finding it unprepossessing. It is quite a dark room with a good view of the car park and the busy Banbury Road. At best, the room holds about 25 people, and negotiating your way between the tables requires a dexterity of footwork that wouldn't go amiss on Strictly Come Dancing. The room has no whizzy technology to enable group learning; it has just a lectern, a projector and a flipchart but, much of the time, it doesn't even need those.
Why? Well, have you stopped to consider what makes a good learning experience? On my list are fairly basic things: confident, engaging trainers who love their subject; a safe environment in which to ask questions and make mistakes (and I make many of them, believe me); an interesting course that stretches me; and varied training methods that help me learn. When it comes to teaching critical appraisal, that learning environment needs to be incredibly good. Speaking personally, the only statistics I tend to deal with as a Librarian are the numbers of people using various resources and whether those numbers justify the expense. The prospect of having to read, let alone explain, a scientific paper full of numbers and concepts like 'P values' and 'heterogeneity' is usually so far out of my comfort zone that I think I'd be better at feeding lions at the zoo. But unless you are a statistician, the chances are you will recognise the feeling – if not the wish.
And yet, the challenge is there for anyone who works in or uses healthcare in whatever capacity. How do you know what is best practice if you don’t know how to read the research literature? And if you do read research, what makes you trust it? The people who have written it? The journals that publish it? The article that appears in the media?
At the CASPfest on 27 March 2015, founders, experts and beginners in critical appraisal gathered to celebrate all that CASP has achieved and to look forward to its future development. Larry Chambers reminded us that critical appraisal is not necessarily about knowing clinical detail but looking instead at:
- Evidence: was the research done in a way that makes its findings reliable?
- Facilitation: how do we make sense of the results?
- Context: what do the results mean for decision-making?
CASP set the precedent for critical appraisal when it began in the 1990s with its checklists and problem-based approach to learning. And because – as Richard Lehmann pointed out – “evidence dissemination doesn’t happen of its own accord,” we have a responsibility to encourage engagement with research so that it can be used to make informed decisions about the commissioning, delivery and practice of healthcare.
So what is it about the Mawby room at Kellogg College that catches my attention? It is special because of the quality of teaching that is delivered and learning that is achieved at CASP workshops there. Actually, it isn’t about the room at all. It is about trainers who understand that critical appraisal isn’t about being able to calculate confidence intervals. It is about knowing how to ask a question and how to find and interpret the answer.
I can only dream of the day when I, too, might be able to deliver critical appraisal training as expertly as CASP. Especially if I can ditch PowerPoint and use chocolate bars, the measuring spoons from my kitchen drawer and a bag of jelly beans as teaching aids.
Medical Librarian and participant on CASP international workshop May 2014.
CASP Fest was held on 27th March 2015; see photos from the day here.