At the start of the year, I had the pleasure of reading through a stack of reports on UKRI’s Citizen Science Evaluation Grant projects. These were accounts of the research process, detailing the unexpected turns and nitty gritty of decision-making. I found that understanding that context made each of the projects feel alive and real in a way that a polished journal article does not.  

Researchers were frank about the challenges they faced – and the advent of the COVID-19 pandemic provided plenty! They were also more open and reflective about their projects’ successes than the strictures of academic publishing would normally allow.

Our evaluation resulted in some recommendations for those designing and funding citizen science. But it also prompted questions and provocations about how we advocate for a method while noting its complexities, how we engage with people in ways that work for them, and how support structures can be developed to build participatory and other citizen-led research capacities for the future. 

The devil’s in the detail 

Full disclosure: I used to be a librarian – so, of course, I value documentation, and access to it. But this was something new. In the candour of the reports, interviews, and roundtable conversations, we heard from researchers about what it was like to conduct these projects: what was challenging, how they made it work, and how much they learned from engaging with their participants. This kind of sharing is incredibly valuable.

And yet there is a tension here. How can we be open about the warts-and-all complexities of research with real people in the real world, while at the same time making the value of this kind of research uncontroversially clear? Participatory research requires adaptability when life gets in the way, and it cannot be designed around rigid pipelines. But this messiness, while off-putting to some, can yield excellent data and meaningful results.

Tensions and protections  

Our evaluation notes that it could have benefited from direct input from citizen scientists themselves. Many studies involved fully anonymous participation, and while anonymity can be important and valuable where sensitive topics are investigated, or simply to reduce the burden of taking part, it comes at the cost of not being able to ask participants for their input later. Where participant data was held, it was kept only for the short duration of the projects. All of this is excellent data protection in action. But it points to a tension over when, and how, we can re-engage people who take part in research, to ask about their experiences so we can learn from them. It is also notable that many of the projects funded by the UKRI pilot didn’t build in any form of evaluation themselves, suggesting that evaluation was not seen as a priority.

Though these projects were short, there was still the potential for long-term ripple effects spreading through participants’ lives and researchers’ approaches. To meet the full potential of citizen science, we have advocated for long-term funding and research design that builds in longitudinal evaluation. This would also allow more planning for different opportunities to engage research participants in decision-making and evaluation. However, this amounts to making the recommendation for building long-lasting citizen science infrastructure against a backdrop of academic precarity, and funding pressures that push in the opposite direction.

Pondering conundrums 

I am interested in conundrums. I think we often need to sit with them, rather than picking sides. We need to both share the trickiness of doing research collaboratively with citizen scientists and members of the public, and sing the praises of this approach and other participatory approaches, to share learnings. We need to provide anonymity and protect the data of those who take part in research, while also giving them the chance to voice their thoughts on it. We need to have short pilot studies that can do a lot with little, while also developing infrastructures that support citizen science and wider participatory research for the long term. 

There are dotted lines that link this back to questions of what works and what doesn’t work in citizen science approaches – and it’s critical that we pause to consider what infrastructure is needed to support their exploration and continuation. 

Join The Young Foundation’s free webinar on 16 November, exploring funding a participatory future with spokespeople from UKRI 

Posted on: 31 October 2022. Author: Helena Hollis
