Open research is a means, not an end

Celebrating and enabling openness gives better results than trying to measure it, says Elizabeth Gadd

Open science, or, as I prefer to call it, open research, has become a bit of an obsession in higher education. We have concordats, initiatives, mandates, statements and plans. Most of these recognise that the biggest barrier to achieving open practices is the current research evaluation regime. 

If we continue to obsessively measure research success based on traditional, closed practices—most notably, papers in big-brand journals—where is the incentive to engage with open practices? The answer, for many, seems to be to change what we measure. If we value openness, we should start measuring openness, right? 

Such measures are starting to be used. Open access publication is increasingly linked to university funding, for example in the UK Research Excellence Framework (REF). The CWTS Leiden Ranking now compares institutions’ engagement with open access. The European Commission has measured the proportion of open access publications from leading research nations worldwide.

However, there is a lot more to open research than open access. And beyond the cascade effects of high-level policy onto individuals, scholars currently have no real incentive to give up one way of working and engage with another. 

One or two initiatives have tried to change this. There is the Open Science Career Assessment Matrix, for example, but I know of no adopters. The odd university has expressed a desire to value open science activity in its promotion criteria.

Such efforts seem logical. But we have to be careful not to fall back into the trap of valuing a research output’s vehicle—this time a journal’s openness, not its citedness—rather than its quality. 

Is openness really what we value most about research? Or is openness a means to achieving something else? 

 

Measured response

Advocates will argue that open practices—such as the pre-registration of research studies, open methods, open lab books, open software, open data, pre-prints, open peer review and open outputs—all ultimately lead to better research. And it is hard to counter that argument. 

But flash forward 20 years, to a time when it might be hoped that everyone is doing this sort of thing. Are we saying that all this open research will be of the same quality? If, as the palaeontologist and open-science advocate Jon Tennant argues, “open science is just science done right”, surely what we should be measuring is not openness, but the improved scholarship that results?

 At Loughborough University we are wrestling with all of these questions as we review our open research policy. A working group, formally commissioned by the university research committee and comprising champions of open research at all career stages, drawn from different schools and professional services, is considering our response to the openness agenda. Part of that involves the role of measuring openness. 

Inspired by the work of the research evaluation working group of the International Network of Research Management Societies, which I chair, we started not by seeking to measure openness in and of itself, but by thinking about what it is about openness that we value.

To help, we referred back to our mission as a research-intensive organisation, namely—to quote directly from our research strategy—“to enhance quality, visibility and impact, while celebrating research excellence wherever it is found”. 

For us, it became clear that openness was an important route to these aims, but not an end in itself.

Openness leads to quality by enhancing rigour and reproducibility.

Openness leads to visibility by making the whole research life cycle more transparent and accessible.

Openness leads to impact through improved engagement with outputs, and with the communities affected by our research.

 

Different strokes

But openness is not the only route to these things. Quality can be achieved through excellent research leadership and creative thinking. Visibility can, in some disciplines, still be achieved by publishing in closed outlets that are read and cited by their communities. Impact can be achieved through confidential discussions with industry.  

It is the quality, visibility and impact of our research that we at Loughborough prize, and there is a range of ways to achieve them. For this reason, our working group believes we should encourage and value open practices equally alongside other efforts to achieve quality, visibility and impact.

Some would argue that as long as academics can achieve quality, visibility and impact through traditional routes they have little incentive to pursue open practices that may help them to do things better. For many, this justifies rewards or sanctions based on engagement with open practices.

However, as outlined above, we need to be cautious about sending the message that openness is an end in itself. At Loughborough, our solution to this problem was inspired by the second element of our research strategy, to “celebrate research excellence wherever it is found”.

We felt that celebrating openness as a route to research excellence would probably be the most appropriate incentive.

 

Cause for celebration

First, celebrating openness is infinitely preferable to sanctioning non-compliance. Creating a recognition culture through awards or open research fellowships, rather than an evaluation culture through unhelpful metrics, has to be the first choice—especially when presenting beleaguered and hard-working academics with yet another expectation. 

Celebrating openness, rather than pitting one researcher against another in some kind of openness league table, is a far more constructive and collegiate way to encourage good practice. It recognises that openness is not a zero-sum game, and is usually its own reward.

Second, as long as open practices remain virtually unheard of in some quarters of academia, it would be unfair to start setting universal targets. Targets applied without nuance can have tragic unintended consequences, as shown by the case of Stefan Grimm, a researcher at Imperial College London who committed suicide while under pressure to bring in more grant income.

The opportunity to engage with openness is not currently a level playing field for all researchers and institutions. This means that any reward system based exclusively on open practices is not going to be accessible to the whole community. However, celebrating good open practices wherever they are found gives those who are engaging the recognition they deserve, while also creating a network of champions through which good practice can spread. 

Third, celebrating openness rather than sanctioning non-compliance recognises that motivating through measuring is only one route to culture change. In fact, it should probably be the last resort.

At Loughborough, we have concluded that to move towards open practices, academics need four things:

1. Understanding: why is this important?
2. Capability: how do I actually do this?
3. Opportunity: where can I do this?
4. Motivation: what are the benefits, or who’s going to make me?

If we get 1-3 right, number 4 might not be needed at all. Think about recycling. We all understood why recycling was important, and we all knew where the bottle banks were. But most of us didn’t recycle regularly until the local council provided us with a bin in our front yard. 

Once that was available, we didn’t need recycling rankings, or a field-weighted-recycling-index to motivate us. We just did it, because it was easy. 

So, if celebrating openness is better than measuring it, and support is more important than scores, what are the implications for funders? 

 

Changing the culture

Policies such as Plan S, the increasingly global plan to accelerate open research, have their heart in the right place. Plan S seeks not just to open up access to research outputs but to pave the way for better, open publication choices by changing the way research is evaluated.

But it does this, as so many policies do, by counting things that are easy to count, such as open access outputs and responsible metrics policies, rather than by the more effective but harder-to-measure course of enabling universities to shape understanding, capability and opportunity.

The fear is that such policies make openness—and usually just open access—an end in itself rather than a route to greater quality, visibility and impact, or whatever else a university values. 

The culture change needed to usher in open practices is as hard for funders to mandate as it is for universities to implement. There is an argument that demanding open access outputs led to a culture change of sorts, but I fear that culture consists mainly of beleaguered librarians and box-ticking researchers.

Call me a dreamer, but I think if funders really want to create an open research culture, they might look to the REF’s approach of requiring institutional codes of practice. If each university had to produce a policy for engaging with openness that was context-specific but also met certain requirements, it might have better, longer-lasting effects. Who knows, counting open access outputs might not be needed at all.

We at Loughborough plan to put openness on a par with other activities that enhance the quality, visibility and impact of research, and celebrate open practices. But we believe the best route to achieving openness will not involve any alternative forms of research evaluation. 

The critical elements will be around understanding (better communications, the use of champions, open research fellowships), capability (training, sharing expertise) and opportunity (services and support). 

Measuring and valuing openness undoubtedly have their place in the transition towards a brave new open world. But they should be carefully thought through and sensitively implemented, not deployed as poor substitutes for proper open infrastructures and support.

With thanks to Steve Rothberg for commenting on an early draft, and to the members of Loughborough University’s Open Research Policy Group. 

Elizabeth Gadd is research policy manager (publications) at Loughborough University 

This article also appeared in Research Europe