AI in research could be a rocket-booster or a treadmill

How the technology will impact academic life is poorly understood, say Jennifer Chubb and colleagues

Research funders worldwide are exploring how artificial intelligence might enable new methods, processes, management and evaluation. Some, such as the Research Council of Norway, are already using machine learning and AI to make grant management and research processes more efficient.

A review by the UK’s public funder UK Research and Innovation, to give another example, suggested that AI might “allow us to do research differently, radically accelerating the discovery process and enabling breakthroughs”. The UK’s National AI Strategy, published in September, reinforces this approach.

But there are concerns about potential downsides, such as reinforcing biases and degrading working life. AI might turbo-charge research, or it might drive a narrow idea of academic productivity and impact defined by bureaucracy and metrics, replacing human creativity and judgement in areas such as peer review and admissions.

To better understand AI’s future in academia, we interviewed 25 leading scholars from a range of disciplines, who identified positive and negative consequences for research and researchers, both as individuals and collectively.

So far, AI has mostly been used in research to help with narrow problems, such as looking for patterns in data, increasing the speed and scale of analyses, and forming new hypotheses. One interviewee described its labour-saving potential as “taking care of the more tedious aspects of the research process, like maybe the references of a paper or just recommending additional, relevant articles”.

Another strong theme was that, by analysing large bodies of texts and drawing links between papers, AI systems can aid interdisciplinary research by matchmaking across disciplines. AI is also seen as a way to boost the impact of multidisciplinary research teams, support open innovation and public engagement, develop links beyond academia and broaden the reach of research through technology. All of these can enhance the civic role of universities.

Some foresaw a revolution in citizen science, enabling projects that reshape their priorities in response to participants’ interests and behaviours. One interviewee noted the possibility of “co-creation between a human author and AI that then creates a new type of story”.

The question remains, though, whether these efficiency gains will simply feed fiercer competition, forcing researchers to run ever faster to stand still, or whether the technology will replace them altogether. AI’s labour-saving potential will also come at the cost of privacy, through the gathering of large amounts of personal data.

Our interviewees were fairly confident that AI would not replace established academic labour. The technology was, though, seen as a potential threat to more precarious groups, such as those in the arts and humanities, and early career researchers. Elsewhere in the university workforce, ‘white collar’ data-based jobs were felt to be more at risk of automation than manual work.

Transparency is crucial

As technology takes on a bigger role in funding decisions, our research underlines that it is critical for such applications to be introduced transparently and to gain the trust of the academic community. Care must be taken not to disadvantage particular groups by reinforcing pre-existing biases.

With AI already having a profound impact on how scientific research is done, there is an acute need for a greater understanding of its effects on researchers and their creativity. We need to balance research quality and researchers’ quality of life with demands for impact, measurement and added bureaucracy. The research policy expert James Wilsdon has drawn parallels between understanding and regulating AI in research and the effort to make sure that metrics and indicators are used responsibly.

Further steps are needed to examine the effects of AI and machine learning. This requires the research policy community to develop and test different approaches to evaluation and funding decisions, such as randomisation and automated decision-making techniques.

Beyond this, studies of the role of AI in research need to go much further, and ask fundamental questions about how the technology might provide new tools that enable scholars to question the values and principles driving institutions and research processes.

The UK’s National AI Strategy, for example, emphasises the need to “recognise the power of AI to increase resilience, productivity, growth and innovation across the private and public sectors”, but contains little on whether this makes life any better. 

We must be willing to ask whether AI in the workplace supports human flourishing and creativity or impedes it.

The report on which this piece is based can be found here.

Jennifer Chubb is a research fellow at the University of York; Darren Reed is a senior lecturer in sociology at the University of York; and Peter Cowling is professor of AI at Queen Mary University of London

This article also appeared in Research Europe