On behalf of VSNU I attended the one-day EUA workshop on Research Assessment on 14 May 2019. In July 2018 EUA launched a roadmap on research assessment, focused especially on the transition to open science. In addition, there is an April 2019 briefing with reflections on research assessment within universities. The VSNU issued a statement in 2018 and will organize a joint event on research assessment with EUA on 15 November 2019.
The actions proposed in the EUA documents concentrate on gathering and sharing information, engaging in dialogue, and developing policy and good-practice recommendations. This first workshop, however, covered more than just a recap of that. First of all, it was an invitation to participate in a survey (the information-gathering part) that EUA launched to get an overview of the research evaluation methods universities use, whether already implemented or still in development. Secondly, on the day of the workshop itself, Science Europe and EUA released a joint statement pledging to combine efforts to improve scholarly research assessment methodologies.
Furthermore, the presentations of the day were strongly aligned: changing the way we assess research is in fact a cultural change, and it needs both a top-down and a bottom-up approach. Opinions differed on whether we need disruption (harder to achieve from within the system) or evolutionary steps. Managing the change, however, is crucial to our success in implementing more open science, better science. Many presenters emphasized the differences between researchers, disciplines, metrics, careers and publishers, and hence the need to accept that there is no single big solution (no one size fits all). Let me pick out a few of the recurring themes of the day:
As Stephen Curry, Chair of the San Francisco Declaration on Research Assessment (DORA) Steering Committee, stated: “There is no academic freedom, there is academic responsibility”. All stakeholders should act in a responsible way. John-Arne Røttingen, Chief Executive of the Research Council of Norway, made a plea to help researchers get back to an intrinsic motivation for doing research.
Eva Méndez Rodríguez, Chair of the Open Science Policy Platform (OSPP), said it loud and clear at the end of the day: “We have enough declarations, we should now work on implementations”. A nice concrete example of a different research assessment model was presented by the Rector of Ghent University, Rik van de Walle. The basis of changing their research assessment was to work from principles of trust instead of control (which means both giving freedom and taking responsibility), and to simplify administrative processes. This initiative will be very promising to follow, though we should also realize, as Van de Walle said: “a career is more than research assessment”. Changing a whole university might be considered a big step (and of course it is), but smaller steps will also get you there in the end. Pastora Martínez Samper and the DORA Working Group of the Open University of Catalonia, Spain, very wisely thought through the implementation before signing DORA, starting by setting different criteria for postdoctoral calls.
Perhaps the most frequent references were to how we communicate, what language we use, and what dialogue is needed. Just a few lines I wrote down during the sessions:
- “We risk not pursuing what benefits health the most but what boosts your career the most or gives the most academic credit”.
- Noémie Aubert Bonn, doctoral candidate at the University of Hasselt, Belgium, spoke about her research and how she found that researchers felt unable to act on their own; this generalized inaction fosters frustration and an unclear agenda. We should simply start talking about it. “If we want change to happen we really want you to talk.”
- Note that the narrative remains “what is good for your career”, not “what is best for disseminating your research”. We can work on using proper language.
Impact can be seen in different ways, that’s for sure. Too much focus on your own impact and on the quantity of your output instead of its quality might result in neglecting tasks for which no clear measures of performance are available. Something else to keep in mind is that people are not averages. Thinking in averages is dangerous in any case: on average a paper in Nature might be of higher quality, but that does not mean it is true for every paper. Sabina Leonelli stressed that people are neither averages nor numbers. If you need metrics at all, use them wisely, use more than one, and weigh them depending on discipline and career stage (see here a summary of older reports on this topic). A nice example to mention here is the Openness Profile, a concept developed by Knowledge Exchange and presented by Lorna Wildgaard, Data Management Specialist at the Royal Danish Library, University of Copenhagen, Denmark. The profile will show, in a sustainable way, what a researcher contributes to open science or open scholarship. Of course this still refers to the “what” and does not yet show how all this output has a real impact on society.
The approach of Catriona MacCallum, Policy Advisor for the Open Access Scholarly Publishers Association (OASPA), was also a nice one. She asked what scholarly communication should look like in a digital and connected world. She called on us to embrace sharing, to embrace diversity, and to take the opportunity to redefine what quality in science means: we should celebrate failure as well as success, and work other than the journal article needs to be rewarded too. Let’s work together and think about how research should be rewarded.
I have said it before: thinking about how we assess research and researchers/teachers will be essential to furthering science and serving its stakeholders. And yes, sorry, I also tried to coin a funny acronym: RIDIQulous, of course.