Agencies should exchange experiences in the evaluation of research projects
Analysis of 31 partnered research programmes conducted by nine funding agencies was presented at the 8th Annual Meeting of the Global Research Council in São Paulo.
By Maria Fernanda Ziegler | Agência FAPESP – Worldwide, more and more research is conducted in partnerships between academia and other sectors – manufacturers, tech startups, NGOs and government. Partnerships bring in more resources and contribute to internationalization, technology transfer and commercialization, as well as building capacity and helping solve economic, social and environmental problems.
This trend requires funding agencies to measure and evaluate the results of their programmes as a strategy to ensure their investments are efficient and effective, according to Peter Kolarz (photo), a senior consultant with Technopolis Group, a UK-based provider of policy advice and research services focusing on science, technology and innovation. Kolarz delivered a presentation to the 8th Annual Meeting of the Global Research Council (GRC).
“Research funding agencies need to talk to each other and exchange experiences in pursuit of more efficient methods and strategies,” he said. “We tend to follow the same methods and techniques as always, but we need to look at what works and what doesn’t work.”
The GRC’s 2019 Annual Meeting took place in São Paulo on May 1-3 and was attended by some 50 heads of research councils and funding organizations from a similar number of countries on all five continents. The event was organized by the São Paulo Research Foundation (FAPESP), Argentina’s National Scientific and Technical Research Council (CONICET) and the German Research Foundation (DFG).
Kolarz presented the results of a study on project design, monitoring and evaluation of results for 31 partnered research programmes conducted by participating agencies in nine countries: Argentina, Canada, Mexico, Morocco, Peru, Saudi Arabia, South Africa, Switzerland and Uruguay. The study also analyzed typical partnership models and key difficulties.
“Research programmes run as partnerships are proliferating,” he said. “However, in most countries academics have few incentives to prioritize objectives other than doing excellent science, and non-academic partners know little about the existence of these research programmes. We need to encourage efforts to enhance the structures of higher education institutions and bolster the dissemination of information about the programmes.”
For Kolarz, the scope of the partnerships in question is expanding. “Partnerships may involve not just industry but also other sectors,” he said. “We’ll see more and more researchers collaborating with NGOs, government, manufacturers and other business organizations. The problem is knowing how to ensure that partnered programmes get the best possible results.”
The study detected a number of common problems for funders, however advanced the science system of the country concerned may be. “Perhaps the main problem they all share is the lack of incentives for basic research by academics in universities, despite the countless reasons to fund basic, curiosity-driven research, including the desire to go beyond basic science. Innovation occurs only if there are researchers who can understand the scientific fundamentals,” Kolarz said.
Despite significant differences among the nine countries surveyed, best practice does not come only from countries with advanced science systems. “Sometimes countries that have been funding science for the last 400 years get stuck and their systems become out of date. Meanwhile, countries that are just getting under way have the chance to start with what’s most sophisticated,” he said.
Kolarz highlighted the importance of creating metrics and efficient techniques to evaluate the results of programmes. “Evaluation techniques don’t necessarily have to change in accordance with the different objectives of each programme. They should change only in accordance with the budget available to do the metrics and appraisals,” he said.
Defining goals and targets for the programme from the start also helps define the most suitable metrics. “This varies a great deal from one programme to another. Programmes involving innovation, basic science and sustainable development require different questions to evaluate success. Qualitative metrics are also needed, as what makes a project successful can’t be captured merely by the things agencies usually count, such as numbers of articles and patents,” Kolarz said.
“We need to continue sharing what works in terms of programme design, because in many cases programmes try to combine two different groups, such as academia and industry, to achieve a single purpose, but for some reason the partnership doesn’t work,” he said.
Kolarz also stressed the importance of swapping experiences on programme design and monitoring, and on IT systems. “These three points are very important and very hard to do. Sometimes it’s too costly to build an IT system, despite huge benefits to evaluation and to the users, especially those who want to apply for these programmes and aren’t academics. This is critical the world over,” he said.
Photo credit: Felipe Maeda / Agência FAPESP