Research publication metrics

The University uses metrics to inform its assessment of research performance, whilst recognising that metrics should be used only in an appropriate and responsible way.

Manchester was one of the first UK signatories of the San Francisco Declaration on Research Assessment (DORA), which was published in 2012.

We also endorse the five principles of 'responsible metrics' set out in The Metric Tide (2015), the report of an independent review of the role of metrics in research assessment and management. The five principles are:

  • Robustness: basing metrics on the best possible data in terms of accuracy and scope.
  • Humility: recognising that quantitative evaluation should support – but not supplant – qualitative, expert assessment.
  • Transparency: keeping data collection and analytical processes open and transparent, so that those being evaluated can test and verify the results.
  • Diversity: accounting for variation by field, and using a range of indicators to reflect and support a plurality of research and researcher career paths across the system.
  • Reflexivity: recognising and anticipating the systemic and potential effects of indicators, and updating them in response.

We use metrics about journals to inform publication strategies, and metrics about authors’ publications to inform our assessment of the quality of our outputs.

Output-based metrics – citations data

The University uses citations data, based on its research outputs published in the preceding five years, as an indicator of academic reach and impact.

For each output assessed, the number of citations is normalised by field, and the citation impact of the output is assigned to a percentile based on field and publication year.
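
As an illustrative sketch only (this is not the University's actual pipeline, and the function name, data shape and percentile convention are all assumptions), field- and year-based percentile assignment can be expressed as: group outputs by field and publication year, then report the share of that peer group each output's citation count matches or exceeds.

```python
# Hypothetical sketch of field-normalised citation percentiles.
# Each output is compared only against outputs sharing its field and
# publication year; 100.0 means it matches or exceeds every peer.
from collections import defaultdict
from bisect import bisect_right

def citation_percentiles(outputs):
    """outputs: list of dicts with 'id', 'field', 'year', 'citations'.
    Returns {id: percentile within the output's field/year group}."""
    groups = defaultdict(list)
    for o in outputs:
        groups[(o["field"], o["year"])].append(o["citations"])
    for counts in groups.values():
        counts.sort()
    result = {}
    for o in outputs:
        counts = groups[(o["field"], o["year"])]
        # Number of peers cited no more than this output.
        rank = bisect_right(counts, o["citations"])
        result[o["id"]] = 100.0 * rank / len(counts)
    return result
```

Comparing within the field/year group, rather than against all outputs, is what addresses the cross-field comparison problem described below: a count that is modest in one field may sit in a high percentile in another.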

Research published in 2012 showed that such an approach helps to address the difficulties which arise when outputs in different fields are compared in terms of citation impact.

The resulting data are used at an aggregate level to track the University's performance against one of its key performance indicators for research, and as one indicator of performance for a discipline or School.

Where appropriate to the discipline or field, citation data are used to inform assessments regarding the quality of an individual publication, the publication profile of a researcher and the aggregate publication profile of a group of researchers in a specified field or sub-field. In doing so we recognise the following caveats:

  • Evidence shows that citation metrics can be problematic, particularly when applied at the individual level, because citation practices in some fields may under-represent the publications of women, of minority ethnic groups, and of work published in languages other than English.
  • Citation data at an individual level are therefore used only judiciously, as one part of a wider set of information gathered from peer review and other indicators of research quality and esteem.

Journal-based metrics

The focus of DORA is to ensure that research is 'assessed on its own merits rather than on the basis of the journal in which the research is published' and that 'the use of journal-based metrics (notably the Journal Impact Factor and lists) in funding, appointment and promotion decisions' is eliminated.

As a signatory of DORA, the University is therefore committed to ensuring that journal-based metrics are not used to assess the quality of researchers' publications in our internal processes, including the Research Review Exercise and promotion and recruitment decisions.

Consistent with our commitment to DORA principles, it remains appropriate to use journal-based metrics as one indicator of the relative standing of different publication outlets, and therefore as a means of guiding, but not determining, decisions about where to submit work for publication in order to achieve maximum impact.

Guidance for staff and postgraduate students