MRC will be running their annual e-Val exercise from Monday October 4th to Friday November 26th.
Staff who hold, or recently held, an MRC award will be asked to complete the survey.
It seems to me that, although specialist medical questions are included, many of the questions would be relevant to any award, and that MRC e-Val represents a good starting point for standardising output/outcome/impact questions across the Research Councils.
I plan to do some more analysis of this in the autumn.
The question set for this year’s e-Val survey has now been published and can be found via the following link:
MRC have attempted, as a matter of principle, to maintain the same question set as last year wherever possible, as they do not want researchers to feel that they are being asked a different survey. However, having reviewed last year's data, MRC found that some changes to the questions were required to ensure they had the most accurate data set available. The changes are:
Section 3 – Further Funding – Added questions to ask when funding started and ended. MRC were previously unable to report these data in financial or calendar years. Complete responses from last year will now show as incomplete, prompting researchers to go back in and add these new responses.
Section 4 – Next Destination – Added a question to ask which country a staff member moved to (if known). MRC were previously unable to identify what percentage of personnel moved into roles within the UK. Complete responses from last year will now show as incomplete, prompting researchers to go back in and add these new responses.
Section 5 – Dissemination – Changed the drop-down options describing how the research was disseminated. Responses from last year will have been mapped across into the new responses, meaning that researchers need only check they are satisfied that the new category is correct.
Section 6 – Influence on Policy – Has been renamed to Influence on Policy and Practice.
Section 8 – Intellectual Property – Researchers are now asked to provide the patent publication number for all patent applications published or granted. MRC were previously unable to identify where different researchers had referenced the same discovery. Complete responses from last year will now show as incomplete, prompting researchers to go back in and add these new responses.
Section 9 – Products and Interventions – All the drop-down menus have been revised and simplified. Responses from 2009 will have been mapped into the new categories, with no further input required from researchers for those fields. A new question has also been added to summarise the development status of the product; because of this, complete responses from last year will now show as incomplete, prompting researchers to go back in and answer it.
For NPRI/LLHW awards – New sections have been added to replace the generic use of section 12.
Please do not hesitate to contact the MRC Project Manager if you have any questions (contact details below).
A quick update on where we are with capturing impact and output on our core systems.
Our approach is to identify the entities we know we need to report on, and to explore options for capturing the information generically, so that our systems and processes are not geared to a single requirement (such as REF or RCUK) but can support many uses of the information.
We have set up some fields in our test ePrints system: essentially a narrative box, a date, and a publicity flag for each ‘impact’ entry. Initially we have kept impact as a separate entity from outputs. I’ve tested this out and can record an impact with a relationship to an output. We still need to do more work on the many relationships between people, outputs and impact, and we need to add some more fields to allow categorisation of impact, e.g. influence on policy, economic.
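As a rough illustration only (the field and category names below are hypothetical placeholders, not our actual ePrints configuration), the entity described above might be modelled along these lines:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Output:
    """A research output (e.g. a publication) already held in the repository."""
    title: str

@dataclass
class Impact:
    """An impact entry: a free-text narrative, a date, and a publicity flag,
    optionally categorised and linked to one or more outputs."""
    narrative: str
    recorded_on: date
    publicise: bool = False
    categories: List[str] = field(default_factory=list)       # e.g. "influence on policy"
    related_outputs: List[Output] = field(default_factory=list)

# Recording an impact with a relationship to an output
paper = Output(title="Example paper")
entry = Impact(
    narrative="Findings cited in national guidance.",
    recorded_on=date(2010, 9, 1),
    publicise=True,
    categories=["influence on policy"],
    related_outputs=[paper],
)
```

Keeping impact as its own record, linked to outputs rather than embedded in them, is what lets one impact reference several outputs (and vice versa) later on.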
We may share some of the specifications and we will be sharing the ePrints code we devise for these entities. We are happy to demo our systems to others. Please email firstname.lastname@example.org if you want to be notified of any demos we are setting up.
We are also listening to other projects that are investigating CERIF (http://www.eurocris.org/) for sharing of Research information and expect to be able to output our data in a specified XML format.
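To sketch what an XML export of one of our impact entries might look like, here is a minimal example. The element names are placeholders of our own invention; a real export would follow the CERIF schema published by euroCRIS rather than this ad hoc vocabulary.

```python
import xml.etree.ElementTree as ET

def impact_to_xml(narrative: str, recorded_on: str, publicise: bool) -> str:
    """Serialise one impact entry to a simple XML fragment.
    Element names are illustrative placeholders, not real CERIF tags."""
    root = ET.Element("impact")
    ET.SubElement(root, "narrative").text = narrative
    ET.SubElement(root, "date").text = recorded_on
    ET.SubElement(root, "publicise").text = "true" if publicise else "false"
    return ET.tostring(root, encoding="unicode")

xml_fragment = impact_to_xml("Findings cited in national guidance.", "2010-09-01", True)
```

The point is simply that once the data is held as discrete, typed fields, emitting it in whatever XML format is eventually specified becomes a straightforward serialisation step.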
At our recent workshops we did a quick poll based on some of the entities that RCUK suggested they might want to gather data for.
These entities may change now that the RCUK Outcomes Project is not planning to build a new system but to use several existing systems instead; however, we expect that they will still want to capture information about a large proportion of these.
DISCLAIMER: This was a quick, unscientific poll deliberately copied from an informal exercise carried out at one of the RCUK Outcomes Focus Groups – we did not choose the questions ourselves. Please treat the information with caution, but we do think it gives some useful indications. We’d like to run another poll in future with different questions to try to get more useful results.
From a Research Organisation perspective the following were generally voted easy to capture and worthwhile to capture:
FOLLOW UP FUNDING
The following were voted as worthwhile capturing though there may be difficulties:
The following were seen as difficult to capture and of limited value to capture:
From this I understand that there will be at least 3 different sets of questions about Research Council awards rather than one standard set of metrics for all Research Councils. Whilst I recognise that some subjects will have different specialist questions I am not convinced separating them out is the most useful approach. Like other metrics exercises I think ignoring questions where they are irrelevant (or indeed having the system ignore them for you) might be an option.
I am sure that RCUK will provide a schema allowing us to upload data directly from our own systems, rather than data having to be entered directly into the RCUK systems, with the risk that what RCUK holds might then differ from what HEIs hold. I am anxious to hear more about this so that we can manage the requirements here at Glasgow.
Otherwise it seems likely to be time-consuming, with greater potential for error and confusion than one standard questionnaire.
Of course our friends at RCUK may come up with some easy fixes so here’s hoping!
What do others think? Please do comment in the box below and take part in this quick poll.
In addition to adding the draft report to the blog, we will soon post the posters showing how the groups tried to classify activity, together with observations from our quick poll of what was worthwhile to capture and what was easy or difficult to capture.
*Update* Here’s how both groups attempted to categorise research activity:
Some of my colleagues attended the HEFCE-sponsored “Impact in the Context of REF” event on Friday 25th June at King’s College London. Kerry Revel very kindly provided this brief report.
The event was very informative. There were several presentations from pilot institutions on their experiences of participating in the REF Impact Pilot. David Sweeney, Director of Research, Innovation and Skills at HEFCE, reported that the pilot is going well and that the panels have found themselves able to use case studies to differentiate scores. The pilot has raised various issues to be resolved in consultation with the assessment panels.
It was particularly interesting to hear from the Chairs of the Clinical Medicine and the English Language and Literature pilot panels. They reported that, despite some initial scepticism from panel members, the process has worked well, which should provide reassurance and confidence to the academic community. The importance of institutions being able to showcase the benefits their research has brought to the economy and wider society was also highlighted, particularly in the present climate, when funding for research is so tight.