MRC E-Val 2010

July 30, 2010

MRC will be running their annual e-Val exercise from Monday October 4th to Friday November 26th. 

Staff who hold, or recently held, an MRC award will be asked to complete the survey.

It seems to me that, although specialist medical questions are included, many of the questions would be relevant to any award, and that MRC e-Val represents a good starting point for standardising output/outcome/impact collection questions across the Research Councils.

I plan to do some more analysis of this in the autumn.

The question set for this year’s e-Val survey has now been published and can be found via the following link: 

http://www.mrc.ac.uk/Utilities/Documentrecord/index.htm?d=MRC007039

As a principle, MRC have attempted to maintain the same question set as last year wherever possible, as they do not want researchers to feel that they are being asked a different survey. However, having reviewed the data, some changes to the questions were required to ensure that MRC had the most accurate data set available. The changes are:

  • Section 3 – Further Funding – Added questions to ask when funding started and ended. MRC were previously unable to report these data in financial or calendar years. Complete responses from last year will now show as incomplete, prompting researchers to go back in and add these new responses.
  • Section 4 – Next Destination – Added a question to ask which country a staff member moved to (if known). MRC were previously unable to identify what percentage of personnel moved into roles within the UK. Complete responses from last year will now show as incomplete, prompting researchers to go back in and add these new responses.
  • Section 5 – Dissemination – Changed the drop-down options describing how the research was disseminated. Responses from last year will have been mapped across into the new responses, meaning that researchers need only check that they are satisfied the new category is correct.
  • Section 6 – Influence on Policy – Has been renamed to Influence on Policy and Practice.
  • Section 8 – Intellectual Property – Researchers are now asked to provide the patent publication number for all patent applications published or granted. MRC were previously unable to identify where different researchers had referenced the same discovery. Complete responses from last year will now show as incomplete, prompting researchers to go back in and add these new responses.
  • Section 9 – Products and Interventions – All the drop-down menus have been revised and simplified. Responses from 2009 will have been mapped into the new categories with no further input required from researchers; however, a new question has been added to summarise the development status of the product, so complete responses from last year will now show as incomplete, prompting researchers to go back in and add this new response.
  • For NPRI/LLHW awards – New sections have been added to replace the generic use of section 12.

 Please do not hesitate to contact the MRC Project Manager if you have any questions (contact details below).

____________________________________________________________________________________________________________________________________________________________________________

 Philip Anderson

Project Manager

MRC Head Office,

20 Park Crescent

London
W1B 1AL

 Tel: 0207 670 5360

Mob: 07747 476 269

 http://www.mrc.ac.uk/Achievementsimpact/Outputsoutcomes/e-Val/index.htm


Research Outcomes Update

July 30, 2010

I’m off on holiday for the first two weeks in August, but work is ongoing.

Coming soon on the blog:

  • Report from the workshops
  • More detail about how we linked awards from our Research System to outputs in our Repository
  • An update on the tests we are currently doing on capturing additional impact and output information in the repository


Capturing Information about Research Impacts and Outputs

July 22, 2010

A quick update on where we are with capturing impact and output on our core systems.

Our approach is to identify those entities that we know we need to report on, and to explore options for capturing information in a generic manner, so that the systems and processes are not geared to just one requirement (such as REF or RCUK) but can support many uses of the information.

IMPACT

We have set up some fields in our test ePrints system: essentially a narrative box, a date, and a publicity flag for each ‘impact’ entry. Initially we have kept impact as a separate entity from outputs. I’ve tested this out and can record an impact with a relationship to an output. We need to do some more work on the many relationships between people, outputs and impact, and we need to add some more fields to allow categorisation of impact, e.g. influence on policy, economic.
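To make the shape of this concrete, here is a minimal sketch of the data model described above, assuming a simple in-memory representation rather than the actual ePrints field configuration; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Output:
    """A research output, e.g. a journal article held in the repository."""
    output_id: str
    title: str

@dataclass
class Impact:
    """An 'impact' entry kept separate from outputs, mirroring the three
    fields described above: a narrative box, a date, and a publicity flag."""
    narrative: str
    impact_date: date
    publicity_ok: bool
    categories: list = field(default_factory=list)       # e.g. "influence on policy"
    related_outputs: list = field(default_factory=list)  # links to Output records

# Record an impact with a relationship to an output
paper = Output(output_id="eprint:1234", title="Example article")
impact = Impact(
    narrative="Findings informed a local public-health guideline.",
    impact_date=date(2010, 6, 1),
    publicity_ok=True,
    categories=["influence on policy"],
    related_outputs=[paper],
)
print(impact.related_outputs[0].title)
```

Keeping impact and output as separate record types, linked by a relationship rather than merged, is what lets one impact cite several outputs (and vice versa) later on.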

OTHER OUTPUTS

We have draft specifications for enhancements to (or addition of) some key entities and plan to test screens for these over the next few weeks.  These include public lectures, exhibitions and performances (required for HEBCIS https://researchoutcomes.wordpress.com/2010/07/02/higher-education-business-and-community-interaction-survey-hebcis/), award/recognition (relevant to REF), artwork, and compositions.

We may share some of the specifications and we will be sharing the ePrints code we devise for these entities. We are happy to demo our systems to others.  Please email a.cook@enterprise.gla.ac.uk if you want to be notified of any demos we are setting up.

We are also listening to other projects that are investigating CERIF (http://www.eurocris.org/) for sharing research information, and we expect to be able to output our data in a specified XML format.
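As a rough illustration of exporting linked award/output data as XML, here is a minimal sketch; the element names are invented for illustration and are not the real CERIF schema.

```python
import xml.etree.ElementTree as ET

def award_to_xml(award_id, title, output_ids):
    """Serialise an award and its linked outputs as a simple XML record.
    Element and attribute names here are illustrative only."""
    root = ET.Element("award", attrib={"id": award_id})
    ET.SubElement(root, "title").text = title
    outputs = ET.SubElement(root, "outputs")
    for out_id in output_ids:
        ET.SubElement(outputs, "output", attrib={"id": out_id})
    return ET.tostring(root, encoding="unicode")

xml = award_to_xml("G0801234", "Example award", ["eprint:1234"])
print(xml)
```

The point is only that, once the entities and their relationships are held in structured form, emitting them in whatever schema a funder specifies becomes a serialisation step rather than a re-keying exercise.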


What can and should we capture information about?

July 22, 2010

At our recent workshops we did a quick poll based on some of the entities that RCUK suggested they might want to gather data for.

These entities may change now that the RCUK Outcomes Project is not planning to build a new system but to use several existing ones; however, we expect that they will still want to capture information about a large percentage of these.

DISCLAIMER: This was a quick, unscientific poll deliberately copied from an informal exercise carried out at one of the RCUK Outcomes Focus Groups – we did not choose the questions ourselves.  Please treat the information with caution, but we do think it gives some useful indications.  We’d like to run another poll in future with different questions to try to get more useful results.

From a Research Organisation perspective, the following were generally voted both easy and worthwhile to capture:

SPINOUT COMPANY
LICENCE
FOLLOW UP FUNDING
JOURNAL ARTICLE
CONFERENCE PAPER
BOOK CHAPTER
BOOK
COMPOSITION

The following were voted as worthwhile capturing though there may be difficulties:

IMPACT
DESTINATION
AWARD/RECOGNITION
PATENT
CONSULTANCY
EVENT
COLLABORATION
BROADCAST
OTHER PUBLICATIONS
DATA SET

The following were seen as difficult to capture and of limited value to capture:

OTHER OUTPUTS
SKILL
WEBSITE
TRAINING MATERIAL

A summary of the poll can be found at:

http://www.gla.ac.uk/services/enrich/projectdocuments/


RCUK – No Single System Likely

July 22, 2010

Hi All,

The latest update from RCUK can be found at:

http://www.rcuk.ac.uk/aboutrcuk/efficiency/Researchoutcomes/default.htm

From this I understand that there will be at least three different sets of questions about Research Council awards, rather than one standard set of metrics for all Research Councils.  While I recognise that some subjects will need specialist questions, I am not convinced that separating them out is the most useful approach. As with other metrics exercises, I think ignoring questions where they are irrelevant (or indeed having the system ignore them for you) might be an option.

I am sure that RCUK will provide a schema to allow us to upload data directly from our systems, rather than data having to be entered directly into the RCUK systems with the risk that it might differ from what HEIs hold.  I am anxious to hear more about this so that we can manage the requirements here at Glasgow.

It seems this approach may be time-consuming, with a greater potential for error and confusion than one standard questionnaire.

Of course our friends at RCUK may come up with some easy fixes so here’s hoping!

What do others think?  Please do comment in the box below and take part in this quick poll.


Outcomes Workshop Update

July 16, 2010

Hi All,

In addition to adding the draft report to the blog soon, we will post the posters showing how the groups tried to classify activity, along with observations from our quick poll of what was worthwhile capturing and what was easy or difficult to capture.

*Update* Here’s how both groups attempted to categorise research activity:

London_summary   Glasgow_summary

London Communication and Impact discussion

Glasgow further funding discussion

Here is what we did at the workshops:

Key issues arising included:

  • Discussion about the purpose of data collection for RCUK – is it really for justification of spend to the Treasury and the public, or will future funding be affected?
  • Overlap of categories
  • Definitions need to be clearer to ease the pain of allocating activity to categories
  • Difficulty of capturing information about some of the entities
  • Quality difficult to assess – could depend on storytelling ability

 

Glasgow publications and performances

London Identifying Issues

Here is the short presentation on output management at the University of Glasgow:


Impact in the Context of REF

July 6, 2010

Some of my colleagues attended the HEFCE-sponsored “Impact in the Context of REF” event on Friday 25th June at King’s College London.  Kerry Revel very kindly provided this brief report.

The event was very informative.  There were several presentations from pilot institutions on their experiences of participating in the REF Impact Pilot.  David Sweeney, Director of Research, Innovation and Skills at HEFCE, reported that the pilot is going well and that the panels have been able to use case studies to differentiate scores.  The pilot has raised various issues to be resolved in consultation with the assessment panels.

It was particularly interesting to hear from the Chairs of the Clinical Medicine and the English Language and Literature pilot panels.  They reported that, despite some initial scepticism from panel members, the process has worked well, which should provide reassurance and confidence to the academic community.  The importance of institutions being able to showcase the benefits their research has brought to the economy and wider society was also highlighted, particularly in the present climate, when funding for research is so tight.

The event included a presentation from Sue Smart, Head of Performance and Evaluation, EPSRC on RCUK’s Research Outcomes Project.  Sue’s slides are available at http://www.kcl.ac.uk/content/1/c6/07/35/62/09SmartREFKCL25Jun2010.pdf

The programme and presentations from the event can be accessed at http://www.kcl.ac.uk/iss/support/ref/june2010