Feature | Measure for measure

A think piece exploring how and why we measure art and culture...

‘Who is the greatest Italian painter?’

‘Leonardo da Vinci, Miss Brodie.’

‘That is incorrect. The answer is Giotto; he is my favourite.’

The Prime of Miss Jean Brodie by Muriel Spark


How can you measure art or culture? Art is, after all, about the vision of an artist on the one hand and a personal interpretation on the other. Can we measure this? Should we?

The quotation above sums up the problem with evaluation. We aim for objectivity but find only subjectivity. We may sit in the same theatre, but whilst one person is enjoying the summit of human achievement, another is wondering how much money has been spent on such a boring evening. And that, in a sense, is what the arts are about.

That’s not really what evaluation is about, though. To think so is to misunderstand what it is, what we are evaluating and why.

What is evaluation?

A question often asked is: what is the difference between evaluation and monitoring? The terms are sometimes used interchangeably and there is some overlap in practice. However, evaluation is distinctive in that it is measurement against a set of standards, usually the objectives of a particular project, programme or set of activities. It is also ‘outcomes’ orientated, in the sense that it focuses on making judgements about the effect or impact of the activities rather than purely on the characteristics of the audience or on stating what is happening or has happened.

The idea of evaluation has been strongly influenced by work in the health sector. St Leger and Walsworth-Bell (1991) refer to:

The critical assessment, in as objective a manner as possible, of the degree to which a service or its component parts fulfils stated goals.

Of major note in the arts sector is Felicity Woolf’s Partnerships for Learning, published by the Arts Council of England. It focuses specifically on education but its principles are transferable to many aspects of cultural practice. She writes:

Evaluation involves making judgements, based on evidence, about the value and quality of a project.

Evaluation is therefore distinct from monitoring, which is more to do with the systematic collection of information as a project progresses. Monitoring can form part of the evaluative process, but it tends not to have the analytical component at the heart of evaluation. It’s like the difference between checking how many tickets have been sold in the lead-up to an event and assessing afterwards whether the right audiences have been reached.

What are we evaluating?

It is unlikely that evaluation would be used in deciding if Leonardo da Vinci was a ‘greater’ Italian artist than Giotto. Measuring ‘greater’ would be quite difficult and it would be unclear what could be done with the answer once it was known.

In terms of its practical use to arts managers, it is probably better to concentrate on the processes and management of a project and on what participants and audiences make of it. Usually, this sort of evaluation can be split into three elements:

  • Evaluation of processes
  • Evaluation of the outcomes for audiences and participants
  • Evaluation of wider and longer term impacts (e.g. on society or the economy)

Sometimes, this is described in terms of efficiency, effectiveness and impact.

It can also be thought of as a radiating circle, with rings or ripples moving out from the centre. The ring closest to you is the assessment of the processes – the management and organisation of the project – which will mainly involve the people working around you. Then there are the people connected to you, such as audiences and participants. Finally, there are the impacts on people and places that might not have a direct connection with you, such as a city or an area as a whole.

Why evaluate?

There are many good reasons for undertaking evaluation, and they aren’t all about proving to funders that their money has been well spent. Fundamentally, it helps us to learn and improve what we do, and to do so in an evidence-based way. Beyond this, it can provide a chance to reflect – on your own and others’ attitudes and efforts – giving you a sense of the importance or place of your work in wider contexts, and it can provide a useful legacy and ideas for the future.

It is therefore a tremendously powerful and useful tool, even though within the sector there can be scepticism about its purpose. This reluctance is usually based on a refusal to accept that anything needs improving, combined with a sense of being made to justify, to people who are felt to have no right to ask, something that shouldn’t need justifying.

When undertaken well, evaluation can be a liberating experience; it demonstrates that organisations have confidence in what they are doing and are strong enough to accept whatever evaluation might discover. To ask ‘why evaluate?’ is thus equivalent to asking ‘why learn?’ The highest-performing people, companies and organisations are those that strive constantly to examine, review and reflect in order to change and improve.

I've not failed, I've just found ten thousand ways that won't work
Thomas Edison

If work in the arts and cultural sector is seen as a journey or a continuous cycle of improvement – as with David Kolb’s learning cycle – then it can help to release organisations from instinctive defensive reactions. To improve does not negate what has been done previously. Of course, the inherent problem is a fear of failure or criticism. We’d rather not know. As a result, we live in a never-never world in which we are always right. Kathryn Schulz, author of ‘Being Wrong: Adventures in the Margin of Error’, presents it as:

the present tense is where we live … so we’re trapped in this bubble of feeling very right about everything … if you can step outside of this feeling it is the single greatest moral, intellectual and creative move you can make

This can happen through personal reflection, but evaluation, if done well, enables it to happen in a systematic way. It can also help with attitudinal problems because it is de-personalised, making the discussion about process rather than about blaming people.

Another reason for doing evaluation – showing impact – can also be problematic for arts and cultural organisations. The truth is that art, theatre, music, literature and so on are being assessed all the time; the critic, professor, programmer or funder judges them, and this is accepted because these people are informed insiders or part of a peer group; they’ve been educated into accepted ways of articulating the cultural offer. Asking the audience or the public, on the other hand, may run the risk of puncturing this protective bubble.

Evaluation enables an organisation to take control of the process. As Felicity Woolf states, a key element is the ‘evidence’. This is crucial because it moves assessment away from the opinions and decisions of a few people and places it within a less biased, more objective framework. Making the evidence clear and transparent also opens it up to further scrutiny.

Good evaluation therefore makes its methodology clear; a good example is the Creating an Impact report on Liverpool’s year as European Capital of Culture in 2008. Not everyone will agree with the analysis, but the authors are clear about how the evaluation was conducted and there is a distinct connection between the findings and the analysis.

The danger is that if we don’t do this ourselves, someone else will do it for us, making us susceptible to the imposition of targets and outcomes which are not useful, appropriate or desirable for our work. In addition, whilst not wanting to over-emphasise the ‘justification’ element of evaluation, organisations need to be mindful of the reality that resources, whether public, private or individual, are in short supply. As the sector increasingly competes with other demands on the public purse, funders (on behalf of their tax-paying citizens) have a right to know how resources are being used.

Tips for carrying out evaluation effectively

Evaluation at its heart is a simple process. It involves:

  1. Stating what you intend to achieve
  2. Deciding how you will show whether this has been achieved
  3. Gathering the necessary evidence
  4. Summarising and analysing the evidence
  5. Comparing findings with what was originally outlined, deciding on the implications and providing recommendations and ideas for future work

The elements that organisations seem to find most difficult are steps 1 and 2. When external evaluators are called in at step 4, they sometimes have to engage in backtracking – an unpicking of what the project was trying to achieve – which can undermine the overall credibility of the final report. So whilst it may be a cliché, it’s worth remembering that evaluation is not something undertaken at the end of the process but integral throughout.

A simple framework can be used to articulate this: for example, starting by setting questions such as ‘what do we want to achieve?’ against each aim. Each objective is then given a ‘measure of success’, i.e. the ‘evidence’ that is needed, the ‘methodology’ by which it will be collected and an outline of how the results will be ‘reported’.

There are other ways of bringing these ideas together, depending on the requirements of the project. For example, Ixia’s evaluation of public art looks with sophistication at areas such as the values of the partners and stakeholders involved; their matrix and personal project analysis tools are described in more detail on Ixia’s website.

Similarly, the W.K. Kellogg Foundation’s Logic Model emphasises the importance of linking the planning and evaluation of a project by articulating its desired results.

Deciding what success means at the outset will therefore help you to choose the measures to use and the evidence to gather. This is important across the cultural sector, as gathering the opinions of those involved and presenting them in a robust and rigorous way requires careful consideration. Qualitative research, or ‘anecdotal comments’, can be a valuable component of evaluation, but it needs to be well structured. Evaluation reports are often peppered with quotations from participants exulting that this was the best project they’d ever been involved in; however, the questions asked and the contexts and circumstances in which they were answered also matter. Quoting the one person who enjoyed it and ignoring the others obviously wouldn’t be very representative. In addition, a participant may have enjoyed the workshop or performance, but where did they start from? Had they done anything like this before? Did it make a difference in the longer term?

This means being careful about what it is that needs demonstrating and paying close attention to some of the more onerous aspects, such as establishing ‘baselines’. In this way, rather loose concepts can be better tied down, e.g. looking at the ‘distance travelled’ by a participant rather than purely at where they have ended up.

Standardising approaches to evaluation is also important at a bigger, quantitative level. One of the aims of Audience Finder is to set comparable questions across the whole sector; without this, it is difficult to create meaningful benchmarks. Ratings questions, for example, need to ask about the same elements and use the same scales. This makes comparison possible in all sorts of ways – not just between organisations in a geographic or sectoral cluster, but between different types of organisation, or for the same organisation year on year.

Having said this, it’s important not to get too bogged down in methodology; it’s much better to do some evaluation even if it isn’t perfect. A few simple questions can elicit a great deal of useful information. For example, The Mill Road Winter Fair in Cambridge is an annual community arts festival with a large number of volunteers. Every year the volunteers are asked the same three questions, one of which is ‘How can we improve what we do next year?’ Approximately 50% usually respond with a range of excellent suggestions, many of which have been implemented at subsequent events.

Many people work in the arts and cultural sector because it is an inspiring, magical, mysterious, emotional and energising sector – qualities which are somewhat intangible and difficult to evaluate. But organisations can be creative and imaginative with evaluation. The Museum of Modern Art, New York’s ‘I went to MoMA and …’ is a wonderfully simple way of gaining feedback. It’s open but consistent: visitors draw pictures and diagrams or make statements, and MoMA then shares these on its website.

If you want to go further, the work of Alan Brown in the USA on the ‘intrinsic impacts’ of culture shows how it is possible to measure the ways in which people might be changed by an arts experience. Brown’s studies investigate the impact of arts and culture at individual, group and societal levels, researched rigorously over time. He is not deterred by the intricacies of this process, stating:

If you can describe something, you can measure it. It took a long time to work out that no matter how abstract something is, if it can be described, then questions can be drafted that would elicit responses to offer an insight into the process

His ‘arc of engagement’ doesn’t ‘dumb down’; on the contrary, it demonstrates how powerful the arts are to the people who see, hear and feel them. By showing the effect, it enriches understanding of the connection between artist and audience.

Reporting

Evaluation reporting is dependent on the nature of the project and the people who need to see the results. However, there are a few principles worth noting.

Firstly, good evaluation reports combine summative and formative elements: that is, they mix the reporting of numbers and outputs with an assessment of their implications and recommendations for the future. Torbay Council’s evaluation of its summer events from 2013 does this well. It is clear and open, and outlines the ways in which the organisations involved can benefit from the evaluation. It is not an evaluation which sits on a shelf or serves only to make funding decisions; it’s a useful, shared document which addresses key questions for the area.

Secondly, good evaluation separates reporting from advocacy for the project. An organisation may want to make a case for the worth of its work and disseminate its outcomes, but ideally this shouldn’t be confused with the evaluation itself. Advocacy can draw on the evaluation, but the evaluation should aim to be objective and unbiased in assessing what has happened.

Finally, let’s not forget that evaluation should serve a purpose, and to do this we need to find the right place for it in the work. The late Dragan Klaic once said that ‘the problem with the Brits is that they are obsessed with evaluation; you can never go to any conference or event in the UK without someone coming up to you and asking you to fill in a feedback form’. However, the point he really wanted to make was that he wouldn’t have minded if it had made any difference; instead, every conference he went to was ‘just as dreadful as the last one’!

The goal is not just to analyse the world but to change it.



This article is a revised version of an earlier essay, ‘Sustaining Cultural Development’, by Jonathan Goodacre (Gower, 2013).