How do you know how creative your employees are?
In our last blog, we looked at ways to make the working environment more conducive to creative thinking.
But how do you measure creativity? How can you determine just how creative your employees are, or how successful your attempts to make the working environment conducive to creative thinking have been? You might think that assessing creativity is a contradiction in terms: how can you measure something that is intangible and has no boundaries?
This question has perplexed psychologists and psychometrics researchers for decades. Many papers have been written on the topic, and they all start with the challenge of defining creativity. There is disagreement about what constitutes creativity: is it only radical new thinking, or can children’s artwork be considered creative? Mark Batey, of the Psychometrics at Work Research Group at Manchester Business School, says, “Many researchers have adopted the ‘new and useful’ definition of creativity (Mumford, 2003); which suggests that a creative product is that which is deemed to be novel or original and useful or adaptive. Another area of agreement has involved the concept that creativity may refer to a person (or persons), processes, products, and also the environmental press (Rhodes, 1961/1987).”
It’s interesting to note that our notion of creativity as the original product of an individual is a predominantly Western perspective. The Western construct of creativity includes novelty and utility, as described above, whereas the Eastern view sees creativity as an expression of self-growth or personal truth. The ancient Greeks thought creativity arose from divine inspiration (and therefore could not be measured). Modern tests of creativity tend to concentrate on the Western concepts of originality and utility.
Subjective vs. objective measures
There are numerous creativity assessments widely available today, and just as many opinions on which methods are the most valid and reliable. Researchers are divided on the most effective means of evaluating creativity in an individual. However, despite the variety of tests on the market, measurement methods can be classified into two broad categories, subjective or objective, according to the tools used.
Subjective methods, as the term suggests, rely on an individual’s own assessment of their creativity, or that of a close observer – for example, an answer to the question, “What have you done that might be considered creative?” Other subjective methods include direct fieldwork observation, surveys, interviews and written documents. It can be argued that we know ourselves better than anyone else can, and that an observer might miss subtleties in our thinking or the frequency of our creative behaviour. While you might expect self-reports to exaggerate creativity, studies show that subjective measures often correlate closely with objective tests. However, subjective tests are prone to “social desirability bias”: the tendency to answer questions in a way that will be perceived favourably by others. A recent study comparing subjective and objective measures of creativity found a smaller variance in the subjective results, suggesting that desirability bias was present.
Objective tests often measure “divergent thinking”: the ability to generate ideas – alternative uses for an everyday object, say – that differ from the majority of responses. This can be scored quantitatively. By contrast, “convergent thinking” is the ability to reach a single, correct solution to a problem, as in most knowledge-based tests or exams. Problem solving in the corporate environment generally rewards convergent thinking, but innovation comes from divergent thinking.
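To make the idea of quantitative scoring concrete, here is a minimal sketch of how responses to an alternative-uses task might be scored. The function name, the rarity threshold and the scoring rules are simplified illustrative assumptions of ours, not the rubric of any published instrument: it counts fluency (how many ideas a person produced) and originality (what share of their distinct ideas were statistically rare across the whole sample).

```python
from collections import Counter

def score_alternative_uses(responses, rarity_threshold=0.05):
    """Illustrative scoring of an alternative-uses (divergent thinking) task.

    responses: dict mapping each participant to the list of uses they named.
    Returns {participant: (fluency, originality)}, where fluency is the number
    of ideas given, and originality is the fraction of that participant's
    distinct ideas named by fewer than `rarity_threshold` of all participants.
    """
    n = len(responses)
    # Count how many participants mentioned each idea (once per person).
    idea_counts = Counter()
    for uses in responses.values():
        idea_counts.update(set(uses))

    scores = {}
    for person, uses in responses.items():
        distinct = set(uses)
        fluency = len(uses)
        rare = sum(1 for idea in distinct
                   if idea_counts[idea] / n < rarity_threshold)
        originality = rare / len(distinct) if distinct else 0.0
        scores[person] = (fluency, originality)
    return scores
```

In practice, real instruments use large norm samples and trained raters; with a tiny sample like the one below, the rarity threshold has to be relaxed for any idea to count as rare.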
Evaluation by supervisors and experts
Evaluation by supervisors or experts is often presented as another form of objective assessment. Several scales have been developed to enable a supervisor to measure a subordinate’s creativity. However, a supervisor’s perspective, even when guided by a standard tool, is inevitably subjective. Unconscious bias may creep in, based on liking or disliking the individual, or on qualities the supervisor perceives, rightly or wrongly. For example, an employee who pays close attention to detail and works methodically may be assumed to lack creativity, so the supervisor may score them lower than a more flamboyant colleague for a similar idea or output. A study by Amy Randel, Kim Jaussi and Anne Wu found that supervisors can be poor judges of their employees’ creativity, with little correlation between supervisor ratings and objective measures of creativity.
An expert is someone with specialised knowledge in a particular field. In the context of evaluating creativity, the expert should not have a supervisory relationship to the person being assessed. Examples of expert evaluation include reviewers of patents or research papers. Expert evaluation is generally considered to be among the most objective, and often uses quantitative measures, i.e. something that can be scored (such as number of patents).
Interpretation of results
There are risks in using subjective and objective methods interchangeably; however, a strategic mix of methods may offer the most reliable results. Scholars in the field acknowledge that there is considerable room for improvement in the effectiveness of creativity assessments, but agree that a mixed-methods approach provides the most comprehensive view of an individual’s creativity.
Whatever tests you use, interpreting the results is a complex process and not a task for those without specialised knowledge and training. If you’re going to invest time and money in measuring your employees’ creativity, make sure you get the best return on your outlay by employing the services of professional psychometric assessors.
Find out more
We can help you measure creativity in your teams and your employees. If you’d like to know more, contact us on 012 940 6300 or firstname.lastname@example.org.
Randel, A., Jaussi, K. and Wu, A., “When Does Being Creative Lead to Being Rated as Creative? The Moderating Role of Perceived Probability of Successfully Bringing Ideas to a Supervisor’s Attention”, Creativity Research Journal, Volume 23, 2011.