With wonder and pride, you look at the whiteboard covered with new ideas written on coloured sticky notes. The hard part of a group creativity session (most people call it brainstorming) is over, isn't it? Generating many ideas is often difficult and cognitively exhausting, but it is only one face of the creative process: the divergent phase. The other one, the convergent phase, is often overlooked and taken for granted, yet before any actual implementation we must first recognise and select the best idea or ideas from the pool of generated ideas.
The common assumption is that 'I can spot a good idea when I see it'. This view was also shared by creativity researchers, who for many years thought that brainstorming participants would be able to identify the most creative ideas themselves. If this were true, brainstorming sessions would end right after the idea generation phase: the best available option would be clearly recognised by everyone, like a flashing sticky note on the whiteboard, and everybody would happily agree.
Too bad we know the actual world is different: usually there is no clear agreement about the best available option, and sometimes the selection process is made harder by people trying to favour their own or their boss's pet idea (even when it is far from brilliant!). Indeed, spotting the best idea amongst a vast number of just-average ideas is a difficult task: research has demonstrated that the idea selection procedure is often ineffective and can easily lead to suboptimal results, even when the idea generation session before it was highly effective (Rietzschel, Nijstad, & Stroebe, 2006; 2010). Recent research suggests that the most brilliant innovators have the ability to diverge, that is, to generate many different and even crazy ideas, but also to converge and focus their efforts on a few promising ones (Zabelina & Robinson, 2010).
So it's clear that we must not only put effort into generating many potentially good ideas, but also carefully select which ideas are worth moving forward with. Choosing an idea when working in a group can be a complex task, due to well-known group interaction dynamics (team cohesion, groupthink, social influence and conformism, personality factors, etc.; see for example Brown, 1988).
In this article, we will present and analyse seven group selection techniques, all in the spirit of 'the robust beauty of majority rules in group decisions' (Hastie & Kameda, 2005).
The 'Choose the best' technique is straightforward: each member votes for their preferred option, and the winner is the option that receives the most votes.
This voting system is extremely simple, but it requires some individual cognitive effort, since only a single vote per member is allowed. It works best when there are many members in the group and not many alternatives.
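As a toy sketch of the tally (the idea labels and ballots below are invented for illustration), a plurality count can be computed like this:

```python
from collections import Counter

def plurality_winner(votes):
    """Plurality rule: each member casts exactly one vote; most votes wins.
    Note: Counter.most_common breaks ties by insertion order, so a real
    session would need an explicit tie-breaking step (e.g., a run-off)."""
    tally = Counter(votes)
    return tally.most_common(1)[0]

# Six members, one vote each, for three competing ideas.
votes = ["idea A", "idea B", "idea A", "idea C", "idea A", "idea B"]
print(plurality_winner(votes))  # ('idea A', 3)
```

The tie-break caveat matters in practice: with one vote per member and few members, ties are common, so agree on a run-off rule before voting.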
In dot-voting, each member has a number of votes, typically 10, represented by dot stickers, which they place next to the options they like. Members can place all their votes on a single option or spread them across multiple options. The winner is the option with the most votes.
This voting system is usually better received than the 'Choose the best' system, due to its lower cognitive effort: if you are not 100% certain that an idea is the best, you can split your votes across multiple options. Moreover, this voting system reflects how our judgement works: most of the time we like more than one idea, and we have different levels of confidence about their usefulness or originality. This system works with any number of participants and can be used even when the number of alternatives is high. Just remember that when the group is composed of very few participants, there is a risk that some members will fall in love with their pet idea and go all-in, possibly biasing the final outcome.
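Tallying dot-votes is a simple sum per option. In this invented example, three members each allocate 10 dots, and the last one goes all-in on a pet idea:

```python
def dot_voting_winner(ballots):
    """Sum each member's dot allocation per option; most dots wins.
    ballots: one dict per member mapping option -> dots placed on it."""
    totals = {}
    for ballot in ballots:
        for option, dots in ballot.items():
            totals[option] = totals.get(option, 0) + dots
    winner = max(totals, key=totals.get)
    return winner, totals

# Three members, 10 dots each; member 3 bets everything on option C.
ballots = [
    {"A": 6, "B": 4},
    {"A": 2, "B": 5, "C": 3},
    {"C": 10},
]
print(dot_voting_winner(ballots))  # ('C', {'A': 8, 'B': 9, 'C': 13})
```

Note how a single all-in ballot lets C overtake the options that more members actually preferred: exactly the small-group bias described above.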
Simple ranking is another straightforward method: members are asked to rank all available options. Each member orders the ideas and assigns a number to each: their preferred idea is ranked '1', the second best '2', and so on. Afterwards, the individual ranks are summed up per option, and the winner is the option with the lowest total score. Alternatively, the winner is the option with the lowest median score.
This method is difficult and tiring to use when there are many options. Moreover, the calculations can be lengthy, especially with many participants, so this voting system is only reasonably usable with small groups and few options.
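The sum-of-ranks aggregation (with the median as the optional variant mentioned above) can be sketched as follows; the three members' rankings are invented for illustration:

```python
from statistics import median

def ranking_winner(rankings, use_median=False):
    """rankings: one dict per member mapping option -> rank (1 = best).
    The winner is the option with the lowest summed (or median) rank."""
    options = rankings[0].keys()
    aggregate = median if use_median else sum
    scores = {o: aggregate(r[o] for r in rankings) for o in options}
    winner = min(scores, key=scores.get)
    return winner, scores

# Three members each rank the same three options from 1 (best) to 3.
rankings = [
    {"A": 1, "B": 2, "C": 3},
    {"A": 2, "B": 1, "C": 3},
    {"A": 1, "B": 3, "C": 2},
]
print(ranking_winner(rankings))  # ('A', {'A': 4, 'B': 6, 'C': 8})
```

With `use_median=True` the same ballots also give A (median ranks 1, 2, 3), but the two rules can disagree when opinions are polarised.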
Smart ranking is a mixture of the dot-voting and simple ranking methods. Members have five voting dots numbered from 1 to 5; the dots are weighted, so the '5' dot carries a value of 5 points, and so on. Members assign the weighted dots to their 5 favourite ideas, with only one dot allowed per idea. For each idea, the voting dots are summed up, and the winner is the option that receives the highest score.
This method is quick and less demanding than simple ranking. Moreover, it reduces the 'in love with my pet idea' bias of the dot-voting system. It also works with any number of participants and alternatives. If there are fewer than 5 options, reduce the number of voting dots accordingly (e.g., with only 4 options, use dots weighted from '4' to '1').
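A sketch of the tally, assuming each member has already placed their five weighted dots (the ballots are invented for illustration):

```python
def smart_ranking_winner(ballots):
    """ballots: one dict per member mapping their 5 favourite ideas to a
    weighted dot (5 = top choice ... 1 = fifth choice, each used once).
    The idea with the highest summed weight wins."""
    totals = {}
    for ballot in ballots:
        # Sanity check: the weights 5..1 must each be used exactly once.
        assert sorted(ballot.values()) == [1, 2, 3, 4, 5]
        for idea, weight in ballot.items():
            totals[idea] = totals.get(idea, 0) + weight
    return max(totals, key=totals.get), totals

# Two members pick their top 5 out of six candidate ideas.
ballots = [
    {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1},
    {"B": 5, "E": 4, "A": 3, "C": 2, "F": 1},
]
print(smart_ranking_winner(ballots))
# ('B', {'A': 8, 'B': 9, 'C': 5, 'D': 2, 'E': 5, 'F': 1})
```

Because no member can place more than 5 points on any single idea, going all-in on a pet idea is impossible by construction.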
Each group member is asked to rate how much they like each alternative on a 1-to-5 Likert-type scale, with 1 meaning 'not at all' and 5 meaning 'very much'. Individual evaluations are then pooled, and the winner is the alternative with the highest mean value.
This method is somewhat boring for the users, who must evaluate and score every alternative. Calculating the total score for each alternative is also computationally demanding for the group, unless it's done automatically by software. As with simple ranking, this voting system works best with small to medium-sized groups and when there are only a few options. As a variation, it's possible to rate other attributes (such as originality, feasibility or potential impact) instead of overall idea appeal.
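Pooling the ratings is just a mean per alternative; the Likert scores below are invented for illustration:

```python
from statistics import mean

def rating_winner(ratings):
    """ratings: dict mapping each alternative to the list of 1-5
    Likert scores it received, one per member. Highest mean wins."""
    means = {alt: mean(scores) for alt, scores in ratings.items()}
    winner = max(means, key=means.get)
    return winner, means

# Four members rate each of three alternatives on a 1-5 scale.
ratings = {
    "A": [4, 5, 3, 4],
    "B": [3, 3, 4, 3],
    "C": [5, 2, 4, 4],
}
print(rating_winner(ratings))
```

Here A wins with a mean of 4, even though C received the single highest score: averaging rewards consistent appeal over one enthusiastic rater.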
In multiple attribute evaluation, each group member is asked to evaluate each idea on a set of attributes established in advance, using a 1-to-5 Likert-type scale. Useful attributes for this kind of evaluation typically include originality, feasibility, cost to implement, and potential impact. Since not all attributes have the same importance, you can assign relative weights to them: higher for more relevant attributes (for example, originality), lower for less important ones (for example, cost to implement). Individual scores are then pooled, and an overall value for each idea is computed as a weighted mean. The alternative with the highest value is the winner.
This is a compensatory decision rule, meaning that when the final values are computed, strength on one attribute (for example, high originality) can compensate for weakness on another (for example, long implementation time). Multiple attribute evaluation is usually considered one of the most accurate forms of decision-making, but it comes at a high price, since it is also the most cognitively and computationally demanding selection method on this list. Having software do the computation automatically certainly comes in handy. The hardest part, however, is selecting which attributes are worth considering and deciding their relative weights: the effectiveness of this method depends critically on the quality of those criteria.
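A sketch of the weighted-mean computation, with invented attributes, weights and scores (two raters per cell). One assumption to note: every attribute is scored so that higher is better, so 'cost' here is really affordability; scales where higher means worse would need to be reversed first:

```python
from statistics import mean

def multi_attribute_winner(evaluations, weights):
    """evaluations: {idea: {attribute: [1-5 ratings, one per member]}}.
    weights: {attribute: relative weight}. Each idea's overall value is
    the weighted mean of its per-attribute mean ratings; all attributes
    are assumed to be scored with 'higher is better'."""
    total_weight = sum(weights.values())
    values = {
        idea: sum(weights[a] * mean(r) for a, r in attrs.items()) / total_weight
        for idea, attrs in evaluations.items()
    }
    return max(values, key=values.get), values

# Originality counts three times as much as cost (i.e., affordability).
weights = {"originality": 3, "feasibility": 2, "cost": 1}
evaluations = {
    "A": {"originality": [5, 4], "feasibility": [3, 4], "cost": [3, 3]},
    "B": {"originality": [3, 3], "feasibility": [5, 4], "cost": [4, 4]},
}
winner, values = multi_attribute_winner(evaluations, weights)
print(winner)  # A: its high originality compensates for its weaker cost score
```

This makes the compensatory behaviour visible: B is more feasible and cheaper, but A's originality, weighted most heavily, carries the decision.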
A simpler variant of this method uses just two attributes, originality and feasibility, without any weighting. Results can then be plotted on a two-dimensional chart, making it very easy to spot the ideas that are both original and easy to implement.
This idea selection method was developed by the Mindiply team, and it is like playing a strategic card game with ideas. The full description of the Scroop technique is available here.
This selection method is somewhat cognitively demanding, since users must plan their moves strategically, but at the same time it's fun to play, thanks to its game elements and mechanics (gamification; see e.g., Deterding et al., 2011; Hamari, Koivisto, & Sarsa, 2014). It works best with small groups (4-6 members) and can be used even when there are many alternatives available, as the game forces you to play with a smaller subset of ideas (typically 5).
Remember to combine or remove very similar ideas. Failing to do so can result in vote-splitting, that is, a scattering of votes that reduces the chances of any of the similar ideas winning;
Any group selection technique works best when voting is anonymous, otherwise there is a risk of getting biased results due to social pressure and conformity (Asch, 1951);
Sometimes a combined approach works best: you can use a simpler strategy, such as simple/smart ranking or dot-voting, to reduce a huge number of alternatives to a shortlist, and then use a more complex strategy, such as multiple attribute evaluation or the Scroop technique, for the final decision.
Things can go awry unexpectedly, so it's always a smart move to have a plan B ready!