What is your experience?
I recently posted this question on two LinkedIn group forums – IEDP’s own and ICEDR. The feedback was wide-ranging in terms of respondents’ roles, experience and locations, but their views generally pointed in the same direction.
Design-in evaluation during the diagnostic and design phase
The overriding observation in response to my question was the critical importance of designing-in evaluation right at the start of the program. Clarity of program objectives is essential to a robust design; these objectives may be behavioural goals or organisational goals that can be achieved through action learning projects. For example, Dr John O'Connor, who developed the Results Assessment model at Ceres Management, observed: ‘Knowing what data needs to be found, and from which stakeholders, puts you in a compelling position to reflect back to the organisation and to help them make decisions. We may call this the 'Return on Expectations' evaluation model.’
Dr Simon Fletcher, an organisational psychologist at Innovative HR Solutions in the Middle East, noted: ‘I've very rarely seen organisations apply the same discipline to deciding to continue a development programme that they may have used at the procurement stage.’
Shirine Voller of Ashridge, in her excellent article on 'Re-framing Programme Evaluation' in the Ashridge Business School Journal, makes a key contribution to this debate, observing: ‘different stakeholders will have different expectations about the purpose of evaluation, and importantly if purpose isn’t clear and aligned up front, it can lead to confusion and compromise further down the line. Setting stakeholder expectations prior to program design is crucial to enable useful evaluation.’
Dr Andreas Lohmer, Vice Director of the Executive School of Management, Technology and Law (University of St. Gallen), cites an insightful experience at AXA, where an online portal is being deployed to support evaluation. An online evaluation is sent to participants seven days after the program or module has taken place, followed three months later by a so-called "performance evaluation". In addition, AXA University double-checks with participants’ managers whether their participation has had an impact on their daily business and/or the quality of their performance.

A lot of dismissive comments are made about post-program evaluation forms, or ‘happy sheets’, yet business schools throughout the world seem wedded to the ubiquitous 1-5 evaluation scale. Happy sheets do serve a purpose in collecting immediate data: they give real-time feedback on how something has landed. However, a good program design is more analogous to a good movie; the effects and impact can last days, months and sometimes years. There is clear merit in post-program evaluation, and it can be enhanced, as illustrated by Emma Simpson of RBS Talent Management. In partnership with the Center for Creative Leadership, Simpson evaluated a recent leadership program to define 'what next', feeding the diagnostic for the next phase. She comments: ‘this was a really useful exercise beyond the traditional review of happy sheets, and a great catalyst for us to pick up with respondents and create a group wide story of how the learning lives on.’
Fiona Stewart, Managing Director at Leadership Talent Australia, cites a robust practice with her clients: pre-program assessments (capturing both the participant's and the line manager's view) are compared with post-program assessments (again participant and line manager) conducted about three months after the intervention. Stewart comments: ‘Generally our interventions are longitudinal so overall we are spanning at least 6 months or so. We have also started playing with testing for statistical significance on our data and the early findings are very strong.’
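To make the pre/post comparison concrete, here is a minimal sketch of how such paired ratings might be checked for statistical significance. The data are hypothetical and the paired t-test is chosen purely for illustration; the article does not specify which test Stewart's team uses.

```python
# Minimal sketch: paired significance test on pre/post program ratings.
# Hypothetical data; in practice these would come from the pre- and
# post-program assessments completed by participants (or line managers).
from scipy import stats

pre_scores = [3.1, 2.8, 3.4, 3.0, 2.9, 3.3, 2.7, 3.2]   # ratings before the program
post_scores = [3.8, 3.5, 3.9, 3.6, 3.4, 4.0, 3.3, 3.7]  # ratings ~3 months after

# Paired t-test: are post-program ratings significantly different from pre-program?
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```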
I am also aware of clients borrowing techniques from marketing and using ‘net promoter’ scores on a 0-10 scale as an evolution of the 1-5 scale. Respondents are categorised into one of three groups: Promoters (9–10 rating), Passives (7–8 rating) and Detractors (0–6 rating), and the net promoter score is the percentage of promoters minus the percentage of detractors.
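As a simple illustration of the scoring arithmetic, the sketch below categorises a set of hypothetical 0-10 ratings and computes the resulting net promoter score:

```python
# Minimal sketch: computing a net promoter score from 0-10 program ratings.
# The ratings here are hypothetical, for illustration only.

ratings = [9, 10, 8, 7, 6, 9, 10, 5, 8, 9]  # e.g. "How likely are you to recommend this program?"

promoters = sum(1 for r in ratings if r >= 9)   # 9-10 ratings
detractors = sum(1 for r in ratings if r <= 6)  # 0-6 ratings
# Passives (7-8) count towards the total responses but not the score.

nps = 100 * (promoters - detractors) / len(ratings)
print(f"Net promoter score: {nps:.0f}")  # % promoters minus % detractors
```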
Measuring real impact – the Success Case Method
Perhaps one of the most significant inputs from the many contributors was the longer-term evaluation process summarised by Dr Marguerite Foxon, who draws on Brinkerhoff’s ‘The Success Case Method’, invitingly subtitled ‘Find Out Quickly What's Working and What's Not’.
In essence the Success Case Method focuses on capturing the stories of change that may have happened following a program or intervention. It uses the stories of individuals participating in an initiative to investigate and understand the roots of their successes, and proceeds on the premise that small successes can lead to greater ones. If only 5 out of 50 participants achieve marked success, a detailed study of what those five are doing could yield instructive results for those who are struggling. A cost-effective option for many organisations is to employ Masters or PhD students to capture these stories of impact; this can produce strong corporate narratives as well as giving a student a robust data source and a potential Masters project.
In summary, my checklist for successful executive development evaluation is: design evaluation in at the diagnostic and design stage; align stakeholder expectations about its purpose up front; capture immediate reactions, but follow up over the subsequent months with participants and their line managers; and use success cases to understand what is working and why.
I am grateful to all the contributors on this topic of evaluation effectiveness and hope my summary may be helpful to those designing and deploying evaluation processes.