AVOIDING NEW PROGRAMME OR SERVICE FAILURES

Taking risks is an essential element of success. In the commercial world, many business titans, from Henry Ford to Richard Branson, experienced bankruptcy and came back from it. In the US, up to 90% of new businesses fail, with 10% of those failures occurring within the first 12 months. The mantra is that without risk we cannot succeed, and without failure we cannot learn.

While the financial devastation that such business titans often leave in their wake, whether for their customers, family and friends, or employees and suppliers, is usually underplayed, there is undoubtedly some truth in this. However, in our sector this mantra comes with a few caveats: clients, funders, politicians, interest groups, media outlets, and taxpayers generally have little tolerance for failure. And in child welfare, and out-of-home care (OOHC) in particular, programme and service failures can have adverse consequences for clients, sometimes devastatingly so.

Here are six common types of failure that I see and some thoughts on how to avoid them:

  1. Problem definition failures: In our sector a sense of optimism is vitally important, as is valuing people’s capacity to change. However, it is critical to have a clear understanding of the problem you are trying to solve, the varying perspectives on its importance, the ways in which it presents, and its potential impacts. When drawing on evidence from overseas, take particular care with terms and contexts. Be thorough in analysing the extent to which this solution will address this problem, for whom, and in what circumstances; social programmes rarely deliver outcomes to the extent indicated during a budget application and approval process, and I would struggle to think of more than one example where expectations have actually been exceeded.

  2. Timing failures: Some good programmes and services fail because the timing was wrong. ‘Catching the wind’, often reflected in political, media, public or sector support, or being opportunistic, has almost become a prerequisite. However, it is not the only potential timing failure; monitoring the changing environment, and the extent to which you, your organisation, or the wider sector can deliver the change amid other competing priorities, is also important. Another aspect of timing is being realistic about outcome (and indeed implementation) timeframes. If positive outcomes are not going to be apparent for 5-10 years, this needs to be explicitly identified at the outset, accompanied by a realistic assessment of how much any proxy indicators will actually tell you; otherwise it’s likely just an excuse for poor performance.

  3. Design failures: While we’ve largely moved on from “build it and they will come”, developing the right programme and service design to meet an identified need, while balancing policy intent, drawing on expertise in its various forms (including care-experienced people), and assessing existing evidence, is tricky and complex. Take particular care with core assumptions and how key components link together, as what we intuitively think ‘should’ work often simply doesn’t. Some design weaknesses may only become apparent once a programme or service has been scaled up; during a pilot, for example, a committed and respected manager or team can often make a poorly designed programme work, only for it to then disappoint in the hands of others, or fall apart altogether.

  4. Implementation failures: Although design and implementation tend to be more integrated activities than in the past, and there is perhaps more recognition today of the importance of (and the time and support needed for) effective implementation, for me this is where the learning really happens and where we can see positive change happening. Unfortunately, one particularly obvious failure that I come across again and again is a service or programme not actually being fully implemented in the first place! Whether it’s challenges that could often have been anticipated, around budget timelines, staff recruitment, engagement of key individuals outside the programme or service, key staff leaving, or other competing priorities, too many new programmes or services simply do not work as intended because they have either not been fully implemented, or have been fully implemented so recently that there has been insufficient time to see any positive results.

  5. Monitoring and evaluation failures: While outcome (aka summative) evaluations are important, I’m a strong proponent of early monitoring and of evaluations that help a new programme or service to ‘find its feet’ (aka formative evaluations); learning years down the track that critical programme or service components aren’t working as intended is simply not good enough. Too few of our new programmes and services have strong monitoring and project management systems in place to support ongoing high-quality decision-making.

  6. Self-awareness failures: Individually and as an organisation, what was your last programme or service failure? Has there been an honest acknowledgement of it? What did you learn, how has that learning been captured, and what will you do differently next time? Is there also a strong understanding of your strengths and weaknesses, and of the capabilities you will need?