Basing Stabilisation Efforts on Evidence of What Works: Lessons from Afghanistan
The current context of the Middle East demands complex, multifaceted strategies that merge hard, soft and smart power. The UK Government has continued its commitment to supporting interventions in fragile states, as evidenced in the launch of the GBP 1.3 billion Conflict, Stability and Security Fund. It is therefore vital to understand the structures that have to be in place to ensure money is effectively targeted.
Monitoring and evaluation of collective stabilisation efforts is extremely challenging; there have been limited examples of successful approaches. Most approaches for fragile states are predicated on the notion that a complex series of concurrent mutually reinforcing interventions needs to be carefully sequenced and integrated to foster stability. This implies that there is some overarching stabilisation strategy and plan. It also implies that there is a process in place to monitor and evaluate activity to support the continued application of these approaches. Crucially, interventions need to be designed to be capable of modification during implementation, reflecting changes in the context and environment (changes captured through monitoring).
There is limited evidence and experience of how to implement such monitoring and evaluation (M&E) successfully. This paper draws on some recent experience and seeks to identify attributes of a successful approach. It captures the experience of the author through 10 years of working in governance, security and justice in fragile and conflict affected states.
A Short History of Monitoring and Evaluation in Stabilisation
In late 2007, the UK government commissioned a short internal review of about USD 700 million of investment in stabilisation activity undertaken in Iraq from 2003 to 2007. Funds had been used for everything from fish and poultry farms, roads, bridges, power and water distribution schemes to spraying date palms and revitalising the entire date growing industry. The diversity and scale of activity was quite bewildering.
The intent of the review was to establish an evidence base to support a narrative of progress. It rapidly became apparent that there had been no consistent strategy or plan, and that there was almost no effort to systematically evaluate what was working, or what failed, and most importantly why.
It was also apparent that key actors, including the Multi-National Division South East, responsible for security in the south east of Iraq from 2003 to 2009, UK and US development agencies and the government of Iraq had very different perspectives on what should be done. The political pressures on all actors were intense and the incentive was toward action now. Consequently, it was impossible to discern common strategy, let alone a plan, and no way of evaluating impact.
The result of these conflicting pressures and levels of understanding, together with the need to accommodate a variety of actors, is frequently a basket of disparate measures. Any one of these might be justifiable on its own merits, but collectively they fall short of an integrated strategy merging hard, soft and smart power – one that fosters a level of stability permitting progress toward a lasting political settlement and setting the conditions for peace, growth and prosperity.
Effective stabilisation is about the consistent integration and synchronisation of a range of different activities over time. It is understandable that major nations and coalitions are attracted to the idea of overarching M&E programmes that cover an entire theatre, country or region and that attempt to monitor and evaluate various projects with a common goal of improving stability.
Recent examples of such programmes in Afghanistan include the UK government’s Helmand Monitoring and Evaluation Programme (HMEP), the Australian government’s Uruzgan Monitoring and Evaluation Programme (UMEP) and the US government’s Measuring Impacts of Stabilization Initiatives (MISTI). Currently, some donors are looking at possible country-level M&E programmes following on from the success of MISTI. Donors are also considering the feasibility of programmes that can monitor and evaluate the impact of stabilisation support funds, either regionally or globally.
Experiences of the last 20 years do suggest that there are some guiding principles and approaches in the M&E of stabilisation efforts that are likely to deliver a better outcome than others.
Conflicting Agendas, Narratives and Audiences
The stabilisation landscape is complicated by the multiplicity of donors and nations which are a necessary feature of coalition work, each bringing its particular interests and prejudices to the table. There is the classic military–civilian rift, exacerbated when military actors seize on the importance of non-kinetic interventions, sometimes as a substitute for the tougher challenge of getting the security situation right. In doing so they can attempt to railroad host nation governments, security forces, international aid agencies and other actors into strategies they know will be unlikely to succeed.
By contrast, civilian actors are often seen as slow to respond to a dynamic situation and indifferent to the sacrifices of national and international forces. NGOs are understandably, but perhaps not always realistically, concerned about militarisation of aid, the compromising of humanitarian space and being co-opted into supporting overtly political agendas that are inconsistent with their charters.
All stakeholders – be they the putative governments of conflict states, the governments of nations committed to stabilisation, development agencies, NGOs or civil society – have their own narratives to buttress and their own constituencies to inform and influence, whose support they must maintain in order to build and sustain their political licence to operate.
Frequently, the need to cement support at home or ensure coalition cohesion takes precedence over the need to understand what is and is not working in the concerned region or country and to apply the evidence to inform future activity. At times, the last thing decision makers want or need is evidence or information that suggests that not everything they have done has been successful, or that a partner’s intervention is counterproductive; their dilemma is that to publicly admit mistakes is to deny themselves the ability to continue their intervention. One manifestation of this is that even where effective and impartial monitoring and evaluation is undertaken, the evidence gleaned is often jealously guarded, and so the potential benefits in terms of lessons learned and applied are denied to the wider community of practitioners.
It is unrealistic to expect that every decision can or should be taken on the basis of comprehensive evidence and analysis. Paralysis by analysis is an enduring feature of the response to low level conflict and insurgencies in countries as diverse as Afghanistan, Iraq, Pakistan, Somalia and elsewhere. There are times when decisions need to be taken and acted on quickly and perhaps modified later. Effective and comprehensive monitoring and evaluation is one tool that can facilitate this approach.
Features of Effective Monitoring and Evaluation – Conspicuous in their Absence
Effective M&E is normally based on a deep understanding of context and a strong evidence-based theory of change, consistently applied. This needs to be linked in turn to a carefully chosen set of indicators that allows for monitoring of outputs, outcomes and impact to provide a comprehensive measure of progress and achievement at every level of programming.
Most importantly, the M&E framework allows for the testing of the assumptions that underpin the theory of change – and therefore enables lessons learning and adaptation. Given that stabilisation interventions invariably operate in highly complex political and security contexts, it is this feature of an effective M&E approach that is most important. Good M&E is one of the enablers for an adaptive learning approach to programming.
Of course, not all successful examples of complex adaptive approaches are founded on deep contextual understanding and a strong evidence-based theory of change. In unstable and violent areas, where there are big political considerations at stake driving the pace of operations, it is often all but impossible to develop a sufficiently deep understanding either to craft the optimal approach or to formulate a theory of change based on strong evidence. Instead, it is necessary to accept that at best it may only be possible to arrive at an interim framework that identifies the most likely entry points and best approaches, and to have in place a robust M&E approach that allows for subsequent development of both the theory of change and the approaches adopted.
A good example of an organic adaptive process was seen during the Iraq surge in late 2007. A range of pilot interventions ultimately evolved to deliver a highly effective short term impact in the Anbar Awakening. In effect, a strategy evolved from a series of rapid experiments. Some of these were abject failures, others demonstrated clearly the drivers of local level political legitimacy and what was required for the US military strategy to win the hearts and minds of the tribal leadership. None of that would have been possible if leaders and decision makers had waited until they had all the relevant evidence in hand to help determine the decisions they made. Today it is easy to deride the US strategy in Anbar, but at the time it transformed a failing campaign. The Anbar example is interesting because it evolved, reflecting a permissive command environment within which licensed experimentation within boundaries was encouraged, as well as an assessment process that identified and scaled up activity that appeared to be working. Critically, it also occurred within the context of a single organisation, so it didn’t have to accommodate the interests of multiple actors or the incentives and interests of the Iraqi government.
Get the Context or the Context Will Get You
Contextual understanding is the foundation of everything that follows. As Stanley McChrystal, former US commander in Afghanistan, said in 2011, “We didn’t know enough and we still don’t know enough.” Both Afghanistan and Iraq offer excellent examples of well-meaning interventions that failed to achieve the impact they deserved because they were not designed in the light of a deep contextual understanding.
Poor contextual understanding can lead to disastrous decisions early in a stabilisation effort from which it is hard to recover. In Iraq, the UK government and military consistently underestimated, misunderstood or chose to ignore the complexity of Iranian efforts to influence the Shia population in Basra, in southern Iraq. The UK discounted the multitude of Iranian non-kinetic investments – a combination of soft and smart power – and lost effective control over the security situation in 2006/07 as it became apparent to citizens that the UK narrative lacked substance.
In 2006, the planners of Task Force Helmand (ISAF’s military command in Helmand Province, Afghanistan) also failed to consider how their deployment to northern districts such as Sangin, Musa Qala and Nowzad would impact on a complex tribal political economy where the population perceived the UK to be taking sides in a long-standing tribal struggle for influence and control of resources. It took ISAF five years and a significant commitment of US military forces to recover from a decision that might have been very different had the context been fully appreciated. Such miscalculations are not unique to the UK or US military, or coalition partners – they also occur in the myriad of civilian agencies and NGOs.
Look at the Whole Picture – Not Just the Piece that Interests You
For some of those working in Helmand in 2008, one of the most useful references on how Pashtun society actually worked was still Fredrik Barth’s excellent work from the 1950s. This, combined with some interesting contemporary work from the Tribal Liaison Office, an NGO, provided a good theoretical framework to start from. The UK and US governments had also commissioned some useful work on drivers of radicalisation. What was less obvious was an understanding of the extent to which the principles of socio-political organisation outlined by Barth might apply in Afghanistan’s Pashtun belt, how they might be translated into practice in Helmand, and what this meant for the practical application of stabilisation and counterinsurgency strategies and operational plans. In short, the analysis of the political economy of the area was inadequate. We simply didn’t understand enough of the context.
The vogue approach to understanding context in the UK is focused on the practical application of the concept of political economy analysis, a brief summary of which is encapsulated in a UK Department for International Development “how to” note. The basic idea is simple enough: understand the formal rules of the game and the informal realities of doing business, analyse the stakeholders in terms of their incentives, motivation and relative influence, and then use this information to inform the selection and implementation of strategies that are likely to work with the grain of what is uncovered.
Political economy analysis needs to encompass everything from micro-local to national and regional politics. To work effectively in tribal and sectarian societies, it is fundamentally important to understand how political authority is acquired, maintained and utilised to support the interests of key stakeholders and sections of society. This goes to the heart of understanding what contributes to political legitimacy in the eyes of key interest groups.
A strong monitoring and evaluation framework should always be anchored in first class political economy analysis. Critically, such an analysis enables us to understand how information is transmitted and received – especially important in societies where there is limited media penetration. HMEP was able to use techniques based on social network analysis to better understand how information was disseminated and how de facto power and influence were exercised through a network of hujras (or guest houses) across the various districts. This information was then used to inform the development and implementation of initiatives to support local governance which built on existing practice and custom but also offered a transitional pathway toward the model of democracy set out in the constitution.
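HMEP’s actual network methods are not documented here, but the underlying idea can be sketched simply. In a social network where nodes are hujras and an edge links two hujras whose regular attendees overlap, even a basic measure such as degree centrality points to the likely hubs through which information flows. The network below is entirely hypothetical and for illustration only:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Count connections per node in an undirected network.

    In a social network analysis of information flow, the
    best-connected nodes are candidate hubs for dissemination.
    """
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return dict(degree)

# Hypothetical network: an edge links hujras whose attendees overlap.
edges = [
    ("hujra_A", "hujra_B"),
    ("hujra_A", "hujra_C"),
    ("hujra_A", "hujra_D"),
    ("hujra_B", "hujra_C"),
    ("hujra_D", "hujra_E"),
]

centrality = degree_centrality(edges)
hub = max(centrality, key=centrality.get)
print(hub, centrality[hub])  # hujra_A is the best-connected node
```

In practice an M&E team would weight edges by strength of association and use richer measures (betweenness, eigenvector centrality), but the principle – identify where information actually concentrates, then target engagement there – is the same.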
In Helmand, in contrast to a comprehensive political economy analysis, ISAF’s counterinsurgency narrative tended to limit analysis to a small number of key leaders and the machinations of the formal institutions of the state. In doing so it missed one of the most valuable insights needed to inform programming: in a land of weak institutions and volatile security, the real centres of gravity – contested by the Government of Afghanistan, the Taliban and ISAF – were the enduring tribal networks that controlled central Helmand’s agricultural economy. Consequently, interventions often targeted the wrong constituencies, which reduced their impact considerably.
Finally, it is necessary to validate and understand key background data, including the location, distribution and size of settlements. This has proved particularly challenging in the Afghan context, but is a feature of working in places as diverse as Pakistan, Colombia and Somalia. Considerable care needs to be devoted to ensuring that individual settlements are clearly identifiable and that basic demographic data is as accurate as possible given the circumstances. Technology is helpful in this respect, and aerial surveys combined with the use of GPS and GIS technology have made this process much easier than it was in the past. Almost by definition, most of the fragile and conflict affected states we are currently concerned with are characterised by very poor information on demographics, and often basic mapping data is considerably out of date and difficult to obtain.
The HMEP programme devoted significant resources to reconciling data collected with existing maps and survey information. Detailed social network analysis conducted by HMEP was preceded by a comprehensive geospatial validation exercise to ensure that data was anchored to an accurate map that reflected the understanding of the local population in terms of where people live and how communities were grouped and chose to associate.
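The mechanics of such a geospatial validation exercise are not spelled out in the HMEP material, but one core step is anchoring each GPS-tagged survey record to a known settlement, flagging readings that fall too far from any gazetteer entry for manual review. The sketch below uses a standard haversine distance and hypothetical coordinates; the settlement names, coordinates and 2 km threshold are all illustrative assumptions:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_to_settlement(point, settlements, max_km=2.0):
    """Attach a GPS reading to the nearest known settlement;
    return (None, distance) if nothing lies within max_km,
    so the record can be flagged for manual review."""
    name, dist = min(
        ((s, haversine_km(point[0], point[1], lat, lon))
         for s, (lat, lon) in settlements.items()),
        key=lambda t: t[1],
    )
    return (name, dist) if dist <= max_km else (None, dist)

# Hypothetical gazetteer of settlement coordinates (lat, lon).
settlements = {"village_A": (31.60, 64.35), "village_B": (31.65, 64.40)}
print(match_to_settlement((31.601, 64.352), settlements))
```

Discrepancies between where enumerators report working and where the gazetteer says a settlement sits are themselves useful evidence, since they expose either stale mapping data or protocol drift in the field.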
Have a Plan; Remain Engaged
In Helmand, in 2005, an original UK stabilisation strategy was delivered through a cross government planning team. By 2006 the original strategy had gone out of the window as UK forces deployed rapidly out of central Helmand to northern districts, and civilian actors found themselves rapidly overtaken by a deteriorating security situation that constrained their ability to operate.
The Helmand Provincial Reconstruction Team (HPRT) grew in strength and capability, but it was not until late 2008 that there were sufficient civilian resources to provide a credible and consistent input to stabilisation planning. There followed a series of efforts to develop a joint civilian and military multi-national multi-agency stabilisation plan, and the first fully formed plans appeared in early 2009. It was not until 2010 that we saw the first fully worked out product of a joint analysis and planning process. When the HMEP team deployed in 2010, there was neither a strategy nor plan capable of sustaining a tight conventional monitoring and evaluation framework. Instead, it took a series of evolutions of HMEP culminating in the development of a transition readiness template before the potential of M&E could be realised.
Given the very different perspectives of key stakeholders, it is remarkable they were able to agree on a joint planning process that survived significant differences of opinion and operational challenges. One of the more successful features of the subsequent stabilisation effort in Helmand was the way in which senior leaders were able to overcome differences and seek to provide a consistent approach over time.
The stabilisation environment is crowded, and any overarching programme to monitor and evaluate interventions within a particular theatre will only be successful if it is treated as a joint and shared endeavour with the people who are delivering and implementing interventions. Given the pressures on stakeholders and the challenges of the environment, it would be unreasonable for an M&E provider to expect full co-operation and disclosure from all implementers from the outset – even when high level direction is consistent and unambiguous.
It is necessary for those charged with implementing countrywide, theatre-specific or fund-specific monitoring and evaluation to work closely with implementers and to establish a strong element of trust from the outset. This is a huge challenge, requiring an approach that is able to flex to accommodate different institutional cultures and agendas and to refrain from overt judgments as to the relative value of particular activities.
Establish Value – Earn Trust
Effective M&E in a fragile state multi-agency stabilisation context cannot simply be imposed on all actors from the outside. A programme needs to be negotiated and supported at a senior leadership level, but more importantly it must demonstrate value before it can earn trust. Credibility will be hard won and very easily lost.
One of the biggest challenges experienced by the team implementing HMEP in 2010 was to overcome a mixture of scepticism, indifference and downright hostility on the part of various implementers and stakeholders. Key to this was positive messaging that M&E was there not to identify poor or under-performing programmes but to assist implementers in identifying what might work and help them to find ways to deepen the impact of their interventions. Understanding the political economy of the donor and international environment proved as important as understanding the local context.
There are lessons that can be derived from approaches such as Problem-Driven Iterative Adaptation (PDIA) to help in achieving ownership from implementers. PDIA has a strong emphasis on joint problem analysis. An M&E team can add significant value in this process because it is likely to bring broad evidence-based contextual understanding and an awareness of what others are doing in the same space, and because the team can jointly agree indicators and instruments with implementers and can input to programmes at their design and inception phases rather than having to retrofit the M&E approach during implementation.
Insights are only useful if they are widely disseminated and are available in time to influence programme design and implementation. This presents a number of challenges. Stakeholders operate on a number of IT platforms and with varying security policies, which can result in different standards and thresholds for the classification of data. One of the earliest tasks in implementing a common M&E programme is therefore establishing a mechanism that allows suitably anonymised data to be shared across platforms. In practice this is harder than it appears, especially when civilian actors are attempting to work with military, diplomatic and other government counterparts.
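What “suitably anonymised” looks like in practice is not specified in the source, but a common minimal approach is to strip direct identifiers and replace the respondent identifier with a salted one-way hash, so that records remain linkable across waves without being traceable to individuals by recipients who lack the salt. The field names and record below are hypothetical:

```python
import hashlib

def pseudonymise(record, salt, drop_fields=("name", "phone")):
    """Strip direct identifiers and replace the respondent ID with a
    salted hash before cross-platform sharing. The pseudonym is stable
    for a given salt, so waves can still be linked, but it is not
    reversible by anyone who does not hold the salt."""
    out = {k: v for k, v in record.items() if k not in drop_fields}
    digest = hashlib.sha256((salt + str(record["respondent_id"])).encode()).hexdigest()
    out["respondent_id"] = digest[:16]
    return out

# Hypothetical survey record.
record = {"respondent_id": 1042, "name": "A. Respondent",
          "district": "Nad-e Ali", "q1": 4}
shared = pseudonymise(record, salt="keep-this-secret")
print(shared["district"], "name" in shared)
```

Note that hashing alone is not full anonymisation – small cells in the data (a rare district/occupation combination, say) can still re-identify people, which is why thresholds for suppressing small subgroups usually accompany this step.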
One important benefit of adopting an open platform and making as much data as possible available to as wide an audience as possible is that it does much to help establish the credibility of the M&E provider.
Be Responsive, Yet Maintain Consistency Over Time
Just as the external environment changes rapidly over time in an unstable and fragile state, so M&E providers need to flex and adapt their approaches without compromising the quality and integrity of their data sets. It is remarkable how few stabilisation contexts are monitored consistently over time – this is an unglamorous but necessary activity that is sometimes hard to get funded. A good example of an enduring data set in a fragile state is the Asia Foundation’s national perception survey in Afghanistan, which is used extensively by a wide range of actors but has many well documented limitations, including that it cannot be disaggregated by district.
An interesting example of flexible and responsive implementation can be found in the way in which the HMEP team adapted its approach to sampling and survey work to accommodate a fluctuating security situation in some of the more challenging and insecure districts. Once a sampling framework had been established, it became apparent that enumerators would not be able to access all the planned sampling points, and the programme was exhausting its supply of reserve sampling points. One work-around was to group similar districts and pool them to maintain the statistical integrity of the data set. Thus, as security became more variable, it was possible to continue to sample within a defined group of northern districts and deliver data that was reliable.
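The pooling logic can be sketched as follows. The district names are real Helmand districts mentioned elsewhere in this piece, but the interview counts, grouping and the minimum-sample threshold are invented for illustration; which districts are “similar enough” to pool is an analytical judgment grounded in the political economy analysis, not a mechanical rule:

```python
# Hypothetical completed-interview counts per district for one wave.
completed = {"Lashkar Gah": 180, "Kajaki": 48,
             "Musa Qala": 0, "Nowzad": 55, "Sangin": 62}

# Districts judged similar enough to pool when access fails.
northern_group = ["Kajaki", "Musa Qala", "Nowzad", "Sangin"]

MIN_SAMPLE = 100  # illustrative threshold for reliable reporting

def reportable_units(completed, group, min_n):
    """Report districts individually where the sample allows;
    for the pooled group, report only the combined total."""
    units = {d: n for d, n in completed.items()
             if d not in group and n >= min_n}
    pooled = sum(completed[d] for d in group)
    if pooled >= min_n:
        units["+".join(group)] = pooled
    return units

print(reportable_units(completed, northern_group, MIN_SAMPLE))
```

The trade-off is explicit: pooling sacrifices district-level resolution in exchange for estimates that remain statistically defensible when individual districts become inaccessible mid-wave.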
HMEP tried to be responsive in a number of ways. The programme was structured around a series of quarterly research waves. Prior to each wave, the team consulted with the end users of data and implementers of programmes to adjust survey instruments to reflect the interests and emerging concerns of the implementers. This process not only ensured the M&E programme remained relevant but was also important in securing the implementers’ cooperation.
Clearly, there is a balance to be struck between adjusting the data collection to reflect changing interests and ensuring consistency to allow for inter-wave comparison and the identification of long term trends.
One way of ensuring responsiveness over time is to build in the capability to conduct in-depth focus studies that seek to answer particular research questions that have a direct bearing on an aspect of stakeholder intervention. In 2011, HMEP was able to study political networks in one district, which helped to validate donor approaches to support local representative bodies. In 2012, the programme conducted a focus study on the expansion of opium cultivation in Helmand’s desert areas. By understanding who was occupying land, how they came to obtain rights to cultivate, how their operations were financed and where they obtained public services, it was possible to contextualise data from other sources. In turn, this challenged a number of assumptions around the role of the Taliban in the desert areas and the implications of the ISAF drawdown for security. Finally, also in 2012, HMEP looked in depth at the fiscal sustainability of investments in infrastructure and what it would cost to support continued provision of basic services after ISAF’s drawdown.
Be Honest on Limitations of Data
An interesting feature of the public perception data gathered through HMEP was the public’s consistently high levels of confidence in the Afghan Police. Unsurprisingly, there was a temptation for a number of stakeholders to quote headline percentages that appeared to support their particular narrative on the police. The HMEP team was at pains to provide an honest assessment of social desirability bias in perception survey work and to say that individual data points were far less reliable than long term trends (which in effect discount preference bias). HMEP made it clear that it would be wrong to conclude that the police were held in universal high esteem, but that the surveys could show whether public confidence in the police was rising or falling over time and point to why that might be.
HMEP attempted to use regression analysis to help explain how different variables combined to impact on public perceptions. This approach has potential to help explain how and why particular activities might impact on public perceptions, but it needs to be used with care. Firstly, the base sample needs to be statistically significant and fully representative – not an easy task in most stabilisation contexts. Secondly, it is hard to isolate variables in a crowded environment where stakeholders are struggling to share all the information on what they are doing and where. Finally, independent and dependent variables can only be successfully identified by an M&E team that has a deep understanding of the context.
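HMEP’s actual models are not described in detail here, so the sketch below is a deliberately minimal one-variable illustration of the idea: estimating how a hypothetical “access to public information” score relates to a perception-of-government score. The data is synthetic and noise-free purely to keep the example checkable; real perception data would need the representativeness and variable-isolation caveats noted above, and a multivariate model:

```python
def ols(x, y):
    """One-variable ordinary least squares: returns (slope, intercept).

    slope = cov(x, y) / var(x); intercept = mean(y) - slope * mean(x).
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Synthetic wave-level data: information-access score vs.
# perception-of-government score (invented for illustration).
info_access = [1, 2, 3, 4, 5]
perception  = [3, 5, 7, 9, 11]

slope, intercept = ols(info_access, perception)
print(slope, intercept)  # 2.0 1.0
```

A positive, stable slope across waves is the kind of evidence behind the correlation claimed below between public information availability and perceptions of local government; in practice the team would also report confidence intervals and control for confounders rather than read a raw slope causally.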
Regression analysis was useful in Helmand in that it did help the planners to weigh and prioritise activities over time. In particular, the analysis demonstrated a strong and enduring correlation between the availability of quality public information and positive perceptions of local government and security bodies. It also highlighted the importance of interventions in key public services that were perceived to be drivers of government legitimacy.
Data is Interesting – But How You Use Data is What Counts
M&E programming in fragile states has limited utility if it is not used to inform the design, timing and implementation of interventions. A challenge in Afghanistan was that many donors had pre-determined programmes that were rolled out pretty much regardless of what monitoring and evaluation might show. Under such circumstances, M&E has limited benefit - it can provide a snapshot of how a programme is being received, but unless the programme in question has been structured to allow for revisions and mid-course corrections there is a limit to how M&E insights can be applied.
Complex adaptive stabilisation programming will not work unless it includes near real-time M&E and unless the actors owning interventions have the ability to insulate their implementation teams from political pressures. In Anbar in 2007, General Petraeus was able to create just enough time, space and a permissive environment to find what worked. Arguably, in Marja in Helmand in 2010, General McChrystal did not have time on his side and therefore had to run with what was available off the shelf, with a less satisfactory outcome.
HMEP provided a range of insights into how people’s perceptions, including perceptions of the army, police, local government and informal governance mechanisms, changed over time and, critically, how public priorities evolved. Some research products were extremely helpful in understanding what was happening on the ground – for example the expansion of opium cultivation into the desert areas or the impact of ISAF base closures on security. What was harder was changing the nature of the interventions in light of the emerging evidence as to their impact.
Sometimes this was due to stakeholders disputing the validity of the data. More often it was to do with models of programming or decision making that were inflexible or because stakeholders were trapped in narratives that precluded any substantive change in direction irrespective of evidence suggesting that their actions were capable of improvement. Under such circumstances, there are limitations on what adaptive M&E can deliver.
The overwhelming lesson is to design and implement M&E programming at the same time that interventions are being designed and go through inception. A flaw in HMEP was that it did not appear until 2010, by which time many of the stabilisation approaches in Helmand had become quite fixed. It is inevitable in a multi-donor stabilisation context that the timing will never be perfect for all actors and some element of retrofitting will be necessary, but it is certainly a factor that needs to be borne in mind for the future.
Integrity Matters – Protecting the Process
Given the well documented challenges of working in fragile states it is very easy for the research product to be compromised because data collection protocols have not been observed and enumerators have cut corners or failed to apply their training consistently.
HMEP outsourced some survey work and data collection to a reputable international survey company established in Afghanistan and utilised by a number of other major donors and organisations. In time, it became apparent to the HMEP team that there were occasions where the company was not consistently applying the agreed data collection protocols. The solution adopted was to grow a local network of trained validator teams and apply an external validation process to ensure the integrity of survey processes. A key lesson learned was that it was relatively easy to recruit, train and deploy independent validation teams that ensured that the sampling frameworks were correctly applied. (Subsequently in Khyber Pakhtunkhwa, Pakistan, the UK funded Aitebaar Programme has taken the M&E function in-house as opposed to outsourcing data collection. This has significantly improved the quality of the M&E process as well as providing a much stronger research capability to inform programming.)
Field researchers face many challenges. Aside from action by insurgents, there is also understandable suspicion from host nation security, intelligence agencies and police forces. The best solutions have been to invest time and effort in preparing the ground with such agencies well in advance to ensure consents are in place and to be as transparent as possible in dealing with authorities, whilst ensuring that research is conducted in accordance with ethical principles and that the confidentiality of data is guaranteed.
Care needs to be taken to ensure that survey instruments are designed to be administered quickly and that enumerators are selected to be representative of the communities in which they are working. Considerable attention needs to be paid to small details such as dress and appearance. Use of smart technology, including GPS-equipped mobile phones, is helpful, but it needs to be borne in mind that possession of such equipment in some contexts can be misinterpreted by governmental and non-governmental actors. In practice, there are relatively few locations where monitoring by suitably selected and trained survey teams is completely impossible, provided appropriate measures specific to the context are adopted.
With careful planning, clear understanding and preparation of the political environment, experience from Afghanistan suggests that comprehensive M&E can be made to work and can provide valuable insights even in dynamic multi-national, multi-agency contexts, provided basic ground rules are applied. As the complexity of the challenges facing the international community in countries such as Syria and Iraq increases, and resources committed continue to rise, it is essential that these lessons are not forgotten.