Exploratory Case Studies: African Monitoring and Evaluation Systems
When decision makers wish to use evidence from monitoring and evaluation (M&E) systems to inform their choices, a demand for M&E is generated. When the capacity to supply M&E information is high but the capacity of decision makers to demand quality evidence is low, supply and demand are mismatched. In this context, Picciotto (2009) observes, “monitoring masquerades as evaluation”. Building on this observation, this paper seeks to answer the question: What evidence is there, from the six cases in this volume, that African governments are developing stronger demand for evidence from M&E systems?
The argument presented here is that although demand for evidence is increasing, monitoring remains dominant in the six cases of African M&E systems in this volume. There are nuanced attempts to align monitoring systems with emerging local demand, although these systems still respond primarily to donor demands. There is also evidence of increasing demand through the implementation of government-led evaluation systems. A persistent problem is that the development of the M&E systems is not yet conceptualised within a reform effort to introduce a comprehensive results-based orientation to the public services of the countries concerned. Results concepts do not yet permeate the planning, budgeting and M&E systems of the cases. In addition, the results-based notions that are applied in the systems appear to generate incentives that reinforce upward accounting or contrôle, to the detriment of more developmental uses of M&E evidence.
These arguments are based upon an analysis of three indicators, drawn from the existing literature (Kusek and Rist, 2004; Mackay, 2007; Plaatjies and Porter, 2011), of endogenous demand for M&E: (i) well-positioned individual and institutional champions across the system; (ii) incentives that link performance data, monitoring information and evaluation recommendations to results-oriented resource allocation; and (iii) the commissioning of appropriate evaluations and the use of their recommendations, rather than a focus on monitoring.