The monitoring and evaluation (M&E) framework describes the plan for how data will be systematically gathered, analyzed, and interpreted in a way that serves the needs of accountability and program improvement. Monitoring can be defined as “the ongoing, systematic collection of information to assess progress towards the achievement of objectives, outcomes and impacts,” while evaluation is “the systematic and objective assessment of an ongoing or completed project, programme or policy, its design, implementation and results, with the aim to determine the relevance and fulfilment of objectives, development efficiency, effectiveness, impact and sustainability”. These definitions imply that the M&E framework should outline how this work will be carried out systematically, by what means, by whom, how frequently, and for what purposes (e.g., accountability, sustainability, service improvement, cost accounting).
The framework should state an overall, integrated set of ECCE system objectives and goals in a measurable fashion. A set of core measures or indicators should be proposed to provide information to policymakers, the public, and service providers about what progress is being made and in what areas progress is not yet observable. One such conception, organized by various types and purposes of indicators, is shown in Figure 2, below.
There are a number of internationally recognized indicators germane to the ECCE enterprise, and many others can be identified in the benchmarking systems of countries with more advanced systems. One such resource, developed by UNESCO, is the Holistic Early Childhood Development Index (HECDI), discussed in the introductory parts of this document. HECDI comprises a basket of indicators across the ECCE spectrum. Its technical guide is included in the Annex, and it should be seriously considered, as a number of advantages accrue to adopting already tested, validated, and accepted indicators or indicator systems. In addition, the World Bank-sponsored Toolkit for Measuring Early Childhood Development in Low- and Middle-Income Countries (Fernald, Prado et al. 2017) is an excellent compendium of resources. In short, a combination of standardized measures and measures unique to the design of the MS will be needed.
This will require that some form of an organized data system be developed that will allow stakeholders to determine the coverage of eligible participants (e.g., who is being left out?) over time and the degree of participation of individuals in specific ECCE system services, so that program effects can be associated with participation rates. Key concepts, outputs, and outcomes will need to be defined in a way that supports valid and reliable measurement. The M&E plan need not go into detail in this respect, but it should discuss these matters pointedly.
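To make the coverage and participation concepts above concrete, the following is a minimal, purely illustrative sketch of the two calculations such a data system would need to support. All record fields (`child_id`, `eligible`, `enrolled`, `sessions_attended`, `sessions_offered`) are hypothetical assumptions, not a prescribed data model:

```python
# Illustrative sketch only: the framework does not prescribe a data model.
# Field names below are hypothetical placeholders.

def coverage_rate(records):
    """Share of eligible children enrolled in any ECCE service
    (its complement answers: who is being left out?)."""
    eligible = [r for r in records if r["eligible"]]
    if not eligible:
        return 0.0
    enrolled = [r for r in eligible if r["enrolled"]]
    return len(enrolled) / len(eligible)

def participation_rate(record):
    """Degree of participation: share of offered service
    sessions a child actually attended."""
    if record["sessions_offered"] == 0:
        return 0.0
    return record["sessions_attended"] / record["sessions_offered"]

records = [
    {"child_id": 1, "eligible": True,  "enrolled": True,
     "sessions_attended": 45, "sessions_offered": 50},
    {"child_id": 2, "eligible": True,  "enrolled": False,
     "sessions_attended": 0,  "sessions_offered": 0},
    {"child_id": 3, "eligible": False, "enrolled": False,
     "sessions_attended": 0,  "sessions_offered": 0},
]

print(f"Coverage: {coverage_rate(records):.0%}")
print(f"Participation, child 1: {participation_rate(records[0]):.0%}")
```

Tracking both rates over time, rather than enrollment counts alone, is what allows program effects to be associated with actual participation.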
Figure 2: Conceptual Rendering of Various Indicator Sets Relevant to ECCE
Member states will need to examine their capacity to conduct such data collection and analysis and identify where that capacity must be enhanced and the means of doing so. Decisions will be necessary about whether to centralize the M&E function within or outside of government, or to distribute this role among various parties. In any case, the framework should address how confidentiality, data quality and integrity, and independence and objectivity will be preserved. By necessity and by design, MSs will want M&E to be a somewhat distributed competence and behavioural practice, especially given that ECCE system progress and outcomes will rest in the hands of the multiple parties responsible for ensuring success for each child.
Monitoring regimes alone will not likely suffice for assessing large questions about outcomes and payoffs to society. To address those needs, the framework will need to outline a longer-term process of evaluation, advisably performed by an independent and respected body. Evaluation can focus on interim and medium-term results but ultimately will need to answer large questions such as:
- What outcomes have been achieved for children given the costs?
- What effects has the ECCE system had on social outcomes and economic development?
- What alterations to the system are warranted to make it more effective?
- Where has the system succeeded and failed at reaching its goals (or were the goals the correct ones to begin with)?
Most of all, the M&E system is of little investment value if it goes unused by critical parties in service provision and policy and decision making. The framework should discuss how this information will be made usable and useful (e.g., analysed, interpreted, presented, disseminated), by or for whom, how frequently, and in what form (including public form). This may require establishing a timeline of regular reporting and discussion opportunities. The question of how research- and evaluation-based information is used by program administrators and designers, users, and policymakers is not a new one. Ultimately, data-based and research-based knowledge must successfully compete with other sources of information in human decision making. A comprehensive review of this topic in the context of healthcare suggests that factors such as the timing of the availability of findings, relevance to the problems encountered, and actual collaboration between researchers (or evaluators or other scholars) and policymakers or other decision makers increase use (Oliver, Innvar et al. 2014).