Monitoring and Evaluation Framework

The monitoring and evaluation (M&E) framework describes the plan for systematically gathering, analyzing, and interpreting data in ways that serve the needs of accountability and program improvement. Monitoring can be defined as “the ongoing, systematic collection of information to assess progress towards the achievement of objectives, outcomes and impacts,” while evaluation is “the systematic and objective assessment of an ongoing or completed project, programme or policy, its design, implementation and results, with the aim to determine the relevance and fulfilment of objectives, development efficiency, effectiveness, impact and sustainability”. These definitions imply that the M&E framework should outline how monitoring and evaluation will be carried out systematically, by what means, by whom, how frequently, and for what purposes (e.g., accountability, sustainability, service improvement, cost accounting).
The framework should state an overall, integrated set of ECCE system objectives and goals in a measurable fashion. A set of core measures or indicators should be proposed to provide information to policymakers, the public, and service providers about what progress is being made and in what areas progress is not yet observable. One such conception, organized by various types and purposes of indicators, is shown in Figure 2, below.
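As a purely hypothetical illustration of what stating goals “in a measurable fashion” can mean in practice, the sketch below (in Python) pairs one invented objective with an indicator definition, baseline, target, and reporting cycle. None of these names or values is drawn from HECDI or any Member State’s framework; they are placeholders for the kind of structure a core indicator set would need.

    # Hypothetical illustration only: one way a measurable ECCE objective
    # might be recorded. All fields and values are invented for this example.
    from dataclasses import dataclass

    @dataclass
    class Indicator:
        objective: str        # the system goal the indicator tracks
        measure: str          # a precise, verifiable definition of what is counted
        baseline: float       # value at framework adoption
        target: float         # value to be reached by target_year
        target_year: int
        reporting_cycle: str  # e.g., "annual"

    example = Indicator(
        objective="Expand equitable access to pre-primary education",
        measure="Share of children aged 3-5 enrolled in a licensed ECCE service",
        baseline=0.42,
        target=0.70,
        target_year=2030,
        reporting_cycle="annual",
    )

The point of the structure is that each element is checkable: a reader can verify what is counted, against what baseline, and by when.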
There are a number of internationally recognized indicators germane to the ECCE enterprise, and many others can be identified in the benchmarking systems of countries with more advanced systems. One such resource developed by UNESCO is the Holistic Early Childhood Development Index (HECDI), discussed in the introductory parts of this document. HECDI comprises a basket of indicators across the ECCE spectrum. Its technical guide is included in the Annex, and it should be seriously considered, as a number of advantages accrue to adopting already tested, validated, and accepted indicators or indicator systems. In addition, the World Bank-sponsored Toolkit for Measuring Early Childhood Development in Low- and Middle-Income Countries (Fernald, Prado et al. 2017) is an excellent compendium of resources. In short, a combination of standardized measures and measures unique to the Member State’s (MS) system design will be needed.
This will require that some form of an organized data system be developed that allows stakeholders to determine the coverage of eligible participants over time (e.g., who is being left out?) and the degree of individuals’ participation in specific ECCE system services, so that program effects can be associated with participation rates. Key concepts, outputs, and outcomes will need to be defined in a way that supports valid and reliable measurement. The M&E plan need not go into detail in this respect, but it should address these matters pointedly.
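To make this concrete, the sketch below shows how even a minimal participation record could support the coverage question posed above (who is enrolled versus who is eligible). The record structure, field names, and figures are hypothetical, not a prescribed design; a real system would additionally require identity management, privacy safeguards, and validated population denominators.

    # A minimal sketch, not a specification: hypothetical participation
    # records and a region-level coverage calculation.
    from dataclasses import dataclass

    @dataclass
    class ParticipationRecord:
        child_id: str      # pseudonymized identifier, never a direct name
        region: str        # administrative area matching the denominator
        service_type: str  # e.g., "preschool", "home_visiting"

    def coverage_by_region(records, eligible_population):
        """Share of eligible children per region with any recorded participation."""
        enrolled = {}
        for r in records:
            enrolled.setdefault(r.region, set()).add(r.child_id)
        return {region: len(children) / eligible_population[region]
                for region, children in enrolled.items()
                if eligible_population.get(region)}

    records = [
        ParticipationRecord("c001", "North", "preschool"),
        ParticipationRecord("c001", "North", "home_visiting"),  # same child, two services
        ParticipationRecord("c002", "North", "preschool"),
        ParticipationRecord("c003", "South", "preschool"),
    ]
    print(coverage_by_region(records, {"North": 4, "South": 2}))
    # {'North': 0.5, 'South': 0.5}

Distinguishing unique children from service contacts, as in this sketch, is what allows program effects to be associated with participation rates rather than with raw enrolment counts.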

Figure 2: Conceptual Rendering of Various Indicator Sets Relevant to ECCE

Member states will need to examine their capacity to conduct such data collection and analysis, identify where that capacity needs to be enhanced, and determine the means of doing so. Decisions will be necessary about whether to centralize the M&E function within or outside of government, or to distribute this role among various parties. In any case, the framework should address how confidentiality, data quality and integrity, and independence and objectivity will be preserved. By necessity and by design, MSs will want M&E to be a somewhat distributed competence and behavioural practice, especially in recognition that ECCE system progress and outcomes will be in the hands of the multiple parties responsible for ensuring success for each child.
Monitoring regimes alone will likely not suffice for assessing large questions about outcomes and payoffs to society. To address those needs, the framework will need to outline a longer-term process of evaluation, advisably performed by an independent and respected body. Evaluation can focus on interim and medium-term results but ultimately will need to answer large questions such as:

  • What outcomes have been achieved for children given the costs?
  • What effects has the ECCE system had on social outcomes and economic development?
  • What alterations to the system are warranted to make it more effective?
  • Where has the system succeeded and failed at reaching its goals (or were the goals the correct ones to begin with)?

Most of all, the M&E system is of little investment value if it goes unused by critical parties in service provision, policy, and decision making. The framework should discuss how this information will be made usable and useful (e.g., analysed, interpreted, presented, disseminated), by or for whom, how frequently, and in what form (including public form). This may require establishing a timeline of regular reporting and discussion opportunities. Questions surrounding the use of research- and evaluation-based information by program administrators and designers, users, and policymakers are not new. Ultimately, data-based and research-based knowledge must successfully compete with other sources of information in human decision making. A comprehensive review of this topic in the context of healthcare suggests that factors such as the timing of the availability of findings, relevance to problems encountered, and actual collaborations between researchers (or evaluators or other scholars) and policymakers or other decision makers increase use (Oliver, Innvar et al. 2014).


Probes

Describe

What is the current state of data collection, monitoring, and evaluation in areas related to ECCE as conducted by any entity (governmental or non-governmental) within country? What such activities are being undertaken, for what purposes, and how is the information being used, by whom? How do these practices differ across ECCE service sectors?
Which of these projects within country or internationally may be readily useful for ECCE system design and planning?
What current policies and laws are in place in terms of data collection and storage, monitoring and evaluation, and the use of such for program improvement, accountability, and decision-making?
How are data collection, monitoring, and evaluation currently financed and how much is being spent?

Assess

How can existing forms of data collection, monitoring, and evaluation practices be adapted or built upon for ECCE use? Where are these existing practices strong and weak? What gaps exist in current practice?
What technical skills and capacities of designers, service providers, and other stakeholders need to be enhanced to ensure integrity of data collection and its use in program design and improvement? How will that capacity development be provided?
Is there a logical organization, or set of possible organizations, within or outside government that can serve as the headquarters of the monitoring and evaluation activity, or should that activity be distributed?
Do (previously discussed) frameworks include an appropriate emphasis and level of detail about how monitoring and evaluation will be incorporated? Are ECCE system goals and objectives measurable and appropriate from the perspective of monitoring and evaluation?

Benchmark

What exemplary approaches to monitoring and evaluation can be identified within country or internationally that can be used as models to be adapted or expanded?
What existing sources of domestic or international indicators (e.g., HECDI) or standards for the development of such can be identified as models and built upon?
What processes for designing and constructing a monitoring and evaluation activity exist within country or internationally that can be adapted and used as models?

Plan and Design

What major questions concerning the operation and implementation of an ECCE system would ideally be answered by a newly designed monitoring and evaluation activity?
What are the major priorities in terms of the capability and function of the monitoring and evaluation activity? Do those include: ensuring that laws and regulations are adhered to; tracking individual children’s participation over time; measuring child progress and outcomes; assessing the quality of services provided; assessing service provider performance; assessing progress toward equity goals; measuring progress in implementing the ECCE system country-wide; assessing whether objectives and milestones are being met; and assessing overall outcomes in a form useful for policymakers?
How can the monitoring and evaluation activity promote continual ECCE program and service improvement and evidence-based management and decision-making? What steps can be taken in ECCE program design to ensure that the monitoring and evaluation activity is useful to, and used by, managers, decision-makers, and policy-makers?
What planning processes, policies, practices, and organizational arrangements need to be put in place to ensure that a high-quality monitoring and evaluation activity is designed and implemented?
How should cultural values and differences inform data collection, monitoring, and evaluation approaches? How can monitoring and evaluation be used to ensure equity of access and outcomes?
Where are institutional conflict and friction (e.g., principal-agent problems) likely to develop as a result of changed responsibilities and resources? Where are obstacles likely to be encountered?
How can regular collaborative contact between program designers, service providers, decision makers, policy makers, evaluators, researchers, and other experts be established for the purpose of continual improvement of the monitoring and evaluation activity and the ECCE system as a whole?
Is this framework consistent with the contents of the other frameworks?