MES Blog

Malaysian Evaluation Society

By: Lim Kheng Joo, MES Adviser and Senior Associate at CeDRE International

I had the opportunity to attend the inaugural Asia-Pacific Evaluation Association (APEA) International Evaluation Conference, which was held over five days from 21-25 November 2016 in Hanoi, the capital city of Vietnam. The theme of the Conference was "SDGs: Making a Difference Through Evaluation". There were the usual conference sessions during the first two days, followed by two days of post-conference workshops. I would say the highlight of the conference was the final-day Seminar titled "The Great Evaluation Debate" (GED). The GED was specially tailored and geared towards identifying issues and challenges related to the institutionalisation of evaluation in developing countries. As attendance at this special event was by invitation only, I would like to share some key takeaways with those who missed the GED.

Here are my six key takeaways from GED 2016.

1. A national evaluation policy is necessary, but it need not be legislated.

There was much debate on whether developing countries need national legislation on evaluation in order to ensure the institutionalisation of evaluation. Arguments against such a move drew on the examples of developed countries, e.g. the USA, Canada and European countries, which have no legislation on evaluation and yet in which an evaluation culture has flourished. On the other hand, those in favour argued that developing countries lack the discipline and political will to carry out development programme evaluation and that legislation is therefore the best prescription. The agreed position is that an evaluation policy is necessary and needs to be embedded into other existing national policies and regulations, while legislating the policy remains an option for each developing country, taking into account its local environment and cultural factors.

2. Right strategy with right target group yields the right results.

The current strategy of VOPEs of reaching out to Parliamentarians of their respective nations to create demand for evaluation may not be effective. Efforts to engage and educate Parliamentarians on the importance of evaluation, in order to effect the legislation of a national evaluation policy, may not yield the desired results, as Parliamentarians have many agendas on their minds. Because they are answerable to the constituents who voted them into the national legislative assemblies, they naturally pay more attention to the demands of voters. Notwithstanding constant engagement and advocacy with Parliamentarians, it would be advisable for national VOPEs to also reach out to, engage and educate members of local communities in evaluation capacity development, so that they can become powerful allies in compelling their representatives to demand evaluation and ensure the delivery of desired programme results.

3. Evaluation is everybody’s business!

The demand for evaluation does not belong to international donors and funding agencies, nor does it belong to national governments. Similarly, national VOPEs are not the custodians of the supply of evaluation. It is not an issue of the public sector versus civil society. The professionalisation, mainstreaming and institutionalisation of evaluation in a sustainable manner is everybody's business. Addressing issues on both the demand and supply sides of the evaluation equation requires an integrated multi-stakeholder and multi-sectoral approach. All development actors have to play their respective roles and, at times, may even need to assume multiple roles and responsibilities.

4. Beware of Greeks bearing gifts.

An interesting issue was raised by one of the participants and attested to by others. Some international development agencies make it a point to include evaluation of programmes/projects as one of their funding conditions. While these agencies emphasised the imperative of developing local evaluation capacity and capability, it seems that they are only paying lip service. Most of the time, they would write the Terms of Reference (ToR) in such a manner that only their preferred consultants or evaluation service providers qualified and were selected. So it was not surprising that the same group of consultants/service providers kept being selected for the available evaluation jobs.

5. Government to take charge and be an Evaluation Champion.

In any developing country, the national government is the largest funder of development programmes and projects. Development is its core business. In the quest for excellent public programme performance results, the public sector should take charge and lead the institutionalisation of evaluation. By carrying out this leadership role and responsibility in cooperation and collaboration with other development partners, public sector organisations would be seen as Evaluation Champions, thereby laying a strong foundation for the development of an evaluation culture within and outside their organisations. They would also become exemplary role models for private sector organisations and civil society organisations to emulate.

6. Act with bravery and innovation while retaining convention.

The conduct of the event was a powerful message in itself. The GED kicked off with the presentation of a position paper entitled "Strategies, systems, approaches and tools for sustainable institutionalization of evaluation within developing countries". The presentation took the form of an unrehearsed dialogue between two experienced VOPE representatives, one from the developed North and the other from the developing South. This brave act was followed by a moderated debate session conducted in the conventional panel format. A breakout session in World Café style was another innovation, adopted to effectively elicit the thoughts and viewpoints of participants in a facilitated environment. The value of the GED lies in the summary findings, recommendations and action plan distilled from the facilitated breakout sessions.

Prepared by:

Lim Kheng Joo

Senior Associate

Centre for Development and Research in Evaluation (CeDRE) International

31 January 2017

Integrated Results Based Management: Better SDG results and reporting

One of the most commonly asked questions we get at the Malaysian Evaluation Society is: how can I show the impact my organisation/department/programme is making toward the Sustainable Development Goals? Given that there are 17 global goals with 169 targets between them, this is a very good question.

While Results Based Management (RBM) can be useful for assessing results, this approach has led to a system of global silos: governments, NGOs, programmes and projects are working toward the same goals but are not integrated in their approach. Sometimes silos even appear across organisations. If we're not integrated in our approach to achieving the SDGs, how can we possibly measure our combined impact? The same question can be asked of governments and organisations.

The Solution

Integrated Results Based Management (IRBM) is based on Results Based Management but forms integral linkages between departments, ministries and organisations working toward the same goals. IRBM also looks at the bigger picture, harmonizing development planning, budgeting, personnel performance, M&E, and evidence-based decision making (all of which are components of donor requirements).

This global best practice will push forward the achievement of the SDGs.  Take a closer look at some of the linkages made through the IRBM system:

The IRBM System:

- Systematically cascades national/organisational priorities to relevant contributing lower levels
- Uses a programme/activity approach to assign responsibilities and accountability for shared impact and outcomes between national & lower levels of government, departments within an organization, or across programmes conducted by the same or different organisations
- Integrates all resource use from different levels or areas towards one or more relevant programme areas
- Links policy with priority development needs at grass-root/project levels
- Identifies relevant programs at the same level that jointly contribute to an outcome/impact area
- Establishes contributions of complementary projects under a particular programme to one or more outcomes
- Assesses overlaps and redundancies between different programmes/projects at the same level towards a particular programme outcome/impact area
- Integrates different sources of funding and resource application towards a particular programme
- Manages complementary sources of funding from various contributors towards a programme
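
To make the linkage ideas above more concrete, here is a minimal sketch in Python of how an IRBM-style results hierarchy might be modelled. All names here (Project, Programme, the figures, the funders) are hypothetical illustrations, not part of any official IRBM toolkit: the point is simply that projects under one programme share an outcome, and their funding from different contributors can be integrated at the programme level.

```python
# Hypothetical sketch of an IRBM-style results hierarchy: projects under
# one programme contribute to a shared outcome, and funding from
# different contributors is integrated at the programme level.
from dataclasses import dataclass, field

@dataclass
class Project:
    name: str
    funding: dict  # funder -> amount, allowing multiple funding sources

@dataclass
class Programme:
    name: str
    outcome: str  # the shared outcome/impact area the projects contribute to
    projects: list = field(default_factory=list)

    def total_funding(self):
        # Integrate complementary funding sources across all projects
        total = {}
        for project in self.projects:
            for funder, amount in project.funding.items():
                total[funder] = total.get(funder, 0) + amount
        return total

health = Programme("Primary Health Access", outcome="SDG 3: Good Health")
health.projects.append(Project("Rural clinics", {"Govt": 500000, "NGO": 120000}))
health.projects.append(Project("Community health workers", {"Govt": 200000, "Donor": 80000}))
print(health.total_funding())  # {'Govt': 700000, 'NGO': 120000, 'Donor': 80000}
```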

Because this system is so dynamic, organisations of any size can use it to:

  1. Have better programmatic results
  2. Measure those results in a way that really showcases the good work put in

This system will help link the SDG outcomes to your work, which will inevitably have an impact on those who are most vulnerable.


What are Key Performance Indicators?

A performance measurement tool is not only used to evaluate outcomes; it's also a tool for program management. Here's a simple guide on how to set effective KPIs.

Step 1: Defining KPIs

KPI stands for 'Key Performance Indicator'. KPIs are the key contributors to the success of a goal and should be measurable, quantifiable and adjustable. They are very useful for keeping track of project/program progress throughout the year, and fine-tuning actions related to your KPIs along the way can improve performance dramatically.

For example, if the goal of your project is 'to improve the health status of 5-12 year olds from the Roma population in Belgrade', it will likely consist of several KPIs that contribute to the success of the goal. They might be: 'the extent to which the population has access to primary health care', 'number of diagnoses made among the population', 'number of community health care workers per sq. km', etc.

By monitoring these KPIs throughout your project, you’ll be able to keep track of your results in real time and you’ll be able to act quickly if one of them is not achieving the results you want.  That’s why it’s important for KPIs to always be measurable - they will always provide an indicator of what you have defined to be a successful outcome.
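
As a rough illustration of this kind of real-time tracking, the sketch below records each KPI with a target and a current actual value and flags any that are falling short. The KPI names and numbers are made up for the example; in practice the targets would come from your project design.

```python
# Illustrative sketch: tracking measurable KPIs against targets so that
# under-performance can be spotted and acted on quickly.
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    target: float
    actual: float = 0.0

    @property
    def on_track(self) -> bool:
        # A KPI is on track when its measured value meets or exceeds target
        return self.actual >= self.target

kpis = [
    KPI("% population with access to primary health care", target=80, actual=65),
    KPI("Community health care workers per sq. km", target=1.5, actual=1.6),
]

for kpi in kpis:
    status = "on track" if kpi.on_track else "needs attention"
    print(f"{kpi.name}: {kpi.actual}/{kpi.target} -> {status}")
```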

Step 2: Defining Tools and Methods for Monitoring, Measuring and Evaluation

After establishing KPIs, it's time to define the methods, measures and tools for keeping track of them. Some of the things you'll want to consider are what sources of data can be used to collect information on your KPIs, what method you'll use to collect data (e.g. document review, interviews), how frequently you'll collect data, who is responsible, the budget, etc. Methods, measures and tools will likely vary across KPIs.

Step 3: Make a Monitoring Plan

During this step, you'll want to list the activities needed to monitor each KPI using the methods decided upon in Step 2. It helps to put this into table form and determine how frequently each KPI will be measured. It's also important to foresee the resources required for monitoring and measurement and to assign responsibility to specific team members.
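
A monitoring plan table like the one described can be sketched as simple rows of KPI, method, data source, frequency and responsible person. Everything in this example (the KPIs, frequencies and roles) is hypothetical, and the helper function merely shows how such a table could drive a monthly reminder of what is due.

```python
# Hypothetical monitoring plan, one row per KPI, capturing the method,
# data source, frequency and responsible team member from Steps 2-3.
monitoring_plan = [
    {"kpi": "Access to primary health care",
     "method": "household survey", "source": "field data",
     "frequency": "quarterly", "responsible": "M&E officer"},
    {"kpi": "Diagnoses made among the population",
     "method": "document review", "source": "clinic records",
     "frequency": "monthly", "responsible": "programme manager"},
]

def due_this_month(plan, month_index):
    # Monthly items are always due; quarterly items every third month.
    due = []
    for row in plan:
        if row["frequency"] == "monthly" or (
                row["frequency"] == "quarterly" and month_index % 3 == 0):
            due.append(row["kpi"])
    return due

print(due_this_month(monitoring_plan, month_index=3))
```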


Article written by: Nicole Ristic
