Performance Management Plan Development Experience and Lessons Learned

The LMG Afghanistan Project (LMG-AF) improves the capacity of the Ministry of Public Health (MoPH) of Afghanistan to govern, oversee, plan, manage, supervise, monitor, and evaluate the scale-up of access to high-quality Basic Packages of Health Services (BPHS) and Essential Packages of Hospital Services (EPHS), particularly for those with the highest health risk.

A team of program managers, project leadership, and USAID technical experts developed output-level indicators (e.g., number of people trained, number of reports distributed) to track project results. Health systems strengthening projects can find it challenging to develop outcome- and intermediate-level indicators because the effects of interventions on the functioning of the system can take time to appear. In such situations, it becomes important to innovate in developing relevant indicators, but with a clear and technically sound approach. The LMG-AF team had to go this route for several reasons:

  1. Service delivery improvements are incremental. All LMG-AF activities contribute to the overall project goal, but through several intermediate steps.
  2. Time is short. The project is slated to end in 2014, and we had to select outcome indicators that we thought we could influence.
  3. Capacity building has a long trajectory. Gathering indicators that objectively capture the application of skills and competencies or the implementation of strategies is difficult.

We developed composite indices to measure the outcomes of the project activities. For each index indicator, a number of relevant (proxy) sub-indicators were selected, and each sub-indicator was assigned a score based on its importance. For example, to measure Health Management Information System (HMIS) distribution and data use, the following sub-indicators were used.

HMIS Data Distribution and Use Index (sub-indicator weights in parentheses):

  1. Number of quarterly HMIS reports produced and disseminated to all 34 provinces (8)
  2. Number of MoPH technical departments trained on data use (10)
  3. Number of results conferences held to provide recommendations to MoPH leadership and donors for the health retreat (10)
  4. HIS annual report produced and distributed to all 34 provinces (6)


The rationale was this: based on the activities and outputs under this area, if quarterly HMIS and annual HIS reports were distributed to all provinces, and MoPH departments were trained in data use and brought together to examine the data and results, departments could make decisions with greater clarity. Together, these outputs were assumed to contribute to the distribution, dissemination, and use of data by the MoPH.

The composite score measures the extent to which the index indicator has been achieved; the score assigned to each sub-indicator is its ‘weight.’ The calculation for this index indicator is shown in the following table. The baseline composite score for this indicator is 14 out of a possible 34. Measured at different points in time, this indicator can be used to track evidence-based decision making at the MoPH.

| Sub-Indicator | Target | Assigned Score (Weight) | Result (Proportion Achieved) | Score Achieved (Weight × Result) |
|---|---|---|---|---|
| Number of quarterly HMIS reports produced and disseminated to all 34 provinces | 4 | 8 | 1 | 8 |
| Number of MoPH technical departments trained on data use | 10 | 10 | 0 | 0 |
| Number of results conferences held to provide recommendations to MoPH leadership and donors for the health retreat | 1 | 10 | 0 | 0 |
| HIS annual report produced and distributed to all 34 provinces | 1 | 6 | 1 | 6 |
| Total | | 34 | | 14 |
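The weighted scoring in the table can be sketched in a few lines of code. This is an illustrative sketch only; the data structure and function names are not from the project's actual tooling. Each sub-indicator contributes its weight multiplied by the result, where the result is the proportion achieved (0 to 1):

```python
# Composite index scoring: each sub-indicator carries a weight and a
# result expressed as a proportion achieved (0.0 to 1.0).
# Names and structure are illustrative, not the project's actual tooling.

sub_indicators = [
    # (description, weight, proportion achieved at baseline)
    ("Quarterly HMIS reports disseminated to all 34 provinces",  8, 1.0),
    ("MoPH technical departments trained on data use",          10, 0.0),
    ("Results conferences held for MoPH leadership and donors", 10, 0.0),
    ("HIS annual report distributed to all 34 provinces",        6, 1.0),
]

def composite_score(indicators):
    """Sum of weight * proportion achieved across sub-indicators."""
    return sum(weight * result for _, weight, result in indicators)

total_possible = sum(weight for _, weight, _ in sub_indicators)
baseline = composite_score(sub_indicators)
print(f"Baseline composite score: {baseline:.0f} of {total_possible}")  # 14 of 34
```

Repeating the same calculation with updated proportions at later reporting points yields a comparable score, which is what allows the index to show movement toward the outcome over time.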

Lessons learned:

  • Innovate and test. The field of health systems strengthening is relatively new, and there are few globally endorsed indicators. We used the data we had and combined existing indicators to come up with an innovative way to measure the outcomes of our work.
  • Bring research methods into development of indicators. A careful and systematic approach helped us develop the appropriate level of indicators.
  • Review resources and consult with experts. We reviewed external resources and spoke to experts within the project, within the country, and globally to help develop these indicators.
  • Brainstorm outcomes as a team. Technical brainstorming with our team and with the global LMG Project resulted in a well-developed PMP that was accepted by everyone. The two months invested in this process, though a relatively long time, helped to generate questions as well as consensus.

The process of brainstorming on the indicators, composite scores, and data collection for baseline and revisions at different levels was a rigorous exercise. These valuable discussions can help projects reach clarity on what kinds of results can be expected, and in what time frame.

One way to monitor the “use of data” would be to conduct follow-up surveys or interviews with the end-users of data. This was not possible due to time and budget constraints. We laud the efforts of our LMG Afghanistan team that smartly and creatively used existing data in a resource constrained setting to demonstrate the contribution of our work to intended project outcomes.


Also in this edition:

Building Local Capacity One Indicator at a Time by Donal Harbick and Dr. Paul Waibale

Capturing In-Country Specifics and Inter-Country Diversity in a Global PMP: Approaches and lessons by Meghan Guida and Dr. Reshma Trasi

Data for Design: Using Data to Improve the Virtual Leadership Development Program by Mariah Boyd-Boffa, Elizabeth Duncan McLean, and Dr. Reshma Trasi