Three recommendations for conducting effective system-level analysis

In my first article for Doab Development, “Three recommendations for improving USAID system-strengthening projects,” I wrote about the growth of USAID system-strengthening education projects in recent years and some of the differences in implementing these projects compared to direct-service projects.

One issue that I inadvertently neglected was baseline analysis.

As with any project, it is essential to collect information at the beginning of the work. This is necessary for at least two reasons. First is gaining a clearer, more detailed understanding of what specifically needs to be done to achieve project goals. Second is having a starting point from which to chart progress over time.

However, my experience is that collecting baseline data for system-strengthening projects is often more fraught than for direct-service projects because key stakeholders are often reluctant to share information about underlying problems or challenges that need to be resolved. It is one thing to conduct baseline data collection for a direct-service project in which reading and math scores in a country are low and need to be improved. There are always many explanations for low initial scores (for example, low teacher pay; lack of equipment and instructional resources; external shocks such as disease, inclement weather, or social unrest). In all these instances, the narrative about the cause of these problems is at least partially about circumstances beyond the direct control of government officials.

By contrast, conducting baseline systems-level analysis can feel a lot more personal. Whether a ministry of education has good human resource policies, maintains effective communications among departments, or can execute a priority program effectively is more directly tied to governmental leadership and management. Even when there is substantial ministerial buy-in for the overall project, implementing organizations must be sensitive to political dynamics and personalities when negotiating baseline studies with government partners.

Some educational officials have enough confidence and credibility to support introspective studies.

Government reformers and officials early in their tenure may champion such analysis and transparency in the release of study findings. The joke in U.S. public education is that new school district superintendents often commission new learning assessments because these will inevitably show relatively low student learning outcomes that then only improve over time (whereas the reality is that at least part of the low initial scores, and of the improvements over time, reflects growing familiarity with the assessment’s approach).

However, even with the support of ministers, permanent secretaries, or division heads, the success of baseline studies requires the support and forthrightness of other officials and functionaries throughout the system. This can be a real challenge when not only egos but also job tenure can be at stake. A person who might want to be frank about the problems and challenges of her ministerial division might still choose not to share honest and direct feedback if the consequence is that her job is on the line. Another challenge can arise in cultures in which it simply is not polite or appropriate to speak about problems, particularly with outsiders. Yet another challenge can be actual denial, in which a government official, head teacher, or teacher simply does not see a problem where there may be one.

I have experienced government push-back on baseline system-level studies in a variety of ways throughout my career. As one example, I have tried to make the case for many years that the success of an educational system largely rests on its mid-level functionaries. These are the regional- and district-level officials who are responsible for the actual implementation of government policies and programs. They translate policies, regulations, and programs into training for head teachers, teachers, school management committees, parent/teacher associations, and others. They monitor and support implementation and report back up to central-level government officials about challenges and outcomes.

I have therefore promoted the implementation of baseline studies to examine the functioning of these mezzo-level actors. Who are they? How are they recruited? What are their responsibilities? How are they trained to do their jobs? How often do they visit schools and observe administrative or instructional practices? Examining these issues systematically, I argue, gives a great sense of the health of an educational system. So, conducting such a study should be a no-brainer. Right?

In a few instances over the last few years, in which I have included this type of study in the project design, the idea has been rejected. There could be a number of reasons for this, including a poor study design itself. But in probing further, I have learned that, indeed, these proposed studies have been rejected because of possible embarrassment that the findings could cause educational officials.

What can implementing organizations do to increase the likelihood of including meaningful baseline studies into system-level analysis? Following are three recommendations to achieve this goal:

First is to have open and honest discussions with funders and government partners about the importance of baseline system-level analysis. This can be done in private or in small groups, but it is essential to communicate the importance of such analysis for achieving expected project outcomes. The goal is not to embarrass, police, or undermine a ministerial regime but, instead, to identify the pain points in the system that require support for the entire system to function smoothly.

Once high-level government officials buy into the spirit of the studies, they need to signal the same importance of the studies to their own staff. Ideally, this includes communicating to their own staff that they will not be punished or lose their jobs for being truthful. Future success depends on honest and open communication now.

Second is to negotiate the option of limited release of study findings, at least as a starting point. Although I believe strongly in extensive transparency in project implementation, it may be necessary to agree with concerned education officials to limit the exposure of study findings, at least until key officials have a chance to sign off. It is better to get accurate information about system-level challenges and opportunities than no information at all. As the relationship between the implementing partner and the government deepens and trust builds over time, there could then be opportunities to renegotiate the release of study results, even if some of the findings are not complimentary.

Third is to share examples in which the results of system-level studies in other countries or contexts were key to longer-term project successes. It is always reassuring to know that short-term challenges are not unique, that others often experience similar issues, and that these challenges can be overcome in the longer-term if there is open and transparent communication early in the project period.

Please let us know your experiences in negotiating baseline system-level studies. What would you recommend to increase government trust and transparency in gathering necessary information to achieve longer-term project goals?


Comments (1)

Thank you for this detailed and thought-provoking article, Cory. Your insights into the challenges of conducting baseline system-level analysis are both enlightening and crucial for the success of system-strengthening projects.

We had a similar experience with the USAID-funded Gateway to Education project in Yemen, where the Ministry of Education fully endorsed all the points you highlighted. However, we found that baseline data collection is not always fully effective, as the results may not be realistic. We faced significant difficulties in conducting the Institutional Capacity Assessment (ICA) for the Ministry of Education in Yemen, where employees were often reluctant and defensive.

Moreover, there was a failure in integrating project activities implemented by various INGOs. These organizations often duplicated assessment studies, despite already having access to existing reports. Over $100,000 was spent on the initial study, only to see repeated assessments that did not add value. The Ministry of Education opposed spending such amounts on studies when the funds could strengthen the ministry’s capacity and support education development directly.

It is worth noting that INGOs not only duplicate assessment studies concerning system strengthening but also replicate other activities. This reveals significant weaknesses in coordination and makes the Ministry of Education’s job more difficult. These INGOs often exploit system weaknesses to implement activities in a way that showcases their efforts to donors rather than ensuring integration, completion, and efficient expenditure of donor money. This approach undermines the country’s recovery and delays the handover to a strong education system.

I apologize if my comments seem to be directed at your article itself. I understand that you are speaking in general, while I am sharing my specific experience in Yemen. Nonetheless, I fully endorse your thoughts.
