
Archive for the ‘Community’ Category

Success is in the Eye of the Beholder

Thursday, May 21st, 2015

See below for a guest post from MFAN Accountability Working Group Co-Chair, Diana Ohlbaum.

***

A good evaluation will tell you whether a program achieved its intended results.

A better evaluation will tell you why.

But the best evaluation will tell you whether those were the right results — for the people they were intended to help.

As the global development community has begun to get serious about conducting evaluations, it has tended to focus so heavily on accountability to funders that it has often overlooked accountability to beneficiaries.  All too often, this means findings and results that may be technically accurate but functionally invalid.

For instance, as Carlisle Levine and Laia Griñó explain in a thoughtful new paper for InterAction, “when practitioners and evaluators listen, they might hear that, yes, shelters were provided, but the materials used were inappropriate for the climate, or the placement of the shelters reinforced community divisions, rather than helping to bridge them.”  If you don’t ask the right questions, you won’t get the right answers.

That’s why locally owned programs demand locally owned evaluations.  Program participants must be involved from the very start in defining what counts as success, as well as in monitoring program implementation and evaluating results.  Yet until now, calls for greater country ownership have focused almost entirely on program design and implementation, while overlooking monitoring and evaluation.  At USAID, for example, there has been little integration of the robust evaluation policy with the Local Systems Framework, which embraces a vision of development that is “locally owned, locally led, and locally sustained”.

To the extent that there has been local participation in evaluations, it is generally limited to using local communities as sources of data once a project is complete, or including local staff on evaluation teams.  Only rarely are the evaluations led and managed by local evaluation professionals.  But most importantly, partner country government institutions, civil society organizations, and beneficiary communities are given little role in designing the questions that will be asked or the indicators by which success will be measured.  As a result, unintended impacts, both good and bad, are often missed.

One exception is an approach being piloted by USAID’s Office of Transition Initiatives (OTI), whose quick-response and “just do it” culture was once quite resistant to any kind of evaluation.  Now, rather than simply hiring an outside evaluator at the end of a project, one who may spend only a couple of weeks in the country and know little about the program or its context, OTI is working with local evaluators from the very beginning to monitor results throughout the program cycle.

Local participation in evaluations not only ensures that outcomes are measured properly; it can also improve those outcomes.  The InterAction paper cites a study comparing the use of standard “expert-developed” scorecards to community-developed scorecards to monitor schools in Uganda.  The researchers found that introducing a participatory monitoring process, in and of itself, helped improve pupil test scores and reduce pupil and teacher absenteeism because community members felt a greater stake in holding schools accountable.

It’s time for the international development community to move beyond the concept of local “buy-in” and embrace the principle of local authorship.  By giving participants a meaningful role in evaluation – and, as the InterAction report reminds us, by ensuring that those evaluation findings are used to inform decision-making – we can make our assistance more effective, and make development more sustainable.  Doing so will require a few adjustments to the program cycle, and InterAction helpfully provides guidelines on how to put these principles into practice.

U.S.-based NGOs Oppose Costly Changes to Cargo Preference That Cut U.S. International Food Aid Programs

Friday, May 1st, 2015

The organizations listed below are extremely concerned about the potential negative impacts of Section 303 of H.R. 1987, the Coast Guard and Maritime Transportation Act of 2015, which would give the Secretary of Transportation exclusive authority to unilaterally apply cargo preference rules to programs run by other departments and agencies and to ignore the outcomes of important interagency consultations. We are concerned that Section 303 could have a further detrimental effect on food aid programs and could lead to additional inefficiencies and costs, in the form of wasted resources and greater risk to human lives.

The Department of Homeland Security has previously warned that similar language needlessly increases the risk of programmatic inefficiencies and on-the-ground operational problems.  We are concerned that the unilateral control proposed in Section 303 would expand the authority of the Maritime Administration (MARAD), allowing MARAD to exercise exclusive control over how cargo preference must be applied within critical food aid programs. MARAD’s legally mandated mission is to “strengthen the U.S. maritime transportation system […]” – a mission that reflects neither the importance of cost efficiency nor the impact on critical humanitarian responses.

With natural disasters like the recent earthquake in Nepal and the ongoing crisis in Syria stretching humanitarian funding thin, and 805 million people around the world going hungry every day, we must make every food aid dollar count.  We cannot afford to make U.S. food aid more costly or risk diverting more funding toward shipping costs instead of life-saving assistance. Legal authorities provided to the Administration should ensure the transparent and effective use of taxpayer dollars so that resources are allocated to feeding more vulnerable people, not fewer.

U.S. food aid saves millions of lives each year.  The undersigned organizations therefore remain opposed to Section 303, and we urge Congress to reject any action that would hamper the reach and effectiveness of food aid programs by increasing transportation costs and eliminating transparency from the process that establishes implementing regulations for cargo preference.

  • American Jewish World Service
  • The Borgen Project
  • Bread for the World
  • CARE USA
  • Catholic Relief Services
  • Church World Service
  • Global Poverty Project
  • InterAction
  • Mercy Corps
  • Modernizing Foreign Assistance Network
  • ONE
  • Oxfam America
  • Presbyterian Church (USA)
  • Save the Children
  • World Food Program USA


Want to Know What We Got for Our Money? MCC’s Telling Us.

Friday, April 24th, 2015

See below for a guest post from MFAN Accountability Working Group Co-Chair, Diana Ohlbaum.

***

In this space back in February, I suggested a list of ways for the Millennium Challenge Corporation (MCC) to advance its thought leadership and boost its development effectiveness.  One of those ideas was to conduct “after-action reviews” to provide an honest look at what did (or didn’t) happen during a compact, why (or why not), and how to improve next time.

True to form, less than a month later the MCC responded by publishing its first “compact closed” page, in this case for Mozambique.  The page has a lot to recommend it: a concise summary of the amounts promised and spent; the dates of compact signature, entry into force, and completion; a breakdown by project area; the total number of beneficiaries; the changes made during the term of the compact; the key compact indicators, targets, and results for each project; the policy conditions; and links to all the key documents, including the constraints analysis, evaluations, and scorecards.

This is an extremely useful way to look at the big picture of MCC’s results, particularly for Congressional staff who don’t want to have to wade back through years of notifications and justifications to understand how the project changed over time and what they got for their money.  It answers the important question of “What did this compact achieve?”, which is not ordinarily addressed by evaluations, Inspector General investigations, or Government Accountability Office reports.  My only quibble with the page is that it shows the results according to the adjusted targets rather than the initial goals, which gives an unfairly rosy picture of how the compact was implemented.  The Center for Global Development’s Sarah Rose also helpfully suggests that the page include information on policy impact.

I hope, as well, that this effort was not merely an exercise in packaging information for the public, but also included a detailed and substantive debrief from those who worked on the program, with an eye to identifying best practices and lessons learned.  Although this should happen continuously throughout a compact’s duration, compact closure is an important opportunity to ask questions like:  What do you know now that you wish you had known at the start?  What were the most burdensome processes and requirements, and how did you manage them?  If you were doing it all over again, what would you do differently?

Such a review would be different from an independent evaluation in that it would draw directly from the perceptions and experiences of the staff and local partners who were most closely involved, and not necessarily be designed for public consumption.  This type of learning is essential for an organization whose personnel are hired for their specific country knowledge, subject-matter expertise and language skills, and often leave when the compact is complete rather than assuming a new post within the MCC.

Some of these lessons may be too sensitive to be trotted out in public, or too context-specific to be of broader value.  But some of the feedback – from MCC’s local staff and partners as well as its direct hires — could be summarized in a way that is helpful to other organizations, inside and outside government, working in the same countries or on similar projects.

Although it may not be obvious to the user, the “compact closed” page required an enormous amount of effort from the MCC, with dozens of people and multiple departments involved in developing content, writing code, creating charts, designing new layouts and styles, and extracting data.  It’s the template for similar pages forthcoming on other completed compacts, which will be a useful resource for the entire development community.  From my perspective, this is a noteworthy step forward on transparency as well as a valuable tool for assessing overall results.

Broad Coalition Urges President to Nominate a Permanent USAID Administrator

Thursday, April 16th, 2015

April 16, 2015 (WASHINGTON) – This statement is delivered on behalf of the Modernizing Foreign Assistance Network by Co-Chairs George Ingram, Carolyn Miles, and Connie Veillette:

Today MFAN, as part of a broad coalition of international development advocates and stakeholders, including four former USAID Administrators, is urging President Obama to expeditiously nominate a permanent Administrator for the United States Agency for International Development. Under the leadership of Administrator Rajiv Shah, USAID has taken dramatic steps to strengthen its capacity to deliver results for the American people and for people in developing countries around the world.

Having a Senate-confirmed appointee at the helm of USAID is essential to advancing U.S. development goals and the aid effectiveness agenda. We are calling on the President to nominate a new Administrator as soon as possible to sustain strong U.S. leadership on the development programs that play a vital role in support of our foreign policy goals and are crucial to the lives and well-being of men and women around the globe.

Lessons From The Road To Transparency: Four Tips For Publishing To IATI

Thursday, March 19th, 2015

See below for a guest post from Laia Griñó, Senior Manager, Transparency, Accountability and Results at InterAction. This piece originally appeared on InterAction’s blog on March 19.

***

In honor of Sunshine Week – a weeklong celebration of open government – we’d like to share four lessons InterAction has learned on our own journey toward openness. Today, we join the more than 300 organizations that have published data on their activities according to the International Aid Transparency Initiative (IATI) standard (view our data on the IATI Registry or a visualization of our data on NGO Aid Map). These publishers include our counterparts in the U.K., Ireland, the Netherlands, and Nepal, as well as several InterAction members, including CDA, ChildFund International, GlobalGiving, Pact, and Plan International USA. In doing so, we have taken another important step in making our organization more open and accountable, in line with the open information policy InterAction adopted last October.
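For readers who have not worked with IATI data before, the sketch below shows roughly what a single published activity looks like. It is a minimal illustration assembled with Python’s standard library: the organization identifiers, dates, and codelist values are invented, and the element names reflect my reading of the IATI 2.x activity standard rather than an authoritative rendering of it.

```python
# Minimal, illustrative IATI activity record built with Python's standard library.
# Element names follow my reading of the IATI 2.x activity standard; all
# identifiers, dates, and codes below are invented placeholders.
import xml.etree.ElementTree as ET

def add_with_narrative(parent, tag, text, **attrs):
    """Add <tag ...><narrative>text</narrative></tag> beneath parent."""
    element = ET.SubElement(parent, tag, attrs)
    ET.SubElement(element, "narrative").text = text
    return element

activities = ET.Element("iati-activities", version="2.03")
activity = ET.SubElement(activities, "iati-activity")

# A globally unique identifier: the reporting org's ref plus an internal project ID.
ET.SubElement(activity, "iati-identifier").text = "XI-EXAMPLE-0001-PROJ-42"

add_with_narrative(activity, "reporting-org", "Example NGO", ref="XI-EXAMPLE-0001", type="21")
add_with_narrative(activity, "title", "Community school monitoring pilot")
add_with_narrative(activity, "participating-org", "Example Local Partner", role="4")

# Codes follow the IATI codelists as I understand them:
# activity-status 2 = Implementation; activity-date type 1 = planned start.
ET.SubElement(activity, "activity-status", code="2")
ET.SubElement(activity, "activity-date", {"type": "1", "iso-date": "2015-01-01"})

print(ET.tostring(activities, encoding="unicode"))
```

In practice, a file like this would be validated against the IATI schema and then registered on the IATI Registry, as described above.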

In the blog post announcing that policy’s launch, we explained our rationale for making a commitment to greater openness and transparency. Our reasons for publishing to IATI are much the same, so I won’t repeat those here. Instead, I’d like to share four tips:

  • Adopting an open information policy first can be helpful. Not every organization publishing to IATI has adopted an open information policy. For InterAction, however, I believe this was a critical first step for two reasons. First, in adopting the policy, InterAction’s senior management signaled their commitment – both internally and externally – to improving the organization’s transparency. Having this public commitment to point to is useful in ensuring we are continuously making progress on implementation. Second, the development of the policy prompted us to have important discussions about why transparency matters specifically for InterAction, and to come to an agreement about what type of information we would and would not make public (a list of exclusions is available in our open information policy). This laid the groundwork for identifying what data we would be publishing to IATI.
  • Identify/cultivate internal champions. The commitment to publish to IATI or to be more transparent in general should not lie within one person alone. Those responsible for leading an organization’s transparency efforts should do whatever they can to identify or cultivate other internal champions. Some people will become champions for normative reasons – because they believe in the value of transparency in and of itself. Others will do so for practical reasons – because they realize how publishing to IATI either helps the organization or helps make their own work easier. At InterAction, it has been important to have both types of champions.
  • Integrate IATI publication into existing (or needed) business processes. Just as the commitment to publish to IATI should not lie only within one person, neither should the responsibility for actually publishing. It would have taken just one or two days for one person to simply publish information on our existing grants to IATI. Instead, it took us five months. Why? To try to ensure that our publication to IATI will not be a one-off effort, we began by figuring out: (1) what information IATI calls for and what we could realistically publish based on our current systems; (2) when and where that information should be captured; and (3) who within the organization should provide that information. Based on this analysis, we’ve made changes to our grants management process to integrate the data we need for IATI publication, rather than setting up an entirely separate process. An important lesson here is that, depending on how it is approached, IATI can be a very useful tool for improving an organization’s data management practices; a rough sketch of the kind of field-by-field check we started with appears after this list.
  • Be patient. Publishing to IATI will almost inevitably take more time than expected (especially since, at least at first, it is usually not part of anyone’s job description). But while improving an organization’s transparency does require consistent pressure, it is important to avoid turning IATI into just another reporting requirement or making the process of openness seem like a burden. As one of my colleagues emphasized, ultimately this is about shifting organizational culture – something that takes time in any context.
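As a rough illustration of step (1) in the third tip above, comparing what IATI calls for against what an existing grants system actually captures, here is a hypothetical field-by-field check. Both the internal column names and the abbreviated list of IATI fields are invented for illustration; a real exercise would work from the full standard and the organization’s actual systems.

```python
# Hypothetical coverage check: which of the IATI fields we intend to publish can
# already be filled from the internal grants database? All field names here are
# illustrative, and the IATI list is an abbreviated subset of the standard.
IATI_TO_GRANTS_FIELD = {
    "iati-identifier": "grant_id",
    "title": "project_name",
    "description": "project_summary",
    "participating-org": "partner_name",
    "activity-date (start)": "start_date",
    "activity-date (end)": "end_date",
    "transaction (disbursement)": None,  # held in the finance system, not the grants DB
    "location": None,                    # not currently captured per grant
}

def coverage_report(mapping):
    """Print which fields existing data covers and which need a new capture step."""
    covered = [(field, column) for field, column in mapping.items() if column is not None]
    missing = [field for field, column in mapping.items() if column is None]
    print(f"Covered by current grants data ({len(covered)}):")
    for field, column in covered:
        print(f"  {field:28} <- {column}")
    print(f"Needs a new capture step ({len(missing)}):")
    for field in missing:
        print(f"  {field}")

coverage_report(IATI_TO_GRANTS_FIELD)
```

The value of an exercise like this lies less in the output than in the conversation it forces about when, where, and by whom each missing piece of information should be captured.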

InterAction is committed to publishing high-quality information on its grant-funded activities on a quarterly basis. As we work out the kinks in publishing what we’ve currently committed to, we will be thinking about how we can make the process easier and further improve the quality of our published information. As all IATI publishers should, we will also be looking at how InterAction itself can realize the full benefits of publishing. Hopefully these lessons help clear the path to transparency for other leaders (like you?!), too.