Please see below for a guest post from Diana Ohlbaum, Co-Chair of MFAN’s Accountability Working Group.
The State Department is accustomed to taking physical risks. Political risks, not so much.
Foreign Service Officers work in all the world's most dangerous and difficult places. They promote judicial and security sector reform to prevent conflict, join with the international community to protect refugees during crises, and support reconstruction and stabilization once violence abates. But taking a long, hard look at how well their own programs are working, and making those findings public, has been a much harder pill to swallow.
Last week marked a big step forward in the State Department’s commitment to evaluations and transparency. With little fanfare, the Office of Foreign Assistance Resources announced on its website that it will publish full texts of unclassified foreign assistance evaluations on a rolling basis. This is a significant improvement from its most recent evaluation policy, issued in January, which had required only that summaries of evaluations be posted publicly, and that the site be updated only once a year. State also made the list of evaluations much easier to browse, and included a new link to PEPFAR’s evaluations.
Compared with USAID or the Millennium Challenge Corporation (MCC), which have far more demanding evaluation requirements, these may be small steps. But State is still transitioning from a secretive, cable-writing culture to one of sharing and learning. The Quadrennial Diplomacy and Development Review (QDDR), released this spring, seeks to thrust State into the 21st century by, among other things, enhancing its use of data and analytics and bringing more rigor to its evaluations.
Even before the QDDR was finalized, however, State had conducted 138 foreign-assistance-funded evaluations, with 38 more in progress and 71 planned. It had revised its evaluation policy to ensure that it took account of the legitimate differences between evaluating foreign assistance programs and evaluating diplomatic operations. While some of the changes appeared to be steps backward from the 2012 policy, the very fact that the evaluation requirements were made permanent should be seen as a victory. Many State Department officials reportedly had believed, or hoped, that the mandate would be allowed to expire quietly.
Still, hard work remains to be done at State. The QDDR pledged that the Bureau of Political-Military Affairs would develop “a comprehensive approach to monitoring and evaluating security assistance,” which we at MFAN hope will conform to industry standards of scientific rigor, independence, and transparency. State should commission, as USAID has done, an assessment of the value and quality of its evaluations to date, as well as an analysis of how the evaluations are being used to inform policy and program decisions. More thought must be given to involving local participants and beneficiaries in deciding what counts as success and whether it has been achieved. Most importantly, the Secretary should give his blessing to legislation, now being developed in the House and Senate, to codify the evaluation requirements and ensure that security assistance is not let off the hook.
The fear of conducting evaluations and making them public is understandable, since some – particularly on Capitol Hill – see them as a political bludgeon instead of as a learning tool. But while audits and investigations tell us whether funds were properly spent, evaluations help us understand how and why a particular outcome was achieved. Without that knowledge, we are left to swing blindly at problems and hope for the best.
Now that the State Department has scored a base hit on evaluations, who’s next at bat? All eyes are on you, Department of Defense!