How To Make Crowdsourcing Disaster Relief Work Better

We must identify successes and mistakes to make the data collected during an emergency as useful as possible.


As a physician and public health provider who has worked on the ground with humanitarian missions, I learned in Haiti how important, and how difficult, such crowdsourced assistance can be. Consider the postdisaster mapping of an area. Predisaster maps do not reflect conditions after a disaster, and sometimes they actually get in the way when you're trying to respond on the ground. During my daily work in Haiti, I needed to know where and how quickly I could transfer patients from our hospital to other care centers. To do so, I needed an updated map, one that could focus on a specific area. When there's no updated map of an area after a disaster, it can be terribly difficult to plan how to deliver health care services, where to send trucks of food, or which areas need urban search and rescue. Volunteers' digital assessments have made a tremendous difference on the ground.

But when those tools have been evaluated and improved based on that evaluation, they have been still more helpful to first responders. For instance, after the Deepwater Horizon oil spill in 2010, Jeffrey Warren at the Public Laboratory for Open Technology and Science created MapMill, an online platform that lets volunteers collectively sort and categorize images, in that case building maps from pictures taken of the Gulf. In 2012, a group of people from Humanitarian OpenStreetMap, FEMA, the Civil Air Patrol, and the National Geospatial-Intelligence Agency's Readiness Response Recovery Team took this platform, carefully customized it, and tested how well aerial imagery could be fed into MapMill for volunteers to rate. The Civil Air Patrol took pictures as it flew over the Camp Roberts simulation site in Paso Robles, Calif. Humanitarian OpenStreetMap then measured how well people could rate these images, and improved the platform during the exercise to make the effort faster and more efficient.


But such evaluation efforts need to be systematized, making ongoing assessment and feedback central to digital humanitarianism. Some volunteers prefer to dive directly into saving lives without building relationships with local or international responders, but that runs the risk of creating digital solutions that aren't what those on the ground actually need. When many different groups of digital volunteers pour out their own streams of information, as happened in Haiti and again with Sandy, it's extremely difficult for those on the ground to sort through what's available, how relevant it is, and how to use it.

In 2011, John Crowley of the Harvard Humanitarian Initiative and I coauthored Disaster Relief 2.0, a report commissioned by the United Nations Foundation. In our interviews, humanitarian responders often told us that they had trouble making sense of so much information coming from so many different groups of people they hadn't worked with before. Sometimes they didn't even know that digital volunteers were part of the response. What was missing, essentially, were humanitarian intermediaries: people who could identify the different efforts, match incoming information with those on the ground who needed it, make sure it was presented in a manner useful to those receiving it, and then deliver it appropriately.

Until there are credibility and relevance checks, the groups on the ground, and the people who need assistance, won't have the patience or trust to wade through the data pouring in from those passionately dedicated volunteers.


But even before we have intermediaries handling and passing along that information, we need better assessments of what's useful and what isn't. Ad hoc efforts need to be scrupulously researched as they are happening, so that we learn, next time, how best to use our volunteer resources, both human and economic. Which communication pathway is most effective? What digital information is most needed at each phase of an emergency? How can we prepare so that, next time, Web applications don't crash under the load of all that volunteer generosity? We need to be systematic about identifying our successes and searching for our mistakes, so that we can learn from them. If we don't know what the mistakes are, we are going to keep repeating them. If we don't know what our successes and "wins" are, we may not know how to make them happen again. Before we can fix these emergency communications pathways, we have to evaluate our efforts, so that each time our response is faster, more efficient, and more useful to those on the ground.