Here’s one of my favorite false arguments: “There are some people who believe that crowdsourcing will solve all humanitarian challenges….” So said a good colleague of mine vis-a-vis crisis response at a recent strategy meeting. Of course, when I pressed him for names, he didn’t have a reply. I don’t know anyone who subscribes to the above-mentioned point of view. While I understand that he made the statement in jest and primarily to position himself, I’m concerned that some in the humanitarian community actually believe this comment to be true.
First of all, attributing an extreme point of view to unnamed individuals is a cheap debating tactic and a real pet peeve of mine. Simply label your “opponent” as holding a fundamentalist view of the world, and everything you say after that appears to hold true, easily discrediting your competition in the eyes of the jury. Surely we’ve moved beyond these types of false arguments in the crisis mapping community.
Secondly, crowdsourcing is simply one among several methodologies that can, in some cases, be useful for collecting information following a crisis. And as mentioned in this previous blog post entitled “Demystifying Crowdsourcing: An Introduction to Non-Random Sampling,” the use of crowdsourcing, like any methodology, comes with advantages and disadvantages that depend on both goals and context. Surely, this is now common knowledge.
My point here is neither to defend nor to dismiss the use of crowdsourcing. My hope is that we move away from such false, dichotomous debates to conversations that recognize the complexities of an evolving situation; dialogues that value having more methodologies in the toolbox rather than fewer, along with corresponding manuals that clarify the trade-offs and offer appropriate guidance on when to use which methods, why, and how. Crowdsourcing crisis information has never been an either-or argument, so let’s not turn it into one. Polarizing the conversation with fictitious claims will only get in the way of learning and innovation.