I sense a little bit of history repeating, and not the good kind. About ten years ago, I was deeply involved in the field of conflict early warning and response. Eventually, I realized that the systems we were designing and implementing excluded at-risk communities even though the rhetoric had me believe they were instrumented to protect them. The truth is that these information systems were purely extractive and ultimately did little else than fill the pockets of academics who were hired as consultants to develop these early warning systems.
The prevailing belief amongst these academics was (and still is) that large datasets and advanced quantitative methodologies can predict the escalation of political tensions and thus prevent violence. To be sure, “these systems have been developed in advanced environments where the intention is to gather data so as to predict events in distant places. This leads to a division of labor between those who ‘predict’ and those ‘predicted’ upon” (Cited in Meier 2008, PDF).
Those who predict assume their sophisticated remote sensing systems will enable them to forecast and thus prevent impending conflict. Those predicted upon don’t even know these systems exist. The sum result? Conflict early warning systems have failed miserably at forecasting anything, let alone catalyzing preventive action or empowering local communities to get out of harm’s way. Conflict prevention is inherently political, and “political will is not an icon on your computer screen” (Cited in Meier 2013).
In Toward a Rational Society (1970), the German philosopher Jürgen Habermas describes “the colonization of the public sphere through the use of instrumental technical rationality. In this sphere, complex social problems are reduced to technical questions, effectively removing the plurality of contending perspectives” (Cited in Meier 2006, PDF). This instrumentalization of society depoliticizes complex social problems like conflict and resilience, reducing them to terms susceptible to technical solutions formulated by external experts. The participation of local communities thus becomes entirely unnecessary to produce and deliver these technical solutions. To be sure, the colonization of the public sphere crowds out both local knowledge and participation.
We run the risk of repeating these mistakes in the discourse on community resilience. While we speak of community resilience, we gravitate towards the instrumentalization of communities using Big Data, which is largely conceived as a technical challenge of real-time data sensing and optimization. This external, top-down approach bars local participation. The depoliticization of resilience also hides the fact that “every act of measurement is an act marked by the play of powerful relations” (Cited in Meier 2013b). To make matters worse, these measurements are almost always taken without the subjects’ knowledge, let alone their consent. And so we create a division between those who sense and those sensed upon, thereby fully excluding the latter, all in the name of building community resilience.
Acknowledgements: I raised the question “Resilience for whom?” during the PopTech and Rockefeller Foundation workshop on “Big Data & Community Resilience.” I am thus grateful to the organizers and fellows for informing my thinking and the motivation for this post.