People-Centered Conflict Early Warning

Conflict early warning works. Indeed, current and historical cases of nonviolent action may be the closest systematic examples or tactical parallels we have to people-centered disaster early warning systems. Planning, preparedness and tactical evasion, in particular, are central components of strategic nonviolence: people must be capable of concealment and dispersion. Getting out of harm’s way and preparing people for the worst effects of violence requires sound intelligence and timely strategic estimates, or situation awareness.

The literature on nonviolent action and civil resistance is rich with case studies on successful instances of early warning tactics for community empowerment. What are the characteristics of successful early warning case studies in the field of nonviolent action? Nonviolent early response uses local social networks as the organizational template of choice, in a mode different from our conventional and institutional approach to early warning. Networks have demonstrated a better ability to innovate tactically and learn from past mistakes. The incentives for members of local networks to respond early and get out of harm’s way are also incalculably higher than those at the institutional or international level since failure to do so in the former instance often means death.

Nonviolent action is non-institutional and operates outside the bounds of bureaucratic and institutionalized political channels. Nonviolent movements are locally led and managed. They draw on local meaning, culture, symbolism and history. They integrate local knowledge and an intimate familiarity with the geography and surrounding environment. They are qualitative and tactical, not quantitative and policy-oriented. Not surprisingly, successful cases of nonviolent action clearly reveal the pivotal importance of contingency planning and preparedness, actions that are particularly successful when embedded in local circumstances and local experience.

The iRevolution question is how social resistance groups can most effectively use ICTs to gain an asymmetric advantage over repressive regimes.

Patrick Philippe Meier

Conflict Early Warning Systems: No iRevolution

Conventional conflict early warning systems are designed by us in the West to warn ourselves. They are about control. These systems are centralized, hierarchical, bureaucratic and ineffective. And highly academic. Indeed, the vast majority of operational conflict early warning systems are little more than fancy databases used to store, retrieve and analyze data. The rhetoric is that these systems serve to prevent violence, which is rather ironic since the vast majority of local communities at risk have never heard of our impressive-sounding systems.

Lessons in this field are clearly not learned. Papers published by Rupesinghe (1988) and Walker (1992) could be published tomorrow with no changes and their recommendations would still be on target. Worst of all, the indicator of success for early warning systems is still the number of high-quality analytical reports produced.

Reports don’t protect people, nor do graphs. People protect themselves and others. And yet reports still get written, albeit rarely read, let alone acted upon. To be fair, however, those working on conventional early warning systems are constrained by political and institutional realities. The best that these systems can do is to build a paper trail of analysis and recommendations. In other words, conventional early warning systems can be used for advocacy and lobbying, but to assume that they are appropriate for operational response is to be misguided (see Campbell and Meier 2007). Indeed, the recent study by Susanna Campbell and myself showed that decision-making structures at the UN do not use analyses generated by formal early warning systems as input into the decision-making process.

In order for conventional early warning systems to engage in operational response, they would first require the paper trail, which would then be used to lobby the UN Secretariat and other member states; these actors would then have to place political and economic pressure on offending governments and/or non-state armed groups; and the latter would have to acquiesce. Now, exactly how often has this been successful? Exactly. The above process takes years and fails repeatedly.

It is high time we learn from other communities such as disaster management. The disaster community places increasing emphasis on the importance of people-centered early warning and response systems. They define the purpose of early warning as follows:

To empower individuals and communities threatened by hazards to act in sufficient time and in an appropriate manner so as to reduce the possibility of personal injury, loss of life, damage to property and the environment, and loss of livelihoods.

The day our conflict early warning community adopts this discourse will be a good day. I hope to still be around to toast the breakthrough. Clearly, the discourse in disaster management shifts away from the conventional top-down division of labor between the “warners” and “responders” to one of individual empowerment. In disaster management, this means capacity building by training in preparedness and contingency planning. In other words, the disaster management community focuses on both forecasting hazards and mitigating their impact when they turn into disasters.

Question: Why are we in the conflict early warning community obsessed with forecasting despite our dismal track record? The disaster community is better able to forecast than we are, yet they allocate significant resources towards community-based preparedness and contingency planning programs. So when disaster does strike, the communities (who are by definition the first responders) can manage their own security environment without the immediate need for external intervention. There would be an uproar (and an escalation in disaster deaths) if the disaster community were to focus solely on prediction.

And what do we do? We work in conflict prone places and set up conflict early warning systems. When the violence escalates, we evacuate all international staff and leave the local communities behind to face the violence by themselves. How often do our conflict early warning systems fail? As often as we evacuate our staff. At the very, very least we should be preparing at-risk communities for the violence and engaging them in contingency planning so that when violence does strike, they at least have the training to get out of harm’s way and survive.

In a future blog, I will write about how some at-risk communities already do get out of harm’s way, and effectively so.

Patrick Philippe Meier

Eyes on Darfur: 2 Villages Missing from Site

An update on Amnesty International’s (AI) “Eyes on Darfur” project based on my previous blog.

At least two of the protected villages monitored by AI using very-high resolution imagery provided by AAAS have been removed from the site after reported attacks in the area, with updated imagery still being processed. The attacks in question were summarized by this UNHCR Report.

This raises some important questions, as noted by a colleague in a recent discussion: the bigger issue here is vital. All this geo-mapping is virtual, and while it may impact the real world, that is not a foregone conclusion. Would other NGOs, or perhaps a consortium, do better at the protective concept? And how? Namely, who can protect these villages and others like them?

I will write another blog this week on precisely these questions, i.e., civilian protection.

Patrick Philippe Meier

From Intellipedia to Virtual Osocc to WikiWarning?

What can we in the humanitarian community learn from Intellipedia as described in my previous blog?

Some thoughts:

  • Let go of our ego-centric tendencies for control
  • Decentralize user-generated content and access
  • Utilize tagging, instant messaging and online video posting
  • Use open source tools and make minimal modifications
  • Capture tacit and informal knowledge qualitatively via blogs and wikis
  • Keep user interfaces simple and minimize sophisticated features
  • Provide non-monetary incentives for information collection and sharing
  • Shift from a quality-control mindset to a soapbox approach

There are no doubt more insights to be gained from the Intellipedia project, but do we have any parallel information management systems in the humanitarian community? The first one that comes to mind is Virtual Osocc:

There are currently 2,437 users. The site includes a bulletin board where discussions can take place vis-a-vis ongoing emergencies and/or issues. A photo library is also available as are sections on training and meetings. The site’s homepage points to breaking emergencies and ongoing crises. Users can subscribe to email and SMS alerts.

When I spoke with the team behind Virtual Osocc, I was surprised to learn that the project has received no official endorsement from any UN agency. This is particularly telling since an indicator of success for humanitarian information systems is the size of the active user base. Other points from my conversations with the team worth mentioning, since they relate directly to my previous blog on Intellipedia, include:

  • Tensions between the UN and NGOs vis-a-vis information sharing are healthy since they keep us honest;
  • Decision-making in disaster management is by consensus (so tools should be designed accordingly);
  • Our community is currently unable to communicate effectively with the beneficiaries themselves.

Another humanitarian information system is, of course, ReliefWeb, which is very well known, so I shan’t expand on the system here. I would just like to suggest that we think of ways to integrate more Web 2.0 tools into ReliefWeb; allowing a wiki and blogging space, for example. There’s also the Global Disaster Alert and Coordination System (GDACS), managed out of the Joint Research Center (JRC) in Ispra, Italy. See my recent blog on the JRC’s satellite imagery change detection project here. The JRC is doing some phenomenal work and GDACS is an excellent reflection of this work. I will leave a more thorough overview of GDACS for a future blog entry.

Then there’s the new information system which was launched this past October 2007 in collaboration with the JRC. The system is a new web portal for leading situation centers including those at UN DPKO, the EU Council and NATO. The purpose of the new system is to facilitate the exchange and storage of unique and relevant information on emerging and ongoing crises and conflicts. The portal facilitates the exchange of unique documents including satellite images. Users can subscribe to specific email and SMS alerts. The system also includes a wiki mapping section. Needless to say, the new web portal is password protected and the user base limited to an elite few. This initiative may benefit from more Intellipedia-style thinking.

The issue that I find most pressing in all of this is the lack of two-way (not to mention one-way) communication with beneficiaries. I find this gap upsetting. So I set up Wikiwarning some two years ago in the hope of finding the time, support and expertise to fully develop the concept and tool. Any takers?

My next blog will address the issue of intelligence for the stakeholders.

Patrick Philippe Meier

Intellipedia for Humanitarian Warning/Response?

I just attended a talk at Harvard given by Intellipedia’s developers. Intellipedia uses the same software and approach as Wikipedia and includes a wiki space, a blog space and a multimedia space called iVideo, the intel version of YouTube. Intellipedia also includes a tagging tool that closely resembles del.icio.us, an instant messaging functionality as well as RSS feeds. Most of the tools used by Intellipedia are open source, and the 2-person team behind the initiative deliberately limits the modifications and new features they add to these tools in order to benefit from the rapidly innovating information economy. “We cannot keep up with the Internet otherwise,” one of the presenters commented. See my recent blog on Twitter Speed versus Government.

Intellipedia embraces the three core principles of social software in enterprise: work at the broadest audience possible; think topically, not organizationally; and replace existing business processes. During their presentation, the team emphasized that Intellipedia serves to capture the informal dynamics and knowledge generated within the intelligence community. The Web 2.0 platform is particularly useful when contradictory information surfaces. In the past, deconfliction of intelligence reports typically meant choosing one report over the other, thus losing valuable information (particularly when intelligence becomes highly politicized).

With Intellipedia, the debate is documented and allowed to continue. This sometimes leads to agreement and other times not. The salient point here is that all views are allowed to compete and evolve. This is like depicting the probable path of a hurricane using a cone-shaped icon. Initially, all future paths within this event horizon are possible, but ultimately, only one path will turn out to be real.

Intellipedia seeks to facilitate a similar process albeit with intelligence information. (Incidentally, the UN Secretary-General’s Policy Committee specifically documents any differences that arise during meetings.) There is no final product within Intellipedia; the wiki and blog entries are all live and evolving. Interestingly, there have been several incidents when high-level personnel within the intelligence community have requested that some pages on the wiki be removed since they were too sensitive. What is stunning, however, is that these pages were exact copies of pages on Wikipedia. More than 90% of intelligence information is collated from open sources.

The templates used by Intellipedia are kept deliberately simple in order to emphasize the focus on information and knowledge rather than form and display. This not only helps build institutional memory over time, it provides a foundation upon which future intelligence can be based. For example, an analyst began posting information on the Beijing Olympics some two years ago and continued doing so on a weekly basis. While no one was particularly interested in the topic at the time, the wiki on the Olympics is now particularly active. Intellipedia was also used to support the rescue operations during the California fires, which may suggest that government speed may not be as slow as blogged about here.

The Intellipedia platform itself gets some 6,000 hits/edits per day and a hundred new registered users every day. Users are provided with incentives to contribute to the platform, e.g., an exceptional contribution award presented by the CIA director and an Intellipedia shovel prize which is particularly popular. Mini contests are also held, and contribution to Intellipedia is increasingly incorporated in work performance plans. The most active contributor to Intellipedia is a 69-year-old retired intelligence officer who has worked within the intelligence agency for 40 years. He still comes to work on weekends in order to write as much as he can about his experience and lessons learned.

On the handle: “I dig Intellipedia: It’s wiki wiki baby”

In concluding the presentation, the team shared that the hardest part of Intellipedia was encouraging users to let go of control; that there was no ownership as such within Intellipedia. So, for example, many users wanted their contributions to wikis to remain unchanged. The team was steadfast, however, and encouraged those users to vent about their disagreements with the changing text on their own blogs. This is precisely what users are doing now when they feel outvoted on the wikis. These users subsequently receive many comments on their own blogs. “When you let go of control, you unleash creativity… People want to contribute, want to have a say, want to do it right, so let them.” Wisdom of the Wikis?

The next step in the Intellipedia project is to use or develop new tools to crawl or mine the Intellipedia space to extract knowledge. The team ended the presentation with the following video which received Wired’s Rave Award for 2006:

In my next blog entry, I will parallel Intellipedia with the ICTs used by the humanitarian community to collect, share and analyze information.

Patrick Philippe Meier

UN World Food Program to use UAVs

I met with the World Food Program’s (WFP) Emergency Information Management team in Rome late last year and was pleasantly surprised when the term UAVs came up: Unmanned Aerial Vehicles, otherwise known as drones and predators in different contexts. The fact that a leading field-based UN agency is actively engaged in a pilot program to use UAVs as early as this summer is particularly surprising and exciting at the same time.

Why surprising? UN Member States have been consistently touchy vis-a-vis issues of sovereignty. Indeed, much time has passed since President Dwight Eisenhower’s 1960 proposal for a “UN aerial reconnaissance capability […] to detect preparations for attack” to operate “in the territories of all nations prepared to accept such inspection.” Eisenhower had pledged that “the United States is prepared not only to accept United Nations aerial surveillance, but to do everything in its power to contribute to the rapid organization and successful operation of such international surveillance.” My, my how times have changed.

Why exciting? There is a notable albeit delayed “spill-over” effect between the use of ICTs by the disaster management community and, subsequently, by the conflict prevention and human rights communities. Furthermore, the occurrence of natural disasters amid complex political emergencies is an increasingly widespread phenomenon: over 140 natural disasters have occurred in complex political emergencies in the past five years alone.

The team at WFP is collaborating with ITHACA to build the UAV prototype Pelican. ITHACA is the Information Technology for Humanitarian Assistance, Cooperation and Action, a center of excellence created by Politecnico di Torino (DITAG) and the Istituto Superiore sui Sistemi Territoriali per l’Innovazione (Si.T.I).

The main goal of the UAV project is to support disaster management through an innovative and effective tool for rapid mapping purposes in the early impact stage. The UAV is easily transportable on normal aircraft and usable in the field, autonomously, by a couple of operators. The platform is equipped with the MP2128g autopilot, which allows autonomous flight except for take-off and landing, and with digital sensors characterized by geometric and radiometric resolutions suitable for digital photogrammetry. […]

If satellite data are not available or not suitable to supply radiometric and geometric information, in situ missions must be foreseen. To this end the Pelican is equipped with a GPS/IMU navigation system and different photographic sensors suitable for digital photogrammetric shooting with satisfying geometric and radiometric quality. It can be easily transported on normal aircraft and used in the field by a couple of operators.

The aircraft is equipped with the MP2128g autopilot that allows autonomous flights and provides the real-time attitude of flight. The HORIZONmp software provides the flight path and current sensor values in real time. The operator can also insert a flight plan (up to 1,000 waypoints) on a preloaded map and upload it during the flight. Besides, the system can be connected with the payload cameras, so it is possible to schedule an automatic shooting time. The operations of take-off and landing must be accomplished manually due to insufficient in-flight GPS accuracy.
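A flight plan of this kind is essentially an ordered list of waypoints. As a minimal sketch (the coordinates and function names below are purely illustrative, not taken from the actual Pelican software), here is how an operator might sanity-check that a plan’s total along-track distance stays within the aircraft’s range:

```python
import math

# Great-circle (haversine) distance in km between two (lat, lon) points.
def haversine_km(p1, p2):
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

# Total along-track distance of an ordered waypoint plan.
def plan_length_km(waypoints):
    return sum(haversine_km(a, b) for a, b in zip(waypoints, waypoints[1:]))

# Hypothetical out-and-back plan: launch point, target area, return.
plan = [(13.75, 100.50), (13.90, 100.60), (13.75, 100.50)]
RANGE_KM = 200  # the extended range the Torino team is aiming for

total = plan_length_km(plan)
print(f"Plan length: {total:.1f} km; within range: {total <= RANGE_KM}")
```

The 200 km figure is the extended range the Torino team is reportedly working toward; the waypoints are placeholders for illustration only.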

The Pelican uses the Ricoh GR commercial digital camera. The use of two Ricohs (stereo pairs) allows the Pelican to rapidly update existing maps and to perform 3D feature extraction devoted to the identification of areas that require further investigation.
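The mapping detail such a camera can deliver is governed by the standard photogrammetric ground sample distance (GSD) relation: GSD = pixel size × flying height / focal length. A small sketch with purely illustrative numbers (not the Ricoh GR’s actual specifications):

```python
# Ground sample distance (GSD): the ground footprint of a single image pixel.
# Standard photogrammetric relation: GSD = pixel_size * altitude / focal_length.
def gsd_cm(pixel_size_um: float, focal_length_mm: float, altitude_m: float) -> float:
    pixel_size_mm = pixel_size_um / 1000.0
    gsd_m = pixel_size_mm * altitude_m / focal_length_mm
    return gsd_m * 100.0  # metres -> centimetres

# Illustrative values: a 2 micron pixel behind a 6 mm lens flying at 150 m
# yields roughly a 5 cm ground footprint per pixel.
print(f"GSD: {gsd_cm(2.0, 6.0, 150.0):.1f} cm")
```

Flying lower or using a longer lens shrinks the GSD (finer detail) at the cost of ground coverage per frame, which is the basic trade-off in rapid-mapping missions of this kind.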

When I spoke to the team at WFP, they quoted a price range of $10-$12K, which is definitely the cheapest price tag I’ve come across for a UAV with the Pelican’s specs. The folks in Torino are also working to push the range of the Pelican to 200km with longer endurance limits. One could then operate the Pelican from the Thai/Burmese border and fly the UAV into Burma to identify movement of soldiers.

Of course, the military junta could try to take the bird down, but even if the small Pelican took a hit, all the data would have been captured before impact thanks to the real-time video downlink made possible by the Ricoh. The potential for an iRevolution would be realized if video footage could be beamed to individual mobile phones, perhaps using the video encryption technology I recently blogged about.

Patrick Philippe Meier

Blackberry Buried by India?

I just spoke with a colleague at RIM/Blackberry who mentioned that the Indian news is abuzz in response to the government’s demand that it be granted access to all calls and emails from every Blackberry in the country. RIM has some 400,000 Blackberry users in India, which is an important source of revenue. Indeed, Blackberry users in India spend some $28/month; the global average is $22 and the average mobile phone user in India only spends some $5 a month.


The government argues that these measures are a question of national security given fears of terrorism in the country. The security agencies have therefore asked that Blackberry deposit its decryption keys or allow communications to be intercepted. The latest news is that India’s ministry of telecommunications has asked RIM to set up servers in India, which would help security agencies monitor the services.

In my opinion, all this does is send a message to potential terrorists not to use Blackberries.

Patrick Philippe Meier

UNHCR Refugee Google Earth Layer Released

UK Guardian: The new Google Earth layers weave together satellite maps, photos, videos and eyewitness accounts to give viewers a close-up look at the refugee crises in Iraq, Chad, Colombia and Darfur in Sudan.

They allow users to find out about UNHCR operations, locate refugee camps and discover the impact of the humanitarian crises on neighboring countries such as Sudan, Syria and Ecuador.

Users can explore the lives of those in exile by clicking on exact locations in the refugee camps to see photos of the facilities, such as health clinics, schools, water taps and sanitation. There are pop-up videos of specific operations and events, such as a visit to a Chad refugee camp by the actor and UN goodwill ambassador Angelina Jolie.

The UN deputy high commissioner for refugees, L Craig Johnstone, said: “Google Earth is a very powerful way for UNHCR to show the vital work that it is doing in some of the world’s most remote and difficult displacement situations. By showing our work in its geographical context, we can really highlight the challenges we face on the ground and how we tackle them.”

A UNHCR spokesman said the programme could soon develop further. “With the new generation of cameras with GPS, we can foresee taking photos of a place and uploading it directly to Google Earth. For our planning, mapping and communications unit, that would be an amazing tool.

“Over time, we can envision increasing the number of elements shown that will certainly increase the ‘live’ experience of the platform.”

The next step for an iRevolution is to enable refugees to access this information on a regular basis. This need not require high-technology. The information could be broadcast by radio, for example.

Patrick Philippe Meier

Human Rights 2.0: What’s in a Name?

A thousand thanks to Sanjana at ICT4Peace for his kind recommendation of my blog; I am truly grateful. Thank you, Sanjana!

Sanjana also took the time to reflect on my recent blog entry entitled Human Rights 2.0 and raised some important questions and concerns. Indeed, Sanjana has a wealth of practical experience in securing fundamental rights of peoples and communities at risk with the use of technology. I very much value his insights and checks on reality, which our ICT working group at the OCHA 5+ Symposium greatly benefited from.

Sanjana asks for a definition of what I mean by Human Rights 2.0:

Perhaps the term requires a more precise definition that I encourage Patrick to provide. What would Human Rights 1.0 for example be in contradistinction to Human Rights 2.0? And what are the markers that one has upgraded to Human Rights 2.0? And say for example that initiatives similar to Eyes on Darfur are able to prevent wide-scale massacres, but are powerless to prevent the arbitrary violence against citizens by repressive governments or the continued violation of language rights (with significant implications on the larger human rights context). Would that still be Human Rights 2.0?

And concludes,

For me, buzzwords du jour are less important than the meaningful empowerment of those whose lives are on the line when it comes to HR protection and who don’t have time to become experts in ICT. That’s our job. We all get a high when we see HR activists use our technology – they simply trust the system to deliver results they could not have otherwise achieved, in a manner and media of their own choosing and design. The underlying technology is, for them, invisible and unimportant. What matters is not Human Rights 2.0, but about being as much of a pain in the arse as possible to those who violate human rights, by recording for posterity and with as much detail as possible, crimes against humanity and human decency.

I fully agree with Sanjana’s observations–indeed, who would not? Yes, Human Rights 2.0 is certainly a buzzword and I must confess (with head bowed in shame) that I enjoy the “creative writing” and entertaining analogies that Thomas Friedman is known for, e.g., “The Lexus and the Olive Tree“. That doesn’t mean I agree with most of his arguments.

In any case, yes, buzzwords are less important than the meaningful empowerment of those whose lives are on the line. Again, I hope that goes for all of us committed to civilian protection and human rights. Moreover, I fully share Sanjana’s conviction that what matters is to be as much of a pain in the backside to those who violate human rights as possible. Indeed, this is exactly how I answer questions from friends and colleagues regarding the topic of my dissertation: “Basically, I’m interested in how to [annoy] repressive regimes as much as possible using ICTs.”

So defining Human Rights 2.0 may really be more of an academic or theoretical exercise than one might care for. The purpose of my blog entry was simply to showcase a few hands-on projects that seek to employ technologies in innovative and practical ways. So while I would rather converse about the merits and challenges of those projects than seek a definition that meets the larger audience’s approval, here is my attempt nevertheless (with the understanding that I agree with all the qualifications articulated by Sanjana in his response).

My understanding of Web 2.0 is that it is a Social Web, and by that I mean a Read/Write Web, where user-generated content and peer-to-peer communication begin to eclipse traditional sources of information, ownership and communication architectures. (To this end, I’m a big fan of Yochai Benkler and his work on “The Wealth of Networks“). My use of Human Rights 2.0 is founded on the concept of people-centered human rights monitoring and protection. This approach is necessarily tied to my background in conflict early warning/response as well as my interest in nonviolent resistance and the potential of iRevolutions. To this end, I offer the following definition inspired by disaster early warning/response:

The objective of Human Rights 2.0 is to use ICTs to empower individuals and communities threatened by violence to act in sufficient time and in an appropriate manner so as to reduce the possibility of personal injury and loss of life.

The next step, as I recommended to AI during my conversations, is to provide local communities with access to the information depicted in very high resolution satellite imagery.

My use of the term Human Rights 2.0 highlights the potential contribution that new means/access to technologies can bring to the field of human rights, and does not imply a significant and irreversible process, let alone a Hegelian dialectic. Web 2.0 technology and related ICTs are not available or widespread in many countries around the world. At least this has been my experience while working in Morocco, the Western Sahara, Tunisia, the Gambia, Congo-Brazzaville, the Sudan, Uganda, Kenya, Ethiopia and most recently in Timor-Leste.

This partly explains my frustration with Ivory Tower thinking, which has gotten me to vent on more occasions (here and here) than I’d like to recall.

Patrick Philippe Meier
