My colleague Ankit Sharma at the London School of Economics (LSE) recently sent me his research paper entitled “Crowdsourcing Critical Success Factor Model” (PDF). It’s definitely worth a read. Ankit is interested in better understanding the “dynamic and innovative discipline of crowdsourcing by developing a critical success factor model for it.” He focuses specifically on mobile crowdsourcing and does a great job unpacking the term.
Ankit first reviews four crowdsourcing projects to inform the development of his critical success model: txtEagle, Ushahidi, Peer Water Exchange and mCollect. He then notes the crucial difference between outsourcing and crowdsourcing: the latter’s success depends on the scale of crowd participation. This means that incentives need to be tailored to recruit the most effective collaborators while “the motive of the crowd needs to be aligned with the long term objective of the crowdsourcing initiative.” To this end, Ankit defines successful crowdsourcing in terms of participation.
Ensuring participation requires that the motives of the crowd be directly aligned with the long-term objectives of the crowdsourcing initiative. “Additionally, to promote participation the users must use and accept the technology of crowdsourcing.” Ankit draws on Heeks and Nicholson (2004), Carmel (2003) and Farrell (2006) to develop the following model.
The five peripheral factors above “affect the motive alignment of the crowd which is the prime determinant of success of the crowdsourcing initiative. It is assumed to directly affect user participation. The success of the initiative is expected to bring in more participation. Hence, the relationship between motive alignment and crowdsourcing success is bidirectional in the model.”
- Vision and Strategy: “The coherence of the initiative’s vision and strategy with the aspirations of the crowd ensures that the crowd is willing to participate in it.”
- Human Capital: The skills and abilities that the crowd possesses are a determinant of successful crowdsourcing. The more skillful and able the crowd is, “the less effort required by the crowd to make a meaningful contribution to the initiative.”
- Infrastructure: “Crowdsourcing requires abundant, reliable and cheap telephone or mobile access for its communication needs in order to ensure participation of the crowd.”
- Linkages and Trust: Crowdsourcing initiatives all involve a time or information cost for the crowd, which is why developing the trust factor is critical. Proper linkages can also “add a substantial trust aspect to the crowdsourcing initiative.”
- External environment: “The macroeconomic environment comprising of the governance support, business environment, economic environment, living environment and risk profiles are important determinants of the success of the crowdsourcing initiative.”
- Motive alignment: “Motive alignment of the crowd may be defined as the extent to which crowd is able to associate with long term objective of crowdsourcing initiative thereby encouraging its wider participation.” The table below explains how the peripheral factors affect the motive alignment of the crowd.
Ankit applies his matrix to the four case studies cited earlier. This yields the following summary:
Based on this analysis, Ankit argues that for crowdsourcing projects to succeed it is “critical that the crowd is viewed as a partner in the initiative. The needs, aspirations, motivations and incentives of the crowd to participate in the initiative must remain the most important consideration while developing the crowdsourcing initiative. The practitioners must understand the crowd motivation and align their goals according to it.” In an ideal scenario, Ankit notes that technology must be “optimally usable” without the need to provide training and assistance. Successful crowdsourcing initiatives also require an “aggressive marketing and public relations plan.”
The main question I look forward to discussing with Ankit is this: what level of crowd participation is sufficient for a crowdsourcing initiative to be deemed successful? Should this be a percentage, e.g., the share of a given population participating in the crowdsourcing project? Or should it be an absolute number? This is not an academic question. Who decides whether a crowdsourcing project is successful, and on what grounds?
WHOA!!! Way to go man!!! good stuff!!!
The issue of defining a quantifiable parameter for the success of a crowdsourcing initiative was one of the most important issues I was concerned about while writing my paper. In the paper, “a crowdsourcing initiative is defined as successful if there are sufficient members of the crowd participating in it.”
As crowdsourcing is a collaborative problem-solving mechanism and each initiative customises it to fit its social (or business) goals, the extent of participation ensuring success may vary on a case-by-case basis. I believe participation should be defined as sufficient when the crowdsourcing initiative has been able to achieve its goals, e.g., livelihood promotion in the case of txtEagle, clean water supply in the case of Peer Water Exchange, etc. However, this leads us to another question: how do we measure/quantify whether the crowdsourcing initiative has been able to achieve its goals?
Regarding your other question – I think the right to decide the success or failure of a crowdsourcing initiative lies with the crowd. Hence, the decision can be crowdsourced. If the initiative is successful the ‘collective voice of the crowd’ will definitely make it known that the initiative has been a success. Is this approach practical?
Hi Ankit, Many thanks for commenting! Looking forward to your joining our internship program next month.
Well put: “The participation should be defined as sufficient when the crowdsourcing initiative has been able to achieve its goals.” I think this raises an important question regarding organizations that deploy crowdsourcing platforms like Ushahidi: are the goals of deployment actually sufficiently well articulated (or articulated at all)? In the case of Haiti, when I called David Kobia to launch the platform 2 hours after the earthquake, I had not identified any goal(s) for doing so, it was simply a reflex, the feeling that I needed to do something. The same is true of the Chile deployment. I don’t know that our colleagues who deployed Ushahidi platforms in the Sudan, Lebanon, etc, had very clearly articulated goals for doing so, ie, goals that could then be evaluated post-deployment for impact evaluation purposes. If these goals are not defined, then people like [censored] can easily impose their own version of the goal(s) retroactively and call the project a failure even though they had bugger all to do with the project. Lame but true. Moral of the story: organizations need to develop clear M&E frameworks prior to deployment.
Well put: “The participation should be defined as sufficient when the crowdsourcing initiative has been able to achieve its goals.” I think this raises an important question regarding organizations that deploy crowdsourcing platforms like Ushahidi: are the goals of deployment actually sufficiently well articulated (or articulated at all)? In the case of Haiti, when I called David Kobia to launch the platform 2 hours after the earthquake, I had not identified any goal(s) for doing so; it was simply a reflex, the feeling that I needed to do something. The same is true of the Chile deployment. I don’t know that our colleagues who deployed Ushahidi platforms in the Sudan, Lebanon, etc., had very clearly articulated goals for doing so, i.e., goals that could then be evaluated post-deployment for impact evaluation purposes. If these goals are not defined, then people like [censored] can easily impose their own version of the goal(s) retroactively and call the project a failure even though they had bugger all to do with the project. Lame but true. Moral of the story: organizations need to develop clear M&E frameworks prior to deployment.

I also liked this idea: “The right to decide the success or failure of a crowdsourcing initiative lies with the crowd. Hence, the decision can be crowdsourced. If the initiative is successful the ‘collective voice of the crowd’ will definitely make it known that the initiative has been a success.” I spoke with a good friend of mine who is an expert in M&E, and according to her this approach is highly problematic. Apparently, this has been the way that M&E has been done for decades, and it is insufficient. It has to be more rigorous, i.e., polling is not enough. But I still like the idea in principle because it democratizes the decision as to whether a crowdsourcing project is successful or not. That said, I do see her point. We could ask everyone who used/participated in the Ushahidi-Haiti project whether they thought it was successful, but this would not necessarily mean the project had any actual impact.
Looking forward to discussing this further with you.
After having received permission from the Ayatollah, I will stick to ONE post a day starting from today. 🙂
Very interesting paper, and very interesting topic. I have two comments on this.
First comment. “Organizations need to develop clear M&E frameworks prior to deployment”.
I agree there must be an M&E system, and this is as necessary as having a clear objective when you start a project. In this way you can have a “proper management of the vision and strategy” which “primarily ensures sufficient crowd participation,” and you can also analyze the independent and dependent variables that will allow you to better understand the five peripheral factors explained in the article.
But there is a factor that needs to be taken into consideration if we speak about an M&E framework valid for evaluating crowd-sourcing projects. I will use the example of what I know, which comes from my experience with Ushahidi-Chile.
As you said Patrick, we also had no clue, when we started working on the platform, of what we wanted to do. There was no other goal than populating the map (to quote a great member of the team about this: “we had no time to think about what we wanted to do, we were mapping”). We started thinking that our goal was to help people in need, of course, but this is a pretty broad goal, and it doesn’t tell you how you will do it in practical terms. Interestingly enough, for us the goal of the work became handing the platform over to Chileans, and it was a goal that basically came up from the need to write a proposal for the grant. But this cannot be the goal of a crowd-sourcing project per se: it can be very good for donor purposes because, sadly enough, it also provides a way out (needed in the case of Ushahidi if run by universities in emergency cases). To make a long story short, the M&E framework needs to be adapted to the forms that the crowd-sourcing project takes: I don’t know the other three projects analyzed in the article (will learn about them), but in the case of Ushahidi the goal will change over time. And this will happen in emergency but also in non-emergency contexts. This is because if we start from the consideration that there must be a bi-directional relationship with the crowd, well then the crowd will change your project and your goal. And this is the good part of it: the bi-directional relationship implies an impact on both the project and the crowd, hence the project will transform parts of the goal, or all of it, over the course of the project. Does this mean that we cannot do an evaluation of the crowd-sourcing project, and fall again into the fallacy of retroactive goal imposition?
I don’t think so, but this factor should be taken into consideration when designing a successful crowd-sourcing M&E framework and deciding what the goals will be: the elasticity of the platform/system you use, meaning its ability to redefine itself according to the inputs, affects the definition of the goal and consequently the imposition of a static M&E framework. In short, we are going back here to the old problem of how to shift from emergency to development, if we want to look at it in terms of crowd-sourcing applied to emergencies, or, outside the emergency context, at the sustainability of long-term crowd-sourcing projects.
Second comment. “Crowd-sourcing has significant transformational power in the domains of collective action and content creation”
The role of infrastructure and human capital in the participation of the crowd in a crowd-sourcing project also has some interesting aspects to be considered.
The idea that pops into my mind here is that, drawing again from the bi-directional functioning of these types of projects, there is the risk of an intrusive modification of social dynamics, especially as related to traditional systems of communication and social behaviors. Again, I feel the need to pay particular attention, when designing a crowd-sourcing project, to the impact on the crowd. To again make a long story short (in my mind much longer, of course): if information is power, and we give power to the crowd, we are also automatically making a distinction within the crowd based on the availability of different resources for different people inside it, i.e. education and availability of resources, basically what Ankit calls infrastructure and human capital. I am looking here at the disparity in those two factors inside the crowd itself as an element that can heavily affect the existing equilibrium or disequilibrium in the social organization of the crowd, in a positive or negative way. How do we solve this issue? The idea of a code of conduct for SMS, for example, is interesting and seems to be a good answer to partially solve the side of the problem related to the impact of crowd-sourcing projects, but there is a broader spectrum that needs to be analyzed here. The M&E framework of a crowd-sourcing project also needs to take into consideration the unintended impacts of the project itself, and those impacts, I feel, are very hard to find and detect. We can take the real example of UNICEF and its idea to give children smartphones to ask them to report on their school attendance in Iraq. Are we getting crazy with crowd-sourcing and not considering at all the impact on the social dynamics of the areas where we implement projects? And how do we evaluate the secondary effects of a crowd-sourcing project in terms of shifts of power inside the crowd, and between the crowd and the government, for example?
Can a crowd-sourcing project be detached from its political implications?
Wow, you know the Ayatollah? D’you think he might be interested in funding Ushahidi? 😉
Awesome comment, Anahi, many thanks for taking the time to share. May I recommend that you take the above and turn it into a blog post? I really like what you had to say–particularly: “there must be a bi-directional relationship with the crowd” which means that “the crowd will change your project and your goal.” Nice. Also totally agreed on secondary effects and shift in power.
“Can a crowd-sourcing project be detached from its political implications?”
No, not if information is power and crowdsourcing shifts the flow of information. Hmmm, I think you’ve got yourself an interesting PhD dissertation question!
Pingback: Towards the definition of a Crowdsourcing M&E framework | Diary of a Crisis Mapper
Pingback: The impact of crowd-sourcing projects | Diary of a Crisis Mapper
You might want to check out Nathan’s paper on txteagle that looks at some of the data accuracy issues: http://www.txteagle.com/hcii09.pdf
Inferring ‘accuracy’ from noisy user responses. Inferring the correct answer from the responses of multiple error-prone respondents is a problem that has been addressed in detail throughout a variety of academic literature. Dawid and Skene approached the problem in 1979 when attempting to infer a patient’s history based on potentially biased reports from different clinicians. They introduce an expectation-maximization (EM) model that simultaneously estimates the bias of these different clinicians as well as the underlying latent variable, in this case the patient’s medical record. Variants of this approach have been used for a variety of other applications including linguistic annotations, image categorization, and biostatistics. While these methods generally assume that all respondents complete all of the available tasks, it is fairly trivial to adjust these models to a crowdsourcing scenario. Snow et al. employ a similar EM model to infer respondent bias in categorical data, while Sheng et al. discuss the problem of response uncertainty and methods to estimate the number of samples required to achieve a given confidence of a correct answer.
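For readers curious what the EM approach in that excerpt looks like in practice, here is a minimal sketch of the Dawid & Skene (1979) idea: alternately estimate each respondent's confusion matrix and the posterior over each task's true label. This is an illustrative toy implementation, not the code from Nathan's txteagle paper; the function name and sample data are made up for the example.

```python
# Minimal Dawid-Skene EM sketch: infer true labels from noisy respondents.
from collections import defaultdict

def dawid_skene(responses, labels, n_iter=20):
    """responses: list of (respondent, task, label) triples."""
    workers = sorted({w for w, _, _ in responses})
    tasks = sorted({t for _, t, _ in responses})

    # Initialise the label posterior T[task][label] by majority vote.
    counts = defaultdict(lambda: defaultdict(float))
    for w, t, l in responses:
        counts[t][l] += 1.0
    T = {}
    for t in tasks:
        total = sum(counts[t].values())
        T[t] = {l: counts[t][l] / total for l in labels}

    for _ in range(n_iter):
        # M-step: class priors and per-respondent confusion matrices,
        # with a small smoothing constant to avoid zero probabilities.
        priors = {l: sum(T[t][l] for t in tasks) / len(tasks) for l in labels}
        conf = {w: {j: {l: 1e-6 for l in labels} for j in labels} for w in workers}
        for w, t, l in responses:
            for j in labels:
                conf[w][j][l] += T[t][j]
        for w in workers:
            for j in labels:
                z = sum(conf[w][j].values())
                for l in labels:
                    conf[w][j][l] /= z

        # E-step: posterior over each task's true label given the
        # current confusion-matrix estimates.
        for t in tasks:
            post = {j: priors[j] for j in labels}
            for w, t2, l in responses:
                if t2 == t:
                    for j in labels:
                        post[j] *= conf[w][j][l]
            z = sum(post.values())
            T[t] = {j: post[j] / z for j in labels}
    return T

# Three error-prone respondents label two tasks; respondent "c" is noisy.
data = [("a", 1, "yes"), ("b", 1, "yes"), ("c", 1, "no"),
        ("a", 2, "no"),  ("b", 2, "no"),  ("c", 2, "no")]
T = dawid_skene(data, labels=["yes", "no"])
print(max(T[1], key=T[1].get))  # most probable label for task 1
```

Because the model learns that respondent "c" disagrees with the consensus, task 1 resolves to "yes" despite the dissenting vote; this is exactly the "simultaneously estimate the bias and the latent variable" behavior the excerpt describes.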
Pingback: Learning about Ushahidi | the hope and the hype of technology
Pingback: Think before you act! | the hope and the hype of technology
Pingback: Como fazer crowdsourcing (How to do crowdsourcing) : Ponto Media
Pingback: Seis fatores para êxito do crowdsourcing (Six factors for crowdsourcing success) | Herdeiro do Caos
Pingback: La liberación de las ideas y el software abierto (The liberation of ideas and open software) « Ricardo De León 1961's Blog
Pingback: P2P Foundation » Blog Archive » Family Educator Commons
Pingback: 如何运营一个成功的众包项目 (How to run a successful crowdsourcing project) | E惠社—专注于非营利领域互联网应用
Pingback: Top 10 Posts of 2010 | iRevolution
Pingback: The Importance of Trust « Ideavibes Blog
Pingback: Crowdsource Crowdsourcing « Social Media Class at HU
That is a really good tip particularly to those new to the blogosphere. Brief but very precise info… Many thanks for sharing this one. A must read post!
Glad that you found the blog useful. Patrick’s blog has given the paper great visibility and he summarized the paper really well. However, if you want to read the full paper, you can go to the link below and download the paper titled “Crowdsourcing Critical Success Factor Model: Strategies to harness the collective intelligence of the crowd.”
Link – http://inferringvalue.wordpress.com/other-writings-3/
Pingback: The Best of iRevolution: Four Years of Blogging | iRevolution
Pingback: Crowdfunding Critical Success Factor Model: The Case of Singapore | the hope and the hype of technology
On the M&E – Goals and Measurables:
It seems to me that making the goal more complex than it needs to be is missing the point. For example, in the Haiti deployment, the goal can be simplified to “provide a platform that bridges between the needs and the resources.” Or, even simpler, a tool that allows the crowd’s voice to be heard.
Therefore, the evaluation should come down to: did the “tool” improve the process? And to measuring the usefulness/usability of the tool in bridging the gap or providing information. The way I would interpret “The right to decide the success or failure of a crowdsourcing initiative lies with the crowd…” would be: did the community (responders or the affected community) use the tool? How did they use it? Did it contribute? And how can it be improved? Crowdsourcing is in a phase of trial and error – I urge us to ask questions and experiment.
On Anahi’s comment, specifically “Can a crowd-sourcing project be detached from its political implications?” – They can’t be detached. And therefore they do beg mindfulness. It’s NOT about the numbers, it’s about the composition of the crowd: gender, minorities, etc., and always the big question, “whose voice are we not hearing?”
Pingback: جمع سپاری؛ برون سپاری کارها به کاربران (Crowdsourcing: outsourcing tasks to users) | رسانه‌های اجتماعی
The final question posed in the article is an interesting one for rural, low-population areas (such as much of Vermont): “what level of crowd participation is sufficient for a crowd-sourcing initiative to be deemed successful?” Percentage vs. total number?
Ankit, I don’t know if you address this, but one reason government often hesitates to incorporate crowd-sourcing is the issue of “authoritative data.” The government often believes (rightly or wrongly) that it is in the business of gathering and publishing authoritative data. How can crowd-sourcing fit into this? I think it can… the US Census is arguably just a big “old school” crowd-sourcing project!? What can we learn from the US Census?
Hi, thanks for raising pertinent questions related to the research. Thinking about the level of crowd participation leads us to decide whether an initiative has been a success or not.
I think the success of an initiative depends more on whether the initiative has been able to achieve the goals it was initially set up for. In my opinion, certain initiatives may not require us to base their success on the level of crowd participation, as the level of participation required might vary from initiative to initiative. This leads me to an important point: organisations must try to set up clear M&E objectives prior to deploying a crowdsourcing initiative.
With regards to your second question – fitting crowdsourcing into government-run processes – I think it’s just a question of time. As you say, crowdsourcing has, in principle, been used since historical times (see my blog), but adopting it and using technology as its enabler will take time. Once that time passes, I am sure people (and organisations) will start trusting it.
Happy to discuss this further if you’re interested.
Link to the blog which talks about how crowdsourcing was used in 16th century India – http://inferringvalue.wordpress.com/2012/11/05/the-more-things-change-the-more-they-remain-the-same-similarities-between-social-media-led-citizen-engagement-and-citizen-engagement-in-16th-century-mughal-india/