The Case for Capacity Development in Community-Based Conservation Efforts

Capacity Building for Community-Based Conservation

Village Earth community mapping workshop. Ucayali Region – Peru

In the protection and management of natural resources, it is widely agreed that human social, political, cultural and economic systems must be part of the equation (Berkes 2004; Kates et al. 2001; Gunderson & Holling 2002). In recent years, community-based and collaborative conservation have been increasingly recognized as alternatives to the dominant paradigm of top-down, expert-driven management (Berkes 2002). However, the literature suggests that collaborative and community-based conservation efforts should be cautious about moving forward too quickly, since low levels of organizational capacity at the community level may pose a challenge to rapidly developing institutions capable of managing complex natural systems (Barrett 2001). This is especially true in collaborative efforts that involve multiple stakeholders (Berkes 2004). In such efforts, taking into account historical and contemporary relationships of power among stakeholders will help ensure greater equity as well as the promotion of local livelihoods and sustainability (Gruber 2006; Berkes 2004; Brosius and Russell 2003). The literature also suggests that special emphasis should be placed on empowerment at the community level, especially with traditionally marginalized groups. According to Agrawal and Gibson (1999):

“local groups are usually the least powerful among the different parties interested in conservation. Community-based conservation requires, therefore, that its advocates make more strenuous efforts to channel greater authority and power toward local groups. Only then can such groups form effective checks against arbitrary actions by governments and other actors.”

If, as Agrawal suggests, empowerment at the local level should be our priority, where do we start? Brown (2002) has identified a set of internal and external challenges that community-based organizations face. Externally, community-based organizations are challenged by a lack of legitimacy and accountability with the general public; relating with institutions of the state, such as government agencies; relating with institutions of the market, such as businesses; and relating with international actors, such as development agencies that provide funding support. Internally, they face challenges of amateurism, restricted focus, material scarcity, fragmentation, and paternalism. However, efforts by governments and NGOs to build the capacity of community-based organizations without destroying what makes them so unique in the first place (e.g., their local focus and their spirit of volunteerism and solidarity) are not easy (Powers, 2002; Brown, 1989). Powers et al. (2002) offer the following advice:

“We believe it may be most effective if INGOs go beyond decentralising their operations and cease being operational in the field. This can be done by forging ties with autonomous local NGO’s which have a proven commitment and track record in handing over controls in the development process to the communities where they are working. To the degree that terms for partnership can be negotiated equitably, the imperative for standardised and impersonal mass reproduction of one strategy, which ironically is often only magnified (rather than adapted) in the process of decentralization, can be significantly curtailed.”

According to Berkes (2004), the specific approaches to building the capacity of community-based conservation organizations are a current area of interest for the conservation community. Furthermore, the success of community-based natural resource management has led to an explosion in support from international agencies and, subsequently, in the number of new local natural resource management organizations (Gruber, 2010; Armitage 2005). According to Gruber (2010):

“[w]hile CBNRM has proven to be a successful model in numerous cases, this approach may be outpacing a critical analysis of the key characteristics of effective community based environmental initiatives which can ensure long-term successful and sustainable programs in a variety of settings.”

Village Earth offers several courses focused on supporting community-based conservation efforts, including: Participatory Water Resource Management, Building Climate Change Resilient Communities, Community Participation and Dispute Resolution, & Agroecology for Sustainable Communities.

References Cited

Agrawal, A., and C.C. Gibson. 1999. Enchantment and disenchantment: the role of the community in natural resource conservation. World Development 27:629-649

Armitage, D. 2005. Adaptive Capacity and Community-Based Natural Resource Management. Environmental Management 35:703-715

Barrett, C.B., K. Brandon, C. Gibson, and H. Gjertsen. 2001. Conserving tropical biodiversity amid weak institutions. BioScience 51:497-502

Berkes, F. 2004. Rethinking community-based conservation. Conservation Biology 18(3):621-630

Berkes, F. 2002. Cross-scale institutional linkages: perspectives from the bottom up. Pages 293-321 in E. Ostrom, T. Dietz, N. Dolsak, P.C. Stern, S. Stonich, and E.U. Weber, editors. The drama of the commons. National Academy Press. Washington, D.C.

Brosius, J.P. and D. Russell. 2003 Conservation from above: an anthropological perspective on transboundary protected areas and ecoregional planning. Journal of Sustainable Forestry 17 (1/2):39-65

Gruber, J.S. 2010. Key principles of community-based natural resource management: a synthesis and interpretation of identified effective approaches for managing the commons. Environmental Management 45:52-66

Kates, R.W., et al. 2001. Sustainability science. Science 292:641-642

A Brief History of International Development Theories and Practices

History of Development Theories

What is “Development”? Where did this concept come from? Why do community workers need to understand this history?

For Americans, development is always something going on somewhere else. We don’t usually hear much about it, and what little we do know often comes from movies or books where we are placed as the protagonists – the actors – in development: the “Visionary Peace Corps Volunteer” or the “Aid Worker Who Struggles Against Impossible Odds.” Even though these images may not be as pervasive in American culture as, say, the cowboy, the athlete, or the rock star, they have nonetheless become part of the pantheon of heroes in our national mythology.

Although a large number of Americans can identify with this mythology in some way or another, we often lack an understanding of the deeper political, social and cultural dynamics behind it and where it comes from.

So, to answer my own question, “Why do we need to understand the history of development?” I would argue that it is to better understand how we are perceived by others as the so-called actors in development and, at the same time, so we can begin to unpack these perceptions and become more self-aware.

Where do we start? While the notion of “Development” per se isn’t necessarily something new, our modern conception of development was really shaped after the end of World War II, in particular by the Truman Doctrine, which became the defining doctrine of US foreign policy until the collapse of the Soviet Union in 1991. The Truman Doctrine was a response to the concentration of global power into two heavily armed superpowers: the United States and its allies, and the Soviet Union and its allies.

The roots of the Truman Doctrine were fear: fear that if the Soviets invaded Europe, which was in shambles after the war, their power would spread and pose an existential threat to the United States. This was a lesson learned from the First World War and the rise of Nazism. This idea was later referred to as the Domino Theory: if the Soviets took over one country, they would soon take over others. In response, the Truman Doctrine sought to stop this spread through a policy referred to as Containment.

As part of the strategy to contain the spread of the Soviets, the Truman Doctrine included another policy to help Europe regain its strength through an active program of development referred to as the Marshall Plan. The plan cost nearly $13 billion from 1948 to 1951, the majority of which went to purchase goods from the United States: $3.4 billion was spent on imports of raw materials and semi-manufactured products; $3.2 billion on food, feed, and fertilizer; $1.9 billion on machines, vehicles, and equipment; and $1.6 billion on fuel. Only $1.2 billion was in the form of loans.

The invasion of South Korea by the Communist North signaled the end of the Marshall Plan and ushered in a new era for the containment doctrine. The United States became more focused on rebuilding the military strength of its European allies and on the internationalization of its foreign development assistance. The connection between foreign assistance and imperial strength was not just a strategy of the United States. In 1947 the Soviet Union launched its own version of the Marshall Plan, referred to as the Molotov Plan (named after the Soviet foreign minister). Later, in 1949, the Soviets founded COMECON, or the Council for Mutual Economic Assistance, which became the Soviets’ foreign assistance agency.

The Marshall Plan formally ended in 1951 with the Mutual Security Act. It was replaced by the Mutual Security Agency and later the Foreign Operations Administration, which consolidated US foreign assistance. Its purpose was to “centralize all governmental operations, as distinguished from policy formulation, that had as their purpose the cooperative development of economic and military strength among the nations of the free world.”

Traditionally, when we think of the Cold War, we think of the nuclear arms race and proxy wars in places like Korea, Vietnam, and Nicaragua, but “development” was also a widely used tool in the arsenals of the U.S. and its allies as well as the Soviet Union and its allies. Of course, they just had different visions of what development meant, but ultimately both had their own strategic and economic interests at the forefront. In 1961, our Department of Defense sought to promote this strategy to the American public with the film “The Challenge of Ideas.”


So the modern conception of Development really had its origins in the post-war period as part of a larger strategy to thwart the expansion of communism but also to promote the interests of the U.S. and its allies. Prior to WWII, poverty in the global south was not a concern of the north. During the colonial era, poverty was understood more in racial terms. According to the anthropologist Arturo Escobar:

“In colonial times, the concern with poverty was conditioned by the belief that even if the “natives” could be somewhat enlightened by the presence of the colonizer, not much could be done about their poverty because their economic development was pointless. The natives’ capacity for science and technology, the basis for economic progress, was seen as nil.”

This all changed after World War II. For the first time, the destinies of the rich countries now seemed inextricably intertwined with those of the poor ones. This, combined with expanded faith in the ability of technology and social engineering to solve humanity’s long-enduring problems like disease and malnutrition, became the foundation of the modern era of development. Simultaneously there was a breakdown in the old colonial system and a shift towards a more “Developmentalist” approach to managing tensions arising in the colonies.



For the U.S. and its allies, the problem in the south seemed pretty simple, especially after its success with the rather rapid reconstruction of Europe, where, less than five years later, many of the European economies were back up and running at their pre-war levels. From this experience, it seemed clear that poor countries must be poor because they hadn’t yet developed the technological and social infrastructure for industrial development like the United States and Europe. Thus, the solution would be to give these countries a little push to help “leapfrog” them from primitive to industrialized capitalist economies.

This strategy, referred to as Modernization Theory, was later crystallized by Walt Rostow, an advisor to both Kennedy and Johnson. His seminal 1960 book, “The Stages of Economic Growth: A Non-Communist Manifesto,” was clearly positioned as an alternative development strategy to the Soviet model. In it, Rostow laid out the blueprint for a country’s “Modernization”: countries needed to pass through five stages in order to become developed. The role of Development was to help usher this along.

The first stage was Traditional Society, which had three characteristics: 1. subsistence agriculture or hunting & gathering (an almost wholly “primary” sector economy); 2. limited technology; and 3. a static or “rigid” society with a lack of class or individual economic mobility, wherein stability is prioritized and change is seen negatively.

The second stage was Pre-conditions to “Take-off.” This was characterized by 1. external demand for raw materials initiates economic change; 2. development of more productive, commercial agriculture & cash crops that were not consumed by producers and/or largely exported; 3. widespread and enhanced investment in changes to the physical environment to expand production (i.e. irrigation, canals, ports); 4. increasing spread of technology & advances in existing technologies; 5. changing social structure, with previous social equilibrium now in flux; 6. individual social mobility begins; and 7. the development of national identity and shared economic interests.

“Take-off” was stage three, when 1. manufacturing begins to rationalize and scale increases in a few leading industries as goods are made both for export and domestic consumption; 2. the “secondary” (goods-producing) sector expands and the ratio of secondary vs. primary sectors in the economy shifts quickly towards secondary; and 3. textiles & apparel are usually the first “take-off” industry, as happened in Great Britain’s classic “Industrial Revolution.”

Stage four was the “Drive to Maturity,” characterized by 1. the diversification of the industrial base, with multiple industries expanding & new ones taking root quickly; 2. manufacturing shifting from investment-driven (capital goods) towards consumer durables & domestic consumption; 3. rapid development of transportation infrastructure; and 4. large-scale investment in social infrastructure (schools, universities, hospitals, etc.).

The fifth and final stage was named the “Age of Mass Consumption” when 1. the industrial base dominates the economy and the primary sector is of greatly diminished weight in economy & society; 2. we see the widespread and normative consumption of high-value consumer goods (e.g. automobiles); and 3. consumers typically (if not universally), have disposable income, beyond all basic needs, for additional goods.

The focus of Modernization is on particular countries and assessing which stage they are in. Modernization is where the terminology of First World, Second World, and Third World comes from: the First World being the free-market industrialized countries, the Third World being the so-called non-industrialized or developing countries, and the Second World being the less industrialized Communist countries. The strategy of modernization is to help countries progress through the stages of growth. The primary measure of this growth was GDP (Gross Domestic Product – the value of all goods and services generated within a particular country).

Rostow’s “Stages of Growth” was pretty much the principle behind much of US foreign assistance until the 1980s, when Ronald Reagan took a more hard-line approach towards Soviet expansion. He expanded the support of anti-communist opposition with cash and arms, and depressed Soviet commodities on the global market by flooding it with highly subsidized U.S. commodities. During this period, aid was primarily used to influence countries to open their markets to artificially cheap US commodities.



By the 1960s another theory of Development started to gain traction, not necessarily among U.S. and European policymakers but rather among an emerging group of Third World scholars. It was called Dependency Theory and had its roots in nationalist thinking in India at the turn of the century. It gained traction as the promise of Modernization seemed less and less achievable, and as many in the Third World began to realize that this so-called “aid” from the rich countries came with a price. In many cases, the price was the loss of control over their economies and political systems.

Dependency Theory challenged the very premise of Modernization Theory, arguing that poverty in the south was NOT because southern cultures were primitive and inherently non-scientific, or because their economic systems were backward; rather, these scholars argued that if you want to understand poverty in the south, you have to analyze the south’s colonial and neo-colonial relationships with core countries. They argued that these relationships explain not only the great poverty in the south but also the great wealth in the north: the rich countries got rich in the first place by exploiting the wealth and labor of poor countries, and the new “development policies” and foreign investment were just a new form of colonization, or “neo-colonialism.”

Rather than maintaining a colony, the United States or Britain were now using their economic influence and often-clandestine programs to manipulate elections and install puppet governments. Rather than controlling the economies of the South directly from London, Paris, or Washington, these governments of the North were now using the IMF and World Bank to entrap the South in debt and impose structural adjustments. Rather than taking people as slaves, First World practices were pushing them off their lands and giving them no alternative but to move to the overcrowded urban areas and work in sweatshops.

A stark example can be found in Haiti. Decades of manipulation of its economic and political system, including a U.S.-backed coup of a democratically elected president, destroyed the country’s ability to feed itself and forced millions of people to move into overcrowded and poorly built slums in search of work. This created a catastrophe in 2010 when a 7.0 magnitude earthquake leveled the slums and left millions of people without access to food and shelter.


Where Modernization focuses on individual countries and the “internal” constraints to their “modernization,” Dependency Theory focuses on long-term colonial and neo-colonial relationships between poor countries (now referred to as the periphery) and rich countries (now referred to as the core). The terminology refers to the flows of resources, labor and wealth from periphery to core.

“Underdevelopment” is now used by dependency theorists in place of “Developing” or “Third World” to describe the outcome of these relationships, and to convey that poor countries were intentionally “underdeveloped” to facilitate the extraction of resources and labor. In terms of strategy, in this line of thinking, if the problem is the extractive relationship between core and periphery, then the solution would be to cut those relationships through revolution or “de-linking.” De-linking might include nationalizing foreign-owned companies, refusing loans or other trade agreements, and building local self-reliance. As part of this strategy, underdeveloped countries have historically created policies to protect their industries, for example by subsidizing them and/or imposing tariffs on imported goods. They might also choose a path of import substitution, where countries analyze what products or commodities are being imported and start supporting industries that can produce them locally. These have all been strategies employed by countries such as Brazil, South Korea, and, at a more extreme level, Venezuela and Bolivia.



In 1962, Rachel Carson wrote the book “Silent Spring,” which focused on pollution and pesticides in the United States and eventually led to the banning of DDT. Then came two pivotal events: the first pictures sent back from the Apollo missions, depicting the earth as a tiny blue dot floating in space, a single planet lacking geo-political boundaries, and the 1973 Oil Crisis. Together, these planted the seeds for the modern environmental movement.

For the first time, many people started to think about the finite nature and abuses of the earth’s resources. Neither Modernization nor Dependency theorists really took the environmental issue seriously. The 1972 United Nations Conference on the Human Environment, held in Stockholm, was the first real international conference to address these issues. It did not, however, address the gross inequalities in consumption and pollution between industrialized and non-industrialized countries. In April 1987, the Brundtland Commission, as it came to be known, published its groundbreaking report, “Our Common Future,” which introduced the concept of sustainable development into the public discourse. It defined sustainable development in terms of both protecting resources and ensuring equality in distribution.

“Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs.”

“A world in which poverty and inequity are endemic will always be prone to ecological and other crises. … Sustainable development requires that societies meet human needs both by increasing productive potential and by ensuring equitable opportunities for all.”

According to the United Nations: “The wide-ranging recommendations made by the Commission led directly to the holding of the United Nations Conference on Environment and Development, which placed the issue squarely on the public agenda in a way it had never been before.  Meeting in Rio de Janeiro, in 1992, the “Earth Summit”, as it came to be known, adopted its “Agenda 21”, a blueprint for the protection of our planet and its sustainable development.

Agenda 21 represented the culmination of two decades of focused attention, which began with that 1972 United Nations Conference on the Human Environment. Based on its conclusions, the UN Environment Programme (UNEP) was created to become the world’s leading environmental agency. By 1992, the link between environment and development, and the imperative need for sustainable development, was seen and recognized worldwide.

In Agenda 21, governments outlined a detailed blueprint for action that could move the world away from its present unsustainable model of economic growth towards activities that would protect and renew the environmental resources on which development depends. Areas for action included: protecting the atmosphere; combating deforestation, soil loss and desertification; preventing air and water pollution; halting the depletion of fish stocks; and promoting the safe management of toxic wastes.

But Agenda 21 went beyond these purely environmental issues and addressed patterns of development causing stress to the environment. These included: poverty and external debt in developing countries; unsustainable patterns of production and consumption; demographic stress; and the structure of the international economy. The action program also recommended ways to strengthen the part played by major groups — women, trade unions, farmers, children and young people, indigenous peoples, the scientific community, local authorities, business, industry and NGOs — in achieving sustainable development.”

According to a 2010 report from the UN Environment Programme, under a business-as-usual scenario, two planets would be required by 2030 to support the world’s population. This assumes a continued unequal world with 15% of the population using 50% of the resources. The World Wildlife Fund (WWF) estimates that three planets would be needed now if every citizen adopted the UK lifestyle, and five planets if they adopted the average North American lifestyle. This poses a serious challenge to conventional thinking about development.

These statistics raise the question: Is it even possible, without major advances in technology, to raise the standard of living of everyone on the planet to that of the so-called developed countries? Do we need to redefine what it means to be “developed”?



At this time, I would like to shift gears slightly and consider another, often marginalized perspective in the development discourse. That is, the roles and perspectives of indigenous peoples.

Today, there are nearly 370 million people classified as Indigenous Peoples. While there is no universally accepted definition, indigenous peoples are generally defined as ethnic groups with historical ties to groups that existed in a territory prior to colonization or the formation of a nation state. They have also generally preserved a degree of cultural and political separation from the mainstream culture and political system of the nation state within whose borders they are located.

Today, as historically, they are among the poorest and most vulnerable sectors of global society. Traditionally they have been ignored by both the Modernization and Dependency discourses; while the two are largely contrary theories, both are based on the modern conception of the Nation State.

In the historical creation of State boundaries outside of Europe during the colonial period, those boundaries rarely took into account the social and political organization of the indigenous inhabitants. Thus, State boundaries and associated laws were, by all means, superimposed upon indigenous peoples. These groups, more or less, resisted this imposition, but were often forced to flee to environments the colonizers weren’t willing to occupy. As resources become harder and harder to find, indigenous peoples are pushed further to the margins environmentally, politically and socially.

In terms of development, indigenous peoples have traditionally been viewed as obstacles, or “in the way” of progress. They fit into neither Socialist nor Capitalist notions of development, and they traditionally do not pay taxes because they rely on production for consumption rather than production for cash. Because of this, the primary project for Capitalist and Socialist States alike has been the destruction and/or assimilation of indigenous peoples.

While they have historically resisted colonization, the modern Indigenous Rights Movement has its origins in the 1970s, growing in parallel with democracy movements around the globe. Later, in the 1970s and 1980s, indigenous peoples were also co-opted by the environmental movement, used as a symbol of the Noble Savage who doesn’t litter or who protects the Amazon rainforest. This partnership between environmentalists and indigenous peoples was often at odds, since Western environmentalists originally saw protection of the environment as separating humans from it, a practice that often further marginalizes indigenous peoples from the resources upon which they have historically relied.

Today the Indigenous Rights Movement has served as a model for decentralized movements, influencing the anti-globalization movement and, more recently, the Arab Spring. It has been theorized as injecting new ideas into a global system that can’t be saved by Capitalism or Communism. Indigenous peoples have lived close to the earth and have developed unique social and economic systems that have endured since time immemorial. Quite possibly they hold the answer, or at least part of it, for how humans can live more sustainably and equitably.

Lastly, I would like to explore some of the most recent trends in the Development Discourse. The first is Globalization; later we will look at Development in a post-9/11 world.

Globalization is theorized by some to have had its origins in 1492, when Columbus finally connected the eastern and western hemispheres into one interlocking, capitalist global system, a process begun centuries before. Others see it as more of a phenomenon arising after the collapse of the Soviet Union, which allowed for a rapid expansion of the Western model. Still others would argue that its origins are technological, with the advent of the airplane, the telephone, and later the Internet and more flexible production systems.

Regardless, it is generally agreed that globalization has meant a lessening of the importance of the State in the global political economy and the rise of corporations as the dominant actors. Lower-cost communication and travel, combined with more flexible production systems, including global outsourcing and lower-cost containerized shipping, have meant that corporations and capital are no longer bound by place or loyal to a particular country.

The theory of flexible accumulation advanced by David Harvey argues that capital will flow into areas of greatest flexibility: areas where the labor force can quickly adapt to the demands of production, where environmental, health, and safety regulations are the most lenient, where assembly lines can be quickly retooled for new products, etc. The ability of firms to quickly outsource production to Manila, Bangkok or Haiti means that firms are no longer bound by the regulations of the State. To the contrary, their flexibility allows corporations to pit country against country in a bidding war for the lowest regulations, the best tax incentives, the lowest work standards, etc. If workers organize or the State decides to impose more regulations, these firms can quickly pack up shop and relocate. By the 1990s, “Development” had become the mantra of “free trade,” “trade liberalization” and the creation of free trade zones, and it was this thinking that led to trade agreements like NAFTA and the FTAA, agreements largely drafted by corporate lobbying groups.

Capitalist-driven globalization has encountered resistance on many fronts: organized labor fighting to protect work standards, pensions, and safety standards; environmentalists fighting to maintain or establish global environmental standards; indigenous peoples fighting to protect their resources and way of life; and women and children, who are disproportionately affected by globalization, as they are largely the ones working on the assembly lines or most affected by the loss of resources.

Finally, I would like to look at the post-9/11 world. For the most part, this period has been defined by the US-led War on Terror, the wars in Iraq and Afghanistan and, more recently, the Arab Spring. In many ways, the attacks of 9/11 reinvigorated U.S. foreign policy. With the loss of the Soviets as justification for U.S. imperial pursuits, 9/11 opened up a path for what Bush and the neo-conservative members of his administration called “The New American Century.” According to a Foreign Affairs article written by the then newly appointed Secretary of Defense, Donald Rumsfeld, the goal of the project was to strengthen U.S. hegemony around the globe by restructuring the military away from a heavy Cold War-era force to a lighter, more mobile force that could respond to asymmetrical threats like terrorists, by strengthening our alliances, and by more directly confronting enemy states, what Bush referred to as the “Axis of Evil.” According to a report by the Brookings Institution and the independent development watchdog DARA, during this time NGOs either served a greater role in carrying out U.S. foreign policy through contracting or were scrutinized more closely for their possible ties to terrorists. Aid work also increasingly shifted from agencies like USAID and NGOs to the military, making aid more directly an instrument of the foreign policy of governments.

Ideas discussed in this post are also explored in the Village Earth online course Approaches to Community Development, part of our online Certificate Program in Sustainable Community Development.