Payments for Ecosystem Services (PES) have emerged as a potential tool to achieve conservation in diverse socio-economic and cultural contexts. Empirical research with examples from the global North and South is booming. Many studies focus on how payments translate into additional service provision (i.e. effectiveness) and/or whether they benefit targeted land users without undermining their and others’ livelihoods (i.e. equity, benefit sharing). Some have also started to analyze whether PES are crowding out existing conservation motivations and resulting in “pay me or I burn” types of behavior. These efforts are laudable and interesting. However, a troubling thought has been growing in me since I started studying PES more than ten years ago: does it matter what “researchers” say or propose for PES improvement? Are there limits to what implementers can learn and to what research can actually offer?
I have seen how well-intentioned implementers have been caught up in administrative, financing and service accounting issues, and how understaffing and a “rush toward service provision” result in insufficient understanding of, and unsustained engagement with, targeted land users. In these cases, I found that, despite the rhetoric, the main reason behind such outcomes was that so-called “service buyers” were more interested in the environmental performance of their payments than in their equity implications. Implementers would not bother much either; “we prefer not to mess with local politics”, they would often say… In government-led programs, I found that implementers quickly adopted the ecosystem services and land-user targeting recommended by scientists to improve additionality. However, programs focused more on increasing PES coverage than on analyzing social and environmental consequences over time and space. An example of these issues is Mexico, where science-policy interaction has been effective in program design but haphazard when it comes to monitoring ecosystems and social panel data.
This uneasy feeling I had about “nobody listening”, or that PES learning could be too slow to avoid unintended consequences for ecosystems and people’s livelihoods, was reinforced, for example, by Suiseeya and Caplow’s recent article in Global Environmental Change (2013, in press). Their findings suggest that, despite good intentions, carbon offset project design often fails to meet standards-based social justice criteria. This resonates with findings from evaluations of ICDP programs, CDM projects, etc. Will the adoption of guidelines and standards therefore be sufficient to guarantee positive social outcomes of PES projects? Researchers may also be blamed for not getting PES design and the local context right. Many of us fail to engage over time with a given project, to “get dirty” at the village level (beyond a few weeks or months), due to time and administrative constraints.
PES research may thus simply not be useful if “doers” are not listening, if “service buyers” are not committed to paying the costs of evaluating PES in the long term, and if donors do not support PES action research beyond three- or four-year periods. This makes me wonder whether I should continue caring about PES evaluation studies… May a kind reader please act as a psychologist and respond with evidence for hope? Or may someone instead reaffirm what I said and suggest I take an extended vacation? Guidance is appreciated.
(This reflection has previously appeared online in the Sinergia e-bulletin)