5 Avoided Deforestation Myths: Certain Loss

November 21, 2025

Debunking the myth that deforestation in avoided-conversion carbon projects is guaranteed. An analysis of why probability modeling—not assumptions—is essential for credibility and high-integrity forest carbon credits.


Five Myths About Avoided Conversion Credits: Part 1

The Credibility Crisis and Why Deforestation Isn't Certain

Reading time: ~3 minutes

The Carbon Market's Wake-Up Call

The carbon market has a problem: many projects claim credit for stopping emissions that were never really going to happen. Major investigations by The Guardian and Bloomberg exposed what industry insiders already knew: some forest protection projects, like the infamous Kariba project in Zimbabwe, were protecting forests that weren't actually at risk.

The result? Trust collapsed. Buyers abandoned "avoided emissions" projects and rushed toward carbon removal projects instead. Companies building carbon portfolios found themselves facing deep scrutiny from executives, investors, and customers about their carbon investments. Even the Science Based Targets initiative (SBTi), a major climate standards body, excluded “Avoided Deforestation” projects as a path to net zero.

But ask those same carbon market veterans today, at the end of 2025, and you'll hear something different (at least off the record). The latest updates to the Science Based Targets standards are starting to make room for avoided emissions in a net-zero portfolio. Most sustainability leaders admit that carbon removal alone can't scale fast enough to meet our climate needs. When done right, nature-based avoidance projects can deliver more climate impact per dollar than many removal projects. The key phrase: when done right.

The core challenge: proving the counterfactual. What would actually happen to this forest if your carbon project didn't exist?

This is the additionality question, and it's harder to answer than it sounds.

Myth #1: Deforestation is Certain (Probability = 100%)

Why This Myth Exists

Many avoided-conversion carbon protocols (Grasslands, ACR Forests, CAR Forests, City Forest), when implemented without modification, generally work like this:

  1. Identify the "highest and best use" of the land (logging, farmland, development), ideally securing documented deforestation plans
  2. Calculate expected emissions if that activity happens
  3. Use those emissions as your baseline
  4. Get paid to prevent them (or wait a decade to be credited as a dynamic baseline pans out)

Sounds logical. If the land would obviously be logged or developed without intervention, preventing that activity creates real value. This approach has merit—after all, market forces and economic pressures are real. Many forests genuinely face conversion pressure.

The problem? This formula assumes the baseline scenario will definitely happen. It treats probability as 100%, even though the deforestation activity hasn't occurred yet.
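
A minimal sketch of that formula with the hidden assumption made explicit. The variable names and figures are hypothetical, not drawn from any protocol:

```python
# Naive baseline crediting: the conversion scenario is treated as certain.
# All numbers are illustrative, not from any real protocol or project.

carbon_stock_tco2e = 50_000   # carbon stored on the project site, in tCO2e
emitted_if_converted = 0.85   # fraction released if the site is logged/developed

# Steps 2-3 above: expected emissions if the conversion activity happens
baseline_emissions = carbon_stock_tco2e * emitted_if_converted

# The hidden assumption: the probability of conversion is taken as 100%
p_conversion = 1.0
credited_avoided_emissions = p_conversion * baseline_emissions

print(f"Credited avoided emissions: {credited_avoided_emissions:,.0f} tCO2e")
# -> 42,500 tCO2e, every tonne of which presumes conversion was certain
```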

A naive carbon accumulation graph for a proposed carbon project site. The mistake is assuming the baseline scenario that produces the red line is 100% likely. As a result, the total expected net emissions estimate overstates reality.

Why This Myth Fails

Here's a thought experiment: If deforestation were truly certain, why hasn't it happened already?

When you track a dynamic baseline of sites (similar forests that didn't get carbon project protection) over 40 years, reality shows up. Some get logged or developed. But many don't. If only 1 out of 3 baseline sites actually experiences development, the true probability was only 33%—not 100%.

The math problem: If you bought carbon credits assuming 100% probability, but the real probability was 33%, then two-thirds of your credits are worthless.
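
The same arithmetic, reusing the hypothetical figures from the sketch above:

```python
# Over-crediting arithmetic for the scenario described above.
issued_at_assumed_p = 42_500   # credits issued assuming p = 1.0
realized_p = 1 / 3             # only 1 of 3 comparable baseline sites converted

justified = issued_at_assumed_p * realized_p
overstated = issued_at_assumed_p - justified

print(f"Justified credits:  {justified:,.0f} tCO2e")
print(f"Overstated credits: {overstated:,.0f} tCO2e "
      f"({overstated / issued_at_assumed_p:.0%} of issuance)")
# -> roughly two-thirds of the issued credits had no real-world counterpart
```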

Verra's traditional REDD+ protocol actually prescribes an approach to modeling this probability, which it calls the "Likelihood of Deforestation": apply the deforestation rate of the trailing 5 years from a suitable representative sample of sites. However, the single biggest structural vulnerability in Verra's system has been manipulation of that “suitable representative sample” (the reference region). This is widely recognized in independent academic reviews and was one of the main reasons Verra is replacing these methodologies with risk-map-based baselines (VM0048).
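
As a loose illustration of that trailing-rate idea (not Verra's actual calculation; the reference-region areas below are invented):

```python
# Estimate a "likelihood of deforestation" from a reference region's
# trailing 5-year history. Purely illustrative; area figures are made up.

reference_forest_ha = {2020: 100_000, 2021: 98_400, 2022: 96_900,
                       2023: 95_700, 2024: 94_600, 2025: 93_800}

years = sorted(reference_forest_ha)
start = reference_forest_ha[years[0]]
end = reference_forest_ha[years[-1]]

# Compound average annual deforestation rate over the trailing window
annual_rate = 1 - (end / start) ** (1 / (len(years) - 1))
print(f"Trailing annual deforestation rate: {annual_rate:.2%}")  # ~1.27%

# The manipulation risk described above: pick a reference region with a
# higher historical rate, and the projected baseline (and credits) inflate.
```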

Probability of a deforestation event is only 100% when it happens.

What Ideal Carbon Projects Should Do

A credible project should seek to model the dynamic baseline before the project starts. That means:

  1. Calculate probability - Estimate the probability of deforestation in the project area from the percentage of baseline sites where deforestation is expected to occur over the project period
  2. Use dynamic baselines - Monitor baseline sites to measure real-world conversion rates
  3. Apply conservative estimates - When uncertain, err on the side of lower probability
  4. Create risk cohorts - Group sites by similar characteristics and risk exposure
  5. Show the math - Show which sites are in the baseline before the project starts, and provide data-driven justification about how sites were selected

Think of it like weather forecasting. Saying "it will rain" is different from "70% chance of rain." Every carbon project should provide a probability as context.

The carbon project should predict how the dynamic baseline will perform and issue credits accordingly.
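
A minimal sketch of how those pieces fit together, combining risk cohorts with empirically observed conversion rates from a dynamic baseline. The cohort names, counts, and carbon figures are all hypothetical:

```python
# Probability-weighted crediting from a dynamic baseline, split by risk cohort.
# All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class RiskCohort:
    name: str
    baseline_sites: int              # matched, unprotected sites being monitored
    converted_sites: int             # how many actually got logged/developed
    project_emissions_tco2e: float   # emissions if this part of the project converted

cohorts = [
    RiskCohort("road-adjacent", baseline_sites=60,  converted_sites=21, project_emissions_tco2e=18_000),
    RiskCohort("remote",        baseline_sites=140, converted_sites=14, project_emissions_tco2e=24_500),
]

total_credits = 0.0
for c in cohorts:
    p = c.converted_sites / c.baseline_sites  # empirical conversion probability
    credits = p * c.project_emissions_tco2e   # conservative, probability-weighted
    total_credits += credits
    print(f"{c.name}: p = {p:.0%}, credits = {credits:,.0f} tCO2e")

print(f"Total probability-weighted credits: {total_credits:,.0f} tCO2e")
```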

What questions should I ask a carbon ratings company or project developer?

How do you calculate the probability of the baseline scenario occurring?

🔍What to look for: Data-driven statistical models with documented methodology, not assumptions or landowner intentions.

🚩Red flag: Statements like "The landowner has drafted plans to develop this site" without probability modeling.

✅Best practice: Projects should use binary classification models, matched site comparisons, or peer-reviewed threat assessment models with quantified probability ranges (e.g., "15-22% deforestation probability based on 500 matched sites over 10 years").
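
One way to produce that kind of quantified range is a simple binomial confidence interval over matched-site outcomes. A sketch with hypothetical counts chosen to echo the example above:

```python
# Turn matched-site outcomes into a probability estimate with a 95% CI,
# using a normal-approximation interval. Counts are hypothetical.
import math

n_sites = 500      # matched reference sites observed over 10 years
n_converted = 92   # sites that actually experienced deforestation

p_hat = n_converted / n_sites
se = math.sqrt(p_hat * (1 - p_hat) / n_sites)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"Estimated deforestation probability: {p_hat:.1%} "
      f"(95% CI {low:.1%}-{high:.1%})")
# -> 18.4% (95% CI roughly 15%-22%)
```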

What percentage of baseline sites actually experience the activity you're preventing?

🔍What to look for: Specific percentages with sample sizes and observation periods (e.g., "18% of 200 reference sites experienced conversion over 5 years").

🚩Red flag: No empirical tracking of similar sites over time, or inability to provide historical data on reference site outcomes.

✅Best practice: Dynamic baselines that monitor control sites throughout the project period and adjust crediting accordingly.

How is baseline site data distributed over the project period?

🔍What to look for: Regular monitoring schedules, transparent methodology updates, and temporal distribution of baseline events showing when activities occur.

🚩Red flag: Retroactive baseline creation, cherry-picked reference sites, or baselines calculated only at project start without ongoing validation.

✅Best practice: Annual or biannual monitoring reports showing baseline site conditions, with methodology adjustments documented and credit calculations updated accordingly.

How do you account for baseline sites where the threat never materializes?

🔍What to look for: Probability adjustments in credit calculations (e.g., "Credits discounted by 40% to reflect sites where no conversion occurs").

🚩Red flag: All credits assume 100% certainty that the threat would have occurred without the project.

✅Best practice: Conservative crediting that applies empirically derived probability factors, ensuring credits represent only actual avoided emissions rather than worst-case scenarios.

What methodology did you use to identify and select reference sites?

🔍What to look for: Data-driven selection criteria (forest type, accessibility, land tenure, conversion pressure) documented before the project starts, with the full list of candidate reference sites disclosed.

🚩Red flag: Reference sites selected after the project begins, or a reference region chosen to maximize the historical deforestation rate.

This is Part 1 of a 5-part series examining common mistakes with additionality calculations for avoided emissions projects like Avoided Deforestation. Next: Black Box Baselines