
Climate Futures FAQ

What is gridded climate data?

Gridded data is a spatial data format consisting of a matrix of cells organized into rows and columns, where each cell contains a value for a point on a two-dimensional surface. Gridded climate datasets interpolate climate station data spatially and temporally, allowing long-term analysis in areas where historical station data may not exist or may be unreliable. Because gridded climate data are evaluated and cleaned, they support analyses for all parks, even those where stations are not present.
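As a rough illustration of the idea only (not the algorithm behind any particular gridded product, which uses climatologically aided interpolation and more careful station weighting), the sketch below interpolates a few hypothetical station values onto a regular grid of rows and columns using simple inverse-distance weighting; all coordinates and values are invented.

```python
# Illustrative sketch only: real gridded products use climatologically aided
# interpolation and station weighting, not this simple inverse-distance scheme.
import numpy as np

# Hypothetical station observations: (lon, lat, monthly mean temperature in degC)
stations = np.array([
    [-110.50, 44.60, 10.2],
    [-110.80, 44.90,  8.7],
    [-110.20, 44.75,  9.5],
])

# Regular 0.05-degree (~5 km) grid covering a hypothetical area of interest
lons = np.arange(-111.0, -110.0, 0.05)
lats = np.arange(44.4, 45.2, 0.05)
grid_lon, grid_lat = np.meshgrid(lons, lats)

def idw(grid_lon, grid_lat, stations, power=2.0):
    """Inverse-distance-weighted interpolation of station values onto a grid."""
    grid = np.zeros_like(grid_lon)
    weight_sum = np.zeros_like(grid_lon)
    for lon, lat, value in stations:
        dist = np.hypot(grid_lon - lon, grid_lat - lat) + 1e-6  # avoid divide-by-zero
        w = 1.0 / dist**power
        grid += w * value
        weight_sum += w
    return grid / weight_sum

temperature_grid = idw(grid_lon, grid_lat, stations)  # rows x columns of cell values
```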

Why are you using gridded climate data instead of stations in or near the park for historical trends?

Many parks either do not have stations nearby, or there are data-quality issues with the station-based data. Algorithms applied to develop gridded historical datasets have already addressed data-quality issues and corrected for complex geographies. Long-term gridded products used for this analysis (Vose et al. 2014; spatially interpolated from the Global Historical Climatology Network [GHCN]) weight stations with long-term, reliable records more heavily to provide a more accurate record. This dataset uses climatologically aided interpolation to address topographic and network variability and provides monthly precipitation and temperature in a 5x5 km grid, making it suitable for long-term analysis.

Why are historical trends presented since 1970?

We present climate trends for the whole 20th century as well as since 1970. Although temperatures increased overall during the 20th century, a cooling trend occurred from about 1940 to 1975, resulting from ‘global dimming’, a phenomenon caused by airborne pollutants (Wild et al. 2007). Industrial activities following World War II, in the absence of pollution control measures, led to a rise in aerosols in the lower atmosphere, which reflected incoming solar energy back into space and caused cooling. Observations of daily maximum and minimum temperatures during this time show that global dimming masked the warming effects of greenhouse gases (GHGs). The introduction of pollution control measures (e.g., the Clean Air Act in the U.S. and measures taken elsewhere in the world to reduce aerosol loading) reduced aerosol emissions. By the 1970s, the cumulative effect of increasing GHG concentrations began to dominate, and warming resumed and accelerated. More information can be found at Why did climate cool in the mid-20th Century? (skepticalscience.com).

What is ‘downscaling’?

Projections of future climate were derived using global climate models (GCMs) that simulate Earth’s climate. They are designed to capture continental-scale climatological patterns and to test the effects of different GHG emissions scenarios on Earth’s climate. Depending on the GCM, a grid cell is 1-2 degrees on a side (roughly 60-120 miles), making GCM output both too coarse to assess local risk and poorly suited to capture climate patterns at a particular location. Coarse-resolution global models can be “downscaled” to incorporate finer-scale features and processes that significantly influence local climate (e.g., topography, large water bodies), so the GCM projections are more useful for granular impact assessments.

There are two main approaches to downscaling, statistical and dynamical, each with specific methods and tradeoffs (see Lukas et al. 2014 for more details). For these analyses we use statistical methods, which develop a statistical relationship between GCM output and observed data for a historical period and then apply that relationship to generate future projections. Systematic biases in the GCM output must then be corrected by calculating the difference between the historical GCM simulation and fine-scale observations and adjusting the projected GCM output accordingly. Experts have produced a variety of downscaled projections that are useful as input into impact models for risk assessment, as illustrated in the figure below (see Figure 5).

Figure 5. Flow diagram illustrates that emissions pathways drive GCMs. When combined with historical observations, downscaling algorithms (e.g., MACA, LOCA, BCSD, and others) can be applied to develop downscaled datasets. These can then be used to inform impact models such as water balance, fire, flooding, etc. These results are used in vulnerability assessments.
Figure 5. Downscaling is an intermediate climate data processing step that transforms coarse-scale GCM data into a finer scale that is appropriate for assessing local impacts. By combining the GCM output with a reference dataset including historical observations, downscaling creates climate projections that are better suited for use in impact models.
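The bias-correction step described above can be sketched in a few lines. This is a deliberately simplified “delta” adjustment for a single grid cell and a single variable; MACA and similar products use far more sophisticated, multivariate methods, and all numbers below are synthetic.

```python
# Simplified "delta"/bias-correction sketch for one grid cell. Statistical
# downscaling products like MACA are far more involved; data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly mean temperatures (degC) for one location
obs_hist   = 9.0 + rng.normal(0.0, 1.0, 600)   # fine-scale observations, historical period
gcm_hist   = 7.5 + rng.normal(0.0, 1.0, 600)   # coarse GCM output, same period (runs cold)
gcm_future = 9.5 + rng.normal(0.0, 1.0, 600)   # coarse GCM output, future period

# Systematic bias: difference between the GCM and observations over the shared period
bias = gcm_hist.mean() - obs_hist.mean()

# Remove that bias from the projection before using it in impact models
gcm_future_corrected = gcm_future - bias
print(f"GCM bias: {bias:.2f} degC; corrected future mean: {gcm_future_corrected.mean():.2f} degC")
```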

Why are MACA climate data used instead of other downscaled data sets?

There is no single correct downscaling method, and other datasets may be more suitable for some applications. However, tradeoffs exist among different climate datasets, and selecting the best climate data for any particular application requires balancing these tradeoffs. We used the Multivariate Adaptive Constructed Analogs dataset (MACA; MACAv2-METDATA, developed by Abatzoglou and Brown [2012] and Abatzoglou [2013]) because it is available for all of CONUS at a fine resolution (4 km) for a broad range of global climate models and emissions scenarios. It provides daily values, for the entire 21st century, of climate metrics that are often useful for evaluating climate impacts. The multivariate, constructed-analogs approach incorporates larger regional patterns and the interdependence of climate metrics, making it better suited than other statistical methods for areas with complex topography, for simulating extreme events, and for capturing hydrological events that are consequential for parks (Wooten et al. 2017).

Why are the data for a different location in the park than where my resource or facility is located?

Topography strongly affects local climate (Daly et al. 2008), and many parks may need explicit consideration of topographically rich terrain for specific planning processes (Lawrence et al. 2021). For these analyses, however, CCRP used climate metric values from a single grid cell located at the centroid of the park (Figure 2). Experience from previous engagements with parks (e.g., Schuurman et al. 2019, Runyon et al. 2021, Benjamin et al. 2021) has shown that a single site can be sufficient for evaluating changing trends across a park and building a broad understanding of how climate change may affect park resources. However, when identifying and modeling specific resource responses or impact thresholds, each with unique climate sensitivities, more spatially precise climate information may be warranted, balancing tradeoffs between information and complexity (see Lawrence et al. 2021 for considerations of spatial extent and resolution of climate futures).
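For readers curious how a single-cell extraction looks in practice, the sketch below pulls the grid cell nearest a park centroid from a gridded NetCDF file using xarray; the file name, variable name, coordinate names, and centroid are hypothetical.

```python
# Sketch: extract the time series for the single grid cell nearest a park
# centroid. File name, variable name, and coordinates are hypothetical.
import xarray as xr

park_lat, park_lon = 44.60, -110.50   # hypothetical park centroid

ds = xr.open_dataset("downscaled_tasmax_example.nc")   # hypothetical file
cell = ds["air_temperature"].sel(
    lat=park_lat, lon=park_lon, method="nearest"       # nearest-neighbor grid cell
)

# Summarize the single-cell series, e.g., annual means
annual_mean = cell.groupby("time.year").mean()
```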

Why are only two climate futures typically used?

The NPS Climate Change Response Program (CCRP) typically uses two to four divergent climate futures that bracket the range of climate uncertainty relevant to the management decisions of interest (Lawrence et al. 2021). For most resources, the objective is to use climate futures that represent contrasting projections. In most instances, the contrasting climate futures are “warm wet” and “hot dry”, where a warm wet future generally means more water availability and a hot dry future may mean more drought or fire. Depending on the resource, either could be a best- or worst-case scenario. In limited instances, such as where increasing temperature and precipitation could lead to degradation of cultural resources, “warm dry” and “hot wet” climate futures would be presented instead.
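One simple way to identify contrasting projections, sketched below purely as an illustration (this is not CCRP’s selection procedure), is to standardize each projection’s temperature and precipitation change and pick the most “warm wet” and most “hot dry” members; the projection names and deltas are invented.

```python
# Illustration only (not CCRP's actual selection method): pick the most
# contrasting "warm wet" and "hot dry" members from an ensemble of projected
# changes. Projection names and deltas are invented.
import statistics

# (projection, change in annual mean temperature degC, change in annual precipitation %)
projections = [
    ("GCM-A_rcp45", 2.1,  8.0),
    ("GCM-B_rcp85", 4.8, -6.0),
    ("GCM-C_rcp45", 1.8,  3.0),
    ("GCM-D_rcp85", 3.9, 12.0),
]

temps = [p[1] for p in projections]
precs = [p[2] for p in projections]

def z(x, xs):
    """Standardize a value against the ensemble so the two axes are comparable."""
    return (x - statistics.mean(xs)) / statistics.stdev(xs)

# "Warm wet" favors wetter and less warming; "hot dry" favors drier and more warming.
warm_wet = max(projections, key=lambda p: z(p[2], precs) - z(p[1], temps))
hot_dry  = max(projections, key=lambda p: z(p[1], temps) - z(p[2], precs))

print("Warm-wet future:", warm_wet[0])
print("Hot-dry future: ", hot_dry[0])
```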

Why are you not using emissions scenarios as your climate futures?

Managing for uncertainty requires understanding the sources of uncertainty (Hoffman et al. 2015) and explicitly engaging with that uncertainty to identify strategies robust to the widest possible range of futures (Lawrence et al. 2021; Brekke et al. 2009). USGS scientists have identified three major sources of uncertainty in climate projections: natural variability, model uncertainty (due to imperfect knowledge of the climate system), and socioeconomic uncertainty (how human actions and decisions affect global GHG emissions) (Figure 6; Terando et al. 2020, Wooten et al. 2017, Hawkins & Sutton 2009). Over the multi-decadal timescale that matters for most NPS decisions (30-50 years), differences among GCMs are the main source of uncertainty and would be missed by climate futures based solely on emissions scenarios. Emissions-based climate futures are appropriate only for late-21st-century applications, and even then they leave about 30% of the model uncertainty unaccounted for (Lawrence et al. 2021).

Figure 6. Plot with time on the x-axis, from 2000-2100, and fraction of total variance (0-100%) on the y-axis. Prior to 2020, natural variability accounts for most of the uncertainty. After 2020, natural variability tapers off and model uncertainty accounts for up to 70% of the uncertainty until about 2050, when socioeconomic uncertainty is responsible for most variance. Mid-century (2035-2065), model uncertainty represents about 40-65% and socioeconomic uncertainty represents about 25-55% of the overall variability.
Figure 6. The relative importance of three components of uncertainty in decadal average temperature predictions across the 21st century, expressed in terms of fraction of total uncertainty, for the contiguous United States. Green regions represent uncertainty caused by natural variability of the climate system, blue represents model uncertainty, and orange regions represent human or socioeconomic uncertainty (adapted from Figure 4.5 of Wuebbles et al. [2017], which was based on Hawkins and Sutton [2009]). The relative importance of these uncertainties varies with the climate metric and spatial scale of analysis (Hawkins and Sutton 2009, Rangwala et al. 2021).
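As a toy illustration of how projection spread can be partitioned into these components (loosely following the idea in Hawkins and Sutton 2009, and omitting natural variability for brevity), the sketch below splits the spread of invented mid-century warming values into model and scenario contributions.

```python
# Toy illustration of partitioning projection spread into model vs. scenario
# (socioeconomic) components for one future period. All values are invented
# and natural variability is omitted for brevity.
import numpy as np

# Hypothetical mid-century warming (degC): rows are 4 GCMs, columns are 2 emissions scenarios
warming = np.array([
    [1.8, 2.6],
    [2.4, 3.3],
    [3.1, 4.2],
    [2.0, 2.9],
])

model_var    = warming.mean(axis=1).var()   # spread across models, averaged over scenarios
scenario_var = warming.mean(axis=0).var()   # spread across scenarios, averaged over models
total_var    = model_var + scenario_var

print(f"Model uncertainty:    {model_var / total_var:.0%} of total")
print(f"Scenario uncertainty: {scenario_var / total_var:.0%} of total")
```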

Why are you using single models rather than averaging all models?

All projections considered in this study represent plausible future conditions. Selecting the best method to characterize climate futures requires balancing tradeoffs among available resources, the decision-making application, and the timeframe of the decisions. There are many sources of uncertainty when working with climate data across various timescales (see Why are you not using emissions scenarios as your climate futures?), and projections derived from individual models capture a broader range of that uncertainty across all timescales than a multi-model average (see Lawrence et al. 2021 for a detailed explanation).

Why is CMIP5 used instead of CMIP6?

This analysis was developed before the release of CMIP6 and was based on a careful evaluation of data quality. The number of high-resolution, downscaled CMIP6 products available for the U.S. is limited, and there are few studies comparing the skill of each dataset. When this information becomes available, the analysis will be updated with newer datasets.

Why are these summaries only produced for CONUS parks?

The geographic extent of these climate future summaries was limited by availability of high-quality, downscaled climate data. We are working to acquire climate data for other regions and will produce summaries for those regions as soon as possible.

How do I get climate futures for my park?

CCRP produced park-specific climate future summaries for CONUS parks in FY24. Climate information for Alaskan and Pacific Islands parks will follow as data become available. These summaries present divergent climate futures for key metrics that correspond to resource and facility sensitivities and provide basic interpretation. CCRP also has the ‘raw’ climate futures and underlying data and can provide the data or assist with further analysis upon request.

For access to the climate futures data or for further analyses, please submit a technical assistance request through the NPS System for Technical Assistance Requests.

Last updated: July 8, 2024