GISS Surface Temperature Analysis
The Elusive Absolute Surface Air Temperature (SAT)
(Updated Mar. 18, 2022)
The GISTEMP analysis is based on calculating temperature anomalies, not absolute temperatures. Temperature anomalies are computed relative to our base period, 1951-1980. The reason we work with anomalies rather than absolute temperatures is that absolute temperature varies enormously over short distances, while monthly or annual temperature anomalies are representative of a much larger region. Indeed, we have shown (Hansen and Lebedeff, 1987) that temperature anomalies are strongly correlated out to distances of the order of 1000 km. This makes it far more reliable to interpolate anomalies between stations and to estimate them in data-sparse regions than to do the same with absolute temperatures.
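As a concrete illustration, here is a minimal Python sketch of how an anomaly series is derived from a station record. The station values are made up for illustration; this is not GISTEMP code.

# Hypothetical annual mean SATs (deg C) for one station.
temps = {1951: 14.2, 1952: 14.5, 1953: 14.1, 1954: 13.9,
         1955: 14.4, 1956: 14.0, 1957: 14.6, 1958: 14.3,
         2021: 15.1}

# Climatology: the mean over the 1951-1980 base period
# (here, just the base-period years present in the record).
base = [t for yr, t in temps.items() if 1951 <= yr <= 1980]
climatology = sum(base) / len(base)

# Anomaly: each year's departure from the base-period mean.
anomalies = {yr: round(t - climatology, 2) for yr, t in temps.items()}
print(anomalies[2021])  # 0.85, i.e. 0.85 deg C above the 1951-1980 mean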
Q. What exactly do we mean by SAT?
A. In meteorology, SAT generally refers to the 2m screen air temperature. This is the air temperature that would be measured 2 meters (about six feet) above the ground at a weather station equipped with a Stevenson screen. This is what most weather stations have reported since the 1890s. However, SAT varies substantially over short distances, so a single reading is not necessarily representative of the average over a broader area.
Q. What do we mean by daily mean SAT?
A. Operationally, the definition of the daily mean SAT has varied as technology has evolved. When max/min thermometers were used, the daily mean was given as the average of the two extremes. With more modern equipment, the average of hourly, or even minute-by-minute, data can be calculated. The different approaches can give systematically different answers depending on the specifics of the weather that day. The anomalies, however, are generally robust regardless of the methodology. If the methodology changes at a specific station, this can impart a non-climatic bias to the long-term trend that needs to be corrected, as was discussed, for instance, with respect to the US temperature record in Hansen et al. (2001).
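A short Python sketch of the two common definitions; the hourly readings are made-up values for one day, purely for illustration.

# 24 hypothetical hourly SAT readings (deg C) for one day.
hourly = [8.1, 7.8, 7.5, 7.3, 7.2, 7.4, 8.0, 9.2, 10.8, 12.3, 13.5, 14.4,
          15.0, 15.3, 15.2, 14.8, 13.9, 12.7, 11.6, 10.8, 10.1, 9.5, 9.0, 8.5]

# Definition 1: max/min thermometer era -- average of the two daily extremes.
mean_maxmin = (max(hourly) + min(hourly)) / 2

# Definition 2: modern equipment -- average of all hourly readings.
mean_hourly = sum(hourly) / len(hourly)

print(round(mean_maxmin, 2), round(mean_hourly, 2))
# The two definitions generally disagree; here 11.25 vs 10.83 deg C.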
Q. What SAT do the local weather forecasters report?
A. Weather forecasters often report the reading of a particular thermometer at a nearby station, often at the airport or in the city center. This is a point measurement and may not reflect a broader average. Forecasts themselves are based on weather model output, sometimes statistically downscaled to specific stations, but even that does not capture the small-scale heterogeneity of SAT in the real world.
Q. If the reported SATs are not the true SATs, why are they still useful?
A. The reported temperature is valid for that weather station at the moment the measurement is taken. However, in addition to the SAT itself, reports usually also mention whether the current temperature is unusually high or unusually low, i.e. how much it differs from the normal temperature at that location and time of year. That information (the anomaly) is meaningful for the broader region.
Q. If SATs cannot be measured, how are SAT maps created?
A. SAT maps can only be created using a model of some sort. This could be a statistical approach using input from ground cover maps, terrain, and altitude or, more usually, a computer model, such as those used to make weather forecasts. Because weather models differ in how they are developed and what data they ingest, they produce slightly different estimates of the SAT. In the global average, this variation is around 0.5°C, and it can be significantly larger at a regional scale. Statistical approaches (such as that used by Jones et al. (1999)) have a similar uncertainty.
Q. What do I do if I need absolute SATs, not anomalies?
A. In most cases you'll find that anomalies are exactly what you need, not absolute temperatures. In the remaining cases, you are likely better off using nearby station data or reanalysis output. While the temperature changes will be consistent with the GISTEMP data for the region, the absolute values may differ significantly across the different sources.
Q. I've found old articles and media reports that give the annual mean absolute temperature to two decimal places. Why can't you do that now?
A. There are indeed many historical reports that discuss the annual mean temperature results in terms of the absolute temperature. Pre-2000, these reports generally took the anomalies and added them to a baseline temperature of 15°C, which was a commonly used average. After 2000, they often used a baseline of about 14°C (following Jones et al., 1999). However, these baselines were only approximate, as evidenced by the fact that they were changed by a degree Celsius after further research! Comparisons of pre-2000 and post-2000 reports of the absolute temperature can therefore give the misleading impression that temperatures cooled dramatically, when the clear evidence is that they have warmed. This situation would have been avoided if people had paid more attention to how they combined numbers with different error estimates. If you add two numbers with different errors, the error in the sum will be dominated by the largest one. Thus, if the uncertainty in the absolute baseline global temperature is around 0.5°C, and the uncertainty in the annual anomaly closer to 0.06°C, the error in the sum is still about 0.5°C! In recent years, we have raised awareness of this issue, and such comparisons are now much less common. To reiterate, if you need to know how one year or period compares to another, use the anomalies.
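The arithmetic behind that last point, in a short Python sketch. This assumes the two uncertainties are independent and so add in quadrature (a standard assumption, stated here for illustration):

import math

sigma_baseline = 0.5   # uncertainty in the absolute baseline global SAT (deg C)
sigma_anomaly  = 0.06  # uncertainty in the annual global anomaly (deg C)

# Independent errors add in quadrature, so the large baseline error dominates.
sigma_sum = math.sqrt(sigma_baseline**2 + sigma_anomaly**2)
print(round(sigma_sum, 3))  # 0.504 -- effectively still 0.5 deg C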