Here is part 2 of my interview with myself on global warming.
So, then how do climatologists conclude that the increased warming is due to increased levels of carbon dioxide?
Simply stated, climatologists run computer models that simulate the physics of our atmosphere and oceans, initializing the models at some time in the past for which the data history is known. Simulations are run both with and without increasing CO2 concentrations from the starting point. The results show that when carbon dioxide amounts are increased over time, the temperature output of the models is warmer and in line with the observed readings of the past century. The results also show that the warming did not occur as the result of any known natural source.
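The logic of those with-and-without-CO2 runs can be sketched with a toy zero-dimensional energy-balance model. To be clear, this is nothing like the full 3-D models climatologists actually run; the feedback parameter and heat capacity below are illustrative assumptions, and only the logarithmic CO2 forcing formula is standard:

```python
import math

# Toy zero-dimensional energy-balance model (a sketch only; real
# attribution studies use full 3-D atmosphere/ocean models).
LAMBDA = 1.2     # climate feedback parameter, W/m^2 per K (assumed value)
C_HEAT = 8.0     # effective heat capacity, W*yr/m^2 per K (assumed value)

def co2_forcing(ppm, base=280.0):
    """Standard logarithmic approximation for CO2 radiative forcing."""
    return 5.35 * math.log(ppm / base)

def run(years, co2_path):
    """Step the temperature anomaly T forward one year at a time."""
    T = 0.0
    for yr in range(years):
        F = co2_forcing(co2_path(yr))
        T += (F - LAMBDA * T) / C_HEAT
    return T

# "Natural only" run: CO2 held at the pre-industrial baseline.
natural = run(120, lambda yr: 280.0)
# "With emissions" run: CO2 ramping roughly from 280 to ~410 ppm.
forced = run(120, lambda yr: 280.0 + 130.0 * yr / 120.0)

print(f"no-CO2 run:   {natural:+.2f} K")
print(f"with-CO2 run: {forced:+.2f} K")
```

The run without added CO2 produces no warming at all, while the run with rising CO2 warms on the order of a degree, which is the shape of the argument the attribution studies make with far more sophisticated models.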
So tell me more about these climate models.
Climate models are very similar to the numerical weather prediction (NWP) models that are run every day to make short-term (0-7 day) forecasts. Each starts with an initial state where conditions are known and uses a complicated series of physical equations to determine how the initial conditions will change over time. The main differences are that climate models use a coarser resolution (that's so forecasts can be made decades in advance instead of just a few days), and climate models contain additional physical processes that are germane to long-term prediction, especially those handling ocean circulations, land/sea ice, vegetation, aerosol chemistry, and so forth.
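To see why coarser resolution is what buys those extra decades, here's a back-of-envelope cost calculation. The scaling rule is the standard one for explicit grid models (halving the grid spacing quadruples the number of grid columns and also halves the stable time step); the specific grid spacings are just illustrative:

```python
# Back-of-envelope illustration of why climate models run coarser than
# short-term NWP models: halving the horizontal grid spacing roughly
# quadruples the number of grid columns AND halves the stable time step,
# so compute cost grows by about 8x for each halving.
def relative_cost(grid_km, base_km=100.0):
    factor = base_km / grid_km
    return factor ** 2 * factor   # (more columns) x (shorter time step)

for km in (100, 50, 25, 10):
    print(f"{km:>3} km grid: ~{relative_cost(km):,.0f}x the cost of 100 km")
```

Running a fine NWP-style grid for a century would cost hundreds of times more than a coarse climate grid, which is why the resolutions differ.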
Since the computer models say the warming is due to human activity then the science is settled, right?
It increases the probability, but the model simulations don't prove anything conclusively. Unfortunately, the devil is in the details, and there are many problems that need to be addressed. First and foremost, every experienced forecast meteorologist knows that direct output from short-term NWP models contains significant biases and errors even on forecasts just a couple of days out. This is especially true of surface weather parameters like temperature and precipitation. NWP models also suffer from "model drift" (or climate drift), meaning they get artificially hotter or colder, or wetter or drier, over time. As a result of these deficiencies, scientists at NOAA's Meteorological Development Laboratory apply statistical corrections (called MOS, for Model Output Statistics) to short-term (0-7 day) NWP model forecasts. These statistical models correlate the NWP model predictions with actual observed surface weather. The improvements from the statistical post-processing are substantial: for temperature forecasts, the error rates are cut in half. For example, see the second chart in the link below comparing temperature forecast errors directly from NWP models (DMO) vs. statistically post-processed forecasts (MOS):
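The idea behind MOS can be sketched with a toy regression. The real MOS equations are multi-predictor regressions built from years of archived forecasts and matching observations; this example invents a single synthetic forecast series with a warm bias and corrects it with one-predictor least squares:

```python
import random

random.seed(42)

# Synthetic setup: the "raw model" runs 3 degrees too warm with scatter.
truth = [random.uniform(40, 90) for _ in range(500)]
raw = [t + 3.0 + random.gauss(0, 2.0) for t in truth]

# MOS-style step: fit observed = a + b * raw_forecast by least squares.
n = len(raw)
mx = sum(raw) / n
my = sum(truth) / n
b = sum((x - mx) * (y - my) for x, y in zip(raw, truth)) / \
    sum((x - mx) ** 2 for x in raw)
a = my - b * mx
corrected = [a + b * x for x in raw]

# Compare mean absolute error before and after the correction.
mae_raw = sum(abs(x - y) for x, y in zip(raw, truth)) / n
mae_mos = sum(abs(x - y) for x, y in zip(corrected, truth)) / n
print(f"raw MAE: {mae_raw:.2f}  corrected MAE: {mae_mos:.2f}")
```

The regression learns the systematic warm bias from the historical pairs and removes it, roughly halving the error, which is the same mechanism (at toy scale) behind the improvement in the chart above.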
Unfortunately, no such statistical post-processing is applied to climate models, and that is a huge problem. If short-range NWP models have significant biases and errors on just a 1-2 day forecast, then I can't imagine that long-term climate models running out 50-100 years in advance wouldn't have even deeper issues. Not only does this mean we're dealing with potentially sub-standard and biased predictions from climate models, but without the statistical post-processing there is no way to estimate the certainty level of the climate predictions. As a result, there is a good chance that some climate scientists are attaching far too much confidence to the simulation results and predictions generated by climate models.
But at the top didn't you say that climate models successfully simulated observed temperatures the last 100 years? Doesn't that mean they are good enough to conclude that humans were responsible for making the planet warmer?
Not so fast. First of all, even though the climate models successfully simulated the global mean observed temperatures over the last 100 years, when the historical temperature and precipitation records at individual stations are compared with the climate model backtest simulations for those stations, the errors are grossly large. If the global average was simulated successfully while the individual stations were not, then the large station errors must be cancelling each other out in the average. That there are such large errors at individual stations means something significant is missing in the model physics. Certainly if I were a forecaster and predicted a high temperature of 80° in New York and 60° in Denver, but the observed ended up 70° in both places, I'm not sure I'd be bragging about my success even though the average of my two forecasts matched the average observation.
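That forecaster analogy in numbers: the signed errors average to zero while every individual forecast is off by ten degrees.

```python
# Two-station example: per-station errors cancel in the average even
# though each individual forecast is poor.
forecast = {"New York": 80.0, "Denver": 60.0}
observed = {"New York": 70.0, "Denver": 70.0}

errors = [forecast[s] - observed[s] for s in forecast]
mean_error = sum(errors) / len(errors)               # +10 and -10 cancel
mean_abs_error = sum(abs(e) for e in errors) / len(errors)

print(f"mean error: {mean_error:+.1f}")              # 0.0 -- looks perfect
print(f"mean absolute error: {mean_abs_error:.1f}")  # 10.0 -- poor skill
```

The mean error (the analogue of the global average) is exactly zero, while the mean absolute error is 10 degrees, which is why a good global average says little about station-level skill.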
Secondly, there is a huge difference between simulations on historical data (called hindcasts) and forecasts on new, independent data. In climate prediction models (and in short-term NWP models as well) there are literally thousands of tunable parameters that can be tweaked to calibrate the parameterizations to match past observed data. These tunable parameters exist because we may not know the exact physics involved, or we may need to approximate the physics due to the resolution scale of the model (e.g., certain radiative transfer processes). With so many tunable parameters, however, it's pretty easy to find at least one configuration that will simulate history quite accurately. In statistics, this condition is called "overfitting". The real test of a model's quality is how well it performs on new, independent data that was never considered when calibrating against the historical (training) data set.
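A minimal illustration of overfitting, with polynomial interpolation standing in for a heavily-tuned model (the data here is invented): given enough free parameters, the "model" matches its training history exactly and still fails badly on a held-out point.

```python
# Overfitting sketch: a degree-5 polynomial (6 free parameters) fits
# 6 noisy samples of a simple linear trend perfectly, then misses a
# held-out "future" point by a wide margin.
def lagrange(xs, ys, x):
    """Evaluate the unique interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Training data: a linear trend y = x plus fixed "noise".
train_x = [0, 1, 2, 3, 4, 5]
noise = [0.3, -0.4, 0.5, -0.2, 0.4, -0.3]
train_y = [x + e for x, e in zip(train_x, noise)]

# Hindcast skill: zero error on every training point.
train_err = max(abs(lagrange(train_x, train_y, x) - y)
                for x, y in zip(train_x, train_y))

# Forecast skill: held-out point at x = 6, where the true trend is ~6.
test_err = abs(lagrange(train_x, train_y, 6) - 6.0)
print(f"training error: {train_err:.6f}, test error: {test_err:.2f}")
```

The training error is essentially zero while the out-of-sample error is enormous, because the extra parameters were spent fitting the noise rather than the trend. That's the trap the thousands of tunable parameters make possible.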
So how accurate are real-time climate predictions on new data?
Well, so far not that good. Climate models have continued to significantly over-forecast the amount of warming compared to subsequently observed temperatures, a big problem that has been going on for the last 15-20 years. See link:
Moreover, the places that were supposed to receive the most warming according to the climate models (the mid-levels of the atmosphere in the tropics) haven't received any warming. So the climate models are getting that wrong too.
Researchers have been scrambling to find a cause for the errors and, while a number of explanations have been proposed, none has been shown conclusive. Even the IPCC (Intergovernmental Panel on Climate Change) has conceded in its latest report that the models will likely have to be adjusted to make them less sensitive to CO2. The implications of this are enormous. If real-time climate forecasts over-predict the warming on independent data, then that *could* mean that humans have contributed less to the warming observed over the past 100 years than climate scientists previously thought, which would also imply that natural variations have played a more important role in our recent climate than the climate models figured. It also reduces the certainty of future climate predictions, which has all sorts of implications for political policy and what future actions are required.
To be continued ...