We all want accurate models so that we can see what we are doing to the environment and how that environment will adapt. However, how do we test these models? Are the models we have accurate? Rastetter addresses the first question. He states that "The essence of the debate [over model testing in the earth sciences] is the so-called problem of induction, that is, the problem of extrapolating from the specific to the general." Rastetter goes on to say that "Because of the problem of induction, no specific tests can establish the general validity of any model."
The models Rastetter is concerned with are ERCC models: ERCC stands for ecosystem response to changes in climate and carbon dioxide concentration. He gives four reasons why these ERCC models cannot be rigorously tested. The first reason is sufficiency. He asks, "Is corroboration by short-term data sufficient to justify confidence in long-term projections made with the model?" The answer, Rastetter says, is no. So what does he mean by sufficiency? Simply put, slow-responding mechanisms that dominate long-term behavior can often be ignored in the short term, so a model can pass every short-term test without representing them. He uses the example of the Farquhar model of net photosynthesis, together with results from an experiment in which leaves were exposed to double the normal CO2 concentration, roughly 680 ppmv (Tissue and Oechel 1987). The Farquhar model predicts that photosynthesis will increase, and Tissue and Oechel's data show that it does; the model predicts the rise quite accurately. However, after a few weeks of exposure, net photosynthesis fell back to its initial level. The model does not predict this, and the reason is simple: "The mechanisms responsible for a decrease in the maximal rate of carboxylation are not represented." Rastetter goes on to say that "The potential always exists that long-term behavior is affected by slow-responding mechanisms whose effects can be neglected in the short term. Short-term data are never sufficient for testing how well these mechanisms are represented in the model."
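To make the sufficiency argument concrete, here is a minimal sketch in Python of how a model that fits short-term data can still fail in the long term. It is not the Farquhar model or Rastetter's analysis: the baseline rate, the 40% boost, and the 30-day acclimation time constant are all invented for illustration.

```python
import math

# Toy sketch of the "sufficiency" problem (not the Farquhar model).
# A model calibrated only against the first days of a CO2-doubling
# experiment, and lacking any down-regulation mechanism, matches a
# hypothetical "true" response at first, then diverges as a slow
# acclimation process pulls net photosynthesis back to its initial level.

BASELINE = 10.0         # pre-doubling net photosynthesis (arbitrary units)
STEP_GAIN = 1.4         # hypothetical instantaneous boost from doubled CO2
ACCLIMATION_TAU = 30.0  # hypothetical time constant (days) of down-regulation

def short_term_model(day: float) -> float:
    """Model without down-regulation: the boost is permanent."""
    return BASELINE * STEP_GAIN

def true_response(day: float) -> float:
    """Hypothetical reality: the boost decays back to the initial level."""
    return BASELINE * (1.0 + (STEP_GAIN - 1.0) * math.exp(-day / ACCLIMATION_TAU))

for day in (1, 7, 14, 90, 365):
    m, t = short_term_model(day), true_response(day)
    print(f"day {day:>3}: model {m:5.2f}  'true' {t:5.2f}  error {m - t:+5.2f}")
```

In this sketch the two curves agree closely during the first days of exposure, which is all a short-term test would see, while by the end of a year the model overestimates net photosynthesis by roughly 40%.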
The next reason is necessity. Rastetter poses the question, "Is corroboration by short-term data necessary to justify confidence in long-term projections made with the model?" Again Rastetter answers no. Necessity is more or less the flip side of sufficiency: fast-responding mechanisms that dominate the short term may not be important in the long term, so agreement with short-term data is not what long-term confidence should rest on. To support this, Rastetter uses a model that simulates the response of arctic moist tundra to the combined effects of a 5°C increase in temperature and a 10% decrease in soil moisture. Assuming that the nine years of short-term data used to calibrate and test the model are adequate, he asks the model to make a 50-year projection of the response. "The model indicates that for approximately 20 years, the rate of nitrogen uptake by vegetation is equal to the mineralization rate of N." In other words, no nitrogen is lost from the system. After roughly 20 years, however, the two processes become decoupled and nitrogen is lost from the ecosystem. This is the key point, because "Unanticipated slow-responding mechanisms could easily change the predicted result."
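The decoupling Rastetter describes can be pictured with a deliberately crude nitrogen budget. This is not the tundra model he used: the mechanism here (a fixed canopy nitrogen capacity) and every number are assumptions chosen only to reproduce the qualitative pattern of roughly 20 coupled years followed by nitrogen loss.

```python
# Crude, invented N budget: warming-enhanced mineralization is taken up by
# vegetation until a hypothetical canopy N capacity is reached; after that
# the surplus leaches and the ecosystem starts losing nitrogen.

soil_n = 500.0         # soil organic N pool (g N / m^2), illustrative
veg_n = 50.0           # vegetation N pool (g N / m^2), illustrative
veg_capacity = 62.0    # hypothetical maximum N the canopy can hold
k_mineralize = 0.0012  # fraction of soil N mineralized per year after warming

for year in range(1, 51):                            # 50-year projection
    mineralized = k_mineralize * soil_n
    uptake = min(mineralized, veg_capacity - veg_n)  # demand-limited uptake
    leached = mineralized - uptake                   # N lost from the system
    soil_n -= mineralized
    veg_n += uptake
    if year in (1, 10, 20, 25, 50):
        print(f"year {year:>2}: mineralized {mineralized:.2f}  "
              f"uptake {uptake:.2f}  N lost {leached:.2f}")
```

Here the two fluxes stay matched for about 20 years simply because the invented canopy capacity takes that long to fill; the point is only that a slow-responding constraint can sit silently inside a model, or inside the real ecosystem, and change the long-term outcome.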
Thirdly, there is the space-for-time substitution method. This method is widely used in studying succession, in forests or elsewhere: for example, one might study old agricultural fields abandoned 75, 40, 25, 10, and 1 year ago and compare them as if they were a single field observed through time. Rastetter says there are two problems with applying this method here. The first is that it is difficult to find a region with CO2 levels as high as those predicted for the next century. The second is that "applying space-for-time substitutions to changes in climate involves a comparison with ecosystems that have come into balance with the local climate." No one can find a place on this planet where the climate has changed by a comparably large amount within the last 20-200 years, and Rastetter states that even if one could, space-for-time substitution cannot reveal how ecosystem characteristics change through time.
The fourth reason why these ERCC models can't be rigorously tested concerns the method of reconstructing the past: the responses may be nonlinear. Reconstructing the past involves just that, using data that show how carbon dioxide and climate have changed over the past 200 years and checking whether the models reproduce the ecosystem response. One would think that this would provide an excellent test for ERCC models. However, as noted above and as Rastetter states in the article, "the most obvious problem is the potential for nonlinear responses...": the range of change in the past may not span the range expected in the future, so a model that fits the small historical changes may still fail when extrapolated. The second problem is that the "magnitude of the response of ecosystems to changes in climate and CO2 thus far is too small to be discernible given the quality of available data." He states that even though CO2 data of very high quality can be obtained, temperature measurements from even the very best sources carry large errors.
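The nonlinearity problem can also be shown with a toy example: a straight line fitted only over the small historical CO2 range tracks a hypothetical saturating response closely, yet the two diverge badly when extrapolated toward a doubled-CO2 future. The logarithmic "true" response and all of the numbers below are assumptions made up for this sketch.

```python
import math

def true_response(co2_ppmv: float) -> float:
    """Hypothetical saturating ecosystem response (arbitrary units)."""
    return 100.0 * math.log(co2_ppmv / 280.0)

# Calibrate a linear model using only the ~280-360 ppmv historical range.
lo, hi = 280.0, 360.0
slope = (true_response(hi) - true_response(lo)) / (hi - lo)

def linear_model(co2_ppmv: float) -> float:
    """Straight line anchored at the pre-industrial value."""
    return true_response(lo) + slope * (co2_ppmv - lo)

for co2 in (300, 340, 360, 500, 700):
    print(f"{co2} ppmv: 'true' {true_response(co2):6.1f}   "
          f"linear fit {linear_model(co2):6.1f}")
```

Within the historical range the two differ by less than one unit, but at 700 ppmv the linear extrapolation overshoots the assumed "true" response by roughly 40%.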
The fifth and last reason is that comparing ERCC models with other models is not a rigorous test either. Rastetter poses two rhetorical questions: "If the models are based on the same underlying concepts, would they not be expected to make similar predictions?" and "If two models do not agree, which of the two is false, or are both?" Rastetter does allow that if two models of the same system rest on different underlying principles and nevertheless agree, then confidence in both models is higher and they could therefore be used.
I mentioned in the introduction that there were only four reasons, so why are there five here? Necessity and sufficiency are flip sides of one another and therefore count as a single reason. The question that runs throughout is: why not simply compare the models with measured responses? Rastetter answers that question by saying that when modeling the response to a potentially devastating change, we would prefer not to wait and see what the response actually is just so that we can test the ERCC models. So why bother running these simulations at all if we cannot be sure of their accuracy? Because we can no longer afford to proceed blindly. These models may not be accurate, but they do give us, at least in the short term, a warning sign of what's to come.
Reference