I’m pretty happy that our paper on finding the potential value of observations for constraining climate models was published today in Geoscientific Model Development. I posted about the discussion paper here, and you can download a presentation describing the work here.
The paper demonstrates a method for working out how useful a new observation might be for constraining (tuning, calibrating etc.) your climate model, if all you have is an ensemble of output. You basically use each ensemble member in turn as if it were an observation of the system, and see how closely you can recover the true input parameters, in a perfect model experiment.
The neat thing is that, because you control the experiment, you get to pretend that you have as much uncertainty in the observations (or error in the model) as you like. You can then see how that uncertainty impacts your constraint.
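To make the idea concrete, here is a minimal sketch of a perfect model experiment. It is not the paper's actual method (the paper works with an ice sheet model and statistical emulation); the one-parameter toy `model`, the nearest-output matching rule, and the noise levels are all illustrative assumptions. The point it demonstrates is the same, though: as the pretend observational uncertainty grows, the recovered parameters drift further from the truth.

```python
import numpy as np

# Toy stand-in for an expensive simulator: one input parameter,
# one scalar output. (Purely illustrative -- not the paper's model.)
def model(theta):
    return np.sin(3.0 * theta) + theta

# An ensemble of runs at sampled parameter values.
rng = np.random.default_rng(42)
thetas = rng.uniform(0.0, 1.0, size=50)
outputs = model(thetas)

def mean_recovery_error(obs_noise_sd, seed=0):
    """Treat each ensemble member in turn as the 'observation',
    perturb it with observational noise, match it against the
    remaining members, and report how far the best match's
    parameter sits from the true one, on average."""
    r = np.random.default_rng(seed)
    errors = []
    for i in range(len(thetas)):
        obs = outputs[i] + r.normal(0.0, obs_noise_sd)
        candidates = np.delete(np.arange(len(thetas)), i)  # leave the truth out
        best = candidates[np.argmin(np.abs(outputs[candidates] - obs))]
        errors.append(abs(thetas[best] - thetas[i]))
    return float(np.mean(errors))

# Dialling up the observational uncertainty loosens the constraint.
for sd in (0.01, 0.1, 0.5):
    print(f"obs noise sd = {sd}: mean parameter error = {mean_recovery_error(sd):.3f}")
```

Because you generated the ensemble yourself, the "true" parameters are known exactly, which is what makes this comparison possible at all.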
We used an ice sheet model, but this technique should be useful for just about any computationally expensive model with some uncertain parameters.
I’d recommend GMD as a journal to publish in: the process was pretty quick, and they found knowledgeable reviewers who I thought were fair and thorough. Of course, you are welcome to disagree with that last statement, as you can read their reports.
Citation: McNeall, D. J., Challenor, P. G., Gattiker, J. R., and Stone, E. J.: The potential of an observational data set for calibration of a computationally expensive computer model, Geosci. Model Dev., 6, 1715-1728, doi:10.5194/gmd-6-1715-2013, 2013.