Visualising weird input spaces

I’ve been working on a fairly large (~500 member) perturbed-parameter ensemble of the land surface model JULES. The model simulates the global historical land surface, and each ensemble member is forced by the same global reanalysis on the HadGEM2 grid scale. Differences in the model output are therefore caused by the different values of the parameters.

One of the tasks I have is to come up with a design for the next round of model runs – that is, an ensemble of input parameter settings where we think the model will probably run, and produce a carbon cycle that looks *kind of* like the carbon cycle in the real world. It doesn’t have to be too accurate – in this case our tolerances for what constitutes a functioning carbon cycle are pretty wide. That’s so we can explore the very outer edges of parameter space, and see what the model does across as wide a range of parameter settings as possible. We wouldn’t want to choose parameter ranges that are too narrow, and potentially underestimate an important uncertainty.

The 500 member ensemble was therefore based on some large perturbations to 32 of the internal parameters of JULES, largely those that affect the carbon cycle. At this point we’re not trying to find anything very realistic, just finding out what settings might break the model. Many of the parameter ranges were set by simply halving and doubling the standard values, and a space-filling design was produced within the resulting large hypercube. As a result, many of the ensemble members don’t have a functioning carbon cycle – they fall outside even our low expectations.
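For concreteness, here’s a sketch of how a design like that might be built in R. I’m not showing the actual design code – a maximin Latin hypercube via the lhs package is just one common choice – and the “standard” parameter values below are invented placeholders.

```r
## A hedged sketch of a space-filling design over a halved-and-doubled
## hypercube. The 'standard' values are invented placeholders; the real
## design uses the JULES default parameter values.
library(lhs)

n_parm   <- 32
standard <- runif(n_parm, 0.1, 10)     # stand-in for the default parameter values
lower    <- standard / 2               # halved...
upper    <- standard * 2               # ...and doubled

U <- maximinLHS(n = 500, k = n_parm)   # 500 members, space-filling in [0,1]^32
X <- sweep(sweep(U, 2, upper - lower, "*"), 2, lower, "+")  # rescale to ranges
```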

Is there anything we can learn about the input space by looking at the values of the input parameters of the ensemble members that fall inside or outside the basic expectations of having a functioning carbon cycle? Is the input space constrained by placing constraints on the model outputs?

One of the challenges is that we have a very high dimensional input space, which defies intuition and makes visualisation difficult. On reading Jonty’s useful paper on “Setting up your Simulator”, I was again inspired to make some parallel coordinates plots of our ensemble (I’ve used them for visualising input space before, for example in this paper).

Initial (level 0) constraints

Parallel coordinates plots work well for visualising small ensembles, but here our ensemble is ~500 members. I’ve split the ensemble into those that pass a “level 0” constraint (top) and those that don’t (bottom). A level 0 constraint is pretty much the most fundamental constraint you can have. In this case it’s “does the model run, and have runoff (or other simple indicator) above zero?” Further, I’ve split those ruled out into “failed to run” (red) and “ran but with no runoff” (black). Here’s the plot (click for big):

Each vertical axis represents the range of a particular parameter, and an ensemble member traces a line across the plot, with the height on each axis representing its position in input space. For example, an ensemble member that sat exactly in the middle of the input space (0.5, 0.5, 0.5, 0.5 …) would show up as a horizontal line across the middle of the plot.
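As a sketch of the mechanics (not the actual plotting code, which is modified standard R – see the notes at the end), base R’s matplot() will draw a crude parallel coordinates plot of a design matrix scaled to [0,1]:

```r
## A minimal parallel coordinates plot in base R: one semi-transparent line
## per ensemble member, one vertical axis per parameter. U is assumed to be
## an (n members x n parameters) design matrix scaled to [0,1].
U <- matrix(runif(500 * 32), nrow = 500, ncol = 32)   # stand-in design
colnames(U) <- paste0("param", 1:32)

matplot(t(U), type = "l", lty = 1,
        col = rgb(0, 0, 0, alpha = 0.1),   # transparency reveals dense regions
        xlab = "", ylab = "normalised parameter value", axes = FALSE)
axis(1, at = 1:ncol(U), labels = colnames(U), las = 2, cex.axis = 0.6)
axis(2)
```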

The top plot isn’t all that informative – it’s a mass of nearly 400 black lines (in fact, they’re semi-transparent, to make patterns a little easier to see). There is a small hint that no ensemble members with a low value of b_wl_io, or a high value of f0_io, satisfy the level 0 constraint. The lower plot, of those ensemble members that failed the constraint, is a little easier to interpret – there are only around 100 ensemble members (lines) this time. You can still see how important b_wl_io and f0_io are.

Perhaps more useful is a pairs plot of the ensemble members ruled out by the level 0 constraint. Each subplot shows a 2-dimensional projection of the entire 32-dimensional hypercube, and each point is a ruled-out ensemble member’s position in that projection.
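In base R that’s close to a one-liner with pairs(). Here’s a hedged sketch, with a made-up ruled_out flag standing in for the real level 0 result, and only a handful of columns shown for legibility:

```r
## Pairs plot of the ruled-out members: each panel is a 2-d projection of the
## hypercube. 'ruled_out' is an illustrative stand-in for the level 0 flag.
ruled_out <- runif(nrow(U)) < 0.2
pairs(U[ruled_out, 1:8],
      labels = colnames(U)[1:8],
      pch = 19, cex = 0.5, gap = 0,
      col = rgb(1, 0, 0, alpha = 0.4))   # semi-transparent red points
```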

Well, that is much more useful. The square to focus on is the intersection of parameters 4 and 8 – b_wl_io and f0_io. Here’s a bigger version of the region:

The darker bands of points along the margins of those dimensions show that there is a greater chance of being ruled out if an ensemble member has a low value of b_wl_io, or a high value of f0_io. That nearly-empty square, however, indicates that only a single ruled-out member has neither of those attributes. If an ensemble member has a high enough value of b_wl_io *and* a low enough value of f0_io, it’s pretty much guaranteed to run.

Level 1 constraints

The next step is to be a little more fussy about what the carbon cycle in our model looks like. The level 1 constraints demand that various global indicators of a functioning carbon cycle are present in an ensemble member. The histograms below show the distribution of ensemble members for each indicator in grey, with the required ranges marked in red.
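Mechanically, applying the constraints is just a logical AND across a set of tolerance checks. In this sketch the output data frame, the indicator names and the threshold values are all invented for illustration – the real tolerances are deliberately wide:

```r
## Level 1 constraints: keep members whose global outputs all fall inside the
## tolerance ranges. Y, its columns and the thresholds are placeholders.
Y <- data.frame(runoff = runif(500, 0, 2),     # e.g. global mean runoff
                npp    = runif(500, 0, 120),   # e.g. net primary productivity
                cveg   = runif(500, 0, 900))   # e.g. vegetation carbon

level1 <- with(Y, runoff > 0.5 &
                  npp  > 35  & npp  < 80 &
                  cveg > 300 & cveg < 800)

sum(level1)    # number of members Not Ruled Out Yet
mean(level1)   # fraction of the ensemble that passes
```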

Quite a few ensemble members fall outside the ranges – there are nearly 100 members that have close to zero runoff, for example. Overall, only 53 of the 500 ensemble members (~10%) pass all the demands of the level 1 constraints. Here are the parallel coordinates plots for those that pass the test, and those that fail.

Again, the smaller ensemble is easier to interpret, and there are promising “gaps” at the edges of the axes, suggesting that constraining the outputs might constrain the input space. However, only a small number of ensemble members pass, and so these gaps might just be a coincidence. The next phase of the design will aim to put more ensemble members into the input space that passes the level 1 constraint, exploring the edges of that space more efficiently.

This time, we look at a pairs plot of the ensemble members that pass the level 1 constraints – those that are “Not Ruled Out Yet” (NROY, a concept borrowed from history matching).

Again, this looks promising. There are possible hard boundaries beyond which no model meets the constraints – notably for parameter 1 (alpha_io). Other parameters with very few successful runs beyond a limit are parameters 4 (b_wl_io), 17 (lai_max_io) and 25 (r_grow_io).

Conclusions

1) It’s very hard to marginally constrain parameters.

In an ensemble with a fairly large input space (here 32 input parameters), a “bad” value of one parameter can often be compensated for by changing one (or more) of the others. This makes it very difficult to put hard limits on any individual parameter, especially using fairly weak constraints. This is very annoying when it comes to informing model developers about constraints on the parameters – because the design is space-filling, rejecting ~90% of the ensemble members amounts to rejecting roughly 90% of the initial input parameter space, yet seemingly without constraining any individual parameter.

We could always use more demanding constraints, but there is a danger that a model error leads to an overconfident (and wrong) constraint on the input parameter, or simply a very small number of ensemble members passing the constraint.

2) High dimensional spaces are hard to visualise, especially when they’re weird.

Parallel coordinates plots are not very useful when there are more than around 100 ensemble members (unless there are very clear patterns). Pairs plots are handy, but they are limited to 2-dimensional projections, when interesting shapes in the parameter space may well be of much higher dimension.

Next

We’ll try and build an emulator of the relationship between the model inputs and outputs, and see if that can help with visualisation. But remember, as Jonty points out, “The key thing to appreciate is that building an emulator is usually an act of desperation”.

Notes

I’ve altered the standard R code for both the parallel coordinates plots (to stop it normalising everything) and the pairs plots, and made most things semi-transparent. Feel free to email or tweet me if you’d like to know how I did it (spoiler: mostly by googling for code).
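(The semi-transparency part, at least, needs no special code: base R’s adjustcolor() adds an alpha channel to any colour.)

```r
## alpha.f = 0.2 makes the colour 80% transparent; pass the result as
## col = ... to matplot(), pairs(), lines(), and so on.
seethru <- adjustcolor("black", alpha.f = 0.2)
```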
