Friday, March 31, 2006

Lookalikes

Just got back from my trip yesterday. I noticed on touchdown that the weather was surprisingly similar in the UK and Japan. Here are two pictures, one taken in the North Wet of England, and one snapped on my commute to work this morning. Can anyone identify which is which?








Land of the rising sun
Land of the falling rain
There was supposed to be a solar eclipse on Wednesday morning across the UK, but how could anyone tell?

RealClimate article on climate sensitivity

There's a nicely-written puff piece by Gavin about our recent GRL paper here on RealClimate. I've added a couple of brief comments here and here. Virtually none of the readers' comments actually touched on the content of our paper...

Saturday, March 25, 2006

Every bloody train!

I'm on a brief trip to the UK now. At the Japanese end, all the transport went as smoothly as ever - bus, train (with change from local to express) and plane.

Within 3 hours of landing in the UK, we'd used 3 different train lines, and had delays on each one of them! Firstly the Heathrow Express was 10 minutes late (on a 15 minute journey), despite being perhaps the most expensive train journey in the world (more fool us for not just getting the tube - but we were hoping for a quick journey...). Then on the tube, the Circle line was delayed, so we were sent on a detour involving another change at which some "helpful" staff sent us back onto the Circle line, apparently unaware of the delay in the first place. Back on the main lines at Euston, there was a row of machines for ticket purchases, which seems great until you try to buy a ticket and are faced with the helpful message "some restrictions may apply" on all the reasonably-priced options. We did manage to find a human face and found out that the cheapest option was to buy a weekend return (despite it being a Friday afternoon, and even though the return portion was unusable for us - this was still 24 quid cheaper than a standard single). Then when travelling up to Lancaster, the train sat in a field for just long enough to turn our 15 minute connection into a 2 minute dash across Preston station.

In the end, we were only about half an hour later than we would have been if everything had run as well as it does in Japan. So no great disaster overall. And there are sometimes delays in Japan too - due to such things as suicides on the line, earthquakes or typhoons. But here in the UK it seems to be a way of life...I'm looking forward to getting back home!

Thursday, March 23, 2006

Comment on Frame et al

As I hinted recently, I've got a few things to say about Frame et al: "Constraining climate forecasts: The role of prior assumptions", GRL, 32(L09702) (F05). In fact, jules and I submitted them as a comment to GRL a couple of weeks ago.

I'll start off with the bits I like: F05 give a nice demonstration of how the choice of prior can have a significant influence on the resulting pdf in a probabilistic estimation, especially when the observational evidence is quite weak. Furthermore, markedly different results can be generated by different, but apparently natural and plausible, ways of describing initial ignorance. Therefore the same evidence could be used to draw quite different conclusions due to what are ultimately fairly arbitrary choices. It's obviously an important point that doesn't appear to have been adequately considered in at least some previous work.

If they'd stopped there, I'd have had nothing to complain about. But now on to the bits I don't like so much. Firstly, they present what they see as the solution to this problem - they assert that we should choose the prior to be uniform in the variable which we are trying to estimate (ie uniform in climate sensitivity, if we are wishing to estimate this). This, in their words, "resolves" the "arbitrariness" and allows them to generate what they describe as "objectively determined" estimates. "Objective" is a dangerous word to use here, as the probability cannot be objective in the sense of a frequentist probability - what they presumably mean is merely that they are providing an automated rule that removes this element of choice from the procedure (or perhaps, imposing their own subjective judgement in place of anyone else's). But that's not the biggest problem I see in their suggestion. Where their method really falls down is that it generates results which are not self-consistent. As we demonstrate in our comment, their method will generate the pair of results P(X>3)=2.3% and P(X⁴>3⁴)=7.8% from a single observation of X = 2 ± 0.5. Under any standard definition of Bayesian probability, P must be a function, which (again by definition) means it must be single-valued. But X > 3 and X⁴ > 3⁴ are precisely the same proposition (there's no sleight-of-hand with negative values here: X is positive definite, and I could equally have used X³ or X⁵). Therefore their P cannot be a probability at all!
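
For anyone who wants to check this, the inconsistency is easy to reproduce numerically. Here's a toy sketch (mine, not the calculation in our comment; the grid and its cut-off at 20 are arbitrary illustrative choices):

    # Toy reproduction of the prior-dependence problem: one Gaussian observation
    # X = 2 +/- 0.5, analysed with a prior uniform in X versus uniform in X^4.
    import numpy as np
    from scipy.stats import norm

    dx = 1e-4
    x = np.arange(dx, 20, dx)              # X is positive definite; 20 is an arbitrary cut-off
    like = norm.pdf(x, loc=2, scale=0.5)   # likelihood from the observation

    # Prior uniform in X: the posterior is just the renormalised likelihood.
    post_x = like / like.sum()
    print(post_x[x > 3].sum())             # P(X > 3), roughly 0.023

    # Prior uniform in Y = X^4: written in terms of X, its density is proportional to 4*x**3.
    post_y = like * 4 * x**3
    post_y /= post_y.sum()
    print(post_y[x > 3].sum())             # P(X^4 > 3^4), roughly 0.078 - same event, different answer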

There are some generalisations of probability (Dempster-Shafer theory) in which probabilities are defined as taking a range of values. Elmar Kriegler is the only person I know who's gone any distance down this path within climate science. Arguably, this provides a better framework in situations of deep uncertainty, but handling these issues correctly is far from trivial (note that a uniform prior in X does not actually represent a state of true "ignorance", but rather the specific belief that 10 < X < 20 is ten times as likely as 2.5 < X < 3.5, for example). It is not at all clear to us that F05 have provided adequate theoretical justification and underpinnings for what is in fact a rather drastic challenge to the standard view of Bayesian probability, and they certainly haven't (IMO) drawn sufficient attention to the radical implications of their work.

The other complaint we have about their paper is in their description of the sort of problems that we are all attempting to answer. They say:
Unless they are warned otherwise, users will expect an answer to the question "what does this study tell me about X, given no knowledge of X before the study was performed?"
(and they use "no knowledge" to justify their choice of a uniform prior). In context, this could reasonably be re-written as (A) "what would our estimate of climate sensitivity be, if we had no data and knowledge other than that directly considered by this study?"

However, it seems clear to us that what users really want to know is (B) "what is our estimate of climate sensitivity, using all of our data and knowledge?"

The answer to question A will necessarily have greater uncertainty than the answer to question B. If someone wants to generate an estimate of climate sensitivity, they should use all of the data, either by explicitly considering it, or by the use of a prior which encapsulates (as accurately as possible) the information which the study doesn't directly look at! This is precisely the issue that our recent paper addresses, so perhaps it is a bit harsh to pick on F05 in this respect (rather than the numerous other papers that have apparently mixed up the two questions in a similar way). On the other hand, these guys are specifically presenting a theoretical analysis of probabilistic estimation, together with recommendations as to how we should all go about it in future (rather than just having a go at producing an estimate themselves), so it's surely more important that they get it right. We certainly don't think that their opinions should be accepted by default, without some meaningful debate over the issues.

Inevitably, given the space constraints of a 2 page comment, it is hard to get the points across clearly without running the risk of appearing overly hostile. That's life, and I'm sure they have thick enough skins to cope. Indeed, depending how they reply, our comment might end up in the bin anyway - unlike most papers, where I only have to convince some neutral referees and can therefore be pretty confident of publication, in this instance there is (at least potentially) an opponent who will try their best to point out weaknesses in our case.

Both these comments, and the contents of our recent paper, are summarised on our poster for the EGU in a couple of weeks. It will be interesting to see how they go down. Unfortunately I'll not be there, so jules will have to face the angry horde by herself :-)

Wednesday, March 22, 2006

Yes I *DO* pay Road Tax

It's a commonly-heard complaint in the UK that "cyclists don't pay Road Tax". This is apparently considered justification for all manner of dangerous, aggressive and incompetent behaviour on the part of car drivers - you don't pay tax, so it's ok to drive into you. There are numerous reasons why this argument is invalid a priori, and of course the sociopathic conclusion doesn't follow from the premise anyway, but I can't be bothered going into that in more detail here (see here and here for some of the arguments).

What I can be bothered blogging about is the fact that the Chancellor of the Exchequer (and Prime Minister in waiting) has just introduced a new banded system of Vehicle Excise Duty ("Road Tax" in common parlance) which specifies a nil rate for vehicles with sufficiently low emissions. There aren't many cars in the bottom band ("A"), but a bicycle would certainly slot right in there!

I expect an innovative clothing supplier or cycling activist group to start offering t-shirts for sale with a "Band A Vehicle" logo :-)

[Of course the pedants may still point out that technically, we still don't pay this tax on bicycles; they are exempt. But in practical terms, it's good enough for me (um...or would be, if I was in the UK).]

Tuesday, March 21, 2006

Simulator 2 news

Looks like good progress is being made on the new technologies that are expected to bring 10 petaflop computing by the year 2010. The Earth Simulator, deposed a year or two ago from the top of the Top 500 list, comes in around 40 teraflops, which means that the upcoming machine will be faster by a factor of 250. It will be interesting to see if the new machine grabs the top spot as comprehensively as the ES did - when the ES was first switched on, it was 5 times faster than its nearest rival, and it held on to the top spot for more than 2 years.

Interesting article on immigration in Japan

In the JT today:
Sakanaka recently poked his head above the bureaucratic barricades, suggesting that Japan allow in 20 million immigrants over the next half century.
The author of that opinion is not some radical revolutionary, but the ex-head of the Tokyo Immigration Bureau. And there's lots more:
"It's almost taboo to raise the issue of mass immigration here," he says.

"Japan has no experience of this, only of sending people abroad. Modern Japan almost totally shuts out foreigners and the only people who debate the issue are specialists. Nobody is even researching it."
Meanwhile, the politicians talk up the dangers of terrorism and re-introduce policies to fingerprint all foreigners (and keep the data for 80 years)...

Saturday, March 18, 2006

Tallahassee trip

I've just been to Tallahassee for a few days. I've already written about the workshop that motivated the trip. Usually I try to fit a bit of a holiday around long-distance conference trips, but a combination of Japanese rules, and the fact that the organisers were paying, made it a bit harder than usual this time, and I had not made any plans before I flew. In fact, I'm embarrassed to say I didn't even know where Tallahassee was before I landed there :-) Nevertheless, I still managed to see a little bit more than the inside of a lecture room.

Tallahassee seemed a very pleasant town, at least in March (I don't think summer can be much fun). On the short walk from the hotel to FSU,


I wandered past several real live frat houses - yes, they really exist outside of Animal House!



FSU has a green and pleasant campus, with lots of oaks which all had beards of Spanish Moss dangling off them. Wisteria and dogwood (pictured) were flowering, along with azaleas and rhododendrons.

Lunches were provided on-site, but in the evenings I managed to sample a few pints of the surprisingly good Blue Moon beer and some pleasant enough but not exceptional cooking at "Andrews".

The meeting ended at lunchtime on the Wednesday, and some people weren't flying until the evening (I had a whole extra night), so a group of us drove out to Wakulla Springs for the afternoon. This is a large deep spring in a limestone region, a little like the Fontaine de Vaucluse except with a much bigger, flatter, river. And alligators. Lots of alligators.


Last time I went to Florida I didn't see any, and it's the one question people ask, so it's nice to have encountered them outside zoos (not that they looked different, of course). There were ospreys and vultures and lots of other birds, familiar and unfamiliar.


There are supposed to be fossilised mastodon bones 150m down at the bottom, but the water was a bit too dark to see that far.

The travelling was uneventful, and wasn't even too painful, considering Florida is about as far away as it's possible to get. In fact some people coming all the way from Colorado took longer than I did, spending a night in a hotel lobby in Memphis as there was apparently not a free bed in the whole city! I spent the time mostly reading the amazing "Number 9 dream", a Christmas present from my sister which I'd been saving up for this trip (I think she read it on her trip to visit us last year). Also, I wrote this (but couldn't actually post it).

Wednesday, March 15, 2006

Florida

I suppose I should show off by blogging live from this Workshop on Predictability, observations and uncertainty in geosciences using my Zaurus via wifi. I haven't had much time over the last couple of days as it's been back-to-back interesting talks without any boring sessions.

It's been great to get re-acquainted with the bleeding edge of numerical weather prediction (NWP). This is really where it's at in prediction science IMO - these guys have to make forecasts twice a day and mistakes stick out embarrassingly. There is no making up stuff and hiding behind the plausible deniability of "our model suggests..." Climate scientists who think there is nothing to learn from this field are really missing out.

The more eminent speakers include (in speaking order) Lorenz, Le Dimet, Kalnay, Krishnamurti, Toth, Anderson. Some perhaps are merely brilliant scientists, several are legends. Exalted company indeed. I understand that the talks will shortly be made available from the website for those who are interested. My talk seemed to go down ok, but I'm not sure that we are quite close enough to the NWP problems to be giving something back to this community yet. We are working on it!

The final session is thinning out, with people drifting off home. I've come away with at least 2 copper-bottomed ideas for future work which will substantially advance on our existing methods. That certainly makes it a success in my book...now all I need is a few more hours in the day to work on them.

Saturday, March 11, 2006

Sedimentary, my dear Watson

Had a visit from Andy Ridgwell and his postdoc Jenny Brauch last week. They weren't visiting me, but rather jules and the other paleoclimatologists around here. Andy is doing biogeochemical and sedimentary things with the GENIE model, and our ensemble Kalman Filter is proving to be a very powerful model tuning tool. I went along to some seminars to learn a little...

One thing that fossil fuel combustion will mean is a lot more CO2 dissolved in the ocean, and this leads to less calcium carbonate (CaCO3) deposition in sediments (and even dissolution of what is already deposited in the upper sediment layers). Evidence of this can be seen in the past (eg PETM), when there is also a spike in carbon isotopes suggesting a big release of some old carbon (potentially methane clathrates). Andy's been simulating what might have happened, using a biogeochemical and sediment model embedded in the atmosphere/ocean system.

He's got some really impressive results in the pipeline (being written up), and here is a sneak preview:


The RH lines are plots of carbonate concentration from actual cores, and the two LH plots are simulations under slightly different conditions. One interesting detail in the far LH plot is that the spike appears to start at a different time in the purple core. This is actually because the carbonate is eroded down further in the other cores, eating back in time and erasing the sediment record. It does make me wonder at the precision of core dating, since generally spikes in different cores are assumed to be simultaneous and so are lined up (of course the scientists concerned are well aware of the problems). This is the sort of thing that can only really be investigated by "forward modelling of proxies" (increasingly fashionable these days) in which the proxy creation is embedded into the model itself, rather than the old traditional approach of inverting the proxy into climate variables, and then comparing these variables to your model.

In the next few centuries, I suppose we'll erase the past 10-100,000 years of carbonate history. I wonder what the androids alive 50 million years from now will think of it. In terms of climate change it is somewhat helpful, as the carbonate ions neutralise the aqueous CO2, increasing the ocean's capacity to dissolve CO2 (somewhat counterintuitive to numpties like me that the carbonate erosion helps things, but even though the total load of dissolved carbon increases, the CO2 and CO3(2-) together with an H2O make two HCO3(-) ions which do not exchange with the atmosphere). It's not a massive effect, though, and the bugs that try to make CaCO3 might not like the increased acidity much.


It wasn't all work and no fun - as well as a trip to Tokyo (which we had to do anyway, for this), we showed them round Kamakura. Andy is a vegetarian, and Japan isn't the easiest place to find good vegetarian food. So he was happy to be taken to the shojin-ryouri restaurant just outside Kenchoji temple. I'm not sure that the beer is strictly Buddhist, but neither are we :-)

(T-side also does a lot of veggie food - we went there for dinner)

Thursday, March 09, 2006

On being a foreign researcher in Japan

A couple of surveys have plopped into my pigeonhole recently, asking for my opinions as a "Foreign Researcher in Japan". Both are fairly official, being organised by the Japan Science and Technology Agency and Ministry of Education, Culture, Sports, Science and Technology respectively. The latter is taking place as part of the UNESCO "Careers of Doctorate Holders" project. The accompanying letters talk up their intentions for such things as "expanding opportunities for excellent foreign researchers" and "to start appropriate policies with regard to highly qualified people in order to ensure their careers development all over the world" and "examining measures to enable non-Japanese scientists to ... play a more active role at universities and research institutes in Japan." So I was disappointed by the content of the surveys.

In fact the surveys were both (especially the latter, which is uppermost in my mind) little more than rather tedious data-gathering exercises asking for details of education and every job I have ever had (yes, really). While I can see this being of some interest to bureaucrats around the world, I don't see it addressing the matter of "expanding opportunities" or "ensuring career development" or "playing a more active role", since there was precious little interest in what we thought of our situation here or how these matters could be addressed. Out of about 10 pages on the second survey, there was one multi-choice question on this subject with boxes to tick to indicate degree of satisfaction.

The big elephant in the room that neither of the surveys talked about is the difficulty of getting tenure if you are foreign. Many (although not all) universities limit foreigners to short-term contracts as a matter of policy. At my institute, there is virtually no career development or promotion system at all, that I am aware of (you can move from "post-doc" to "researcher", but not beyond). There are some staff who were appointed to permanent positions as "Group leaders" but apparently no way of contract staff moving to this status. (These aren't actually foreign v Japanese issues. But they matter, and no-one seems to be doing anything about them, despite the fact that the lack of career development for more junior staff was specifically criticised in the very first 5-year assessment of the institute, way back in 2001.)

As a contrast, in NERC labs like the one where I used to work, there is a well-defined promotion system, based on levels of achievement (primarily, but not entirely, published papers and grant income) and your importance to the mission of the institute. Evaluation is performed across (almost) all NERC institutes at the same time by a panel drawn from more senior scientists from the different labs. It's not a perfect system, but at least it exists!

The vast majority of academic visitors to Japan are short-term post-docs, and I guess for them, such issues as tenure and career development aren't a high priority. I would have no hesitation in recommending that such a person come here, and enjoy an interesting time, so long as they are happy to take a year or two treading water in terms of their career. I'm sure that most of them have a great time here and suffer no ill-effects as a result. However, Japan is considering introducing some sort of 10-year visa, aimed at people coming to do a PhD and then spending some time as a post-doc afterwards. Bear in mind that 10 years is not usually enough for someone to even qualify for permanent residency, let alone citizenship (unless you happen to marry a Japanese citizen). So at the end of 10 years, they will be vulnerable to being unceremoniously chucked out, and unless they have been particularly careful and/or lucky, they might find it hard to find a decent job anywhere else. 10 years as a graduate/postdoc (which potentially means slave labour in any culture) with no chance of tenure sounds like a recipe for disaster to me.

Well, that's my opinion. If they didn't want it, they shouldn't have asked :-)

Wednesday, March 08, 2006

Japanese press coverage

We had a press conference last week about our paper. It was all in Japanese, expertly presented by Seita Emori (who did a great job firstly in writing a nice talk, and then in presenting it). Since none of the journalists who attended showed any inclination to speak any English, and our Japanese is still regrettably poor, it was a rather relaxing experience, and we had little to do other than sit there. I was a bit disappointed that no-one wanted to take pictures of us all saying "chiizu", but you can't have it all :-)

Most of the subsequent questions actually related to the difference between sensitivity and a projection under a specific scenario, and there wasn't much asked about the work itself. But then the next day, three minor papers ran short articles on the story:

The names of these papers roughly translate as the "Nikkei Industry Newspaper" and "Nikkan Industry Newspaper" respectively, and I get the impression they have a strong leaning towards science and technology, rather unlike any UK newspaper I've ever heard of (perhaps closest to NewScientist, I'm not sure). They both more-or-less faithfully report the press release (at least an abbreviated version of it), and one of them even goes as far as mentioning Bayes Theorem! When's the last time you saw something like that mentioned in The Times?

And here's the Denki Shimbun ("Electricity Newspaper", click for a larger version):



Monday, March 06, 2006

Forthcoming workshop

I'm off to Florida next week for a Workshop on Predictability, Observations, and Uncertainties in Geosciences. It looks like they have a good bunch of invited speakers - mostly :-)

The poster is quite pretty too.

If I'm not heard from again, that probably means that Immigration decided that I was an International Tourist from the Al-Gebra organisation carrying Weapons of Maths Instruction.

Friday, March 03, 2006

Press coverage

We're in the news again! Yes, the heady heights of Al-Jazeera. Really.

(OK, it is nothing more than a cut-n-paste of the Guardian article. But I found it funny that this was the first [um, only] foreign news outlet to mention the story.)

A new improved estimate of climate sensitivity!?

Someone (thanks Sven) recently drew my attention to Forster and Gregory, 2006: The Climate Sensitivity and Its Components Diagnosed from Earth Radiation Budget Data.

Using results from the ERBE project, they have estimated climate sensitivity to lie in the range 1.0-4.1C (95% confidence interval). This looks like a very impressive headline result, but on closer examination, it's not quite what it seems (the authors are very up-front about this, I'm not for a minute accusing them of trying to mislead anyone). In fact their paper is a rather technical one that presents a method to investigate the different contributions to overall sensitivity, and it does not make a big noise about this overall result.

So, why is this 1.0-4.1C range not an exciting new result? A recent paper by Frame et al (including Gregory) gives a nice explanation and illustration of the point, and I'll give a simple reprise of it here. In order to use Bayes' Theorem to generate a probabilistic estimate from observational data, we need to start from some prior distribution - typically, we choose this to be something fairly ignorant (like a uniform distribution) in order to minimise the risk that this will strongly distort our results.

However, climate sensitivity S is related to something called the feedback parameter L via the equation

S=3.7/L

(where 3.7 is the radiative forcing of doubled CO2, in Wm-2). S and L are both unknown, of course. This inverse relationship means that a uniform prior in S is a strongly biased prior in L, and vice-versa. Whichever prior we choose has strong implications for our result, one way or the other. There really is no way round this problem, short of a whole new treatment of probabilistic inference such as imprecise probability (which IMO is not really justified in this case, although it's an interesting idea). Frame et al recommend that if you want to estimate S, you should choose the prior to be uniform in S. I disagree with some aspects of their presentation (about which more later) but agree that this is probably a sensible convention. However, F&G chose their prior to be uniform in L, for reasons which they explain and justify - basically, they are primarily interested in L and its components, and do not wish to bias those results.
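
To get a feel for how different the two conventions really are, here's a little change-of-variables sketch (purely my own illustration, nothing to do with F&G's actual calculation; the bounds on L are made up):

    # A prior flat in the feedback parameter L is far from flat in S = 3.7/L:
    # its density in S is proportional to |dL/dS| = 3.7/S**2, piling up at low S.
    import numpy as np

    rng = np.random.default_rng(0)
    L = rng.uniform(0.1, 10.0, 1_000_000)   # illustrative flat prior on L (W m-2 K-1)
    S = 3.7 / L                             # the sensitivities those samples correspond to

    print((S < 3.7).mean())    # about 0.91: over 90% of the prior mass sits below 3.7C
    print((S > 10.0).mean())   # about 0.03: hardly anything above 10C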

I've attempted a simple reanalysis of their results to illustrate the importance of their decision. Again, this is rather similar to (but simpler than) the example that Frame et al present. I'm not sure that all details of my analysis are strictly valid because I'm not starting from F&G's original data but merely working backwards from their result, but I think it shouldn't be far off.

Case A, the top row, shows the results presented by F&G (click the pic for a larger version). The blue lines give their prior distributions and the red is the posterior. You can see that the top left (feedback) has a nice flat prior, which gives a perfectly Gaussian posterior with 95% CI of 0.9-3.7 (cos that's what I assumed as an input based on F&G). The top right, however, has a strongly skewed prior, which favours low values. The 95% CI of the posterior here is 1.0-4.1C, as F&G state.


The lower row (Case B) shows what they would have obtained if they had chosen a uniform prior in S. In this case, the left plot has a strongly skewed prior and the posterior is shifted to lower values compared to Case A - in fact it has a bimodal distribution. The LH peak would have shot off to infinity but I truncated the calculation. The bottom right graph shows the effect on climate sensitivity - the posterior has a long tail that just keeps on going (it's clearer on the larger version below). In this calculation, the 95% confidence interval of the posterior runs from 1-15C, but that's only because I artificially truncated the calculation at 20C. Without that limit, I think it would not have had a finite upper bound at all.
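
For the curious, here is roughly how my reanalysis works (a sketch only: the Gaussian in L is reverse-engineered from the quoted 0.9-3.7 interval rather than taken from F&G's data, and the 20C cut-off is arbitrary):

    # Reverse-engineered sketch of Cases A and B. A 95% CI of 0.9-3.7 in L is
    # treated as a Gaussian with mean 2.3 and sd 0.7 (W m-2 K-1).
    import numpy as np
    from scipy.stats import norm

    dS = 0.001
    S = np.arange(dS, 20, dS)                     # arbitrary truncation at 20C
    like = norm.pdf(3.7 / S, loc=2.3, scale=0.7)  # the constraint, as a function of S

    def ci95(post):
        cdf = np.cumsum(post)
        cdf /= cdf[-1]
        return S[np.searchsorted(cdf, 0.025)], S[np.searchsorted(cdf, 0.975)]

    post_A = like / S**2   # uniform prior in L is equivalent to a 1/S**2 weighting in S
    post_B = like          # uniform prior in S: the posterior is just the renormalised likelihood

    print(ci95(post_A))    # close to the quoted 1.0-4.1C
    print(ci95(post_B))    # fat upper tail; the upper bound depends entirely on the cut-off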

So, that's why F&G didn't trumpet their results as having solved the climate sensitivity estimation problem.

However, their analysis does clearly favour a moderate value for S over an extreme one, and it appears to be just about entirely independent of all the constraints that we used in our recent paper (hmm...I wonder if anyone will argue that the overlap in obs with the recent temperature record makes this not strictly valid?). Therefore, it can (in fact should) be added into the mix in order to generate an improved overall estimate. I've done this below:


For context, I've left on the original 3 constraints (blue) that we used to generate the main result (red). F&G's constraint (technically a likelihood function, the same shape as the posterior in the lower right plot above) is the green dashed line. You can see how it stubbornly refuses to drop to zero as it leaves the right hand edge of the plot, but that doesn't matter as we already know those values are implausible.

Multiplying the three blue curves and the green dashed one, we get the green solid shape. This is now a new improved estimate of climate sensitivity, which takes account of F&G's work. The upper limit has dropped to about 3.9C. The median is lowered to about 2.5C. You can also see how the peak is slightly higher, indicating the lower uncertainty that arises as a natural consequence of adding more information.
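
Mechanically, this "adding into the mix" is nothing more than multiplying the independent likelihoods together pointwise and renormalising. A toy version (with made-up lognormal-ish curves standing in for the real constraints) looks like this:

    # Combining independent constraints via Bayes' Theorem: pointwise product,
    # then renormalise. The three curves are invented stand-ins, not the ones
    # actually used in the paper.
    import numpy as np
    from scipy.stats import norm

    dS = 0.01
    S = np.arange(dS, 20, dS)

    constraints = [norm.pdf(np.log(S), np.log(3.0), 0.45),
                   norm.pdf(np.log(S), np.log(2.8), 0.55),
                   norm.pdf(np.log(S), np.log(3.2), 0.50)]

    combined = np.ones_like(S)
    for c in constraints:
        combined *= c                  # independent evidence multiplies
    combined /= combined.sum() * dS    # renormalise to a pdf on the grid

    cdf = np.cumsum(combined) * dS
    print(S[np.searchsorted(cdf, 0.50)])   # median
    print(S[np.searchsorted(cdf, 0.95)])   # 95th percentile: well below that of any single constraint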

I don't intend to write another paper presenting this result :-) I think it is reasonable to err on the side of overestimating uncertainty, to make some allowance for the "unknown unknowns". And we'd already made it as clear as we could in the paper that we didn't really think there was as much as a 5% chance of S exceeding 4.5C. But in any case F&G's work does, I think, add some significant extra support and credibility to what we've already presented. I await the forthcoming IPCC draft with interest...

Thursday, March 02, 2006

Climate sensitivity is 3C

Plus or minus a little bit, of course. But not plus or minus as much as some people have been claiming in recent years :-)

So, our paper has now been accepted, and should be published in a week or two. We think it poses a strong challenge to the "consensus" that has emerged in recent years relating to observationally-based estimates of climate sensitivity, both in terms of the methods used, and the value itself. Remember that climate sensitivity is generally defined as the equilibrium globally-averaged surface temperature rise for a doubled concentration of atmospheric CO2 - so it's a simple benchmark to describe the sensitivity of the global climate to the sort of perturbation we are imposing. Here is what we did...

As you might have noticed, over recent years there have been a number of papers using observational data in an attempt to generate what is sometimes called an "objective" estimate of climate sensitivity. Of course, as you will hopefully realise having read my previous posts about Bayesian vs frequentist notions of probability, there isn't such a thing as a truly objective estimate, since in a situation of epistemic uncertainty, observations can only ever update a subjective prior, and never fully replace it. Moreover, subjectivity goes a lot deeper than merely choosing priors over some unknown parameters - in all scientific research, we always have to make all sorts of judgements about how to build models and analyse evidence. But still, we'd all like to have an estimate of climate sensitivity which can be traced more directly to the data, to replace the old IPCC/Charney report estimate of "likely to be between 1.5-4.5C".

A common approach is to use an ignorant prior (generally, although not always, uniform in climate sensitivity) and look at how observations of the recent (say 20th century) warming narrow the distribution. The unfortunate answer is that it doesn't actually narrow it much, mainly because we don't actually know the recent net forcing (sulphate aerosols have a highly uncertain but probably cooling effect which offsets the GHG forcing - if the net forcing is low, then sensitivity must be high to explain the observed warming). I've discussed that further here, and see also the RealClimate posts here and here. Our best estimates give a value of around 3C for climate sensitivity, but values in excess of 6C and perhaps even 10C cannot be ruled out. As a result of numerous studies of this nature, it has been frequently written that we cannot rule out a climate sensitivity of 6C or even substantially more, which is widely regarded as an essentially disastrous situation.
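
The stubbornness of that tail is easy to see with a toy Monte Carlo (all the numbers below are purely illustrative, not taken from any of the studies): in a crude energy-balance view, the sensitivity implied by the 20th century record scales like the observed warming divided by the net forcing less the heat going into the ocean, so the low-but-possible net forcings map onto very high sensitivities.

    # Toy Monte Carlo of the uncertain-forcing problem; all numbers are illustrative.
    # Crude energy balance: S ~ 3.7 * dT / (F - Q), so a small-but-possible net
    # forcing translates into a very large possible sensitivity.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000_000
    dT = rng.normal(0.7, 0.1, n)   # 20th century warming (C), illustrative
    F = rng.normal(1.6, 0.7, n)    # net forcing incl. uncertain aerosols (W/m2), illustrative
    Q = rng.normal(0.7, 0.2, n)    # ocean heat uptake (W/m2), illustrative

    ok = (F - Q) > 0.1             # discard the physically silly samples
    S = 3.7 * dT[ok] / (F[ok] - Q[ok])

    print(np.median(S))            # a moderate central value...
    print((S > 6.0).mean())        # ...but a substantial chance of S > 6C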

There are some other approaches that can be tried. The cooling effect of a volcanic eruption such as Mt Pinatubo in 1992 also provides some evidence about climate sensitivity. If climate sensitivity is very low, we would expect a modest short-term cooling, but if sensitivity is high, a greater cooling is expected, and it should take longer to recover. We can't get a precise value from this method, since this forced cooling isn't much greater than interannual variability in surface temperature. Wigley et al analysed several recent volcanic eruptions with a simple energy-balance model and found that a value of about 3C looked pretty good in each case, but values as high as about 6C (and as low as 1.5C) could not be completely ruled out. Yokohata et al got broadly consistent results with two versions of a full AOGCM.
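
A toy one-box energy balance calculation (nothing like the models Wigley et al or Yokohata et al actually used, and with made-up numbers throughout) illustrates the basic idea: for the same forcing pulse, a high-sensitivity climate cools more and takes longer to recover.

    # Toy one-box energy balance:  C dT/dt = F(t) - (3.7/S)*T, with a decaying
    # volcanic forcing pulse. The heat capacity and forcing are illustrative only.
    import numpy as np

    def volcanic_response(S, C=5.0, years=10.0, dt=0.01):
        n = int(round(years / dt))
        T = np.zeros(n)
        for i in range(1, n):
            t = (i - 1) * dt
            F = -3.0 * np.exp(-t)   # decaying volcanic forcing (W/m2)
            T[i] = T[i - 1] + dt * (F - (3.7 / S) * T[i - 1]) / C
        return T

    for S in (1.5, 3.0, 6.0):
        T = volcanic_response(S)
        # sensitivity, peak cooling (C), anomaly remaining after 5 years (C)
        print(S, round(T.min(), 2), round(T[int(round(5 / 0.01))], 2))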

We can also look to the paleoclimate record for evidence from our planet's past climate. During the last ice age, the total radiative forcing was roughly 8Wm-2 lower than today (mostly due to lower CO2 and large ice sheets, with dust and vegetation changes also contributing). 8Wm-2 is roughly twice the forcing of doubled CO2 (although in the opposite direction), so with the global temperature at that time being about 6C cooler than at present, a climate sensitivity of about 3C looks pretty good again. However, again there are significant uncertainties in all of these values I've quoted, and it's also not clear that one value of climate sensitivity will necessarily apply both to doubled CO2 and to this rather different forcing. In fact model results (such as our own) show a fair amount of uncertainty in the response to these different scenarios.
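
Spelt out, that back-of-the-envelope sum is just the following (using the rough numbers quoted above):

    # Back-of-the-envelope LGM estimate from the rough numbers in the text.
    dF = 8.0     # W/m2, approximate LGM forcing deficit relative to today
    dT = 6.0     # C, approximate LGM cooling
    F2x = 3.7    # W/m2, forcing of doubled CO2
    print(dT * F2x / dF)   # about 2.8C, i.e. "about 3C"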

There have been some other ideas, based on how well a model reproduces our current climate (say the magnitude of the seasonal cycle) or other quasi-steady climate states with significantly different forcing, such as the Maunder Minimum. Again, these analyses point towards ~3C as being the best answer, but their uncertainties mean that none of them can rule out 6C or thereabouts as an upper limit.

So all these diverse methods generate pdfs for climate sensitivity that peak at about 3C, but which have a long tail reaching to values as high as 6C or beyond at the 95% confidence level (and some are even worse). As a result, it's been widely asserted that we cannot reasonably rule out such a high value.

So, what did we do that was new? People who have read this post will already have worked out the answer. We made the rather elementary observation that these above estimates are based on essentially independent observational evidence, and therefore can (indeed must) be combined by Bayes' Theorem to generate an overall estimate of climate sensitivity. Just like the engineer and physicist in my little story, an analysis based on a subset of the available data does not actually provide a valid estimate of climate sensitivity. The question that these previous studies are addressing is not
"What do we estimate climate sensitivity to be"
but is instead
"What would we estimate climate sensitivity to be, if we had no information other than that considered by this study."
The answers to these two questions are simply not equivalent at all. In their defence - and I don't want people to think I'm slamming the important early work in this area - at the time of the first estimates, the various distinct strands of evidence had not been examined in anything like so much detail, so arguably the first few results could be considered valid at the time they were generated. However, with more evidence accumulating, this is clearly no longer the case.

When we combined some of the most credible and solidly-grounded (in our opinion) estimates arising from different observational evidence, we found that the resulting posterior pdf was substantially narrower than any of the observationally-based estimates previously presented. It's inevitable that such a narrowing would occur, but we were surprised by how substantial the effect was and how robust it was to uncertainties in the individual constraints. I suppose with hindsight this is obvious but we admit it did rather take us by surprise. As recently as last summer, I was happily talking about values in the 5-6C region as being plausible, even if the 10C values always seemed pretty silly.

The paper didn't exactly sail through the refereeing process, but has now been seen by a lot of researchers working in this area. Although many of our underlying assumptions are somewhat subjective, our result appears very robust with respect to plausible alternatives (this was rather a surprise to us). No-one has actually suggested that we have made any gross error (well, some people are rather taken aback at a first glance, but they have all come round quickly so far). It's important to realise that we have not just presented another estimate of climate sensitivity, to be considered as an alternative to all the existing ones. We have explained in very simple terms why the more alarming estimates are not valid, and anyone who wants to hang on to those high values is going to have to come up with some very good reasons as to why our argument is invalid, coupled with solid arguments for their alternative view. A few nit-picks over the specific details of our assumptions certainly won't cut it.

As for the upper limit of 4.5C - as should be clear from the paper, I'm not really happy in assigning as high a value as 5% to the probability of exceeding that value. But there's a limit to what we could realistically expect to get past the referees, at least without a lengthy battle. It is, as they say, good enough for Government work :-)

What should journalists do?

Someone asked me recently how I thought journalists should cover science stories. I'm not sure I answered very well at the time, but since then I came up with a couple of thoughts.

I think the golden rule to remember is that a new item of research, even if it's appeared in a prestigious peer-reviewed journal, is not in itself a new "truth" about the world. It is only the current opinion of a couple of researchers working in a particular area. It's always worth bearing in mind that the researchers themselves might change their minds in a few months and even if they don't, it is quite possible that the rest of the scientific community will decide they are talking nonsense and either criticise or (what is perhaps worse) ignore the study. Peer review is only the first step of the evaluation process by which new ideas become part of the established body of knowledge. It merely means that another couple of researchers - who may know the authors quite well on a professional or even social level - have read the paper, seen no glaring errors and are prepared to take most of the details on trust. So the whole process amounts to someone saying "this is our idea, what do you all think?" with a filter that helps to keep out the most obvious errors. It's rare that a paper will be both highly significant (in terms of overturning an established consensus), and accepted without a murmur of disagreement from all quarters. So if it's hot off the press, it's perhaps best thought of as an opinion more than a fact, even though the opinion may (and hopefully in most cases does) have some strong grounding in reality. In time, it will generally become clear that a significant majority of researchers either accept the work or reject it (or perhaps understand its strengths and limitations).

I don't mean to denigrate the contribution of the reviewers - I've been grateful for the many helpful comments I've received through the process - but I once saw an estimate that each reviewer takes about 1hr per paper (this does seem low to me, and I suspect it may be different across fields, but then again I don't get asked to review much myself). So I think it's important to keep a sense of perspective about things.

This limited review is potentially a bit of a problem with the IPCC reports, which may use hot off the press results which have not yet really stood the test of time. Of course, ignoring the most recent stuff would be just as easy to criticise. I guess the authors have to use their judgement as to how trustworthy the 2006 papers are.

Frogblog

I've been wondering what to say about this Nature paper for some time. It made the strong statement that global warming was driving widespread extinction of amphibians, and it attracted a lot of media attention.

The extinctions are clearly linked to a virulent fungal disease, but the paper finds a correlation between temperature and extinction events, and hypothesises that the warmer climatic conditions are favouring the fungus. Now climatic conditions certainly can have an influence on the virulence of fungal pathogens - anyone who's had mildew on their roses or a case of athlete's foot will recognise the influence of the local microclimate in modulating fungal growth - but the evidence for this actually having been the major cause in wiping out the frog populations seems slim to me. There is, as far as I can tell, no real epidemiological evidence that it is likely to be the case - indeed the authors themselves state (as does the Wikipedia page above) that warmer conditions should in principle favour the frogs, so they hypothesise further that the warmer conditions bring more cloud which results in less sun and lower peak temperatures. It's all very interesting, and I've nothing against interesting ideas, but it seems rather more tenuous than you'd think at a first glance.

What's clear is that this disease is a recently introduced one, which has been spread by human activity over recent decades. Whenever the disease reaches a new area, it has devastating effects. This paper makes a very strong case that the disease originated in Africa, where it happily coexists with its main host the African Clawed Toad. After 1934, it started to be spread around the world (by human activity, initially due to the ACT's use in pregnancy testing and other biomedical research, more recently the pet trade and perhaps even tourism) at an increasing rate, leaving a trail of devastation in its wake as it infected species which had no naturally-evolved resistance to it. I spotted this news item more recently, which talked about the same disease affecting a nearby area:
Researchers from Southern Illinois say the fungus, which causes the infectious disease chytridiomycosis that affects amphibians, arrived in Panama in 1993 and was detected in El Cope, an area near the Caribbean with many frogs, in October 2004.

Within four months, it had wiped out 57 out of a total of 70 frog, toad and salamander species, including many golden frogs, in the area.

"The golden frog is already endangered because of habitat loss and collecting for the pet trade," Lips said.

That's an extinction of over 80% of the species in one area in 4 short months! And the researcher mentions another two threats, one of which must be fairly generic (habitat loss) even if most species are not directly collected as pets.

The Nature paper doesn't attempt any balanced consideration of the various factors which might have been contributing to the extinctions they studied. The obviously dominant factor is the recent introduction of the disease, and it's hard to see how a 0.5C warming can have a large effect compared to the other threats. Note that these frogs live in mountainous areas where their natural range covers a vastly greater span of temperatures, and a 100m elevation change would more than compensate for the warming. I bet that if the climate had been cooling in recent decades, there would still have been many extinctions - I've really no idea if it would be more or fewer, but I'm sure there would have been researchers concerned about it in any case.

By the way, the disease now seems to be getting established in the UK, which is obviously very bad news for frogophiles.

For two more takes on it, have a look here and here. I should make it clear (before anyone accuses me of turning septic) that I'm not much of a fan of either of those authors, and indeed I've previously had a couple of run-ins with them myself (eg here and here). But that doesn't mean I will automatically reject everything they write, even though I do view their comments with a rather sceptical eye. It seems to me that their criticisms are generally well-founded in this case.

I've tried to encourage RealClimate to write something about this story, without success. My guess would be that they have similar reservations to me, but are too circumspect to voice them openly. RC authors are of course welcome to confirm or deny this either in the comments, or directly by email if they prefer :-)