Tuesday, November 29, 2011

[jules' pics] Let's pink

In case anyone was wondering who on earth buys those brightly coloured SLRs... young women are powerful consumers in Japan... 

Let's pink

"The real world is the one on the back of my camera"  

[Hachimangu, Kamakura, Japan]


--
Posted By Blogger to jules' pics at 11/29/2011 03:21:00 PM

Monday, November 28, 2011

[jules' pics] Vitamin C overdose


garden citrus
I suppose it might be the same in all those exotic countries in which citrus grow easily, but I find it odd that Japanese people don't eat the fruits of their own trees. Instead, like in the photo, the fruits remain on the garden trees as decoration all winter long. A British person did once tell me that he walked the lanes and successfully gained access to his neighbours' trees in order to make marmalade. Japanese people do eat a lot of citrus at this time of year. There are many different varieties of delicious satsumary things available, and a family may have a big boxful to work on during the New Year holiday. I'm not sure if what my Japanese friend told me can really be true - that children sometimes turn orange from eating too many!? I also don't know why the Japanese do not grow grapefruit - they are available but imported from the USA. The grapefruity thing in the picture is probably a yuzu, which is generally smaller and sweeter and less juicy than grapefruit. The tree was growing in someone's garden in Kamakura, with the boughs overhanging the road. I could have picked it if I was James' height.


--
Posted By Blogger to jules' pics at 11/28/2011 12:49:00 PM

Friday, November 25, 2011

More on Schmittner

OK, so the Schmittner paper is out, along with a commentary in Science, and I've had a few days to digest it more thoroughly. What I said before about past v future asymmetry still holds true, but there is another point which may be more interesting.

The model results actually don't fit the land data very well, being generally too warm. A key plot is the sensitivity analysis where they compare results when land and ocean data were used separately, versus together. Clearly, the combined analysis looks almost identical to the ocean-only results, and the land-only results are radically different. In fact, they barely overlap with the ocean-only results.


Of course, there is no reason why these results should match exactly, or even closely - remember, they are not estimates of "the pdf of sensitivity" but rather, probabilistic estimates of the sensitivity - but they do need to overlap in order to be taken seriously (if they don't, at least one has to be wrong). The true value has to lie in their intersection, which is rather narrow in probabilistic terms - the 90% range of the land-only pdf is 2.2-4.6C, that of the ocean-only is 1.3-2.7C.
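As a back-of-envelope check (using only the two 90% ranges quoted above, read off the paper), the intersection is easy to compute:

```python
# Intersection of the two 90% probability ranges quoted in the post
# (land-only and ocean-only sensitivity estimates, in C).
land = (2.2, 4.6)
ocean = (1.3, 2.7)

lo = max(land[0], ocean[0])   # larger of the two lower bounds
hi = min(land[1], ocean[1])   # smaller of the two upper bounds

if lo < hi:
    print(f"Overlap: {lo}-{hi} C (width {hi - lo:.1f} C)")
else:
    print("The two ranges are disjoint")
```

So the two estimates only overlap in a 0.5C sliver, which is the sense in which their intersection is "rather narrow in probabilistic terms".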

The explanation for this near-disjoint pair of distributions is that the model does not represent the land-ocean temperature contrast well (this is a characteristic behaviour of this sort of model, as the authors acknowledge), so can only fit one set of data at a time. When faced with both, it prefers the ocean, partly because these data are more plentiful, and partly because it is given the prior belief that the land data are less accurate (which they probably are, to be fair). The poor fit to land data then results in the statistical method assigning even less weight to these data through the spatial error term mentioned in the supplementary on-line material, and in the end result they are almost ignored. In the final analysis, the cooling over land (and perhaps also the polar amplification) seems to be significantly underestimated, leading to their rather warm LGM state which is only 3C cooler than the modern (pre-industrial) climate. One might reasonably expect that their future simulations also underestimate the temperature change over land, meaning the sensitivity estimate is on the low side, too.

Jules has also been looking at some of these data recently, particularly in comparison to the PMIP2 experiments - that is, simulations of the last glacial maximum by several state of the art climate models, most of which also contributed to the CMIP3/IPCC AR4 database of modern/future projections. One telling point is that several of the PMIP2 models actually appear to fit the data better than Schmittner's best model, even though these were not specifically tuned to fit the data. Moreover, these models are all clearly colder, in terms of global mean temperature anomaly, than the -3C value obtained in this latest paper. We haven't done a thorough analysis of this yet but I think it is safe to say that there is a significant bias in the Schmittner fit and that the LGM was really more than 3 degrees colder than the present. The implication of this for climate sensitivity is not immediate (since there are also well-known forcing biases in the PMIP2 simulations), but this line of argument also seems to suggest that it may be reasonable to nudge the Schmittner et al values up a bit.

It is still hard to reconcile a high sensitivity with the LGM results, though.

UPDATE: similar comments from RC.

Wednesday, November 23, 2011

New leak?

Some people might be surprised to hear me say it, but I think this new leak provides damning evidence of shoddy behaviour. There is clearly inept leadership at the heart of the organisation, plenty of back-biting, and the way in which junior and more conscientious colleagues who refused to toe the party line were bullied and ridiculed is shameful. Many of these people who I had trusted to do their honest best are clearly motivated far more by money than the desire to do their jobs properly. There certainly isn't much evidence of the sort of ethos that we are entitled to expect from people in their position.

I find the whole thing truly shameful, and call upon all those involved to resign. It's time for a new broom.

More details can be found here.

Tuesday, November 22, 2011

Cancer survival: Macmillan hails major improvement

The first thing I thought of when I saw this story was the talk given by Gerd Gigerenzer last year at this odd but interesting workshop. The gist of it was that "survival time" as a measure of performance in medical science could be very misleading, as it does not necessarily indicate any increase in lifespan or reduction in death rate. Increasingly aggressive and sophisticated screening and diagnosis procedures will automatically result in increased "survival time" even without any improvement in treatment, simply through spotting the cancer earlier in its progress. This isn't a purely theoretical point; he had plenty of statistics to back it up. That's not to say there haven't been genuine advances too, but 5y survival rate doesn't necessarily measure them correctly.

Sunday, November 20, 2011

[jules' pics] Spot the difference

For those who found the last puzzle too difficult...
Egret
sparrow


--
Posted By Blogger to jules' pics at 11/20/2011 09:21:00 PM

Saturday, November 19, 2011

Parmesan cheese

There's a lovely fusion of mad scientists and bonkers bureaucrats in the Torygraph today:
EU bans claim that water can prevent dehydration: A meeting of 21 scientists in Parma, Italy, concluded that reduced water content in the body was a symptom of dehydration and not something that drinking water could subsequently control.
I wonder if it's actually true?

Friday, November 18, 2011

[jules' pics] Spot the odd one out.


jungle crow
crow
jungle crow
jungle crow
jungle crow
Unless I have got my birds all mixed up, andrewt will get this in a trice.
Photos made possible by Lan's lovely Nikon (now sadly returned to him). Great crow cam...

--
Posted By Blogger to jules' pics at 11/18/2011 01:48:00 PM

Thursday, November 17, 2011

Through the Looking Glass

Some explanation of these two posts is due.

There is a good book called "Japan Through the Looking Glass", which seems a particularly fitting title at present. Cognitive dissonance and things happening backwards in time don't seem to concern the locals, who just get on with whatever they are told to do.

It is a bit hard to explain what happened without Lewis Carroll's superior writing skills. On a Friday we were told, by a white knight, that one of us (as yet unspecified!) had agreed to lead a sub-theme of a new large five-year project, focussing on a topic which we didn't think credible or interesting. We had not, but we did know of the general existence of the large project (and in fact currently work on its predecessor). We had even arranged to be informed about this new project in a seminar next February. The White Queen told us to prepare carefully and think of some good ideas over the weekend, which, of course, was impossible as we had no idea what we were preparing for. On Monday we met with the knight and Queen, were told what the project was about, voiced our strident disagreement with the underlying premise, developed a workable compromise, and then we had 48 hours to write a proposal for our sub-theme. That same day we all also managed to meet with the White King, in transit between meetings, for 20 minutes in a coffee shop in the middle of Tokyo. There are no other candidates for the funds, and it had already been decided that the proposal was to be successful, but in a rare act of temporal sense, it had been decided that this time it would be nice to plan in advance what each sub-theme was going to do, rather than to plan it afterwards, as usually happens. There was the trifling detail of one official form being required from our employer that takes two weeks to obtain but which had to be submitted within a week, but it turns out that time can in fact be warped when it really matters. Slightly more worrying is the fact that actually James and I do not yet have jobs for next year, because they depend on the other large project (organised by the shogi pieces) which does not yet have any budget at all, even provisionally.
It is almost impossible to extract any useful information from the leaders of that project and we don't even understand the sort of moves they make. This makes doing any sort of planning something of a struggle. But we did it, and the knight very kindly did the translation as well as his own proposal. The most stressful part was done by a pawn, quite new on the chess board, who was given the task of calculating the budget, getting the form through JAMSTEC, and then submitting it on-line, on Monday afternoon. Right at the last minute, we suddenly discovered that our pre-ordained budget was 30% larger than we had catered for, which caused a bit of a panic until we arranged for someone else to take the excess off our hands.

On Tuesday we took the day off and enjoyed a lovely relaxing fun-filled day in Kamakura. Now everything seems so much better.

Wednesday, November 16, 2011

[jules' pics] Affording the new Nikon

I have a plan! Having seen that these Rhine horizontals recently fetched more than 4 million USD (that's 4000 Nikons!) at auction...
horizontals
How about some Kamakura horizontals...
horizontals
Anyone like to buy a print? I'll do you a great deal.

Actually, after looking on Wikipedia I found out that the expensive picture may actually be quite different from how it appeared at that first link above. Looks like I need not only more horizontals but also brighter colours before I hit the big time for real...
horizontals
I suppose one would actually have to see the original for real to actually appreciate the true work of art uninterpreted by pixels. 

--
Posted By Blogger to jules' pics at 11/16/2011 02:00:00 PM

Tuesday, November 15, 2011

How not to compare models to data again

It's been bugging me for some time, the way many people talk about the CMIP3/IPCC ensemble not spanning the "full/true" (both terms appear in the literature) range of uncertainty. For example, the IPCC AR4 says "the current AOGCMs may not cover the full range of uncertainty for climate sensitivity". The first error here is in the apparent belief that there even is such a thing as a "full" or "true" range of uncertainty that the models ought to represent. Within the Bayesian paradigm, uncertainty is an indication of the belief of the researcher(s) involved, and is not intrinsic to the system being considered. So, at best, it might be legitimate to say that the models do not represent my/our/the IPCC authors' uncertainty adequately. This could perhaps be dismissed as a pedantic quibble, were it not for the way that this category error concerning the nature of the uncertainty underpins the justification of the claim. I recently got around to writing this argument up, and it has now been accepted for publication. So here is the outline of the case.

The IPCC statement is actually a summary of a lot of probabilistic estimates, as presented in their Box 10.2 Fig 1:


The top left panel has a lot of pdfs for the equilibrium climate sensitivity, generated by various authors, with their 90% probability ranges presented as bars on the right. In the bottom right panel, we have the range of sensitivity values from the CMIP3 ensemble (pale blue dots). Clearly, the spread of the latter is narrower than most/all of the former, which is the basis for the IPCC statement. There are numerous examples of similar statements in the literature, too (not exclusively restricted to climate sensitivity).

Many of the pdfs are based on some sort of Bayesian inversion of the warming trend over the 20th century (often, both surface air temp and ocean heat uptake data are used). This calculation requires a prior pdf for the sensitivity and perhaps other parameters. And herein lies the root of the problem.

Consider the following trivial example: We have an ensemble of models, each of which provides an output "X" that we are interested in. Let's assume that this set of values is well approximated by the standard Gaussian N(0,1). Now, let's also assume we have a single observation which takes the value 1.5, and which has an associated observational uncertainty of 2. The IPCC-approved method for evaluating the ensemble is to perform a Bayesian inversion on the observation, which in this trivial case will (assuming a uniform prior) result in the "observationally-constrained pdf" for X of N(1.5,2).
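A minimal numerical sketch of that inversion (grid-based, pure Python; the uniform prior is assumed to span [-10, 10]) confirms that the "observationally-constrained pdf" is essentially just the likelihood, N(1.5, 2):

```python
import math

# Grid-based Bayesian update for the trivial example:
# uniform prior on [-10, 10], single observation 1.5 with sd 2.
xs = [-10 + 20 * i / 20000 for i in range(20001)]
dx = xs[1] - xs[0]
obs, obs_sd = 1.5, 2.0

def gauss(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

prior = [1.0 / 20.0 for _ in xs]              # uniform prior density
like = [gauss(x, obs, obs_sd) for x in xs]    # likelihood of each x
post = [p * l for p, l in zip(prior, like)]
norm = sum(post) * dx                         # normalise numerically
post = [p / norm for p in post]

mean = sum(x * p for x, p in zip(xs, post)) * dx
var = sum((x - mean) ** 2 * p for x, p in zip(xs, post)) * dx
print(f"posterior mean = {mean:.2f}, sd = {math.sqrt(var):.2f}")  # ~1.5 and ~2.0
```

(Strictly, the posterior is N(1.5, 2) truncated to [-10, 10], but the truncation removes negligible probability mass.)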

It seems that the model ensemble is a bit biased, and substantially too narrow, and therefore does not cover the "full range of uncertainty" according to the observation, right?

No, actually, this inference is dead wrong. Perhaps the most striking and immediate way to convince yourself of this is to note that if this method was valid, then it would not matter what value was observed - so long as it had an observational uncertainty of 2, we would automatically conclude that the ensemble was too narrow and (with some non-negligible probability) did not include the truth. Therefore, we could write down this conclusion without even bothering to make this inaccurate observation at all, just by threatening to do so. And what's worse, the more inaccurate the (hypothetical) observation is, the worse our ensemble will appear to be! I hope it is obvious to all that this state of affairs is nonsensical. An observation cannot cause us to reject the models more strongly as it gets more inaccurate - rather, the limiting case of a worthless observation tells us absolutely nothing at all.

That's all very well as a theoretical point, but it needs a practical application. So we also performed a similar sort of calculation for a more realistic scenario, more directly comparable to the IPCC situation. Using a simple energy balance model (actually the two-box model discussed by Isaac Held here, which dates at least to Gregory if not before), we used surface air temperature rise and ocean heat uptake as constraints on sensitivity and the ocean heat uptake efficiency parameter. The following fig shows the results of this, along with an ensemble of models (blue dots) which are intended to roughly represent the CMIP3 ensemble (in that they have a similar range of equilibrium sensitivity, ocean heat uptake efficiency, and transient climate sensitivity).

The qualitative similarity of this figure to several outputs of the Forest, Sokolov et al group is not entirely coincidental, and it should be clear that if we integrate out the ocean heat uptake efficiency, the marginal distributions for sensitivity (of the Bayesian estimate, and "CMIP3" ensemble) will be qualitatively similar to those in the IPCC figure, with the Bayesian pdf of course having a greater spread than the "CMIP3" proxy ensemble. Just as in the trivial Gaussian case above, we can check that this will remain true irrespective of the actual value of the observations made. Thus, we have another case where it may seem intuitively reasonable to state that the ensemble "may not represent the full range of uncertainty", but in fact it is clear that this conclusion could, if valid, be stated without the need to trouble ourselves by actually making any observations. Therefore, it can hardly be claimed that this result was due to the observations actually indicating any problem with the ensemble.

So let's have another look at what is going on.

The belief that the posterior pdf correctly represents the researchers' views, depends on the prior also correctly representing their prior views. But in this case, the low confidence in the models is imposed at the outset, and is not something generated by the observations. In the trivial Gaussian case, the models represent the prior belief that X should (with 90% probability) lie in [-1.64,1.64], but a uniform prior on [-10,10] only assigns 16% probability to this range. The posterior probability of this range, once we update with the observation 1.5±2, has actually tripled to 47%. Similarly, in the energy balance example, the prior we used only assigns 28% probability to the 90% spread of the models, and this probability doubles to 56% in the posterior. So the correct interpretation of the results is not that the observations have shown up any limitation in the model ensemble, but rather, that if one starts out with a strong prior presumption that the models are unlikely to be right, then although the observations actually substantially increase our faith in the models, they are not sufficient to persuade us to be highly confident in them.
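For the trivial Gaussian case these probabilities are easy to check directly (pure Python, with the standard normal CDF built from math.erf):

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

lo, hi = -1.64, 1.64          # the ensemble's 90% range
obs, sd = 1.5, 2.0

# Prior probability of the range under the uniform prior on [-10, 10]:
prior_p = (hi - lo) / 20.0
print(f"prior:     {prior_p:.0%}")   # about 16%

# Posterior is N(1.5, 2) truncated to [-10, 10]; probability of the range:
norm = Phi((10 - obs) / sd) - Phi((-10 - obs) / sd)
post_p = (Phi((hi - obs) / sd) - Phi((lo - obs) / sd)) / norm
print(f"posterior: {post_p:.0%}")    # about 47%
```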

Fortunately, there is an alternative way of looking at things, which is to see how well the ensemble (or more generally, probabilistic prediction) actually predicted the observation. This is not new, of course - quite the reverse, it is surely how most people have always evaluated predictions. There is a minor but important detail: if the observation is inaccurate, then we must generate a prediction of the observation, rather than the truth, in order for the evaluation to be fair. (Without this detail, a mismatch between prediction and observation may be due to observational error, and it would be incorrect to interpret this as a predictive failure). One important benefit of this "forward" procedure is that it takes place entirely in observation-space, so we don't need to presume any direct correspondence between the internal parameters of the model, and the real world. It also eliminates the need to perform any difficult inversions of observational procedures.

For the trivial numerical example, the predictive distribution for the observation is given by N(0,2.2) (with 2.2 being sqrt(1^2+2^2), since the predictive and observational uncertainties are independent and add in quadrature). That is the solid blue curve in the following figure:

The observed value of 1.5 obviously lies well inside the predictive interval. Therefore, it is hard to see how this observation can logically be interpreted as reducing our confidence in the models. We can also perform a Bayesian calculation, starting with a prior that is based on the ensemble, and updating with the observation. In this case, the posterior (magenta dotted curve above) is N(0.3,0.9) and this assigns a slightly increased probability of 92% to the prior 90% probability range of [-1.64,1.64]. Thus, the analysis shows that if we started out believing the models, the observation would slightly enhance our confidence in them.
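The numbers above are simple to verify with the standard conjugate Gaussian update (precisions add, and the posterior mean is precision-weighted):

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

prior_mu, prior_sd = 0.0, 1.0   # ensemble-based prior N(0, 1)
obs, obs_sd = 1.5, 2.0

# Predictive distribution for the observation: sds add in quadrature.
pred_sd = math.sqrt(prior_sd**2 + obs_sd**2)
print(f"predictive sd = {pred_sd:.2f}")   # about 2.24

# Conjugate Gaussian update.
post_var = 1 / (1 / prior_sd**2 + 1 / obs_sd**2)
post_mu = post_var * (prior_mu / prior_sd**2 + obs / obs_sd**2)
post_sd = math.sqrt(post_var)
print(f"posterior = N({post_mu:.1f}, {post_sd:.1f})")   # N(0.3, 0.9)

# Posterior probability of the prior 90% range [-1.64, 1.64]:
p = Phi((1.64 - post_mu) / post_sd) - Phi((-1.64 - post_mu) / post_sd)
print(f"P(range) = {p:.0%}")   # about 92%
```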

For the more realistic climate example, the comparison is performed between the actual air temperature trends of the models, and their ocean heat gains. The red dot in the below is the pair of observed values:


This shows good agreement for the energy balance models (blue dots - the solid contours are the predictive distribution accounting for observational uncertainty), and also for the real CMIP3 models (purple crosses), so again the only conclusion we can reasonably draw from these comparisons is that these observations fail to show any weakness in the models.

The take-home point is that observations can only conflict with a probabilistic prediction (such as that arising from the simple "democratic" interpretation of the IPCC ensemble) through being both outside (in the extreme tail of) the model range, and also precise, such that they constrain the truth to lie outside the predictive range. While this may seem like a rather trivial point, I think it's an important one to present, in view of how the erroneous but intuitive interpretation of these Bayesian inversions has come to dominate the consensus viewpoint. It was a pleasant surprise (especially after this saga) that it sped through the review process with rather encouraging comments.

Saturday, November 12, 2011

[jules' pics] Quackers


Male Mallard

Northern Pintail Duck


After our quackers week being Alice Through The Looking Glass at work, Lan was kind enough to lend me his lovely new camera for recovery and relaxation. I think it is amazing, but am having a hard time convincing James to support Nikon this Christmas. 


[Northern Pintail and Mallard ducks at Hachimangu, Kamakura]


--
Posted By Blogger to jules' pics at 11/12/2011 07:29:00 PM

Friday, November 11, 2011

More WCRP OSC

Presentations are available on-line, for those who are interested. They can be found via this page, though there is still no decent search function or index as far as I can tell. Here is the Kalnay talk, for example. And here is the Taylor talk I grumbled about (see p30). Some posters may be available too, but I haven't uploaded mine - one is all published work, one hopefully should be accepted shortly, at which point I'll blog it.

Happy Pocky Day

For those of you who don't know what Pocky is, I bring you:



and also:



and just in case you still haven't worked it out:



(Well if climate scientists can go on about the need for a "Manhattan Project" for climate science, I can't really blame the Japanese for doing their own thing on Armistice day.)

Thursday, November 10, 2011

EGU Autumn 2011 election now open

The EGU election for President may be of a bit more importance than usual, given the somewhat disturbing behaviour of the EGU in recent years (eg here and here). One of the candidates, Denis-Didier Rousseau, has done a good job running the Climate Division over recent years (re-elected last year with an extremely high level of support, IIRC). He's done plenty of worthy things as detailed on his CV and statement, but of particular interest to me, he's also a strong supporter and promoter of the EGU journals which have brought a welcome breath of fresh air to scientific publication.

It's quite rare that I write a non-cynical and straightforward post, but in this case I'll make an exception as I think he's a good candidate and worth supporting.

Wednesday, November 09, 2011

Schwartz spanked again...

...though of course he might not see it that way.

The original paper is here which I discussed here. Now there is a comment from Knutti and Plattner, and reply from the original authors. To be honest, I'm a little bit surprised they bothered, since (as I said originally) the paper wasn't, for the most part, actually wrong, just misleadingly presented ("Why has the earth warmed just as expected" might have been more accurate a title). Actually, Knutti and Plattner do find a genuine error, in the way that Schwartz et al extrapolate their results to consider the case of committed climate change (ie due to emissions to date), in that they ignore that the atmospheric CO2 level would actually fall significantly if emissions were to cease. I must admit I hadn't bothered to wade through the paper sufficiently carefully to see that. So maybe it was worth correcting.

Tuesday, November 08, 2011

Schmittner on sensitivity

Yet another interesting paper, this time on climate sensitivity estimated from the Last Glacial Maximum. What makes this particularly novel and significant is that they have used two recently-developed and rather comprehensive spatially-resolved data sets, for ocean and land temperatures respectively, rather than relying on large spatial averages that most people (including myself twice) have relied on in the past. They conclude that sensitivity is "likely" to lie in the range 1.7-2.6K, very much towards the low end of most estimates and with very low uncertainty.

A weakness of the paper, however, is that the authors may not have adequately considered nonlinearity in the equilibrium response of the climate system to different combinations of negative and positive forcings. A number of papers (eg here, here and here) have shown that the degree of nonlinearity can vary significantly between different models, and although I have not used the energy-balance style model that Schmittner et al use, I suspect it will not represent this range of uncertainty well. What this means is, that even though they may be able to accurately estimate the "sensitivity" at the LGM, in terms of the ratio of temperature response to net radiative forcing, we cannot be sure how this will translate into "sensitivity" for 2xCO2. A possibly more statistically sophisticated and comprehensive attempt to account for uncertainties can be found here, for example.

That said, it's a useful antidote to the exaggerated uncertainty estimates that have been prevalent over recent years, and I certainly applaud the intentions and effort underlying this substantial piece of work. In any case, I expect the merchants of doubt to do their worst on it when they cite it in the IPCC report.

Monday, November 07, 2011

The null hypothesis in climate science

Three papers have just appeared in WIREs Climate Change (here, here and here) discussing the role of the null hypothesis in climate science, especially detection and attribution.

Trenberth argues that, since the null (that we have not changed the climate) is not true, we should try to test some other null hypothesis. He sounds like someone who has just discovered that the frequentist approach is actually pretty useless in principle (as I've said many times before, it is fundamentally incapable of even addressing the questions that people want answers to), but although he seems to be grasping towards a Bayesian approach, he hasn't really got there, at least not in a coherent and clear manner. Curry is just nonsense as usual, and besides noting that she has (1) grossly misrepresented the IAC report and (2) abjectly failed to back up the claims that Curry and Webster made in a previous paper, there isn't really anything meaningful to discuss in what she said.

Myles Allen's commentary is by some distance the best of the bunch; in fact, I broadly agree (shock horror) with what he has said. If one is going to take a frequentist approach, the null hypothesis of no effect is often an entirely reasonable starting point. It is important to understand that rejecting the null does not simply mean learning that there has been some effect, but it also indicates that we know (at least at some level of confidence) the direction of the effect! That is, it is not only an effect of zero which is rejected, but all possible negative (say) effects of any magnitude too - this generalisation may not be strictly correct in all possible applications of this sort of methodology, but I'm pretty sure it is true in practice for the D&A field. Especially when we are talking about the local incidence of extreme weather, there really are many cases when we have little reason for a prior belief in an anthropogenically-forced increase versus a decrease in these events, so a reasonable Bayesian approach would also start from a prior which was basically symmetric around zero. The correct interpretation of a non-rejection of the null here is not "there has been no effect" but rather "we don't know if AGW is making these events more or less likely/large". Much of Trenberth's complaint could be more productively aimed at the routine misinterpretation of D&A results, rather than the method of their generation. Trenberth also sometimes sounds like he is arguing that we should always assume that every bad thing was caused by (or at least exacerbated by) AGW, but this simply isn't tenable. Even if storminess increases in general, changes in storm tracks might lead to reduction in events in some areas, with Zahn and von Storch's work on polar lows an obvious example of this.
On the other hand, there are also some types of event where we may have decent prior belief in the nature of the anthropogenically-forced change (such as temperature extremes) and in these cases it would be reasonable for a Bayesian to use a prior that reflects this belief.

I can find one thing to object to in Myles' commentary though, and that's the manner in which he tries to pre-judge the "consensus" response to Trenberth's argument. Noting that he (Allen) is in fact a major figure in forming the "consensus" in these private meetings where the handful of IPCC authors decide what to say, it sounds to me rather like a pre-emptive strike against anyone who might be tempted to take the opposing view. I would prefer it if he restricted himself to arguing on the basis of the issues rather than that he holds/forms the majority view. His behaviour here is reminiscent of the way he (and others) tried to reject our arguments about uniform priors, on the basis that everyone had already agreed that his approach was the correct solution. All that achieved was to slow the progress of knowledge by a few years.

[jules' pics] Let's bizarre


Let's red hippo

The best thing about Japan is the way it keeps surprising with its bizarreness, even after a decade of continuous study. Today was particularly bizarre. I won't explain - you wouldn't believe it anyway. But here is James, with his hippo, hoping that the internets will make it all better.


--
Posted By Blogger to jules' pics at 11/07/2011 06:58:00 PM

Sunday, November 06, 2011

You only lose once

Prompted by RC's post on some oil development thingy...

I'm sure it's been said before, but it seems to me that there is an obvious inevitability about these things, which is basically structural and independent of the specific details at hand. The development will happen, the oil will get burnt, and the details of the local, national and even international politics don't matter much overall. The underlying reason for this is that in order to prevent development, the opponents have to keep on winning, for as long as anyone tries to develop the area. In contrast, the developers only have to win once, and then it is (as RC puts it) "game over".

The same dynamic plays out all over the place, for example when Tesco wants to build a new supermarket (or expand an existing one). They can keep trying for as long as it takes, and they only have to win once. This happened in our home town, where strong local opposition to an edge-of-town development was worn down, and the new supermarket was soon one of the most profitable in the country. Last I heard, they were hoping to expand it into an adjoining greenfield site, against more local opposition...in fact I'd be surprised if they haven't by now (there you go).

IMO the only way the Athabascan oil development won't happen is if it becomes uneconomic for some reason, and the most plausible reason for this would be the development of some alternative energy sources (of any type). So delay may be worth pushing for, to allow time for this to happen. But other than that, it's simply a case of when, not if.

Optimists may point to a few reserves such as Yellowstone, where development really has been (almost) prevented. But although it's very beautiful and interesting, it is also desolate and economically low-value land in a region that has an abundance of space. If someone found a Saudi-sized oil field under it, they'd be in with the drills before you could say "it's not a buffalo, it's a bison".

Saturday, November 05, 2011

[jules' pics] Let's (not) Marathon

While a walk in the park (i.e. a 10km) is harmless enough,
Shonan Marathon 2011
and can even be fun,
10 K finishers
the reasons not to run a full marathon are legion...
Finishers in pain
Worse, these are the good (sub-3hr) guys,
salty exhaustion
natural runners,
gasping his last
who should have been having fun...
Collapsing at the finish
finished finisher
There was a shuttle bus back to the station which followed the course, from where we saw the carnage of the 4 hour-plussers... stumbling zombies, bodies piled by the roadside, wheelchairs, ambulances etc. But here's jules, happy to have beaten 1500 (wow) men! And I had a cold.
jules


--
Posted By Blogger to jules' pics at 11/05/2011 04:16:00 PM

Friday, November 04, 2011

Curry on fuzzy logic

Before I get on to the meat of some more new papers...

I noticed not so long ago Curry and Webster flying a kite about fuzzy logic being a better alternative to Bayesian probability, in the context of D&A:

The logic of the IPCC AR4 attribution statement is discussed by Curry (2011b). Curry argues that the attribution argument cannot be well formulated in the context of Boolean logic or Bayesian probability. Attribution (natural versus anthropogenic) is a shades-of-gray issue and not a black or white, 0 or 1 issue, or even an issue of probability. Towards taming the attribution uncertainty monster, Curry argues that fuzzy logic provides a better framework for considering attribution, whereby the relative degrees of truth for each attribution mechanism can range in degree between 0 and 1, thereby bypassing the problem of the excluded middle.


As you will recall, I've been waiting for a year now for Curry to explain her muddled and confused approach to probability, in particular her nonsensical "Italian Flag" analysis which she seems to be recasting as "fuzzy logic" (as an aside, I do agree that her logic is fuzzy, but perhaps not in the way she intended).

So I was eagerly awaiting "Curry (2011b)", which has just appeared. And what does it say about fuzzy logic?

[fx: tumbleweed]

Not one single mention, that's what. No mention of Bayesian probability, either. Or Boolean logic. These terms are completely absent from the paper, so this whole line of specious assertions has simply been abandoned without any support whatsoever.

Solution to the paradox of climate sensitivity

A lot of bloggable papers have suddenly appeared, so I will work through them over the next few days.

First, a quick comment about this interesting paper: "Solution to the paradox of climate sensitivity" by Salvador Pueyo. In it, he argues that we should use a log-uniform prior for estimating climate sensitivity. This is fundamentally an "Objective Bayes" approach, resting on the idea that "non-informative" can be interpreted in a unique way. I don't much like this point of view, but if one is going to take it, then it should at least be done properly, and he seems to have provided decent arguments in that direction. Readers may recall that IPCC authors have in the past claimed that a uniform distribution was the unique correct representation of ignorance, which formed one of the planks of their assessment of the literature in the AR4.

As we showed here, all this talk of a long tail basically vanishes when anything other than a uniform prior is used, so in that sense this new paper is broadly compatible with our existing results which were based on a subjective paradigm. However, I'm not sure how it would work with a more complex multivariate approach, as has been common in this sort of work (eg simultaneously considering the three major uncertainties of ocean heat uptake, aerosol forcing and sensitivity).
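The way the tail vanishes is easy to see with a toy calculation (a minimal sketch, not the paper's actual model: the numbers below are invented for illustration, and the Gaussian "feedback" constraint is just a stand-in for a real observational likelihood). The likelihood flattens out at high sensitivity, so a uniform prior in S leaves a fat tail, while a log-uniform prior (density proportional to 1/S) suppresses it:

```python
import numpy as np

# Toy setup: sensitivity S maps to a feedback parameter lam = F2x / S,
# and we pretend observations give a Gaussian constraint on lam.
# All numbers are made up for illustration only.
F2x = 3.7                       # W/m^2 forcing for doubled CO2
lam_obs, lam_sd = 1.2, 0.5      # hypothetical feedback constraint

S = np.linspace(0.1, 10.0, 2000)   # sensitivity grid (K)
dS = S[1] - S[0]
like = np.exp(-0.5 * ((F2x / S - lam_obs) / lam_sd) ** 2)

def posterior(prior):
    """Normalised posterior density on the grid."""
    p = like * prior
    return p / (p.sum() * dS)

post_uniform = posterior(np.ones_like(S))   # uniform prior in S
post_loguni  = posterior(1.0 / S)           # log-uniform prior in S

def tail_prob(post, threshold=6.0):
    """P(S > threshold) under the given posterior."""
    return post[S > threshold].sum() * dS

print("P(S > 6K), uniform prior:     %.3f" % tail_prob(post_uniform))
print("P(S > 6K), log-uniform prior: %.3f" % tail_prob(post_loguni))
```

The point is structural rather than numerical: because the likelihood tends to a non-zero constant as S grows, the uniform prior's tail probability stays substantial, whereas the 1/S weighting makes it shrink.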

What the new IPCC authors will make of it all is anyone's guess. Perhaps we will find out in December some time, when the first draft is scheduled to be opened for comments.

Thursday, November 03, 2011

Shonan International Marathon

It being Culture Day today, the high culture of road racing came to the Shonan Coast (that's where we live). We had spotted this event when we came back from Boulder earlier this year, and entered in a rather out-of-character fit of enthusiasm. Only the 10km version of course, we aren't too enthusiastic. There was also a marathon (with a much larger entry of about 18,000 versus our 5,000) and an "elite" half-marathon.

So the day came along, and we went and did it. This time we cheated by "training" for the race, at least to the extent that one run a week counts as training (that's on top of our daily bike ride to work, of course). There's a nice pavement along the coast in Kamakura which is popular with weekend joggers, but even at 8am the summer heat often made it a bit of a trial, to say the least.

It has cooled down now, though in fact this morning was still close to 20C - any warmer could have been uncomfortable for me. As it was, conditions seemed near perfect, and I was pleased to go round in about 42:40, which is 8 mins quicker than my Boulder time and narrowly achieved my target of beating my age. jules was also significantly quicker and achieved her sub-55 mins goal.

(Results now up: 42:44 [42:34 net] time for me, 74th place! jules was 57:06 [54:49 net], 232nd - these both out of fields of over 2000!)

I think I must have got the pacing just about right - though it's hard to be sure, as the first few km markers were ridiculously poorly placed. I cruised through 2km in about 7 mins which was almost 2 mins ahead of my planned schedule - brief fantasies that a week at altitude in Denver had miraculously transformed my fitness were dispelled when a later "km" took about 6 minutes.

Worryingly, I rather enjoyed it...though jules says I shouldn't admit to this as we are officially retiring.

Tuesday, November 01, 2011

[jules' pics] Denver

We started as we meant to carry on (elk 'n' Guinness)...

"Elk Sliders"
The next day we explored downtown,

Downtown, Denver
with Rob and Amity.

James, Rob and Amity
Rob is, officially, even cleverer than James, and he's almost as tall, but here he struck a deliberately daft pose in order to try and crack my lens. The lens survived and thus he is blogged! 

Denver is the capital of the state of Colorado. Here is the centre of power.

State Capitol, Denver 
The day was amazingly clear, and, looking the other way, the Rocky Mountains could be easily seen 40 miles away.

Govt, Denver
The leaves were also looking very autumnal.

Capital City of Denver
Unfortunately, this was the only cowboy we saw.

Cowboy
Denver generally shows signs of gentrification with sculpture,

Sculpture, Denver 
and prettied up old buildings

Union Station, Denver
like the massive brick REI cathedral (of which this is just one end).

REI, Denver
Next day it was down to the basement to play with our phones,

WCRP, Denver 
 and before long we were flying home on a chikinorbeef airline

San Francisco to Narita


--
Posted By Blogger to jules' pics at 11/01/2011 09:44:00 PM

Judith Curry, Detection and Attribution and the IAC report

I was going to snark about the BEST (by which I mean funniest, there is clearly not much real science to speak of) climate science kerfuffle in a long time, but everyone else has beaten me to it, and I've nothing particularly witty to say about it (what do you mean, "no change there then"?) Have a look at Stoat for some links.

But while having a scan of the relevant blogs, I came across something else that struck me as worth mentioning.

It concerns La Curry's criticism of the IPCC, particularly the statement of WG1 that "Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic GHG concentrations". Curry (and Webster) apparently don't like this statement. If they left it at that, then fmdidgad might be the most natural response, but rather than leaving it at that, they double down on it by asserting here that the IAC supports their view.

The relevant section of the IAC report is here, and it can readily be checked that the criticism which C&W quote was in fact aimed at the far vaguer statements frequently made in WG2, and was clearly not intended to apply to the WG1 statement at all. In fact, the WG1 statement is not imprecise in this sense at all: it is simply a one-sided probability statement of the general form "The probability of x > y is p". In this case x is the warming caused by anthropogenic effects, y is ~0.3C (being half the observed trend of ~0.6C) and p is 90%. Another IPCC statement of logically equivalent form is that the equilibrium climate sensitivity is very likely to be greater than 1.5C.

If the IAC had intended their criticism to apply to the huge number of similarly-structured probabilistic statements in WG1, it is surely inconceivable that they would not have mentioned it anywhere in the section where they actually address the treatment of probability in WG1. Indeed, the only place in that chapter where the IAC mention this issue of "imprecise statements, made without reference to the time period under consideration or to a climate scenario under which the conclusions would be true" is the one section, indeed the one page, where they are quite explicitly and specifically addressing WG2. There is, of course, nothing significantly ambiguous about the time period or scenario under consideration in the statement that C&W object to.

I invite either Curry or Webster to explain why they believe that the IAC intended this criticism, so clearly aimed at WG2, to apply to that D&A-based statement in WG1. Or alternatively, they could abandon their patently untenable claim that the IAC "shares their concerns" over this statement. Of course, I've been waiting a long time now for Curry to explain her "Italian flag" blether, to no effect. So I'm not holding my breath.

[jules' pics] Let's Merry


Let's Merry, originally uploaded by julesberry2001.


Clearly we're not in Denver any more...

You might hope this is the Christian response to Let's Zen, but actually "Let's Merry" appears to be the slogan for St. Arbucks Japan's winter campaign.

Oh it's so nice to be back to normality...

--
Posted By Blogger to jules' pics at 11/01/2011 02:50:00 PM