Cold snaps curtail invasions

Climate change not only causes shifts in the distributions of native species, but also allows invasive species to establish new populations. For example, many Caribbean species are taking advantage of warming temperatures, expanding polewards and invading the south-eastern United States.

Having established themselves, however, it’s not unknown for the invaders to come to grief. In early 2010, the south-eastern United States experienced a particularly cold winter, which came to be known as “Snowmageddon”. Afterwards, scientists found that the populations of several established invaders had crashed, and in some cases been entirely wiped out.

Kaplan-Meier survival curves for the experimental crabs

Curious, Dr. João Canning-Clode and his colleagues collected a number of invasive green porcelain crabs (Petrolisthes armatus) to study. They split them into three groups: a control group held at a fairly mild winter temperature for the collection site, one group that went through a cold snap similar to that experienced in January 2010, and one that experienced a cold snap a couple of degrees colder still.

The results were striking. In the control group, 83% of the crabs survived the winter. In the Snowmageddon group, however, only 39% of the crabs survived – and the population that experienced the even colder snap was entirely wiped out. The researchers also noted that cold temperatures made the crabs move around less – which, in the wild, would probably have left them more vulnerable to predators and less able to find food.
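The survival comparison itself is simple arithmetic. The group sizes below are hypothetical – chosen only to reproduce the survival percentages quoted above, not taken from the paper:

```python
# Hypothetical group sizes, chosen to match the reported survival rates.
groups = {
    "control (mild winter)":  {"n": 18, "died": 3},   # ~83% survival
    "Snowmageddon cold snap": {"n": 18, "died": 11},  # ~39% survival
    "extreme cold snap":      {"n": 18, "died": 18},  # wiped out
}

def survival_fraction(group):
    """Fraction of the group still alive at the end of the experiment."""
    return (group["n"] - group["died"]) / group["n"]

for name, g in groups.items():
    print(f"{name}: {survival_fraction(g):.0%} survived")
```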

The researchers figure that the occasional cold snap may stop invasive species in their tracks – devastating, if not wiping out, their populations. However, as the globe warms, extreme cold snaps have been getting less frequent, a trend which is expected to continue (DeGaetano & Allen, 2002; Kodra et al., 2011).


Canning-Clode, J., Fowler, A., Byers, J., Carlton, J., & Ruiz, G. (2011). ‘Caribbean Creep’ chills out: climate change and marine invasive species. PLoS ONE, 6(12). DOI: 10.1371/journal.pone.0029657

DeGaetano, A., & Allen, R. (2002) Trends in Twentieth-Century Temperature Extremes across the United States. Journal of Climate, 15(22), 3188-3205.

Kodra, E., Steinhaeuser, K., & Ganguly, A. (2011) Persisting cold extremes under 21st-century warming scenarios. Geophysical Research Letters, 38(8).


Tapeworms like it hot

Bird tapeworms (Schistocephalus solidus) have three distinct life stages. First, they infect copepods (tiny crustaceans), such as Cyclops strenuus abyssorum. The copepods are eaten by sticklebacks – in this case, the three-spined stickleback, Gasterosteus aculeatus. The sticklebacks are then eaten by a bird, in which the tapeworms breed and produce eggs to infect the next generation of copepods.

In order to be infectious to a bird, the tapeworm larvae must grow to a size of at least 50mg. That being said, the bigger the better – larger parasites are far more fertile, producing many times more eggs – which are also larger. Larger parasites also make their hosts less able to breed and more likely to be eaten by a bird.

Parasites infecting organisms which do not control their own body temperature (such as most fish) are more likely to be directly affected by climate change – a parasite infecting a warm-blooded mammal, by contrast, can rely on a temperature-controlled living space. To test what impact temperature would have on how infective the tapeworms were, Macnab and Barber (2011) kept two populations at different constant temperatures within their normal range – 15°C and 20°C respectively – and fed half of each population infected copepods (the others got uninfected copepods).

Temperature, they found, did not affect the likelihood that a fish eating an infected copepod would become infected – in both cases, about half of the exposed fish were. However, the tapeworm larvae grew much faster in the warm-water group. Eight weeks in, every tapeworm larva in the warm-water group had reached the 50mg size necessary to infect a bird – whereas none of the larvae in the cooler group had. In fact, in the warmer population, the average tapeworm was twice the size needed to infect a bird. The authors estimate that this difference would allow each parasite to produce at least an order of magnitude more eggs than in the 15°C group – almost 200,000 eggs each, compared to 12,000.

Infected fish preferred warm water

They also showed that, once infected, fish with infective worms preferred warmer water. A different set of infected and non-infected sticklebacks were introduced to an aquarium with cooler (~15°C) and warmer (~21°C) compartments, joined by an intermediate-temperature (~18°C) linking chamber. The fish were allowed to settle in the intermediate chamber and were then watched for three hours.

The non-infected fish, as well as those with parasites too small to infect a bird, tended to stay in the intermediate chamber. However, fish with large, infective parasites preferred warmer waters, with a thermal preference over 1°C warmer than the other groups.

Although such a pattern might perhaps be explained by an attempt on the part of the sticklebacks to boost the effectiveness of their immune system, the authors suggest that the preference of fish bearing large-but-not-yet-infective parasites for lower temperatures is more likely driven by the tapeworms. Larger parasites have greater energy demands, increasing the likelihood that the host will starve – and the parasites with it. Once the parasites are large enough to infect a bird, however, all bets are off – the priority is to get large and get eaten.

Previous studies on these species, such as Barber et al. (2004), have found that, once a stickleback was infected by a sufficiently large parasite, the parasite impaired the fish’s ability to flee predators. Fish infected by such parasites were less likely to make any evasive response, less likely to reach cover, less likely to perform the “staggered dashes” that prevent a predator from anticipating where they will move next, and more likely to try to “evade” predation by simply swimming slowly away.

Fish that prefer warmer waters will probably end up at the surface and edges of lakes – right where they’d be most vulnerable to bird attacks. There is also potential for a positive feedback: fish infected by larger parasites prefer warmer waters, in which the parasites grow faster and the fish become still more likely to be eaten by birds. It seems that one clear beneficiary of a warming climate will be the tapeworms.


Macnab, V., & Barber, I. (2011). Some (worms) like it hot: fish parasites grow faster in warmer water, and alter host thermal preferences. Global Change Biology. DOI: 10.1111/j.1365-2486.2011.02595.x

Barber, I., Walker, P., & Svensson, P. (2004). Behavioural responses to simulated avian predation in female three-spined sticklebacks: the effect of experimental Schistocephalus solidus infections. Behaviour, 141(11), 1425-1440. DOI: 10.1163/1568539042948231 [PDF]

Other coverage

Some like it hot (if they’re riddled with parasites) – Not Exactly Rocket Science. Be sure to check out the comments, where one of the authors has added information.

Do Cosmic Rays Cause Global Warming?


How do you make a cloud? You start with an aerosol particle, a small particle around which the much larger cloud condensation nuclei (CCNs) can form. It takes a large CCN – at least 100 nanometres across – for water vapour to be able to condense onto it. Clouds are made up of many CCNs with water condensed on them. Clouds can reflect sunlight back into space, cooling the Earth – but they can also reflect heat back down, warming the Earth instead. The total cooling effect of clouds is about 44 W/m², against about 31 W/m² of warming from reflected heat – a net effect of about −13 W/m² (Ramanathan et al., 1989).

A new paper by scientists at CERN, Kirkby et al. (2011), claims to shed light on the role of cosmic rays in the formation of these aerosol particles. Cosmic rays are mostly the remnants of atoms which have been accelerated to near the speed of light, along with some more exotic particles such as stable antimatter. Most cosmic rays reach such speeds while bouncing around in the magnetic fields and remnants of supernovae, though some reach even higher energies through processes that are not yet fully understood. The sun’s magnetic field diverts most cosmic rays away from the Earth, so cosmic ray flux is lowest at solar maximum, and it is during solar minimum that we get the most cosmic rays here on Earth. As they pass through the atmosphere, they collide with gases, donating their energy and ionizing the molecules.

Kirkby et al. used a particle accelerator to create analogues of cosmic rays, ionizing the gases in their chamber. They found that this increased the formation – the “nucleation” – of small, nanometre-sized aerosols by a factor of 2-10. They also showed a much larger increase – 100 to 1000 times – in nucleation when ammonia was present in addition to sulphate. But while they observed far more nucleation than previous lab studies had, it was still several times lower than is observed in the atmosphere.

Their setup only considered small, nanometre-sized particles – it was in principle incapable of producing larger ones. No doubt future experiments, including further work by these same scientists, will investigate whether similar increases occur in the formation of larger, potentially cloud-forming particles. However, other studies cast doubt on whether this will be the case. For example, Snow-Kropla et al. (2011) found that the observed variation in cosmic rays made less than a 0.2% difference in the concentration of 80 nm particles. They saw a larger increase (1%) in 10 nm particles, suggesting that the impact of cosmic rays falls off for larger particles. Pierce & Adams (2009) found similar results, and suggest that this could be due to a lack of condensable gases that would allow the particles to grow larger – that it is not a lack of aerosol particles which limits the growth of potential cloud condensation nuclei. If so, it doesn’t matter how many aerosol particles are about, so far as CCNs are concerned – they are limited by a different factor. A future experiment could certainly contradict these results – and it would be interesting to see how the question was then resolved.

We can see that there’s rather a long way to get from this finding – that cosmic rays increase the production of nanometre-sized particles – to an effect on cloud formation. And to get to the idea that cosmic ray variation can explain global warming, we must assume that:

  1. An increase in aerosol nucleation increases the concentration of large cloud condensation nuclei (not out of the question, but, as we’ve seen, contradicted by other studies).
  2. That an increase in CCNs increases cloud cover (this, at least, seems plausible).
  3. That there is a downwards trend in cosmic rays (which would decrease nucleation, which may reduce CCN formation, which may decrease cloud formation).

So what are cosmic rays doing? Well… not much. The data just doesn’t contain the sort of decline that would be necessary for any possibility that cosmic rays could be the cause for global warming.

Cosmic rays seen by monitoring stations in Climax, New Mexico and Oulu, Finland. Shown as the variation from a 1970-1999 baseline.

Many in the media jumped to the conclusion that cosmic rays were affecting the climate. Lawrence Solomon wrote a hyperbolic article claiming that the paper proved, absolutely and beyond a shadow of doubt, that cosmic rays were responsible for all the observed global warming. The International Business Times claimed that anthropogenic global warming had been disproved. James Delingpole has got a wonderful conspiracy theory going. One must wonder why people so “sceptical” of the well-established link between greenhouse gases and global temperatures would be so quick to jump to conclusions without even a correlation. No such conclusions are supported, or even suggested, by the paper each of them claims to be writing about.

Cosmic rays as above, plotted alongside NOAA’s monthly global index (which uses 1901-2000 as a baseline).


Kirkby, J., Curtius, J., Almeida, J., Dunne, E., Duplissy, J., Ehrhart, S., Franchin, A., Gagné, S., Ickes, L., Kürten, A., Kupc, A., Metzger, A., Riccobono, F., Rondo, L., Schobesberger, S., Tsagkogeorgas, G., Wimmer, D., Amorim, A., Bianchi, F., Breitenlechner, M., David, A., Dommen, J., Downard, A., Ehn, M., Flagan, R., Haider, S., Hansel, A., Hauser, D., Jud, W., Junninen, H., Kreissl, F., Kvashin, A., Laaksonen, A., Lehtipalo, K., Lima, J., Lovejoy, E., Makhmutov, V., Mathot, S., Mikkilä, J., Minginette, P., Mogo, S., Nieminen, T., Onnela, A., Pereira, P., Petäjä, T., Schnitzhofer, R., Seinfeld, J., Sipilä, M., Stozhkov, Y., Stratmann, F., Tomé, A., Vanhanen, J., Viisanen, Y., Vrtala, A., Wagner, P., Walther, H., Weingartner, E., Wex, H., Winkler, P., Carslaw, K., Worsnop, D., Baltensperger, U., & Kulmala, M. (2011). Role of sulphuric acid, ammonia and galactic cosmic rays in atmospheric aerosol nucleation Nature, 476 (7361), 429-433 DOI: 10.1038/nature10343

Pierce, J., & Adams, P. (2009). Can cosmic rays affect cloud condensation nuclei by altering new particle formation rates? Geophysical Research Letters, 36 (9) DOI: 10.1029/2009GL037946 [PDF]

Ramanathan, V., Cess, R., Harrison, E., Minnis, P., Barkstrom, B., Ahmad, E., & Hartmann, D. (1989). Cloud-Radiative Forcing and Climate: Results from the Earth Radiation Budget Experiment Science, 243 (4887), 57-63 DOI: 10.1126/science.243.4887.57 [PDF]

Snow-Kropla, E., Pierce, J., Westervelt, D., & Trivitayanurak, W. (2011). Cosmic rays, aerosol formation and cloud-condensation nuclei: sensitivities to model uncertainties Atmospheric Chemistry and Physics, 11 (8), 4001-4013 DOI: 10.5194/acp-11-4001-2011

Rapid adaptation to temperature change… and its limits

People often think of evolution as though natural selection were sitting around waiting for new mutations to promote or cull. But it’s not really like that: a great deal of variation exists in any population, much of it with little or no effect on the survival or reproductive success of the individuals carrying it. A changing environment, however, can alter all that.

Barrett et al. (2010) were interested in how populations of three-spined sticklebacks (Gasterosteus aculeatus) would respond to lower temperature extremes. They collected samples of sticklebacks from both marine lagoons and freshwater lakes in British Columbia, Canada. First, they acclimated the fish to living in fresh water, as well as to a consistent temperature and day length.

Lakes are far more variable in temperature than the oceans – warmer in summer and cooler in winter – due to the smaller quantity of water which needs to gain or lose heat. Rather unsurprisingly, the researchers found that the lake-dwelling sticklebacks could tolerate significantly colder temperatures than their marine counterparts (both populations could tolerate much higher temperatures than they ever encountered in the environment). They also demonstrated that the degree of cold tolerance was heritable – even raised in the same environment, the offspring of lake-dwelling fish could tolerate lower temperatures than those of marine sticklebacks (and hybrids were intermediate).

The interesting part, however, came when they raised populations of sticklebacks with marine ancestors in ponds, which get even colder in winter than the freshwater lakes. In just three generations (three years), the populations evolved to tolerate temperatures 2.5°C colder than their marine forebears could! This wouldn’t have been down to new mutations – existing genes, already present but perhaps rare, had simply become far more common in the population.

It wasn’t all good news for the sticklebacks, though. Genetic diversity is critical to maintaining populations, and a period of such strong natural selection will dramatically reduce it. Even if a population can adapt to one sudden shock, doing so may so deplete its genetic diversity that no convenient alternative genes remain when the next shock comes.

Canada temperature anomaly, 2009 vs 2006-2008, from GISTEMP

The next year brought the coldest winter that part of Canada had seen for several decades, and despite all their adaptations, all three of the experimental populations were wiped out. It may be that it was simply too cold, or perhaps the increased ice cover on the ponds reduced oxygen levels in the water below what the fish needed. Either way, it’s a grim prospect for conservation biologists if a population that seems, by all accounts, to be surviving and even adapting to changes in its environment can suddenly hit an impassable barrier and go extinct.

Sticklebacks have a history of adapting to significantly changing temperatures over the last few millennia, and so may have had the advantage of already carrying genes for dealing with a changing climate in their populations. That may not be the case for all species, and this study shows just how drastic an effect a change in temperature extremes can have on populations.


Barrett, R., Paccard, A., Healy, T., Bergek, S., Schulte, P., Schluter, D., & Rogers, S. (2010). Rapid evolution of cold tolerance in stickleback Proceedings of the Royal Society B: Biological Sciences DOI: 10.1098/rspb.2010.0923

Was the Arctic Ice Cap ‘Adjusted’?

Over at “American Thinker”, Randall Hoven has a post about the Arctic ice cap and, specifically, the difference between the “area” and “extent” measures of its size. The problems start with the interpretation of a graph much like this:

Now, you probably noticed the substantial discontinuity in the “area” during 1987. This is even more apparent if you look purely at the difference between extent and area:

I’ve also plotted the difference between the extent and area for the entire period (taken from the bootstrap data):

Now, Randall includes an “Important Note” from raw data which explains that:

Important Note: The “extent” column includes the area near the pole not imaged by the sensor. It is assumed to be entirely ice covered with at least 15% concentration. However, the “area” column excludes the area not imaged by the sensor. This area is 1.19 million square kilometres for SMMR (from the beginning of the series through June 1987) and 0.31 million square kilometres for SSM/I (from July 1987 to present). Therefore, there is a discontinuity in the “area” data values in this file at the June/July 1987 boundary.

So the discontinuity exists because data up to mid-1987 came from SMMR, which had no data for 1.19 million square kilometres around the pole (a “pole hole”), after which it was replaced by the SSM/I instrument, which misses only 0.31 million square kilometres. The “area” figure does not account for this, and since at least most of that region is covered in sea ice, the area figures gain almost 0.88 million square kilometres of extra sea ice from mid-1987 onwards, purely due to the instrument change. So, obviously, if I remove this difference, the discontinuity disappears:
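In code, the correction amounts to adding the unobserved region back onto the reported “area”. The dates and values below are illustrative only – a sketch of the arithmetic, not of the real NSIDC files:

```python
# Pole-hole sizes in millions of square kilometres, from the NSIDC note.
SMMR_HOLE = 1.19   # through June 1987
SSMI_HOLE = 0.31   # July 1987 onwards

def corrected_area(area, year, month):
    """Add back the unobserved polar region, assuming it is fully ice-covered."""
    hole = SMMR_HOLE if (year, month) <= (1987, 6) else SSMI_HOLE
    return area + hole

# A constant true ice cover no longer shows a ~0.88 million km^2 step
# across the instrument change once corrected:
print(corrected_area(12.00, 1987, 6))   # SMMR month
print(corrected_area(12.88, 1987, 7))   # SSM/I month, same corrected value
```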

Looking at this, there is still substantial variation in the difference between “extent” and “area” figures. Randall asks why:

What were the differences? From the above words from the NSIDC, you would think that the differences would be constant offsets (1.19 million sq km from 1979 through June of 1987, and 0.31 million since). But the actual differences in the data file were not constant at all; they varied between 1.93 and 3.42 million sq km.

Notice, however – it shows up particularly clearly in the complete data set – that these differences vary on an annual cycle (plus some weather-driven noise). And there’s no reason to assume that “extent” and “area” measure exactly the same thing. So, if we check how the NSIDC defines these terms, we learn:

In computing the total ice-covered area and ice extent with both the NASA Team and Bootstrap Algorithms, pixels must have an ice concentration of 15 percent or greater to be included. Total ice-covered area is defined as the area of each pixel with at least 15 percent ice concentration multiplied by the ice fraction in the pixel (0.15 to 1.00). Total ice extent is computed by summing the number of pixels with at least 15 percent ice concentration multiplied by the area per pixel, thus the entire area of any pixel with at least 15 percent ice concentration is considered to contribute to the total ice extent.

These are obviously different figures: in “area”, each pixel is weighted by its ice concentration, which would presumably be higher in winter – exactly what we see in the differences. Randall, on the other hand, resolves the issue by completely ignoring it.
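The distinction in the quoted definitions is easy to demonstrate with made-up pixel concentrations: “extent” counts whole pixels past the 15% cutoff, while “area” weights each pixel by its concentration.

```python
CELL_KM2 = 625.0  # illustrative pixel size (a 25 km x 25 km grid cell)

def extent_and_area(concentrations):
    """NSIDC-style extent and area for one grid of ice concentrations."""
    kept = [c for c in concentrations if c >= 0.15]   # 15% cutoff
    extent = len(kept) * CELL_KM2                     # whole pixels count
    area = sum(c * CELL_KM2 for c in kept)            # concentration-weighted
    return extent, area

winter_pack = [0.95, 0.90, 1.00, 0.85, 0.05]  # dense ice: area close to extent
summer_pack = [0.50, 0.30, 0.60, 0.20, 0.05]  # loose ice: area well below extent
print(extent_and_area(winter_pack))
print(extent_and_area(summer_pack))
```

With identical extents, the winter-like grid has far more “area” than the summer-like one – which is exactly the annual cycle visible in the extent-minus-area differences above.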

Going back to the March data, before adjusting for the “pole hole”, like Randall, I find it actually has a slight positive trend:

However, after adding the pole hole region, I get a much stronger downwards trend in the quantity of sea ice:

Now, I’ll emphasise that this isn’t (necessarily) accurate – some portion of the pole hole, unknown to me, might not contain sea ice during March. That data obviously exists, but I don’t have the time at the moment to analyse it.
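The before-and-after trend comparison can be sketched with a plain least-squares fit. The series below is synthetic – a gentle decline plus the spurious 0.88 million km² step when the pole hole shrank – not real NSIDC data:

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

years = list(range(1979, 2011))
# Synthetic March "area": a real decline of 0.02/yr, plus the artificial
# +0.88 million km^2 jump once the smaller SSM/I pole hole applies (1988+
# for March values, since March 1987 was still SMMR).
raw = [14.0 - 0.02 * (y - 1979) + (0.88 if y >= 1988 else 0.0) for y in years]
# Correcting adds the appropriate pole hole back onto each year:
corrected = [a + (1.19 if y <= 1987 else 0.31) for a, y in zip(raw, years)]

print(f"raw slope:       {ols_slope(years, raw):+.4f}/yr")        # positive
print(f"corrected slope: {ols_slope(years, corrected):+.4f}/yr")  # negative
```

The uncorrected step is enough to flip a genuine decline into an apparent slight increase – the same effect described above for the March data.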

And all of this means that the rest of Randall’s conclusions, being based on a false premise, are invalid.

Actually, the rate of growth is statistically insignificant, meaning that a statistician would say that it is neither growing nor shrinking; it just bobs up and down randomly. More good news: no coming ice age, either.

No, there is definitely a significant trend.

You see that “extent” always shows more shrinkage than “area” does. In the months of maximum sea ice, February and March, the area trend is upward. And for winter months generally, December through May, any trend in area is statistically insignificant. For summer months, July through October, the trend is downward and statistically significant.

But these calculations are all based on extremely biased data for the start of the period, and so are all wrong.

Katie Couric should have used the month of September as her example. In three decades, the Arctic sea ice “extent” shrank by 34%. She could make such claims while stating, truthfully, that the data come from NSIDC/NOAA and the trend is statistically significant. It’s science.

Despite the sarcasm dripping from this sentence, yes, it is science. The Arctic ice is melting. Without the “pole hole”, September looks like this:

As you can see, my trend line isn’t a very good fit to this data, and, as Randall says, any decrease seems to be in just the last few years. After adding in the pole, however, things look a lot different:

Again, the red line represents “area,” the only thing actually measured. A downward trend is evident to the eyeball. But look closely and that downward trend is fairly recent — say, since 2000. Indeed, the calculated trend was slightly upward through 2001. That is, the entire decline is explained by measurements since 2002, a timespan of just eight years.

But the older data was biased low, so the downward trend actually spans the whole period – and is somewhat stronger, to boot.

To understand the trend, you need to understand the data you’re looking at. Or, as the readme file for the data Randall Hoven looked at put it: “we recommend that you read the complete documentation in detail before working with the data”. Had Randall done that, and checked the meanings of “area” and “extent” before writing this piece, he could have saved himself a lot of bother and embarrassment.

Randall starts his conclusion like this:

This little Northern Hemisphere sea ice example captures so much of the climate change tempest in microcosm.

And that’s very true. Here we have someone looking at data they didn’t understand, analysing it improperly, and reaching strong but utterly false conclusions as a result – and then, even when corrected on the misunderstanding, continuing to believe those conclusions.

See, as I was writing this post, Randall posted a correction on his site. It turns out he’d found the definitions of area and extent (technically, he still gets the definition of area slightly wrong, but it’s not as bad). However, while acknowledging these problems with his main article, he tries to recover his point with this:

If we add the “pole hole” back to the measured “area,” we would get a downward trend in area due to the change in pole hole size in 1987. If we assume that the pole hole is 100% ice, then the downward trend in March would be 2.2% per decade. But if we assume that the pole hole is only 15% ice (the low end of what is assumed), then the downward trend is only 0.1% per decade, which is not statistically significant. (The corresponding downward trend for “extent” was 2.6% per decade.)

It is true that whatever downward trend there is for March is due only to these adjustments (assumed pole hole size and concentration). And whether that trend is statistically significant depends on ice concentration in the “pole hole,” an assumed value.

For a start, it seems to me a fairly reasonable assumption that the ice concentration of the pole hole is towards the high end of the range – after all, it’s the bit of the Arctic closest to the North Pole. And that assumption is a pretty darn testable one: all you have to do is go to the North Pole and look. Come to think of it, I’d be willing to bet that someone already has.

“Mike’s Nature trick”

Cross-posted on Young Australian Skeptics

One of the most hyped emails from the Climategate hack was this one, sent by Phil Jones:

I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd from 1961 for Keith’s to hide the decline.

“Proxy records” provide a way of measuring temperature without having a convenient thermometer around. Many natural phenomena occur at different rates depending on climate conditions, and those differences can be read from, for example, rocks, ice cores, corals and trees. The Earth has had a climate for a few billion years, but thermometers for only a few hundred; if you want to understand long-term climate, you need proxies. Tree rings are one common proxy for spring-summer temperature and, in general, closely match other proxy records and (where available) the instrumental (thermometer) record.

Until the late 20th century, that is.

The tree-ring divergence problem

The “decline” refers to the “divergence problem”, documented in Briffa et al. (1998): in recent decades, the tree ring proxy record has been diverging from the instrumental record, as shown in the figure on the right.

There are several variables which can influence tree growth. At extremely high or low latitudes, temperature is typically a major factor. If the other variables are held constant, changes in temperature will be echoed in the tree rings – but if another of those variables starts changing, the tree ring trends will no longer reflect the temperature trends.

It seems that, late in the last century, another of these variables did start changing. Several hypotheses as to which variable(s) are responsible have been proposed, and are discussed in, for example, D’Arrigo et al. (2008).

So what was “Mike’s Nature trick”? The “trick” was used in Mann, Bradley & Hughes (1998), in a diagram of the Northern Hemisphere temperature record from 1610-1995 – the “NH” series of figure 7.
Mike's Nature "trick"
From 1610-1980, they use the tree ring record. From 1981, where the proxy record diverges from the instrumental data, they use the instrumental data instead. If you look closely, you’ll notice that the last part of the diagram is drawn differently – the proxy record (1610-1980) is a dashed line, the instrumental record (1981-1995) a dotted line. This is explained clearly in the diagram’s caption:

‘NH’, reconstructed NH temperature series from 1610–1980, updated with instrumental data from 1981–95.

And that’s it.
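As data handling, the figure amounts to nothing more than choosing which series to draw over which years. A minimal sketch, with placeholder values standing in for the real series:

```python
# Placeholder series: a proxy reconstruction and an instrumental record.
proxy = {year: 0.0 for year in range(1610, 1981)}         # tree-ring based
instrumental = {year: 0.3 for year in range(1850, 1996)}  # thermometers

# Per the caption: reconstruction from 1610-1980, updated with
# instrumental data from 1981-95 - two series, two line styles.
dashed_line = {y: t for y, t in proxy.items() if y <= 1980}
dotted_line = {y: t for y, t in instrumental.items() if y >= 1981}

print(min(dashed_line), max(dashed_line))  # 1610 1980
print(min(dotted_line), max(dotted_line))  # 1981 1995
```

Neither series is altered or merged into the other; each covers its stated years and is drawn in its own style.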

So “Mike’s Nature trick” consisted of a legitimate way of displaying the most accurate available data, clearly documented in Mike’s Nature paper.

Other posts

Here are a few other relevant posts discussing this (feel free to let me know about any other good posts):


Briffa, K., et al. (1998). Trees tell of past climates: but are they speaking less clearly today? Phil. Trans. R. Soc. Lond. B, 353, 65-73. DOI: 10.1098/rstb.1998.0191 [fulltext]

D’Arrigo, R., Wilson, R., Liepert, B., & Cherubini, P. (2008). On the ‘Divergence Problem’ in northern forests: a review of the tree-ring evidence and possible causes. Global and Planetary Change, 60, 289-305. DOI: 10.1016/j.gloplacha.2007.03.004 [fulltext]

Mann, M., Bradley, R., & Hughes, M. (1998). Global-scale temperature patterns and climate forcing over the past six centuries. Nature, 392, 779-787. DOI: 10.1038/33859 [fulltext]

Hiding the rise

Alternative title: The complete idiot’s guide to cherrypicking.

Willis Eschenbach (of Darwin Zero fame) has a post on Watts Up With That concerning the homogenisation process in Anchorage and Matanuska (both in Alaska). Matanuska was chosen for being close to Anchorage – but why start in Anchorage? No explanation is given. Something about this smells like cherry-picking: finding a station that happens to have an odd-looking trend and building a conspiracy theory around it.

Well, two can play at that game. I wrote a little program to find stations whose homogenisation adjustments trend downward in the GHCN v2 data. Nothing remotely clever – I just picked a few stations that had a reasonable amount of data from the last 40 years and less homogenisation in 2009 than in 1970. But suppose I didn’t give that explanation, and just talked about the odd trend in Asheville? That would hardly be methodologically valid.
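For the curious, the selection was no cleverer than something like this sketch. The adjustment values below are invented for illustration; only the station names come from the text:

```python
# A toy GHCN-style table: station -> {year: homogenisation adjustment, in C}.
adjustments = {
    "Asheville": {1970: 0.20, 1990: 0.00, 2009: -0.20},
    "Anchorage": {1970: -0.30, 1990: 0.00, 2009: 0.40},
    "Pohang":    {1970: 0.50, 1990: 0.20, 2009: 0.10},
}

def downward_adjusted(stations, start=1970, end=2009, min_years=3):
    """Stations with a reasonable amount of data whose adjustment is
    lower at the end of the period than at the start."""
    return [name for name, adj in stations.items()
            if len(adj) >= min_years
            and start in adj and end in adj
            and adj[end] < adj[start]]

print(downward_adjusted(adjustments))  # the cherry-picked stations
```

Run over thousands of stations, a filter this crude will always turn up a handful of dramatic-looking downward adjustments – just as the opposite filter will turn up upward ones.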

Here’s how the temperatures in Asheville, North Carolina, were homogenised:

I can’t explain the homogenisation in Asheville. Would it be reasonable to suppose that an AGW denialist hacked into the GHCN website and modified the data? Well, no. Whereas Eschenbach was excited by a 0.7°C increase spread over 20 years, here we have a 0.375°C decrease in a single year. Note how this causes the homogenised line (red) to drop away from the raw data (blue).

What about Pohang, in South Korea?

These major drops cut off what had looked like a warming trend. In two bursts within just 8 years – and at the start of the data set – temperatures are adjusted upwards by a startling 0.65°C! Certainly Pohang was not homogenised to deliberately create an artificial warming trend.

And then there’s Cairns Airport. What the heck is going on here?

Is homogenisation being used to hide global warming all over the world?

Why is the data being homogenised like this? Well, it’s unfortunate, but most temperature stations were not set up to track climate. As such, they periodically got moved, had their equipment changed, or were sited in places convenient for daily weather reporting rather than for monitoring world climate. This means it’s worth controlling for these factors before using the records to study climate.

But, seriously, there are a whole bunch of factors, and between them they can cause adjustments both up and down. If you look at enough sites, you’ll no doubt find examples of both. So if someone shows you one site’s homogenisation and asks you to conclude fraud from it: why that site? Why not any of the thousands of others? Are all the sites homogenised in the same direction? Or is it simply more likely that you are listening to someone who is cherry-picking – finding an anomaly, then wrapping it in a conspiracy and in their own absolute certainty that global warming is a lie and that the scientists who present evidence for it are liars and frauds?

Is something rotten in Alaska?

Via an open thread on Deltoid, I discovered a link to this article by E.M. Smith (reposted on Watts Up With That), looking at an odd map he’d managed to generate using a temperature map generator on the NASA GISS site. The map generator’s pretty fun to play with.

A map of the temperature anomalies can be generated by entering a base period and a time period. As I understand it, the generator then takes, for each grid cell, the difference in mean temperature between the time period and the base period and draws that on a map. Pretty simple, right?
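As a sketch of that arithmetic (the array names, shapes, and random data here are my own illustration, not GISS code), the anomaly for each grid cell is just one mean subtracted from another – which is also why using the same base and time period must give zero everywhere:

```python
import numpy as np

# Toy anomaly-map arithmetic: axis 0 is years, axes 1-2 are a made-up
# latitude/longitude grid of synthetic temperatures.
years = np.arange(1951, 2011)
temps = np.random.default_rng(0).normal(10.0, 1.0, size=(years.size, 4, 8))

def anomaly(temps, years, base, period):
    """Per-cell mean over `period` minus per-cell mean over `base`."""
    base_mask = (years >= base[0]) & (years <= base[1])
    period_mask = (years >= period[0]) & (years <= period[1])
    return temps[period_mask].mean(axis=0) - temps[base_mask].mean(axis=0)

# Same base and time period: the anomaly is identically zero everywhere.
assert np.allclose(anomaly(temps, years, (1951, 1980), (1951, 1980)), 0.0)
```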

Now, if you use the same base period as time period, you’d expect that the map anomalies would all just be zero, right? Well, almost. The default settings exclude ocean data, and E.M. Smith does not change that. Without the ocean data and a 250km smoothing radius, you actually get the following map:
Temperature anomalies, time period 1951-1980, base period 1951-1980

What’s going on? Well, the short answer is that in the GHCN data, 9999 is used as a flag value to designate missing data (see the help file at the bottom of a map page, “Missing data are marked as 9999.”). As there’s no ocean data, 9999 appears there. Now, probably those should be greyed out. In maps that have a different base period and time period, grey is used to designate regions that don’t have any data.
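Handling a flag value like this is routine; a minimal sketch (toy grid, not GISS’s actual plotting code) of masking the 9999s so they would render as “no data” rather than as an absurd anomaly:

```python
import numpy as np

FLAG = 9999.0  # GHCN marker for "missing data"

# A toy anomaly grid in which the ocean cells carry the flag value.
grid = np.array([[1.2, FLAG, -0.4],
                 [FLAG, 0.7, FLAG]])

# Mask the flag cells so statistics and plots skip them; most plotting
# libraries draw masked cells in a neutral "no data" colour (grey).
masked = np.ma.masked_equal(grid, FLAG)
```

With the mask in place, `masked.mean()` averages only the three real cells instead of being swamped by the 9999s.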

However, the simple fact that this was almost certainly just displaying a flag value didn’t stop the conspiracy! Oh no! Presumably, those 9999 values are leaking into the real graphs and causing all the red values in a map like this one:
Temperature anomalies, 2009, base period 1951-1980

Nice idea. So I ran with it. Don’t know how long this GISS map stays up on their site, but I just did 2009 vs 2008 baseline. The “red” runs up to 10.3 C on the key.

So unless we’ve got a 10 C + heat wave compared to last year, well, I think it’s a bug

So I think this points to the ‘bug’ getting into the ‘non-NULL’ maps. Unless, of course, folks want to explain how it is 10 C or so “hotter” in Alaska, Greenland, and even Iran this year: what ought to be record setting hot compared to 1998…

I’ll leave it for others to dig up the actual Dec 1998 vs 2009 thermometer readings and check the details. I’ve got other things taking my time right now. So this is just a “DIg Here” from me at this point.

It’s not the color red that’s the big issue, it is the 9999 C attached to that color… Something is just computing nutty values and running with them.

BTW, the “missing region” flag color is supposed to be grey…

Now, this is something of a leap: how likely is it that spurious values in the ocean would magically manifest themselves as warming in Alaska or Greenland – let alone Iran – rather than in, oh, say, the oceans? And never mind the idea that a sizeable temperature difference between two particular years is especially unlikely: that’s just weather. But, even though this claim is extremely improbable, let’s do a little investigating.

So, the question: was the temperature in Alaska during December 2009 really 4-12.3°C warmer than 1998, or are those 9999s leaking through? This is what NASA’s GISS temperature map shows:

Happily, this is an easy question to answer if you actually look at the data. I downloaded the unadjusted mean GHCN data for the various sites in Alaska (the headers are 42570398000-425704820011). I picked out all the sites which had data for 2009 (I’ve also uploaded the raw data for 1998, 2008, 2009 for these sites so you can look at them if you like). Note that the temperature values are in tenths of a degree.
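The unit conversion and comparison are trivial to sketch. The station values below are made-up placeholders for illustration only – the real readings are in the files linked above:

```python
# December monthly means in tenths of a degree C, as stored in GHCN v2.
# These numbers are PLACEHOLDERS, not the actual Alaskan readings.
dec_tenths = {
    "St Paul":  {1998: -52, 2009: -3},
    "Cold Bay": {1998: -41, 2009: 12},
}

def dec_diff_c(record):
    """Dec 2009 minus Dec 1998, converted from tenths to degrees C."""
    return (record[2009] - record[1998]) / 10.0

diffs = {station: dec_diff_c(rec) for station, rec in dec_tenths.items()}
average = sum(diffs.values()) / len(diffs)
```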

[Table: mean December temperatures (tenths of °C) for Dec 1998 vs Dec 2009 at the Alaskan GHCN stations, including St Paul, Cold Bay, King Salmon, and Annette Island]
I’ve helpfully highlighted the differences for the sites in the parts of Alaska that are particularly red on the map. There does appear to be some sort of correlation. The average temperature difference between Dec 1998 and Dec 2009 at those sites is 4.9°C warmer. The darkest shade of red represents an anomaly of between 4 and 12.3°C, so Alaska is represented correctly. The average, Alaska-wide, was 2.8°C warmer.

It’s not just me. A commenter on Watts Up With That, carrot eater, points out:

First station I tried: Goose, Newfoundland.

is 8.6 C warmer in Dec 09 than Dec 08.

Let’s look for other stations in red splotches in Dec 09, compared to Dec 08

Egesdesminde, Greenland 5.1 C
Fort Chimo, Canada. 10 C

Looks like I found your 10 C difference between Dec 08 and Dec 09. Third station I tried. Hence, the range of the colorbar.

Let’s see what else we find.
Danmarkshavn, Greenland. 2.7 C
Godthab Nuuk: 5 C
Inukjuak Quebec: 6.6 C
Coral Harbor: 8.6 C

So I’ve found a bunch of stations that are between 5 and 10 C warmer in Dec 09 compared to Dec 08.

This is a fun game, after all. Let’s say I want to find the biggest difference between Dec 09 and Dec 98. There are lots of red splotches on the map, and the colorbar has poor resolution. So I’ll download the gridded data and have a look.

Scrolling past all the 9999s for missing data, and I find that I should be looking at some islands north of Russia. I try some station called Gmo Im.E.T, and I get:

Dec 09 is 12.3 C warmer than Dec 98. First try.

So, yeah, this “bug” turned out to just be a weather fluctuation. Colour me surprised.