Rapid adaptation to temperature change… and its limits

People often think of evolution as though natural selection were sitting around waiting for new mutations to promote or cull. But it’s not really like that. A great deal of variation exists in any population, much of which has little or no effect on the survival or reproductive success of individuals carrying that variation. However, a changing environment can alter all that.

Barrett et al. (2010) were interested in how populations of three-spine sticklebacks (Gasterosteus aculeatus) would respond to lower temperature extremes. They collected samples of sticklebacks from both marine lagoons and freshwater lakes in British Columbia, Canada. First, they acclimated the fish to living in fresh water, as well as to a consistent temperature and day length.

Lakes are far more variable in temperature than the oceans – they are warmer in summer and cooler in the winter – due to the smaller quantity of water which needs to gain or lose heat. Rather unsurprisingly, the researchers found that the lake-dwelling sticklebacks could tolerate significantly colder temperatures than their marine counterparts (both populations could tolerate much higher temperatures than they ever encountered in the environment). They also demonstrated that the degree of tolerance for cold extremes was heritable – even raised in the same environment, the offspring of lake-dwelling fish could tolerate lower temperatures, whereas marine sticklebacks could not (and hybrids were intermediate).

The interesting part, however, came when they raised populations of sticklebacks with marine ancestors in ponds, which could get even colder in winter than the freshwater lakes. In just three generations (three years), the population evolved to tolerate temperatures 2.5°C colder than their marine forebears! This wouldn’t have been down to new mutations – existing gene variants, already present (but perhaps rare) in the population, had simply become far more common than they had previously been.

It wasn’t all good news for the sticklebacks, though. Genetic diversity is critical to maintaining populations, and a period of such strong natural selection will dramatically reduce a population’s diversity. Even if a population can adapt to one sudden shock, doing so may so deplete its genetic diversity that there won’t be any convenient alternative genes left in the population when the next hit comes.

Canada temperature anomaly, 2009 vs 2006–2008, from GISTEMP

The next year brought the coldest winter that part of Canada had seen for several decades, and despite all their adaptations, all three of the experimental populations were wiped out. It may be that it was simply too cold, or perhaps the increased ice cover on the ponds reduced the oxygen levels in the water below what the fish needed. Either way, it’s a grim prospect for conservation biologists if a population that seems, by all accounts, to be surviving and even adapting to the changes in its environment can suddenly hit an impassable barrier and go extinct.

Sticklebacks have a history of adapting to significantly changing temperatures over the last few millennia, so they may have had the advantage of genes for dealing with a changing climate already present in their populations. That may not be the case for all species, and this study has shown just how drastic an effect a change in temperature extremes can have on populations.


Barrett, R., Paccard, A., Healy, T., Bergek, S., Schulte, P., Schluter, D., & Rogers, S. (2010). Rapid evolution of cold tolerance in stickleback. Proceedings of the Royal Society B: Biological Sciences. DOI: 10.1098/rspb.2010.0923

Small no-take zones can help top predators

It’s difficult to protect large marine areas from fishing – a great deal of resources must be put into patrolling and enforcing such an area. However, new research suggests that small but well-targeted protection zones can have a significant effect all the way up the food chain.

African Penguin
African Penguins (Spheniscus demersus) are a vulnerable species of penguin restricted to southern Africa. They are threatened by human activities, such as egg collection and oil spills. Their population dropped by about 90% in the 20th century, and has continued to drop since. There are now fewer than 26,000 breeding pairs.

Top predators, such as these penguins, are important members of an ecosystem, and removing them from an environment can ripple through the food web in drastic ways. Pichegru et al. (2010) looked at the effect that a small no-take zone around a penguin colony has on the success of the colony, comparing it with a nearby colony which did not get a protected zone. They measured the duration and distance of the penguins’ hunting trips, along with diving time and dive depth, to calculate the effort the penguins expended in finding food.

Over just three months, the protection had a substantial effect on the penguins. Overall, the penguins in the protected zone spent less time hunting, travelled shorter distances and stayed closer to the colony, reducing their foraging effort by 25–30%. This meant that they were able to spend an extra five hours each day on their eggs. They also shifted their hunting patterns – before the protected zone was created, they foraged in that area about a quarter of the time, but by the end of the study they were doing over 70% of their hunting inside the zone.

It is interesting that the penguins in the control colony lost weight and spent longer foraging during the study period. It’s possible that protecting the one area shifted more of the human fishing pressure into the waters around the other island. However, the positive effects on the protected colony far outweighed the negatives on the control island, and in any case not all of the fishing would be shifted to the other colony. This study also didn’t look at what effects the protection may have had on the breeding or survival of the penguins – which, of course, is an important question.

Study location


Pichegru, L., Gremillet, D., Crawford, R., & Ryan, P. (2010). Marine no-take zone rapidly benefits endangered penguin. Biology Letters, 6 (4), 498-501. DOI: 10.1098/rsbl.2009.0913

What options would those be?

Intelligent design proponent (and young earth creationist) David Tyler has a post on the ARN blog about a fossil pelican’s beak. The short version is that it seems that there was a pelican with a beak similar to that of a modern pelican flying around 30 million years ago.

The gist of David’s argument is this:

What we are seeing here is a particular type of stasis, and it concerns complexity. Much diversification has little or no effect on complexity and examples of diversification therefore have little or no bearing on the origin of complexity. The pelican beak, however, is not just a big beak! There are numerous coordinated elements that have to be present for the beak to function at all. The fossil find is important because the earliest fossil of a pelican exhibits the full functionality of the modern birds. As far as the known fossil record is concerned, complexity was present – before the radiation of the Pelecanidae.

Over 65 million years ago there were dinosaurs, many of them fairly complex. It’s hardly surprising, then, that there were complex things a mere 30 million years ago. Unless he’s basing the argument on Lord Kelvin’s estimate of the age of the Earth, there’s really nothing more to it.

Evidently, the pelican beak existed in much its current form at least 30 million years ago. This fossil pushes pelican beak evolution back at least that far, and there is certainly an interesting question as to why the beak hasn’t changed in that time. But it is not evidence that the pelican beak did not evolve.

Yes, this makes stasis in the pelican beak intriguing and it means that Darwinism has nothing to offer by way of an explanation. New explanations should include the options opened up by intelligent design.

I can only wonder what those options might be.

New software

I’m a software developer, and I tend to solve problems (in so far as they are solvable) by writing software. As I like to share it, I’ve created a little website for my programs. The Seraphiel website has now been updated to take advantage of the new site, and I’ll be adding new programs over the next few days.

Creationism in the national curriculum

Cross-posted onto Young Australian Skeptics

Australia is in the process of creating a national curriculum, but the current draft of the history curriculum contains the following (emphasis added):

Students develop their historical skills in an investigation of TWO of the following controversial issues:

  1. human origins (e.g. Darwin’s theory of evolution and its critics)
  2. dating the past (e.g. radio-carbon dating, tracing human migrations using DNA)
  3. fakes and forgeries (e.g. Piltdown Man, the Treasure of Priam, Noah’s Ark, the Turin Shroud)
  4. the use and display of human remains (e.g. repatriation of Aboriginal and Torres Strait Islander human remains, The Iceman, Egyptian mummies, Lady Dai)
  5. imperialistic attitudes towards archaeological property (e.g. Indigenous cultural artefacts from around the world)
  6. the ownership of cultural property (e.g. the return of Parthenon sculptures)
  7. the impact of war and terrorism on antiquities (e.g. the Buddhas of Bamyan, the looting of Iraqi museums)
  8. political and ideological uses of archaeology (e.g. archaeology under the Nazis and Fascists)
  9. a school-developed study of a controversial issue.

Students examine the nature and context of the controversy, including:

  1. the historical background
  2. the extent of the controversy (media coverage, nationalistic feeling, government involvement) and significant developments relating to the controversy
  3. different perspectives and their bases
  4. an assessment of the different perspectives.

Now, in terms of, say, science, those first two are roughly as “controversial” as Velikovsky’s theories or ancient astronauts. So why are they in the curriculum? Well, it looks pretty much like – actually, exactly like – the “teach the controversy” campaign aimed at getting falsehoods taught to students in the US. Each of the points looks perfectly reasonable on its own, and the intention is that they look perfectly reasonable – but they give a creationist teacher an opportunity to teach or reward blatant falsehoods. It’s then a lottery as to whether you get a history teacher with the necessary scientific knowledge to accurately assess technical details of radiocarbon dating, or one who repeats long-debunked nonsense.

There’s also Piltdown Man in there, and again, that could work – just as long as you don’t get a creationist teacher. At least it is listed alongside other hoaxes, such as the various “findings” of Noah’s Ark and the Turin Shroud, and that’s something.

Furthermore, these are scientific topics – why would they be introduced into the history curriculum, instead of the science curriculum? Well, as PZ put it:

The science side of the debate has gotten hardened by repeated attacks, and is usually better prepared to resist the foolishness, so they switch targets and catch history or philosophy off guard. Every academic discipline is subject to this corruption.

However, in this case, there is something you can do. The draft curriculum is open for consultation. The creationist questions can be found here, under unit 2 (you’ll need to register first).

Hat-tip: PZ Myers

Was the Arctic Ice Cap ‘Adjusted’?

Over at “American Thinker”, Randall Hoven has a post about the Arctic ice caps and, specifically, the difference between the “area” and “extent” values for the size of these. The problems start with the interpretation of a graph much like this:

Now, you probably noticed the substantial discontinuity in the “area” during 1987. This is even more apparent if you look purely at the difference between extent and area:

I’ve also plotted the difference between the extent and area for the entire period (taken from the bootstrap data):

Now, Randall includes an “Important Note” from the raw data, which explains:

Important Note: The “extent” column includes the area near the pole not imaged by the sensor. It is assumed to be entirely ice covered with at least 15% concentration. However, the “area” column excludes the area not imaged by the sensor. This area is 1.19 million square kilometres for SMMR (from the beginning of the series through June 1987) and 0.31 million square kilometres for SSM/I (from July 1987 to present). Therefore, there is a discontinuity in the “area” data values in this file at the June/July 1987 boundary.

So the discontinuity exists because, from the start of the series to mid-1987, the data come from SMMR, which had no data for 1.19 million square kilometres around the pole (the “pole hole”); it was then replaced by the SSM/I instrument, which misses only 0.31 million square kilometres. The “area” figure does not account for this, and since at least most of the pole hole will be covered in sea ice, the instrument change alone adds almost 0.88 million square kilometres of sea ice to the “area” figure from mid-1987 onwards. So, obviously, if I remove this difference, the discontinuity disappears:

Looking at this, there is still substantial variation in the difference between “extent” and “area” figures. Randall asks why:

What were the differences? From the above words from the NSIDC, you would think that the differences would be constant offsets (1.19 million sq km from 1979 through June of 1987, and 0.31 million since). But the actual differences in the data file were not constant at all; they varied between 1.93 and 3.42 million sq km.

Notice, however – it shows up particularly clearly with the complete data set – that these differences are clearly changing on an annual cycle (plus some variation – weather). And there’s no reason to assume that “extent” and “area” are measuring exactly the same thing. So, if we check how the NSIDC define these terms, we learn:

In computing the total ice-covered area and ice extent with both the NASA Team and Bootstrap Algorithms, pixels must have an ice concentration of 15 percent or greater to be included. Total ice-covered area is defined as the area of each pixel with at least 15 percent ice concentration multiplied by the ice fraction in the pixel (0.15 to 1.00). Total ice extent is computed by summing the number of pixels with at least 15 percent ice concentration multiplied by the area per pixel, thus the entire area of any pixel with at least 15 percent ice concentration is considered to contribute to the total ice extent.

These are obviously different figures: for “area”, each pixel is weighted by its ice concentration, which would presumably be higher in winter – which is exactly what we see in the differences. Randall, on the other hand, resolves the issue by completely ignoring it.
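To make the quoted NSIDC definitions concrete, here is a minimal sketch of the two calculations. The pixel size and concentration values are made up for illustration – the real satellite grids and algorithms differ:

```python
# Sketch of how NSIDC-style "area" and "extent" differ, per the quoted
# definitions. The 625 km^2 pixel size and the concentration values
# below are illustrative only.

PIXEL_AREA_KM2 = 625.0   # hypothetical 25 km x 25 km grid cell
THRESHOLD = 0.15         # pixels below 15% concentration are excluded

def ice_extent(concentrations):
    """Extent: the full area of every pixel with >= 15% ice concentration."""
    return sum(PIXEL_AREA_KM2 for c in concentrations if c >= THRESHOLD)

def ice_area(concentrations):
    """Area: each qualifying pixel's area weighted by its ice fraction."""
    return sum(PIXEL_AREA_KM2 * c for c in concentrations if c >= THRESHOLD)

winter = [0.95, 0.90, 1.00, 0.85]   # high concentrations
summer = [0.40, 0.20, 0.60, 0.10]   # low concentrations (one pixel excluded)

# The extent-minus-area gap shrinks when concentration is high, which is
# the annual cycle visible in the difference plots above.
print(ice_extent(winter) - ice_area(winter))   # small gap in "winter"
print(ice_extent(summer) - ice_area(summer))   # much larger gap in "summer"
```

With high winter concentrations the two figures nearly coincide; with low summer concentrations the gap opens up, so a varying extent-minus-area difference is exactly what these definitions predict.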

Going back to the March data, before adjusting for the “pole hole”, I find – like Randall – that it actually has a slight positive trend:

However, after adding the pole hole region, I get a much stronger downwards trend in the quantity of sea ice:

Now, I’ll emphasise that this isn’t (necessarily) accurate; some (unknown to me) portion of the pole hole might not contain sea ice during March. That data obviously exists, but I don’t have the time at the moment to analyse it.
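The pole-hole adjustment itself amounts to adding the sensor-specific hole size (from the NSIDC note quoted above) back onto each reported area value. A sketch, using illustrative area figures and assuming, as the note does, that the hole is fully ice covered:

```python
# Sketch of removing the 1987 instrument discontinuity by adding the
# unmeasured "pole hole" back to the reported "area". The hole sizes
# come from the NSIDC note; the area values below are made up.

SMMR_HOLE = 1.19   # million km^2 missed by SMMR, through June 1987
SSMI_HOLE = 0.31   # million km^2 missed by SSM/I, from July 1987 on

def corrected_area(year, month, reported_area):
    """Add the appropriate sensor's pole hole to a reported area value."""
    hole = SMMR_HOLE if (year, month) <= (1987, 6) else SSMI_HOLE
    return reported_area + hole

# Illustrative March values either side of the sensor change: the raw
# "area" jumps by ~0.88 million km^2, but the corrected values agree.
print(corrected_area(1987, 3, 14.00))   # SMMR era
print(corrected_area(1988, 3, 14.88))   # SSM/I era, raw value ~0.88 higher
```

The 0.88 million square kilometre step in the raw series (the difference between the two hole sizes) disappears once the correction is applied, which is all the adjustment in the plots above is doing.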

And all of this means that the rest of Randall’s conclusions are invalid, since they are based on a false premise.

Actually, the rate of growth is statistically insignificant, meaning that a statistician would say that it is neither growing nor shrinking; it just bobs up and down randomly. More good news: no coming ice age, either.

No, there is definitely a significant trend.

You see that “extent” always shows more shrinkage than “area” does. In the months of maximum sea ice, February and March, the area trend is upward. And for winter months generally, December through May, any trend in area is statistically insignificant. For summer months, July through October, the trend is downward and statistically significant.

But these calculations are all based on extremely biased data for the start of the period, and so are all wrong.

Katie Couric should have used the month of September as her example. In three decades, the Arctic sea ice “extent” shrank by 34%. She could make such claims while stating, truthfully, that the data come from NSIDC/NOAA and the trend is statistically significant. It’s science.

Despite the sarcasm dripping from this sentence, yes, it is science. The Arctic ice is melting. Without the “pole hole”, September looks like this:

As you can see, my trend line isn’t a very good fit to this data, and, as Randall says, any decrease seems to be in just the last few years. After adding in the pole hole, however, things look a lot different:

Again, the red line represents “area,” the only thing actually measured. A downward trend is evident to the eyeball. But look closely and that downward trend is fairly recent — say, since 2000. Indeed, the calculated trend was slightly upward through 2001. That is, the entire decline is explained by measurements since 2002, a timespan of just eight years.

But the older data was biased, so the downward trend actually extends over the whole period, and is somewhat stronger, to boot.

To understand the trend, you need to understand the data you’re looking at. Or, as the readme file for the data Randall Hoven looked at put it: “we recommend that you read the complete documentation in detail before working with the data”. Had Randall done that, and checked the meanings of “area” and “extent” before writing this piece, he could have saved himself a lot of bother and embarrassment.

Randall starts his conclusion like this:

This little Northern Hemisphere sea ice example captures so much of the climate change tempest in microcosm.

And that’s very true. Someone looked at data they didn’t understand, analysed it improperly, and reached strong but utterly false conclusions as a result. And then, even when corrected on the misunderstanding, he continued to believe those conclusions.

See, as I was writing this post, Randall posted a correction on his site. It turns out that he’d found the definitions of area and extent (technically, he still gets the definition of area slightly wrong, but it’s not as bad). However, despite acknowledging these problems with his main article, he tries to rescue the point with this:

If we add the “pole hole” back to the measured “area,” we would get a downward trend in area due to the change in pole hole size in 1987. If we assume that the pole hole is 100% ice, then the downward trend in March would be 2.2% per decade. But if we assume that the pole hole is only 15% ice (the low end of what is assumed), then the downward trend is only 0.1% per decade, which is not statistically significant. (The corresponding downward trend for “extent” was 2.6% per decade.)

It is true that whatever downward trend there is for March is due only to these adjustments (assumed pole hole size and concentration). And whether that trend is statistically significant depends on ice concentration in the “pole hole,” an assumed value.

For a start, it seems to me to be a fairly reasonable assumption that the ice content of the pole hole is towards the high end of the range – after all, that’s the bit of the Arctic closest to the North Pole. And the thing is, that assumption is a pretty darn testable one. All you have to do is go to the North Pole and look. Come to think of it, I’d be willing to bet that someone already has.
Image from Wikipedia