Dec 16

Stentorian

We had the annual “looking at muck down a microscope” practical last week. As usual, the best thing we saw was a ciliate in some pond water, in this case a little trumpet animalcule:

Stentor sp. [CC-BY-SA-3.0 Steve Cook]

Stentor sp. The green bits are either symbiotic algae, or dinner, possibly a bit of both.

Previous winners: Vorticella and Lacrymaria. The Ciliata really are the phylum that keeps on giving.

Dec 15

A queen’s Christmas message

Well, at least 2:8 is plausible.

2:1 And it came to pass 10 years after the death of Herod the Great, that there went out a decree from Caesar Augustus that all the world – except of course for those irrelevant little bits that the Romans hadn’t conquered yet – should be taxed.

2:2 (And this taxing was first made when Quirinius was governor of Syria during what – by a large stretch of the imagination – may have been his second term, his first (entirely undocumented) term having been in 4 BCE, during which an (entirely undocumented) census almost certainly didn’t take place either.)

2:3 And all went to be taxed, every one into his own ancestors’ city, in direct contravention of previous Roman policy and of common sense.

2:4 And Joseph also went up from Galilee, out of the city of Nazareth – using his wooden time-machine to travel back through the centuries required for the town to come into existence – into Judaea, unto the city of David, which is called Bethlehem.

2:5 To be taxed with Mary his espoused wife, being great with her child, whom she claimed – completely plausibly – to have fallen into her womb from heaven, rather than to have been formed in the usual sweaty and grisly fashion.

2:6 And so it was, that, while they were there, the days were accomplished that she should be delivered of her son, in accordance with a variety of mistranslated prophecies that meant something quite different.

2:7 And she brought forth her firstborn son, and wrapped him in swaddling clothes, and laid him in a manger; because there was no room for them in the inn, this being full of the whole population of Judea, who – like Joseph – also had unrealistic ideas about Fisher’s relatedness coefficient and the importance of Y chromosomes.

2:8 And there were in the same country shepherds abiding in the field, keeping watch over their flock by night.

2:9 And, lo, the angel of the Lord came upon them, which must have been a bit of a surprise. The glory and bowel-loosening terror of the Lord shone round about them: and they were sore afraid, particularly of the cherubim.

2:10 And the angel said unto them, Fear not: for, behold, I bring you good tidings of great joy, which shall be to all people. For some value of ‘people’. And of ‘all’.

2:11 For unto you is born this day in the city of David a man who is God, and also the son of God, and also the son of a girl from a town that doesn’t exist. But definitely not the son of Joseph. Despite the effort we’ve gone to in establishing his back-story. Good luck working that one out.

2:12 And this shall be a sign unto you; Ye shall find the logical abomination wrapped in swaddling clothes, lying in a manger, possibly attended by a number of Persian priests or kings or wise-men or some-such, whom the author of this document will casually forget to mention.

2:13 And suddenly there was with the angel a multitude of the four-faced, six-winged heavenly host praising God, and saying,

2:14 Glory to God in the highest, and on earth peace and good will toward men, except Monophysites, Monothelites, Arians, Nestorians, Manichaeans, Marcionites, Ebionites, Sadducees, Pharisees, Docetists, Cathars, and especially not those bloody atheists. And probably not towards women either, come to think of them.

2:15 And it came to pass, as the angels were gone away from them into heaven, the shepherds said one to another, Let us now go even unto Bethlehem, and see this thing which is come to pass, which the Lord hath made known unto us. It sounds like a right laugh, and there’s always the chance of a punch-up.

2:16 And they came with haste, and found Mary, and Joseph, and the babe lying in a manger.

2:17 And when they had seen it, they made known abroad the saying which was told them concerning this child, about which Joseph must have been absolutely delighted.

2:18 And all they that heard it wondered at those things which were told them by the shepherds, because – frankly – you would wonder about it, wouldn’t you? Wouldn’t you?

2:19 But Mary kept all these things, and pondered them in her heart, for she knew that her remaining verses were numbered.

2:20 And the shepherds returned, glorifying and praising God for all the things that they had heard and seen, as it was told unto them. Unfortunately, when they got back, the sheep had wandered off, and there was much unseemly blasphemy.

2:21 And (as will probably be prudishly edited out when you hear this read in the dim and distant future), when eight days were accomplished for the mutilation of the child’s penis, his name was called Joshua, which was so named of the angel before he was conceived in the womb.

A very Merry Joshuamas to you all.

Jul 02

Organism of the week #22 – Faking it and making it

Nettles have a rather unhappy reputation as bringers of painful welts, and – at this time of year – dribbling noses too. The welts are probably caused by histamine, and the pain by oxalic and tartaric acids, which the nettle injects into your skin through the tiny brittle hairs that cover its stems and leaves. If you’re stung badly, the pain can last for several hours.

Urtica dioica [CC-BY-SA-3.0 Steve Cook]

Stinging nettles (Urtica dioica) in Russia Dock Woodlands, Rotherhithe

If you are stung by nettles at some point, you’ll probably avoid trampling barefoot on them in future. If getting trampled by humans is a big ecological problem for nettles, then a less stingy nettle stands a poor chance of growing up to make baby nettles. Stingless nettles will therefore go extinct, and their stingier competitors will inherit the earth. Praise be to Darwin.

Urtica dioica trichomes [CC-BY-SA-3.0 Frank Vincentz]

Stinging nettle stinging hairs [CC-BY-SA-3.0 Frank Vincentz]

Unfortunately for you (and fortunately for those who make a living from it), it’s almost always true that “I think you’ll find it’s a bit more complicated than that” in biology. If humans can be persuaded to avoid meddling with nettles through a single painful experience, there is a lot of opportunity for cheats to exploit your fear. In the case of nettles, one well-known group of cheats are the so-called dead nettles:

Lamium galeobdolon [CC-BY-SA-3.0 Steve Cook]

Yellow archangel, a common dead nettle (Lamium galeobdolon)

The leaves of dead nettles look remarkably like those of stinging nettles, but the dead nettles neither sting, nor are they even close relatives of stinging nettles: stinging nettles are related to hops and cannabis; dead nettles to mint and sage. The flowers give the game away at this time of year, but in spring, the two plants are really very similar. You need to get quite close to spot the missing stings on the dead nettles, and if you’ve had a bad experience with the real thing in the past, getting quite close is probably something you – or a fluffy wuffy bunny, or whatever – would think twice about.

Urtica dioica and Lamium album (spot the difference) [CC-BY-SA-3.0 Steve Cook]

Stinging nettle (Urtica dioica) and white dead-nettle (Lamium album): spot the difference. The dead nettle has conspicuous white flowers; the stinging nettle’s flowers are greenish-brown tassels

Good biologists should always be skeptical of plausible stories, so I should add that I’ve not actually been able to track down any experimental studies seeing whether bunnies who have learnt to avoid stinging nettles also avoid dead nettles, let alone any that show dead nettles are more successful at making seeds when real nettles are in the same area. Assuming this actually is the case, dead nettles would be “Batesian mimics” of stinging nettles, or – if you’d rather – fakers. They don’t have to waste energy making histamine and oxalic acid and hypodermic needles; they merely have to look somewhat similar to stinging nettles to receive all the benefits of having bunnies avoid them, with fewer of the costs.

But what would happen if dead nettles were such good fakers that they became very common? The bunnies would rarely meet the real thing, and would probably never learn to avoid nettle-like plants of any sort. Even if the bunnies did occasionally meet stinging nettles, those reckless bunnies that threw caution to the wind and ate things that looked like nettles would still tend to get more to eat than more cautious bunnies. In either case, the dead nettles would get nibbled back into relative rarity. And then the more reckless bunnies would get stung more often, as they’d meet real stinging nettles more frequently, and this would – in its turn – favour bunnies that were more cautious again, leading to a resurgence of the dead nettles. And so on, and so on.

The relative rarity of dead nettles and stinging nettles wouldn’t necessarily roller-coaster up and down like this: the cycles could be quite small. However, it’s interesting that neither a field of dead nettles on their own, nor of stinging nettles on their own, is stable. A field of nothing but stinging nettles is prone to invasion by fake dead nettles; but if the number of dead nettles gets too high, the bunnies will never meet the real thing, and won’t learn to avoid nettle-like plants in the first place. There is likely to be some ratio of real to fake nettles (and of cautious to reckless rabbits) that is stable in the long term, but it won’t be 0% or 100%.

These sorts of ‘game’ between mimics – the dead nettle “fakers” – and their models – the stinging nettle pain “makers” – are very common in biology, and are an important part of the ecology of many organisms. Wherever an organism has made some sort of ‘effort’, there is likely to be a living made scrounging off them, or mimicking their appearance.

But of course, it’s always a bit more complicated in biology. Not all mimics are fakes. Some mimics benefit from looking dangerous because they really are dangerous.

Mimicry [CC-BY-SA-3.0 Steve Cook]

Honeybee (Apis mellifera), bumblebee (Bombus terrestris), cinnabar moth caterpillar (Tyria jacobaeae), hoverfly (Eupeodes luniger)

The honeybee and bumblebee in the image above both have black and yellow striped bodies. Both are able to sting, and both seem to have similar colours. Is one mimicking the other, and if so, why?

As I said earlier, you should be skeptical of plausible stories. Bumblebees and honeybees are quite closely related, so perhaps the black-and-yellow is just a colour-scheme they’ve inherited from their common ancestor that has nothing to do with mimicry. We need more evidence.

As it turns out, there is very good evidence that black-and-yellow is meaningful mimicry, not accidental similarity. For example, the cinnabar moth caterpillar in the third image is not closely related to the bees, so it is likely that this caterpillar’s colours have evolved independently from those of the bees. Can it sting? Not exactly, but it is poisonous, because it mostly eats ragwort, and it steals the ragwort’s poisons for its own defence. Any bird that has learnt to avoid black-and-yellow insects through unhappy run-ins with bees is likely to avoid this caterpillar too. Importantly, this works both ways: any bird that’s had a bad experience with cinnabar moth caterpillars is also likely to avoid bees (and wasps, and other similar insects).

This sort of mimicry, where makers – the animals and plants that can back up their threats – all come to have similar warning colours is called Müllerian mimicry. If you need any more convincing, it’s telling that there are also many Batesian fakers of the black-and-yellow “warning” colour-scheme too, like the harmless hoverfly shown in the fourth image.

The natural world is full of liars and cheats; except when it isn’t.

Jun 20

Half a life

As of today, I will have spent precisely half of my life at $PLACE_OF_WORK.

I first arrived at what would become my workplace as a badly coiffured youth in 1995 to do a biology degree. South Kensington seemed a great improvement over Croydon, where I had endured my previous 18 years: there was a refreshing absence of casual street violence, and a greatly improved proximity to the grubby delights of Soho. At that time, my Hall of Residence was directly above the first-year lecture theatre, and in the same building as the Students’ Union. Despite this tempting proximity to cheap vodka – and even cheaper dates – I somehow managed to attend almost every lecture of my first year, aside from a week (a week!) of lectures on algae, which I traded for bossing Munchkins about in the Questor’s Theatre in Ealing. I met my personal tutor at least twice, survived two under-catered field-trips to somewhere, somewhere in a field in Hampshire (well, Berkshire), and made friends whom I treasure to this day.

Ecology 1996

I discovered a cache of mediaeval exam papers in the bottom of a filing cabinet when I last cleared out my office. I have a very distinct memory of answering this question, probably because it involved talking about the “Sexy Sons” hypothesis.

Second-year forced me out into less convenient accommodation: an ill-conceived double-Georgian knock-through near Brompton Cemetery with 18 bedrooms, and anything up to 2 working bathrooms on any given day. Due to sometwo else both failing their first-year exams, I found myself promoted to Homosexual in Chief of the LGBT society, for which dubious honour I now have a pot behind the Union bar.

Pot

I am number 2 on the list of Chief Homosexuals. I believe number 1 is now advising the Lib Dems on election strategy. I suspect this pot may be cursed.

My final-year project on copper-tolerant fungi somewhere, somewhere in that field in Berkshire led to the offer of a PhD in wood preservation, which I leapt upon, having received no careers guidance whatsoever up to that point, and having begun to fear moving back to Croydon for want of any botanical PhD opportunities in London. My undergraduateship ended with a viva voce, upon which I thought hung the fate of my entire degree; in fact, I turned out to be a control, and what I had thought would be a bowel-loosening grilling turned out to be entirely unmemorable.

Summer Ball 1998

I had to hire my suit for the Summer Ball just before I graduated. I still hate wearing suits of any kind, which probably contributes to my unemployability outside of academia.

Like most postgraduate research degrees, mine was a heady mix of disappointment, poverty, and the growing realisation that week-day nights-out were incompatible with competent laboratory work. My department had moved out of the timeshare flat with the Students’ Union and into a brand-new building during the summer between my BSc and PhD, but someone had been a little unrealistic about the space available in the new labs. The first and second years of my PhD were spent trying not to poison myself with arsenic trioxide amongst a labyrinth of broken vacuum impregnators, quickfit glassware, and bottles of solvent with labels written in Linear A; the third and fourth years were spent trying to fit research into the gaps between the demonstrating in lab practicals I had to do in order to have enough money to eat. Somehow I captured the heart of a young aeronautical engineer, who has miraculously put up with my questionable charms ever since.

I presented my ground-breaking findings on the bacterial biotransformation of an anti-sapstain chemical to a conference in glamorous Cardiff, and left it at that. My contribution to the greater knowledge of humankind will forever be a few grey literature conference proceedings, and a large blue book buried in quicklime below the College library.

Pallet boards [CC-BY-SA-3.0 Steve Cook]

I occasionally have nightmares about being buried under a landslide of poorly preserved pallet boards.

Having drifted into a PhD, I continued on my under-thought career path by applying for a three-year post-doctoral position that combined part-time research with a part-time PGCE in secondary school education. In retrospect, combining the laugh-a-minute relaxation of academic research with the delights of herding teenagers through GCSEs may not have been the best life decision I’ve made. There were amusing moments – the attempts of year 7 students to embarrass me during sex-ed lessons were doomed from the start – but mostly it was exhausting and impossible. I somehow made it through to the other side, but with no interest whatsoever in ever darkening the door of a secondary school or research lab again.

Woodlice simulator [CC-BY-SA-3.0 Steve Cook]

Simulating woodlice: anything was better than differentiating my citizenship lessons for kinaesthetic learners [sic]

Fortunately, I had kept up a bit of lab demonstrating on the side, and had even been roped into giving a few first-year lectures in the twilight of my PhD. A temporary position opened up convening a first-year biology course, giving a few lectures, and running some of the practicals I’d been demonstrating for the best part of a decade. And so began a slow accretion from ‘stop-gap teaching gimp’ to ‘senior teaching fellow’.

Marking

One of my major roles is the conversion of caffeine into grades.

Many of the staff who taught me as an undergrad have since retired or moved on; even the new-born building of 1998 is now old enough to legally have sex and drive a moped. Some 1700 students have learned – or at least endured – first-year molecular biology and enzymology with me, and the pile of marking in front of me (for which writing this banal drivel is the sort of displacement activity against which I’ve hypocritically warned those very students) probably contains the ten thousandth script I have scrawled with the Biros of judgement.

I probably ought to get back to it.

In confirmation of the universe’s pitiless malevolence, I now give the lectures on algae that I skived off in my first year.

Tally

Aleph-naught bottles of beer on the wall, aleph-naught bottles of beer, you take one down, you hand it on round, aleph-naught bottles of beer on the wall

Jun 04

Organism of the week #21 – Flying machines

It is frequently, and largely accurately, said that an area of Amazon rainforest the size of Wales is deforested every year. Horrendous though this statistic is, it’s worth remembering that the UK deforested an area at least the size of Wales (including most of the area commonly known as “Wales”) before anyone started keeping notes.

The UK’s track record at maintaining its biodiversity has been – to put it generously – somewhat patchy. We have wiped out a goodly swathe of our large mammals: brown bears, elks, lynxes, and wolves; we drove our blue-backed stag beetles to oblivion; and Davall’s Sedge has not been spotted since 1930. One species that was formerly so common in the UK that Shakespeare felt the need to warn theatre-goers about its favoured nest-building materials is the red kite:

My Trafficke is sheetes: when the Kite builds, looke to lesser Linnen.

This beautiful bird was very nearly wiped out in the UK by the early 20th century; only a handful of breeding pairs were left by 1990, in – you guessed it – Wales. Its populations in southern Europe continue to decline, and it is still considered near threatened. However, since the 1990s, the red kite has been the target of a major reintroduction program in the UK, and in a few places they are once again a common sight, soaring on thermals and seeking out rabbits, carrion, and recently washed pillowcases.

A good place to see these impressive birds is the Chilterns, a range of chalk hills just north of London. I’m not generally a charismatic-megafauna kind of biologist, but getting close enough for even this somewhat blurry action shot was thrilling:

Milvus milvus [CC-BY-2.0 Alex Lomas]

Naturally-selected flying machine: red kite (Milvus milvus), unimpressed by the launch of a glider over the top of its head

The kites particularly like to hang around on airfields, presumably on the look-out for tasty pilots. Their blasé attitude to the planes and gliders is amusing if you’re on the ground. It is somewhat less amusing when you meet them in the air, and they remind you in no uncertain terms that their lineage has been flying since before your lineage even took to the trees, let alone came back down from them.

flying_machines [CC-BY-SA-3.0 Steve Cook]

Human-designed flying machines. Well, one’s a glider, which is to flying what a snake is to tap-dancing, but you know what I mean.

Apr 23

Bagging botanic gardens

I’m not sure whether bagging botanical gardens is better or worse than bagging Munros, Michelin stars or the numbers off of rolling stock, but it keeps me off the streets…

Edinburgh botanical gardens Gunnera [CC-BY-2.0 Alex Lomas]

This Gunnera at Edinburgh makes me look even more like a lawn-ornament

London (Kew)

Just ten stops down the District Line from $WORK lies the Royal Botanic Gardens Kew. The gardens have three enormous glasshouses, a number of smaller glasshouses, and 121 hectares of trees, beds and desperately awful architecture to explore. Unfortunately, it also has an entry fee (for non-concession adults) of £14.50, which is a little steep, and possibly one of the reasons that disappointingly few of my students seem to have visited it, despite its proximity.

Kew gardens temperate house [CC-BY-2.0 Alex Lomas]

Temperate House (closed for renovations at the time of writing)

My favourite indoor displays at Kew are the two rooms of carnivorous plants in the Princess of Wales Conservatory (don’t miss the newer cloud-forest full of Nepenthes), and the ever-changing contents of the Alpine and Waterlily Houses. The latter often has large clumps of sensitive plants (Mimosa pudica and relatives) to poke. I also enjoy the very Victorian approach to health-and-safety in the walkway at the top of the Palm House.

Kew gardens Nepenthes [CC-BY-SA-3.0 Steve Cook]

Nepenthes robcantleyi

Botanerd highlights.

  1. Play “palm or cycad?” at the two ends of the Palm House.
  2. Flowers are essentially tarts. Prostitutes for the bees. The Princess of Wales Conservatory has a very good selection of huge flower-free ferns and spikemosses (Selaginella).
  3. There’s a gigantic Ginkgo just round the back of the Princess of Wales Conservatory; and the oft ignored (and therefore much less busy) far end of the gardens has a fantastic collection of conifers, including Araucaria, Sequoia, Sequoiadendron, Torreya, Cunninghamia, and Cryptomeria, in addition to all the yews, cypresses, pines, firs, larches, cedars and spruces you can eat.

Kew gardens Araucaria [CC-BY-2.0 Alex Lomas]

Monkey-puzzle (Araucaria araucana) at Kew

London (Chelsea)

Small but perfectly formed, the Chelsea Physic Garden is one of the oldest botanic gardens in the world (Oxford, below, claims the top spot). It specialises in plants used by humans, including (when I last went) a special display of plant fibre ropes. Entry about £10.

Chelsea physic garden ropes [CC-BY-SA-3.0 Steve Cook]

Hemp rope smells nicest and can be drawn very fast over skin without causing friction burns, which is an important consideration for, erm, rigging – yes – rigging

 Botanerd highlights.

  1. Count the number of exciting ways you could get killed by the plants in the Pharmaceutical Garden display.

Edinburgh

I don’t remember the Royal Botanic Garden Edinburgh being this sunny either time we visited, but apparently it was on at least one trip. Unlike Kew, the glasshouses are squidged together, so if the weather’s misbehaving, your fern to rain ratio will be much higher than in London. Unfortunately, like Kew, at the time of writing, some of the glasshouses are shut for renovations. Console yourself with the fact that entry to the gardens themselves is free.

Edinburgh botanical gardens [CC-BY-2.0 Alex Lomas]

Edinburgh botanical gardens

 Botanerd highlights.

  1. Edinburgh is the only place I’ve ever seen a clubmoss (Lycopodium) on display. See if it’s still there.
  2. It’s also the only place I’ve ever seen a Gnetum (picture at the end of this post).
Edinburgh botanical gardens Lycopodium [CC-BY-SA-3.0 Steve Cook]

Lycopodium pinifolia at Edinburgh

Barcelona

Sitting on the slopes of Montjuïc, just below the Olympic Stadium, the Jardí Botànic de Barcelona is my most recent bagging. Unlike the other gardens here, it is entirely outdoors, with no glasshouses, and therefore specialises in plants from Mediterranean-type scrub habitats, like those of Chile, South West Australia and California. Entry fee to the gardens is a very reasonable €3.

Barcelona botanical gardens Xanthorrhoea [CC-BY-SA-3.0 Steve Cook]

Grass-trees (Xanthorrhoea) at the Barcelona botanical gardens

Botanerd highlights.

  1. Australian grass-trees and giant Chilean Puya bromeliads.
  2. As you’re on the side of a hill, the views are also fantastic, and the easiest way to get there is via cable-car, so you get to soar over the local conifers too.

Amsterdam

The photo below of De Hortus Botanicus Amsterdam doesn’t do it justice, but it’s well worth the €8.50 entry. The glasshouses are very well laid out, and they have a very good selection of carnivorous plants, obscure ferns (including Marattia) and cycads.

Amsterdam botanical gardens [CC-BY-2.0 Alex Lomas]

Amsterdam botanical gardens

 Botanerd highlights.

  1. The aforementioned carnivorous plants and obscure ferns.
  2. I wonder what this could possibly be?
Amsterdam botanical gardens cannabis [CC-BY-2.0 Alex Lomas]

The source of the nice rope at Chelsea

Oxford

Claiming to be the oldest botanic garden in the world (and I’ve no reason to doubt them!), the University of Oxford Botanic Garden is a snip at £4.50 entry, and has a good mixture of outdoor beds and glasshouses. The glasshouses are small, but absolutely rammed with stuff, including Pachypodium (below), assorted ferns, jade vines, a lovely Amorphophallus rivieri (well, lovely until you stick your nose over it), but – as it turns out – no Orchis fatalis.

Oxford botanical gardens Pachypodium [CC-BY-SA-3.0 Steve Cook]

Oxford botanical gardens Pachypodium

  Botanerd highlights.

  1. This is the only place I’ve ever seen a Psilotum, which had me squealing with excitement, much to the disdain and bafflement of my long-suffering companion on these trips.
  2. Like the rest of this garden, the carnivorous plant glasshouse crams a lot of variety into a small space.

Berlin

The Botanischer Garten und Botanisches Museum Berlin-Dahlem claims to be the second-largest in the world (after Kew), and now has a dedicated moss garden (which unfortunately post-dates my visit) as well as the usual beds and (extensive) glasshouses. Entry fee is €6.

Berlin botanical gardens [CC-BY-2.0 Alex Lomas]

Berlin botanical gardens

Botanerd highlights.

  1. Several of the botanic gardens mentioned above cultivate Welwitschia mirabilis, a very strange plant from the Namib that grows only two enormous strap-like leaves in its lifetime, but only Berlin seems to have been completely successful: their plants are verdant and frequently in flower (‘in cone’, really, as this plant is closely related to the pines and other conifers).
Berlin botanical gardens Welwitschia [CC-BY-2.0 Alex Lomas]

Welwitschia mirabilis at Berlin

(Dis)honourable mentions

I didn’t quite make it into the San Francisco Botanical Garden, but perhaps one day I’ll return with more time, and having not been recently fleeced at the California Academy of Sciences ($30 entry!)

Brussels has a wholly confusing pair of botanic gardens: the National Botanic Garden of Belgium, which is just north of Brussels, and the Botanical Garden of Brussels, which sits on the real botanic garden’s old site in the middle of the city. I got the former mixed up with the latter, much to my disappointment. The latter is perfectly pleasant, but not really a botanic garden.

Darwin’s House at Down in Kent has a small glasshouse with a good collection of carnivorous plants. Well worth a visit, and a wander down the sandwalk.

Edinburgh botanical gardens Gnetum [CC-BY-SA-3.0 Steve Cook]

Gnetum at Edinburgh

Where next?

In particular, I’d love to know where I can see the following obscure corners of the vegetable empire:

  • Ophioglossum or Botrychium (adder’s-tongue ferns).
  • Hornworts (Anthoceros), and/or a really good moss and liverwort display (preferably closer than Berlin!)
  • Quillworts (Isoetes).
  • Amborella.
  • Utricularia tenella or Utricularia multifida (previously Polypompholyx tenella and Polypompholyx multifida, until Peter Taylor cast the fairy aprons into the eternal darkness of taxonomic obsolescence).

Mar 06

Organism of the week #20 – Don’t point that thing at me

It’s amazing how informative an anus can be.

Take this sea urchin. The orange pucker in the middle of the spines is its “around-the-bum”, although zoologists would insist on writing that in Greek as “periproct”. The bright orange ring-piece is characteristic of this species, and marks it out as Diadema setosum, rather than any of the less rectally blessed species of Diadema.

Diadema setosum [CC-BY-SA-3.0 Steve Cook]

Diadema setosum

The butt-hole of an urchin is actually the second it will own, because urchins go through a metamorphosis that shames even that of a butterfly. The larva of an urchin looks not even a little bit like the adult…

Pluteus larva (Public domain: out-of-copyright edition of the Encyclopædia Britannica)

Pluteus larva of a sea urchin. The adult will develop as a ball inside the larva’s body. The spikes on the larva are nothing to do with the spikes on the adult

…and the adult urchin develops like a well-organised tumour within the body of the larva. For this reason, the adult’s anus is an entirely different hole from the larva’s anus.

The way the larva’s original butt-hole develops from a fertilised egg turns out to be quite revealing. Surprisingly, it marks out sea urchins and their relatives – like sea cucumbers and starfish – as much closer relatives of yours and of other backboned animals than they are of insects or worms or jellyfish, or indeed of pretty much any other animal.

As a fertilised human or sea urchin egg divides, it forms a hollow ball of cells, somewhat like a football. Then, some of the cells on the surface fold in on themselves, forming a shape rather like what you get if you punch your fist into a half-deflated football. The dent drills its way through, and eventually opens out through the other side of the ball. What you end up with is a double-walled tube, with a hole at either end.

Gastrulation [CC-BY-SA-3.0 Steve Cook]

In humans and all other animals with backbones, and in the larva of sea urchins and starfish and sea cucumbers, the first hole – the one formed by the dent – becomes the anus; and the second hole – where the dent punches through to the other side – becomes the mouth.

In most other animals, the first hole becomes the mouth, and the second the anus (pedant alert: I’m glossing over some details here).

Humans and sea urchins develop arse-first. Or mouth-second, as zoologists would prudishly have it, preferably euphemised further by writing it in Greek. Humans and fish, and sea urchins and starfish are all “deuterostomes”.

The development of the chocolate starfish of a starfish and of the asshole of an ass hints at a deep evolutionary connection between two very different groups of animals. Enlightenment can be found in the most unexpected places.

Mar 03

Nonlinear regression

Nonlinear regression is used to see whether one continuous variable is correlated with another continuous variable in a nonlinear way: i.e. when a set of x vs. y data you plan to collect do not form a straight line, but do fall on a curve that can be modelled in some sensible way by a known equation, e.g.

v = \frac{ v_{max} \cdot [S] }{ K_M + [S] }

Some important general considerations for fitting models of this sort include:

  • The model must make physical sense. R (and Excel) can happily stick polynomial curves (e.g. a cubic like y = ax³ + bx² + cx + d) through a data set, but fitting random equations through data is a pointless exercise, as the values of a, b, c and d are meaningless and do not relate to some useful quantity that characterises the behaviour of the data. If you want to fit a curve to a data set, it has to be a curve (and therefore an equation) you’ve chosen because it estimates something meaningful (e.g. the Michaelis constant, KM).
  • There must be enough data points. In general, you cannot fit a useful model of n parameters to a data set smaller than n+1 in size. In linear regression, you cannot fit a slope and an intercept (2 parameters) to just one datum, as there are an infinite number of lines that pass through a single point and no way to choose between them. You shouldn’t fit a 2 parameter model to 2 data points as this doesn’t buy you anything: your model is at least as complex as a simple list of the two data values. The Michaelis-Menten model has two parameters, so you need at least three concentrations of S, and preferably twice this. As ever, collect the data needed for the analysis you plan to do; don’t just launch into collecting the data and then wonder how you will analyse it, because often the answer will be “with great difficulty” or “not at all”.
  • The data set should aim to cover the interesting span of the response, even if you don’t really know what that span is. A linear series of concentrations of S is likely to miss the interesting bit of an enzyme kinetic curve (around KM) unless you have done some preliminary experiments. Those preliminary experiments will probably need to use a logarithmic series of concentrations, as this is much more likely to span the interesting bit. This is particularly important in dose/response experiments: use a concentration series like 2, 4, 8, 16…mM, or 10, 100, 1000, 10000…µM, rather than 2, 4, 6, 8, 10…mM or 10, 20, 30, 40…µM; but bear in mind the saturated solubility (and your own safety!) when choosing whether to use a base-2, a base-10, or base-whatever series. (A quick sketch of generating such series follows this list.)
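In R, such series are one-liners. This is a minimal sketch with purely illustrative numbers, not a recommendation for any particular assay:

# Linear series: easy to generate, but likely to miss the interesting region
linear.series <- seq( from = 2, to = 10, by = 2 )  # 2, 4, 6, 8, 10 mM

# Logarithmic (base-2 and base-10) series: better coverage of a wide response range
log2.series  <- 2^( 1:5 )                          # 2, 4, 8, 16, 32 mM
log10.series <- 10^( 1:4 )                         # 10, 100, 1000, 10000 µM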

The data in enzyme_kinetics.csv gives the velocity, v, of the enzyme acid phosphatase (µmol min−1) at different concentrations of a substrate called nitrophenyl phosphate (NPP), [S] (mM). The data can be modelled using the Michaelis-Menten equation given at the top of this post, and nonlinear regression can be used to estimate KM and vmax without having to resort to the Lineweaver-Burk linearisation.

In R, nonlinear regression is implemented by the function nls(). It requires three arguments. These are:

  • The equation you’re trying to fit
  • The data-frame to which it’s trying to fit the model
  • A vector of starting estimates for the parameters it’s trying to estimate

Fitting a linear model (like linear regression or ANOVA) is an analytical method. It will always yield a globally optimal solution, i.e. a ‘perfect’ line of best fit, because under the hood, all that linear regression is doing is finding the minimum on a curve of residuals vs. slope, which is a matter of elementary calculus. However, fitting a nonlinear model is a numerical method. Under the hood, R uses an iterative algorithm rather than a simple equation, and as a result, it is not guaranteed to find the optimal curve of best fit. It may instead get “jammed” on a local optimum. The better the starting estimates you can give to nls(), the less likely it is to get jammed, or – indeed – to charge off to infinity and not fit anything at all.

For the equation at the start of this post, the starting estimates are easy to estimate from the plot:

enzyme.kinetics<-read.csv( "H:/R/enzyme_kinetics.csv" )
plot(
    v ~ S,
    data = enzyme.kinetics,
    xlab = "[S] / mM",
    ylab = expression(v/"µmol " * min^-1),
    main = "Acid phosphatase saturation kinetics"
)

Acid phosphatase saturation kinetics scatterplot [CC-BY-SA-3.0 Steve Cook]

The horizontal asymptote is about v=9 or so, and therefore KM (value of [NPP] giving vmax/2) is about 2.
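If you would rather not eyeball the plot, rough starting estimates can also be pulled straight from the data. This is only a sketch, reusing the enzyme.kinetics data frame loaded above:

# The largest observed v approximates vmax
vmax.guess <- max( enzyme.kinetics$v )

# The [S] whose v is closest to vmax/2 approximates KM
KM.guess <- enzyme.kinetics$S[ which.min( abs( enzyme.kinetics$v - vmax.guess/2 ) ) ]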

The syntax for nls() on this data set is:

enzyme.model<-nls(
    v ~ vmax * S /( KM + S ),
    data  = enzyme.kinetics,
    start = c( vmax=9, KM=2 )
)

The parameters in the equation you are fitting using the usual ~ tilde syntax can be called whatever you like as long as they meet the usual variable naming conventions for R. The model can be summarised in the usual way:

summary( enzyme.model )
Formula: v ~ vmax * S/(KM + S)
Parameters:
     Estimate Std. Error t value Pr(>|t|)    
vmax 11.85339    0.05618  211.00 7.65e-13 ***
KM    3.34476    0.03860   86.66 1.59e-10 ***
---
Signif. codes: 
0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 

Residual standard error: 0.02281 on 6 degrees of freedom
Number of iterations to convergence: 4 
Achieved convergence tolerance: 6.044e-07

The curve-fitting has worked (it has converged after 4 iterations, rather than going into an infinite loop), the estimates of KM and vmax are not far off what we made them by eye, and both are significantly different from zero.

If instead you get something like this:

Error in nls( y ~ equation.blah.blah.blah, ): singular gradient

this means the curve-fitting algorithm has choked and charged off to infinity. You may be able to rescue it by trying nls(…) again, but with different starting estimates. However, there are some cases where the equation simply will not fit (e.g. if the data are nothing like the model you’re trying to fit!) and some pathological cases where the algorithm can’t settle on an estimate. Larger data sets are less likely to have these problems, but sometimes there’s not much you can do aside from trying to fit a simpler equation, or estimating some rough-and-ready parameters by eye.
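If you want to automate the retrying, one option is to wrap nls() in try(), so that a script can work through several starting estimates without dying on the first singular gradient error. A sketch, with arbitrary illustrative starting values:

# Try a series of vmax starting estimates until one converges
for ( vmax.start in c( 5, 9, 15 ) ) {
    enzyme.model <- try(
        nls(
            v ~ vmax * S /( KM + S ),
            data  = enzyme.kinetics,
            start = c( vmax = vmax.start, KM = 2 )
        ),
        silent = TRUE
    )
    if ( ! inherits( enzyme.model, "try-error" ) ) break  # success: keep this model
}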

If you do get a model, you can use it to predict the y values for the x values you have, and use that to add a curve of best fit to your data:

# Create 100 evenly spaced S values in a one-column data frame headed 'S'
predicted.values <- data.frame( 
    S = seq( 
       from       = min( enzyme.kinetics$S ),
       to         = max( enzyme.kinetics$S ),
       length.out = 100
    )
)

# Use the fitted model to predict the corresponding v values, and assign them
# to a second column labelled 'v' in the data frame predicted.values
predicted.values$v <- predict( enzyme.model, newdata=predicted.values )

# Add these 100 x,y data points to the graph, joined up by 99 line-segments
# This will look like a curve at the resolution of the graph
lines( v ~ S, data=predicted.values )

Acid phosphatase nonlinear regression model [CC-BY-SA-3.0 Steve Cook]

If you need to extract a value from the model, you need coef():

vmax.estimate <- coef( enzyme.model )[1]
KM.estimate   <- coef( enzyme.model )[2]

This returns the first and second coefficients out of the model, in the order listed in the summary().
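Since coef() returns a named vector, with names taken from the parameters in your formula, extracting by name is arguably safer than extracting by position:

vmax.estimate <- coef( enzyme.model )[ "vmax" ]
KM.estimate   <- coef( enzyme.model )[ "KM" ]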

Exercises

Try fitting non-linear regressions to the data sets below

  1. Estimate the doubling time from the bacterial data set in bacterial_growth.csv. N is the optical density of bacterial cells at time t; N0 is the OD at time 0 (you’ll need to estimate N0 too: it’s ‘obviously’ 0.032, but this might have a larger experimental error than the rest of the data set). t is the time (min); and td is the doubling time (min).
N = N_0 e^{ \frac{ \ln{2} }{ t_d } \cdot t }
  2. The response of a bacterium’s growth rate constant μ to an increasing concentration of a toxin, [X], is often well-modelled by a sigmoidal equation of the form below, with a response that depends on the log of the toxin concentration. The file dose_response.csv contains data on the response of the growth constant μ (“mu” in the file, hr−1) in Enterobacter sp. to the concentration of ammonium ions, [NH4+] (“C” in the file, µM). The graph below shows this data plotted with a nonlinear regression curve. Reproduce it, including the superscripts, Greek letters, italics, etc., in the axis titles; and the smooth curve predicted from the fitted model.
\mu = \frac{ \mu_{max} }{ 1 + e^{ - \frac{ (\ln{[X]} - \ln{IC_{50}} ) }{ s } } }
  • μ is the exponential phase growth constant (hr−1) at any given concentration of X.
  • μmax is the maximum value that μ takes, i.e. the value of μ when the concentration of toxin, [X], is zero. The starting estimate should be whatever the curve seems to be flattening to on the left.
  • IC50 is the 50% inhibitory concentration, i.e. the concentration of X needed to reduce μ from μmax to ½μmax. The starting estimate of ln IC50 should be the natural log of the value of [X] that gives you half the maximum growth rate.
  • s is the shape parameter: if s is small the curve drops off sharply like a cliff-edge; if s is large, the curve slopes more gently. If s is negative, the curve is Z-shaped; if s is positive, the curve is S-shaped. Its starting estimate should be (ln IC25 − ln IC75) / 2, where IC25 is the concentration needed to reduce μ by 25%, and IC75 is the concentration needed to reduce μ by 75%. If you wish to prove this to yourself, pretend that e=3 rather than 2.718… and work out what [X] has to be to give you this value.
  • Note the x-variable in the equation and on the plot is the natural logarithm of [X], so you’ll need to use log(C) in your calls to plot() and nls().

Dose-response nonlinear regression model [CC-BY-SA-3.0 Steve Cook]

Answers

  1. The doubling time td is around 25 min.
bacterial.growth<-read.csv( "H:/R/bacterial_growth.csv" )
plot(
    N    ~ t,
    data = bacterial.growth,
    xlab = "N",
    ylab = "t / min",
    main = "Bacteria grow exponentially"
)

head( bacterial.growth )
   t    N
1  0 0.032
2 10 0.046
…

We estimate the parameters from the plot: N0 is obviously 0.032 from the data above; and the plot (below) shows it takes about 20 min for N to increase from 0.1 to 0.2, so td is about 20 min.

bacterial.model<-nls(
    N     ~ N0 * exp( (log(2) / td ) * t),
    data  = bacterial.growth,
    start = c( N0 = 0.032, td = 20 )
)
summary( bacterial.model )
Formula: N ~ N0 * exp((log(2) / td) * t)
Parameters:
    Estimate Std. Error t value Pr(>|t|)    
N0 3.541e-02  6.898e-04   51.34 2.30e-11 ***
td 2.541e+01  2.312e-01  109.91 5.25e-14 ***
---
Signif. codes: 
0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 
Residual standard error: 0.002658 on 8 degrees of freedom
Number of iterations to convergence: 5 
Achieved convergence tolerance: 1.052e-06

This bit adds the curve to the plot below.

predicted.values <- data.frame( 
    t = seq( 
       from       = min( bacterial.growth$t ),
       to         = max( bacterial.growth$t ),
       length.out = 100
    )
)
predicted.values$N <- predict( bacterial.model, newdata = predicted.values )
lines( N ~ t, data = predicted.values )

Bacterial growth nonlinear model [CC-BY-SA-3.0 Steve Cook]

  2. Dose/response model and graph
dose.response<-read.csv( "H:/R/dose_response.csv" )
plot(
    mu   ~ log(C),
    data = dose.response,
    xlab = expression("ln" * "[" * NH[4]^"+" * "]"
/ µM),
    ylab = expression(mu/hr^-1),
    main = expression("Sigmoidal dose/response to ammonium ions in " *
italic("Enterobacter"))
)

The plot indicates that the starting parameters should be

  • μmax ≈ 0.7
  • ln IC50 ≈ ln(3), because IC50 is the value of [X] corresponding to μmax/2 ≈ 0.35, and this is about 3. You can see this by inspecting the raw data (the mid μ value is caused by an [X] somewhere between 2 and 4), or by looking at the plot (where the mid μ value corresponds to a ln([X]) of about 1). We’ll call ln(IC50) logofIC50 in the formula to nls() below so it’s clear this is a parameter we’re estimating, not a function we’re calling.
  • s ≈ −0.7, because s ≈ (ln IC25 − ln IC75) / 2 ≈ (ln(1.5) − ln(6)) / 2 ≈ (0.4 − 1.8) / 2. IC25 is the concentration needed to reduce μ by 25% (μ = 0.525, corresponding to about 1.5 µM), and IC75 is the concentration needed to reduce μ by 75% (μ = 0.175, corresponding to about 6 µM).
dose.model <- nls(
    mu    ~ mumax / ( 1 + exp( -( log(C) - logofIC50 ) / s ) ),
    data  = dose.response,
    start = c( mumax=0.7, logofIC50=log(3), s=-0.7 )
)
summary( dose.model )
Formula: mu ~ mumax/(1 + exp(-(log(C) - logofIC50)/s))
Parameters:
          Estimate Std. Error t value Pr(>|t|)    
mumax     0.695369   0.006123  113.58 1.00e-09 ***
logofIC50 1.088926   0.023982   45.41 9.78e-08 ***
s        -0.509478   0.020415  -24.96 1.93e-06 ***
---
Signif. codes: 
0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 
Residual standard error: 0.00949 on 5 degrees of freedom
Number of iterations to convergence: 5 
Achieved convergence tolerance: 6.394e-06

This lists the estimates of μmax, ln(IC50) and s, plus the standard error of these estimates and their p values. Note that they’re all pretty close to what we guessed in the first place, so we can be fairly sure they’re good estimates. To get the actual value of IC50 from the model:

exp( coef(dose.model)[2] )

This gives 2.97 µM, which corresponds well with our initial guess.
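If you also want a confidence interval on IC50, a rough Wald-type interval can be back-transformed from the standard error of logofIC50 in the summary table. This is a sketch of the general approach (1.96 being the usual normal quantile), not part of the original analysis:

# Wald-type 95% CI for ln(IC50), back-transformed onto the µM scale
logofIC50.est <- coef( dose.model )[ "logofIC50" ]
logofIC50.se  <- summary( dose.model )$coefficients[ "logofIC50", "Std. Error" ]
exp( logofIC50.est + c( -1.96, 1.96 ) * logofIC50.se )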

To add the curve, we do the same as before:

predicted.values <- data.frame( 
    C = seq( 
       from       = min( dose.response$C ),
       to         = max( dose.response$C ),
       length.out = 100
    )
)
predicted.values$mu <- predict( dose.model, newdata = predicted.values )
lines( mu ~ log(C), data=predicted.values )

Feb 24

Analysis of variance: ANOVA (2 way)

The technique for a one-way ANOVA can be extended to situations where there is more than one factor, or – indeed – where there are several factors with several levels each, which may have synergistic or antagonistic effects on each other.

In the models we have seen so far (linear regression, one-way ANOVA) all we have really done is tested the difference between a null model (“y is a constant”, “y=a”) and a single alternative model (“y varies by group” or “y=a+bx”) using an F test. However, in two-way ANOVA there are several possible models, and we will probably need to proceed through some model simplification from a complex model to a minimally adequate model.

The file wheat_yield.csv contains data on the yield (tn ha−1) of wheat from a large number of replicate plots that were either unfertilised, given nitrate alone, phosphate alone, or both forms of fertiliser. This requires a two-factor two-level ANOVA.

wheat.yield<-read.csv( "H:/R/wheat_yield.csv" )
interaction.plot(
    response     = wheat.yield$yield,
    x.factor     = wheat.yield$N,
    trace.factor = wheat.yield$P
)

As we’re plotting two factors, a box-and-whisker plot would make no sense, so instead we plot an interaction plot. It doesn’t particularly matter here whether we use N(itrate) or P(hosphate) as the x.factor (i.e. the thing we plot on the x-axis), with the other as the trace.factor (i.e. the thing we plot two different trace lines for):

Wheat yield interaction plot [CC-BY-SA-3.0 Steve Cook]

You’ll note that the addition of nitrate seems to increase yield: both traces slope upwards from the N(o) to Y(es) level on the x-axis, which represents the nitrate factor. From the lower trace, it appears addition of just nitrate increases yield by about 1 tn ha−1.

You’ll also note that the addition of phosphate seems to increase yield: the Y(es) trace for phosphate is higher than the N(o) trace for phosphate. From comparing the upper and lower traces at the left (no nitrate), it appears that addition of just phosphate increases yield by about 2 tn ha−1.

Finally, you may notice there is a positive (synergistic) interaction. The traces are not parallel, and the top-right ‘point’ (Y(es) to both nitrate and phosphate) is higher than you would expect from additivity: the top-right is maybe 4, rather than 1+2=3 tn ha−1 higher than the completely unfertilised point.
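You can check this additivity argument against the raw cell means directly. A quick sketch reusing the wheat.yield data frame loaded above (the exact numbers will depend on the data file):

# Mean yield for each of the four N/P combinations
with( wheat.yield, tapply( yield, list( N = N, P = P ), mean ) )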

We suspect there is an interaction, this interaction is biologically plausible, and we have 30 samples in each of the four treatments. We fit a two-factor (two-way) ANOVA maximal model, to see whether this interaction is significant.

First, we fit the model using N*P to fit the ‘product’ of the nitrate and phosphate factors, i.e.

wheat.model<-aov(yield ~ N*P, data=wheat.yield )
anova( wheat.model )
Analysis of Variance Table
Response: yield       
           Df  Sum Sq Mean Sq  F value   Pr(>F)    
N           1  83.645  83.645  43.3631   9.122e-10 ***
P           1 256.136 256.136 132.7859   < 2.2e-16 ***
N:P         1  11.143  11.143   5.7767   0.01759 *  
Residuals 136 262.336   1.929                       
---
Signif. codes: 
0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

N shows the effect of nitrate, P the effect of phosphate, and N:P is the interaction term. As you can see, this interaction term appears to be significant: the nitrate+phosphate combination seems to give a higher yield than you would expect from just adding up the individual effects of nitrate and phosphate alone. To test this explicitly, we can fit a model that lacks the interaction term, using N+P to fit the ‘sum’ of the factors without the N:P term:

wheat.model.no.interaction<-aov(yield ~ N+P, data=wheat.yield )
anova( wheat.model.no.interaction )
Analysis of Variance Table
Response: yield         
          Df  Sum Sq Mean Sq F value    Pr(>F)   
N          1  83.645  83.645  41.902    1.583e-09 ***
P          1 256.136 256.136 128.312    < 2.2e-16 ***
Residuals 137 273.479   1.996                      
---
Signif. codes: 
0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

We can then use the anova() function with two arguments to compare these two models with an F test:

anova( wheat.model, wheat.model.no.interaction )
Analysis of Variance Table
Model 1: yield ~ N * P
Model 2: yield ~ N + P
  Res.Df    RSS Df Sum of Sq      F  Pr(>F)  
1    136 262.34        
2    137 273.48 -1   -11.143 5.7767 0.01759 *
---
Signif. codes: 
0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

An alternative way of doing the same thing is to update the maximal model by deletion. First you fit the maximal model, as before:

wheat.model<-aov(yield ~ N*P, data=wheat.yield )
anova( wheat.model )

You note the least significant term (largest p value) is the N:P term, and you selectively remove that from the model to produce a revised model with update():

wheat.model.no.interaction<-update( wheat.model, ~.-N:P)
anova( wheat.model, wheat.model.no.interaction )

The ~.-N:P is shorthand for “fit (~) the current wheat.model (.) minus (-) the interaction term (N:P)”. This technique is particularly convenient when you are iteratively simplifying a model with a larger number of factors and a large number of interactions.

Whichever method we use, we note that the deletion of the interaction term significantly reduces the explanatory power of the model, and therefore that our minimally adequate model is the one including the interaction term.

If you have larger numbers of factors (say a, b and c) each with a large number of levels (say 4, 2, and 5), it is possible to fit a maximal model (y~a*b*c), and to simplify down from that. However, fitting a maximal model in this case would involve estimating 40 separate parameters, one for each combination of the 4 levels of a, with the 2 levels of b, and the 5 levels of c. It is unlikely that your data set is large enough to make a model of 40 parameters a useful simplification compared to the raw data set itself. It would even saturate it if you have just one datum for each possible {a,b,c} combination. Remember that one important point of a model is to provide a simplification of a large data set. If you want to detect an interaction between factors a and b reliably, you need enough data to do so. In an experimental situation, this might impact on whether you actually want to make c a variable at all, or rather to control it instead, if this is possible.
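To see how quickly a maximal model eats parameters, you can count the columns of its design matrix. The sketch below builds a hypothetical 4 × 2 × 5 factorial with one (random) datum per cell, which the maximal model saturates exactly:

# Hypothetical 4 x 2 x 5 factorial: 40 cells, one datum each
toy <- expand.grid(
    a = factor( 1:4 ),
    b = factor( 1:2 ),
    c = factor( 1:5 )
)
toy$y <- rnorm( nrow( toy ) )

# One column per estimated parameter: 40, i.e. as many parameters as data points
ncol( model.matrix( y ~ a*b*c, data = toy ) )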

Exercises

Analyse the following data set with ANOVA

  1. The file glucose_conc.csv contains fasting blood serum glucose levels (mg dL−1) for humans homozygous for either a mutant allele or the wild-type allele at two loci (Rtfm and Stfw) thought to be involved in blood glucose control. Do the data support a correlation between possession of the mutant alleles and elevated glucose levels?

Answers

  1. Glucose concentrations
glucose.conc<-read.csv( "H:/R/glucose_conc.csv" )
interaction.plot(
    response     = glucose.conc$conc,
    x.factor     = glucose.conc$Stfw,
    trace.factor = glucose.conc$Rtfm
)

Glucose interaction plot [CC-BY-SA-3.0 Steve Cook]

The lines are parallel, so there seems little evidence of interaction between the loci. The difference between the mutant and wildtypes for the Rtfm locus doesn’t look large, and may not be significant. However, the Stfw wildtypes seem to have better control of blood glucose. A two-way ANOVA can investigate this:

glucose.model<-aov( conc ~ Stfw*Rtfm, data = glucose.conc )
anova( glucose.model )
Analysis of Variance Table
Response: conc
           Df Sum Sq Mean Sq  F value  Pr(>F)   
Stfw        1 133233  133233 311.4965 < 2e-16 ***
Rtfm        1   1464    1464   3.4238 0.06526 .  
Stfw:Rtfm   1     17      17   0.0407 0.84020    
Residuals 296 126605     428                     
---
Signif. codes: 
0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

As we suspected, the Stfw:Rtfm interaction is not significant. We will remove it by deletion:

glucose.model.no.interaction<-update( glucose.model, ~.-Stfw:Rtfm )
anova( glucose.model, glucose.model.no.interaction )
Analysis of Variance Table

Model 1: conc ~ Stfw * Rtfm
Model 2: conc ~ Stfw + Rtfm
  Res.Df    RSS Df Sum of Sq      F Pr(>F)
1    296 126605                           
2    297 126622 -1   -17.421 0.0407 0.8402

The p value is much larger than 0.05, so the model including the interaction term is not significantly better than the one excluding it.

The reduced model is now:

anova(glucose.model.no.interaction )
Analysis of Variance Table
Response: conc
           Df Sum Sq Mean Sq  F value  Pr(>F)   
Stfw        1 133233  133233 312.5059 < 2e-16 ***
Rtfm        1   1464    1464   3.4349 0.06482 .  
Residuals 297 126622     426                     
---
Signif. codes: 
0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

When we remove the interaction term, its variance is redistributed to the remaining factors, which will change their values compared to the maximal model we fitted at the start. It appears that the wildtype and mutants of Rtfm do not significantly differ in their glucose levels, so we will now remove that term too:

glucose.model.stfw.only<-update( glucose.model.no.interaction, ~.-Rtfm )
anova( glucose.model.no.interaction, glucose.model.stfw.only )
Analysis of Variance Table
Model 1: conc ~ Stfw + Rtfm
Model 2: conc ~ Stfw
  Res.Df    RSS Df Sum of Sq      F  Pr(>F)  
1    297 126622                              
2    298 128087 -1   -1464.4 3.4349 0.06482 .
---
Signif. codes: 
0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Again, we detect no significant difference between the models, so we accept the simpler one (Occam’s razor).

anova( glucose.model.stfw.only )
Analysis of Variance Table

Response: conc
           Df Sum Sq Mean Sq F value    Pr(>F)    
Stfw        1 133233  133233  309.97 < 2.2e-16 ***
Residuals 298 128087     430                      
---
Signif. codes: 
0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The Stfw locus does have a very significant effect on blood glucose levels. We could try deletion testing Stfw too, but here it is not really necessary, as the ANOVA table above is comparing the Stfw-only model to the null model in any case. To determine what the effect of Stfw is, we can use a Tukey test:

TukeyHSD( glucose.model.stfw.only )
  Tukey multiple comparisons of means
    95% family-wise confidence level
Fit: aov(formula = conc ~ Stfw, data = glucose.conc)
$Stfw
                     diff     lwr       upr p adj
wildtype-mutant -42.14783 -46.859 -37.43666     0

… or, as this model has in fact simplified down to the two-levels of one factor, this is equivalent to just doing a t test at this point:

t.test( conc ~ Stfw, data = glucose.conc)
        Welch Two Sample t-test
data:  conc by Stfw
t = 17.6061, df = 289.288, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 37.43609 46.85958
sample estimates:
  mean in group mutant mean in group wildtype 
             123.87824               81.73041

Given this analysis, the best way to represent our data is probably a simple boxplot ignoring Rtfm, with an explanatory legend:

boxplot(
    conc ~ Stfw,
    data = glucose.conc,
    xlab = expression( italic(Stfw) ),
    ylab = expression( "[Glucose] / mg "*dL^-1 ),
    main = "Mutants at the Stfw locus are less able\nto control their blood glucose levels"
)

Glucose boxplot [CC-BY-SA-3.0 Steve Cook]

Homozygous mutants at the Stfw locus were found to be less well able to control their blood glucose levels (F=310, p≪0.001). Homozygous mutation of the Rtfm locus was found to have no significant effect on blood glucose levels (F on deletion from the ANOVA model = 3.4, p=0.06). Homozygous mutation at the Stfw locus was associated with a blood glucose level 42 mg dL−1 (95% CI: 37.4…46.9) higher than the wildtype (t=17.6, p≪0.001).

Next up… Nonlinear regression.

Feb 24

Analysis of variance: ANOVA (1 way)

Analysis of variance is the technique to use when you might otherwise be considering a large number of pairwise F and t tests, i.e. where you want to know whether a factor with more than 2 levels is a useful predictor of a dependent variable.
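
To see why the pairwise approach is problematic: with three comparisons, each at α=0.05, the chance of at least one false positive is already about 14% (assuming, for this back-of-envelope sketch, that the tests are independent):

# Family-wise false-positive rate for three independent tests at alpha = 0.05
1 - ( 1 - 0.05 )^3
[1] 0.142625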

For example, cuckoo_eggs.csv contains data on the length of cuckoo eggs laid into different host species, the (meadow) pipit, the (reed) warbler, and the wren.

A box-and-whisker plot is a useful way to view data of this sort:

cuckoo.eggs<-read.csv( "H:/R/cuckoo_eggs.csv" )
boxplot( 
    egg.length ~ species,
    data = cuckoo.eggs,
    xlab = "Host species",
    ylab = "Egg length / mm"
)

Cuckoo eggs boxplot [CC-BY-SA-3.0 Steve Cook]

You might be tempted to try t testing each pairwise comparison (pipit vs. wren, warbler vs. pipit, and warbler vs. wren), but a one-factor analysis of variance (ANOVA) is what you actually want here. ANOVA works by fitting individual means to the three levels (warbler, pipit, wren) of the factor (host species) and seeing whether this results in a significantly smaller residual variance than fitting a simple overall mean to the entire data set.

Conceptually, this is very similar to what we did with linear regression: ANOVA compares the residuals of the model represented by the “y is a constant” graph below:

Cuckoo eggs null model [CC-BY-SA-3.0 Steve Cook]

…with a model where three individual means have been fitted, the “y varies by group” model:

Cuckoo eggs ANOVA model [CC-BY-SA-3.0 Steve Cook]

It’s not immediately obvious that fitting three separate means has bought us much: the model is more complicated, but the length of the red lines doesn’t seem to have changed hugely. However, R can tell us precisely whether or not this is the case. The syntax for categorical model fitting is aov(), for analysis of variance:

aov( egg.length ~ species, data=cuckoo.eggs )
Call:
   aov(formula = egg.length ~ species, data = cuckoo.eggs)
Terms:
                 species Residuals
Sum of Squares  35.57612  55.85047
Deg. of Freedom        2        57
Residual standard error: 0.989865
Estimated effects may be unbalanced

As with linear regression, you may well wish to save the model for later use. It is traditional to display the results of an ANOVA in tabular format, which can be produced using anova():

cuckoo.model<-aov( egg.length ~ species, data=cuckoo.eggs )
anova( cuckoo.model )
Analysis of Variance Table
Response: egg.length
          Df Sum Sq Mean Sq F value    Pr(>F)    
species    2 35.576 17.7881  18.154 7.938e-07 ***
Residuals 57 55.850  0.9798                      
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The “y is a constant” model has only one variance associated with it: the total sum of squares of the deviances from the overall mean (SST, or SSY, whichever you prefer) divided by the degrees of freedom in the data set (n−1).

The “y varies by group” model decomposes this overall variance into two components.

  • The first component is associated with categorising the data into k groups (k=3 here). It is the sum of squares of the deviances of each datum’s group-specific mean from the grand mean (SSA), divided by the number of degrees of freedom those k means have, (k−1). This represents the between-group variation in the data.
  • The second component is the residual error remaining, which is the sum of squares of the deviances of each datum from its group-specific mean (SSE), divided by the degrees of freedom left after estimating those k individual means, (n−k). This represents the within-group variation in the data. (A worked sketch of this decomposition follows this list.)
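
Here is that sketch: it computes the sums of squares by hand and rebuilds the F value from them (this assumes the cuckoo.eggs data frame loaded above; the variable names are invented for illustration):

grand.mean<-mean( cuckoo.eggs$egg.length )
group.means<-tapply( cuckoo.eggs$egg.length, cuckoo.eggs$species, mean )
SST<-sum( ( cuckoo.eggs$egg.length - grand.mean )^2 )                          # total
SSE<-sum( ( cuckoo.eggs$egg.length - group.means[ cuckoo.eggs$species ] )^2 )  # within groups
SSA<-SST - SSE                                                                 # between groups
n<-nrow( cuckoo.eggs )
k<-length( unique( cuckoo.eggs$species ) )
( SSA / ( k - 1 ) ) / ( SSE / ( n - k ) )                                      # F value: cf. 18.154 in the table above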

In the ANOVA table, the Sum Sq is the sum of the squares of the deviances of the data points from a mean. The Df is the degrees of freedom, and the Mean Sq is the sum of squares divided by the degrees of freedom, which is the corresponding variance.

The F value is the mean-square for species divided by the mean-square of the Residuals. The p value indicates that categorising the data into three groups does make a significant difference to explaining the variance in the data, i.e. estimating a separate mean for each host species, rather than one grand mean, does make a significant difference to how well we can explain the data. The length of the eggs the cuckoo lays does vary by host species.

Compare this with linear regression, where you’re trying to find out whether y=a+bx is a better model of the data than y=ȳ. This is very similar to ANOVA, where we are trying to find out whether “y varies by group, i.e. the levels of a factor” is a better model than “y is a constant, i.e. the overall mean”.

You might well now ask “but which means are different?” This can be investigated using TukeyHSD() (honest significant differences) which performs a (better!) version of the pairwise t test you were probably considering at the top of this post.

TukeyHSD( cuckoo.model )
  Tukey multiple comparisons of means
    95% family-wise confidence level
Fit: aov(formula = egg.length ~ species, data = cuckoo.eggs)
$species
                 diff        lwr        upr    p adj
Warbler-Pipit -1.5395 -2.2927638 -0.7862362 0.000023
Wren-Pipit    -1.7135 -2.4667638 -0.9602362 0.000003
Wren-Warbler  -0.1740 -0.9272638  0.5792638 0.843888

This confirms that – as the box-and-whisker plots suggested – the eggs laid in pipit and warbler nests are not significantly different in size, but those laid in wren nests are significantly smaller than those in the warbler or pipit nests.
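
Out of interest, R’s built-in pairwise.t.test() runs all the pairwise t tests you were tempted by, with a correction for multiple comparisons (Holm’s, by default); a sketch, for comparison with the Tukey output above:

pairwise.t.test( cuckoo.eggs$egg.length, cuckoo.eggs$species )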

ANOVA has the same sorts of assumptions as the F and t tests and as linear regression: normality of residuals, homoscedasticity, representativeness of the sample, no error in the treatment variable, and independence of data points. You should therefore use the same checks after model fitting as you used for linear regression:

plot( residuals( cuckoo.model ) ~ fitted( cuckoo.model ) )


Cuckoo eggs residuals [CC-BY-SA-3.0 Steve Cook]

We do not expect a starry sky in the residual plot, as the fitted data are in three discrete levels. However, if the residuals are homoscedastic, we expect them to be of similar spread in all three treatments, i.e. the residuals shouldn’t be more scattered around the zero line in the pipits than in the wrens, for example. This plot seems consistent with that.
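
If you would rather back up this eyeballing with a formal test, Bartlett’s test of homogeneity of variances is one option (a sketch; a large p value is consistent with homoscedasticity):

bartlett.test( egg.length ~ species, data = cuckoo.eggs )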

qqnorm( residuals( cuckoo.model ) )

Cuckoo eggs normal QQ plot [CC-BY-SA-3.0 Steve Cook]

On the normal Q-Q plot, we do expect a straight line, which – again – we appear to have (although it’s a bit jagged, and a tiny bit S-shaped). We can accept that the residuals are more-or-less normal, and therefore that the analysis of variance was valid.
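
Similarly, the Shapiro–Wilk test offers a formal check of the normality of the residuals (again just a sketch; a large p value is consistent with normality):

shapiro.test( residuals( cuckoo.model ) )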

Exercises

Analyse the following data set with ANOVA:

  1. The file venus_flytrap.csv contains data on the wet biomass (g) of Venus flytraps prevented from catching flies (control), allowed to catch flies, or prevented from catching flies but instead given fertiliser applied to the soil. Does the feeding treatment significantly affect the biomass? If so, which of the three means differ significantly, and in what directions? Have a look at the residuals in the same way as you did for the regression. How do they look?

Answers

  1. Venus flytrap biomass data
venus.flytrap<-read.csv("H:/R/venus_flytrap.csv")
plot(
    biomass ~ feed,
    data = venus.flytrap,
    xlab = "Feeding treatment",
    ylab = "Wet biomass / g",
    main = expression("Venus flytrap feeding regime affects wet biomass")
)

Venus flytrap boxplot [CC-BY-SA-3.0 Steve Cook]

flytrap.model<-aov( biomass ~ feed, data = venus.flytrap )
anova( flytrap.model )
Analysis of Variance Table
Response: biomass
         Df  Sum Sq Mean Sq F value    Pr(>F)   
feed      2  5.8597 2.92983  22.371 1.449e-08 ***
Residuals 87 11.3939 0.13096                      
---
Signif. codes: 
0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
TukeyHSD( flytrap.model )
  Tukey multiple comparisons of means
    95% family-wise confidence level
Fit: aov(formula = biomass ~ feed, data = venus.flytrap)
$feed
                         diff         lwr        upr     p adj
fertiliser-control -0.3086667 -0.53147173 -0.0858616 0.0039294
fly-control         0.3163333  0.09352827  0.5391384 0.0030387
fly-fertiliser      0.6250000  0.40219493  0.8478051 0.0000000

The feeding treatment makes a significant difference to the wet biomass (F=22.4, p=1.45 × 10−8), and a Tukey HSD test shows that all three means differ significantly: the fly treatment has a beneficial effect (on average, fly-treated plants are 0.32 g heavier than control plants), whereas fertiliser has an actively negative effect on growth, with these plants on average being 0.31 g lighter than the controls.
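
Incidentally, TukeyHSD objects have a plot method that draws the pairwise differences with their 95% confidence intervals; intervals that do not cross zero correspond to significant differences:

plot( TukeyHSD( flytrap.model ) )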

To plot the residuals, we use the same code as for the linear regression:

plot( residuals(flytrap.model) ~ fitted(flytrap.model) )

Venus flytrap residuals [CC-BY-SA-3.0 Steve Cook]

There is perhaps a bit more scatter in the residuals of the control (the ones in the middle), but nothing much to worry about.

qqnorm( residuals( flytrap.model ) )

Venus flytrap normal QQ plot [CC-BY-SA-3.0 Steve Cook]

On the normal Q-Q plot, we do expect a straight line, which – again – we appear to have. We can accept that the residuals are essentially homoscedastic and normal, and therefore that the analysis of variance was valid.

Next up… Two-way ANOVA.
