Population biologists often want to infer the demographic history of the species they study, including identifying population subdivision, expansion, and bottlenecks. Genetic data sampled from multiple individuals can be used to study population structure. When phylogenetic methods are used to link evolutionary relationships to geography, the approaches fall under the rubric of phylogeography.
The past decade has seen the rise in popularity of a particular phylogeographical approach for intra-specific data: nested clade analysis (Templeton et al. 1995; Templeton 2004). Many of the methods used in intra-specific phylogeography have been called into question because of their lack of statistical rigor, as I have described previously (How do you really feel, Dr. Wakely?). Nested clade phylogeographical analysis (NCPA) is no exception. Lacey Knowles summarizes the criticisms of NCPA in the most recent issue of Evolution (Why does a method that fails continue to be used?).
The strongest part of Knowles' critique focuses on one primary issue: NCPA tends to produce false positives. While it does an adequate job of inferring actual demographic events (such as population subdivision), it also falsely identifies events that never happened; Knowles points out that it is especially biased toward inferring isolation by distance where there has been none. Interestingly, these false inferences occur with both empirical data (for which the demographic history is assumed to be well understood) and simulated data (for which the demographic history is known exactly). Alan Templeton (the creator of NCPA and its most ardent defender) argues that the simulation studies are not an adequate test of NCPA because they only offer simple evolutionary scenarios. Knowles counters that if NCPA fails under simple scenarios, it can hardly be trusted with the complicated ones that exist in nature.
NCPA is a very popular method -- Remy Petit identified over 1700 citations as of about one year ago (doi:10.1111/j.1365-294X.2007.03589.x). Additionally, Knowles points out that a six-year-old critique of NCPA (Knowles and Maddison 2002) has been cited 210 times, often by empirical studies that still used NCPA! That raises the question: why do people continue to use NCPA if it hasn't been shown to work? It can't be because they don't know of its limitations -- they're citing the very papers that lay out those limitations.
Finally, I will relate this to previous rants on evolgen. Knowles points out that some of the criticisms of NCPA rely on the inference of historical events from simulation studies of a single locus. As I have mentioned previously, inferring historical demographic events from a single locus is not acceptable (see here and here). Evolutionary systems are full of stochastic noise, and trying to identify demographic history with a sample size of one ignores the large variance inherent in the system. Templeton and other NCPA defenders argue that single-locus simulations are not an adequate test of NCPA -- yet 88% of the NCPA studies Knowles identified used only a single locus. Not only are people using a method that has never been shown to work, but they are using it with insufficient data. Double fail!
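The single-locus variance problem is easy to illustrate with a toy coalescent simulation (my own sketch, not from Knowles' paper; the population size is a made-up number). For a sample of two lineages, the time to the most recent common ancestor (TMRCA) is exponentially distributed with mean 2N generations, so its standard deviation is as large as its mean -- any one locus can easily be off by a factor of several:

```python
import random
import statistics

def sample_tmrca(n_e, n_loci, rng=None):
    """Draw the TMRCA of two lineages at n_loci independent loci in a
    population of effective size n_e.  For a sample of two, TMRCA is
    exponentially distributed with mean 2 * n_e generations."""
    rng = rng or random.Random(1)
    return [rng.expovariate(1.0 / (2 * n_e)) for _ in range(n_loci)]

# Hypothetical population of effective size 10,000
times = sample_tmrca(n_e=10_000, n_loci=1_000)
mean = statistics.mean(times)
sd = statistics.pstdev(times)
print(f"mean TMRCA ~ {mean:,.0f} generations, sd ~ {sd:,.0f}")
# The standard deviation is on the same order as the mean, so a
# single-locus estimate of demographic history carries enormous
# uncertainty -- which is the point of the rant above.
```

Averaging over many independent loci shrinks this variance; a single mitochondrial genealogy cannot.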
Knowles and Maddison 2002. Statistical phylogeography. Mol Ecol 11: 2623-2635 [link]
Knowles 2008. Why does a method that fails continue to be used? Evolution 62: 2713-2717 [link]
Petit 2008. The coup de grâce for the nested clade phylogeographic analysis? Mol Ecol 17: 516-518 doi:10.1111/j.1365-294X.2007.03589.x
Templeton et al. 1995. Separating Population Structure from Population History: A Cladistic Analysis of the Geographical Distribution of Mitochondrial DNA Haplotypes in the Tiger Salamander, Ambystoma tigrinum. Genetics 140: 767-782 [link]
Templeton 2004. Statistical phylogeography: methods of evaluating and minimizing inference errors. Mol Ecol 13: 789-809 doi:10.1046/j.1365-294X.2003.02041.x
I got to it first - it's too much fun not to point out. The Knowles paper is a great rant: they should allow comments at the end and call it a blog.
But the DOI still isn't working. :-(
I haven't had a chance to read the Knowles paper yet, but if I had to answer the question (Why does a method that fails continue to be used?) as a neutral, I would point to a lack of trust -- specifically, a lack of trust in the criticisms coming from the three primary groups investigating NCPA. Petit and Knowles in particular are not helping themselves with their choice of language and style of writing. I review a lot of papers using NCPA (among other approaches), and the authors aren't ignorant of the criticisms. Saying that NCPA makes an error in X% of cases misses the point, since equal importance is not given to each outcome, and authors have a Discussion section in which to weigh the evidence. Even the critics' own publications show that it is probably quite good at correctly identifying major subdivisions. The papers in question usually say, "Hey, look, a big subdivision! There may also be some other stuff; more data and a diversity of analyses are needed." Well, OK. Are summary statistics or coalescent methods really going to be any better? No -- possibly much worse in inexperienced hands. I really hope that when I read the Knowles paper it doesn't sound too crazy; a bit of common sense would be helpful now.
I understand exactly why someone would use a technique that has been shown not to work. It's the same reason why people still say "drink a glass of water upside down" to get rid of hiccups. They don't have anything to correctly do the job they want, so they reach for what is available.
My guess is that any substitute for NCPA takes longer and is harder to do. This isn't an excuse, since no one said that genetics was easy.
I've used NCPA as part of a battery of approaches to tease apart intraspecific goings-on in various taxa, and as Dave noted above, it usually does a fine job of detecting major subdivisions (as does AMOVA, and just about any other remotely appropriate method). In my experience, most people seem pretty good at remaining skeptical regarding the dodgier inferences spit out by NCPA. Now, you could say, "If better methods tell you the same thing, why use NCPA at all?" I agree, but as a practical matter, if you don't do it, some reviewer (or an editor) will probably make you do it anyway! I was going to agree with Travis that NCPA is easier than other methods, but (unless it's changed substantially in the last few years) NCPA is a pain! Multiple software packages, using the inference key...yuck.
As far as using a single locus for inferring historical demography, it's fine to argue that it's unacceptable (and anyone with any knowledge of coalescent theory would agree), but again, as a practical matter, unless you are working on a fairly well-studied group, you are often stuck. Mitochondrial loci are easy to amplify and variable enough for intraspecific questions (too bad they are all linked), while primers for sufficiently variable nuclear loci are often unavailable without a lot of pilot work (and even if you do attempt to develop your own primers, you could come up with nothing useful).
Hey Andy,
What are the alternatives to NCPA and how do they compare in terms of overall work? I was just guessing, since I am not familiar with the process.
This discussion reminds me of something Richard Feynman described in his book "The Pleasure of Finding Things Out." Apparently rats in mazes seemed to know their absolute location rather than their relative location, and lots of experiments went wrong because of this. One researcher eventually found that the rats were locating themselves by the vibrations they made as they ran, and that putting sand under the maze made the effect go away, so the results came out as expected. Feynman looked into it and realized that, twenty years after those results were published, psychologists were still running rats in mazes the old way.
I think geneticists should be better at fixing something like this, or at least at using a method only where it is appropriate.
What are the alternatives to NCPA and how do they compare in terms of overall work? I was just guessing, since I am not familiar with the process.
I preface all of what I'm about to write below by noting that I am, by training, a phylogeneticist and not a population geneticist. Also, I haven't done this sort of work in a couple of years, and the field changes rapidly.
If you are just looking for evidence of geographic structure or limited gene flow among populations, there are lots of methods that could be used. For example, AMOVA (Analysis of Molecular Variance, implemented in Arlequin) is commonly used for this purpose. However, NCPA does (or claims to do) more than that -- it attempts to provide explanations (i.e., isolation by distance, ancient vicariance, etc.) for the observed genetic/geographic patterns. That's where things get quite dicey. Other approaches that could be used for such purposes include a "statistical phylogeographic" approach (as advocated by Lacey Knowles and many others) that allows explicit hypothesis testing while attempting to account for coalescent processes. For example, you might have a situation where you expect ancient vicariance between two populations due to, say, a climatic change two million years ago. Based on that hypothesis (and lots of other info for your system, like generation times for your critters, current and historical population size estimates, etc.), you can simulate gene genealogies under an ancient vicariance scenario and then see how your observed genetic data compare to your (simulation-based) expectations. Better yet, you can compare two mutually exclusive hypotheses to see if one fits the observed data better.
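To make that simulate-and-compare logic concrete, here is a toy sketch (my own illustration, not code from any package mentioned here; the split time, population size, and observed value are all made-up numbers). Under a vicariance hypothesis, the divergence time of two gene lineages is the split time plus an exponential coalescent waiting time in the ancestral population; we can then ask how extreme an observed divergence is relative to the simulated distribution:

```python
import random

def simulate_divergence(t_split, n_anc, n_reps=10_000, rng=None):
    """Simulate gene divergence times (in generations) for one lineage
    sampled from each of two populations that split t_split generations
    ago.  Going back in time, the two lineages enter an ancestral
    population of effective size n_anc and coalesce after an
    exponentially distributed wait with mean 2 * n_anc generations."""
    rng = rng or random.Random(42)
    return [t_split + rng.expovariate(1.0 / (2 * n_anc))
            for _ in range(n_reps)]

def p_value(observed, simulated):
    """Two-tailed tail probability of the observed divergence under the
    simulated distribution."""
    lower = sum(s <= observed for s in simulated) / len(simulated)
    upper = sum(s >= observed for s in simulated) / len(simulated)
    return 2 * min(lower, upper)

# Hypothetical scenario: vicariance 1,000,000 generations ago,
# ancestral effective size 100,000; observed divergence from data.
sims = simulate_divergence(t_split=1_000_000, n_anc=100_000)
print(p_value(observed=1_150_000, simulated=sims))
```

A small p-value would mean the observed divergence is a poor fit to the vicariance hypothesis; running the same comparison under a competing scenario lets you ask which hypothesis fits better, which is the heart of the statistical phylogeographic approach described above.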
It's a very cool approach, but it may require you to have estimates of various parameters that may not be easy to estimate, and there isn't yet a program out there that will easily do all of this for you (Mesquite is often used for this sort of work, and it's great, but it isn't exactly straightforward...of course, neither are the NCPA programs!). One problem I've had with some of the statistical phylogeographic work I've seen is that the population parameters needed to generate the simulated data often seem to come out of thin air. I'd prefer to see more sensitivity analyses that take into account uncertainties in some of these crucial parameters. I suspect this often isn't done because this type of work is still such a hassle to do that it's hard to simulate gene trees under twenty different scenarios that account for uncertainty in population sizes, etc. That's understandable, but clearly not optimal.
Another package that I've used is Jody Hey's IM (and I see he has a newer, better version called IMa). IM is nice if you have a pair of closely related populations and you want to attempt to estimate migration rates between the populations, effective population sizes, etc., without putting too much stock in a given gene tree. IM estimates population parameters by integrating over gene genealogies using an MCMC approach. LAMARC and Migrate-n (and probably several other packages) do similar things. These methods are all explicitly grounded in coalescent theory. IM is great, but assessing convergence can be a bit tricky (as it is with any MCMC-based method), and it's entirely possible that a given data set is insufficient to allow estimates of parameters of interest. I had one data set that was easy to work with in IM and I got perfectly reasonable results. A student of mine had (what I thought was) a better data set that caused IM to go kablooey. You never know until you try. The more data (in terms of individuals and loci), the better, but more data means longer -- potentially much longer -- run times.
This pretty much taps out my knowledge on the subject. It's a rapidly evolving field, so it's possible some of what I've written is a bit out of date. Several of the software packages I mentioned have updated versions that I haven't used, and it's entirely possible that I have missed a major theoretical breakthrough or three. But I hope this helps!