Modeling antiviral resistance, IV: the essential assumption

[A series of posts explaining a paper on the mathematical modeling of the spread of antiviral resistance. Links to the other posts in the series can be found by clicking the tags "Math model series" or "Antiviral model series" under Categories in the left sidebar. Preliminary post here. Table of contents at the end of this post.]

We are almost ready to begin a detailed examination of the mathematical model presented in the paper, "Antiviral resistance and the control of pandemic influenza," by Lipsitch et al., published in PLoS Medicine. The main model is presented in the first four paragraphs of the Methods.

Some preliminaries on homogeneous population models:

The first paragraph tells us that the presentation is of a slightly simpler model than the one used for the numerical simulations reported in Results. The authors have made the code for the model freely available in a form that can be pasted into one of the commonly used software packages for solving systems of ordinary differential equations (Berkeley Madonna version 8.2). This enables others to check, modify, and build on their work, a commendable practice we hope to see more often.

The description of the basic model begins in paragraph 2 of Methods. Its essential elements are displayed in Figure 1 (page 3), which we will examine in the next post. Before we dig into it in earnest, we need to talk a little more about this kind of model, known as a homogeneous population model. The word "homogeneous" means "the same throughout." In this case it refers to the fact that the population being modeled is assumed to be composed of essentially identical individuals, each of whom has an equal chance of coming in contact with any other person in the population (this is also called random mixing). This is an idealized situation, and in the numerical simulations given in Results the authors refine it by putting people into age groups, allowing different contact rates within and between the groups.

It turns out that adding an age structure makes relatively little difference in this case. The random mixing assumption is commonly used and allows certain kinds of mathematical analysis to be done (it is also the kind of assumption non-modelers make when they envisage scenarios like, "what happens if a school child brings flu to her classroom?"). There are other kinds of models that examine how disease spread is affected by non-random contact patterns, but that is not one of the things being investigated here, although we will discuss it again in the second-to-last post.

It is best to think of modeling research the way you would a laboratory experiment, where certain questions are isolated and examined while other elements are controlled. Modelers isolate the key questions they want to answer and devise systems that keep other things, like contact structure, constant. So think of the homogeneous (random mixing) assumption as similar to using a mouse to answer questions about a human. The fit isn't perfect (a mouse isn't a human), but you hope the essential features are sufficiently close that inferences from your mouse experiment give you usable information about human health and disease. At some point you have to ask about the consequences of your assumptions, but you don't do everything at once. Just as biologists gradually build up a picture of what is happening by doing many different experiments, using different techniques and species, modelers do the same thing with different kinds of models. They are looking for patterns and inferences they can draw from their different experimental systems, just like wet bench researchers. The hope is that with continued experience, the pieces of the jigsaw puzzle will start to fit together in a coherent picture.

The homogeneous population model is extremely important for understanding mathematical modeling of this sort because it gives modelers a means to describe how people move from being susceptible to infection, to being infected, to dying or recovering. The idea behind this kind of analysis came originally from chemistry, where it was called the Law of Mass Action. I'll explain it in that context first because I think it is easier to visualize.

The Law of Mass Action in chemistry:

Chemists are interested in how chemicals react with each other in a solution or in a mixture of gases, and they reasoned this way. If two molecules are going to react they need to be in the same place (technically, the same small volume) at the same time. This is the chemist's version of being "in contact." Chemicals don't react with each other from across the room. Consider the probability that a molecule will be at some place at a particular time. It should be unaffected by the probability that another molecule will also be there. As they randomly bounce around in the solution or gas they are "unaware" of the other molecules except when they bump into each other. This "unawareness" is expressed by saying the probabilities of being in a particular place at a particular time are independent of each other. The probability of a combined event for two independent events is the product of the probabilities of the events considered separately. This is the same as saying the probability of tossing two heads in a row (each toss having an independent probability of 1/2) is 1/4 (1/2 times 1/2). You can see the validity of this for yourself if you think of the four equally likely outcomes of two coin tosses: HT, TH, TT and HH. The HH (Heads-Heads) combo is one fourth of the total possible outcomes. Thus the probability that two reacting molecules will be in the same small volume at the same time is equal to the probability one is in a particular small volume times the probability the other is, too.
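
If you want to check the multiplication rule without flipping coins all afternoon, a few lines of code will do it. This is just a toy simulation of my own (with a made-up number of trials), not anything from the paper:

    # Toy Monte Carlo check: two independent events of probability 1/2
    # should happen together about 1/4 of the time.
    import random

    trials = 100_000
    both_heads = 0
    for _ in range(trials):
        first = random.random() < 0.5   # first coin comes up heads
        second = random.random() < 0.5  # second coin, flipped independently
        if first and second:
            both_heads += 1
    print(f"fraction with two heads: {both_heads / trials:.3f}")  # about 0.250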

Now comes the crucial step. The probability that some molecule of chemical A will be in a particular small volume at a particular time depends on how many molecules of chemical A are in the gas or the solution. If there are twice as many, the probability is twice as much. This makes some intuitive sense. In your town, whatever the random chance of any person being at the corner of Lake and Spruce Streets at noon on July 12, if you double the number of people in town you will roughly double the chances that someone will be walking by that corner at that time. The same is true for chemical B. If you double the number of B molecules, you will double the chance one of them will be in the same place at a particular time as one of the A molecules. Chemists therefore reasoned that how fast a chemical reaction occurs (the number of such simultaneous encounters per hour, say) is proportional to the product of the numbers of molecules of each chemical in the reacting volume. That's because we are multiplying probabilities. This was an educated guess about how the chemical world would work. They called this broad generalization the Law of Mass Action. And it seems to hold: it is an empirical fact, not a true "Law." Its main justification in chemistry is that it works. It describes the world. It turns out that it can also be made to follow mathematically from additional assumptions about random mixing and small encounter probabilities.
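
The same kind of toy code shows the Law of Mass Action in miniature. Again, this is only my own illustration with made-up numbers (so many molecules, so many small volumes), not anything from the paper: scatter A and B molecules at random among many small volumes, count how often an A and a B share a volume, and watch the encounter count double when the number of A molecules doubles.

    # Toy mass action simulation: random placement of molecules in small
    # volumes; an "encounter" is an A molecule landing in a volume that
    # also holds at least one B molecule.
    import random

    def average_encounters(n_a, n_b, n_volumes=10_000, trials=500):
        total = 0
        for _ in range(trials):
            b_cells = {random.randrange(n_volumes) for _ in range(n_b)}
            total += sum(1 for _ in range(n_a)
                         if random.randrange(n_volumes) in b_cells)
        return total / trials

    base = average_encounters(100, 100)     # roughly 1 encounter per trial
    doubled = average_encounters(200, 100)  # roughly 2 per trial
    print(f"ratio after doubling A: {doubled / base:.2f}")  # close to 2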

The Law of Mass Action at work in epidemiology:

It was but a short leap from this reasoning to the idea of a Law of Mass Action for epidemiology. Instead of two different chemicals being in contact we have two people being in contact, one infected and one susceptible. The number of new cases of influenza in a day or a week will depend on how many contacts between an infected and a susceptible person there are in a day or a week. This is converted into a mathematical notion by expressing the [per capita] rate of new infections as being proportional to the number of susceptibles times the number of infectives. If you double either one, the [per capita] number of new infectives in a day or a week will also double. Again, while this makes some intuitive sense, it is an assumption. As in the chemistry example, it can be made to follow from the earlier assumptions that all people in the population are exchangeable with one another (all molecules of chemical A are the same) and randomly mixed (susceptibles and infecteds come in contact unaware of each other's status and with equal chance). There are ways around these assumptions, but we will not deal with them here, so as to keep the explanation as clear as we can make it. We will return to the assumption itself in the second-to-last post.
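
In code, the mass action assumption is a one-liner. Here is a minimal sketch (the function name and the value of beta are mine, for illustration only, not the paper's notation):

    # Mass-action incidence: new infections per unit time proportional to S * I.
    def new_infections_per_day(beta, susceptibles, infectives):
        return beta * susceptibles * infectives

    beta = 0.0001                                   # illustrative transmission coefficient
    print(new_infections_per_day(beta, 5000, 10))   # 5.0 per day
    print(new_infections_per_day(beta, 5000, 20))   # 10.0: doubling infectives doubles it
    print(new_infections_per_day(beta, 10000, 10))  # 10.0: doubling susceptibles doubles it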

We are now ready to examine the model in detail and explain the figure. That will begin in the next post.

[Addendum/clarification: Ethan in the comments pointed out an ambiguity in the second-to-last paragraph, which I have now clarified with bracketed insertions of [per capita]. The chemistry example should be clear as is, and the epidemiological analog parallels it. Hat tip to Ethan for pointing out the ambiguity.]

Table of contents for posts in the series:

What is a model?

A modeling paper

The Introduction. What's the paper about?

The essential assumption

Sidebar: thinking mathematically

The model variables

The rule book

More on the rule book

Finishing the rule book

The rule book in equation form

Ready to run the model

Effects of treatment and prophylaxis on resistance

Effects of Tamiflu use and non drug interventions

Effects of fitness costs of resistance

Discussion

A few words about model assumptions

Conclusion and take home messages

In the second to last paragraph, you say,

"If you double either one, the number of new infectives in a day or a week will also double."

Is this correct? It seems to me that it is not. For example, in an SIS model the rate of infection is equal to the product of the effective contact rate (lambda), the absolute number of susceptibles, and the relative number of infectives (i.e., under mass action). If this is true, then in the case where S = 0.5 and I = 0.5, the rate of infection is equal to 0.5 * 0.5 * lambda. If we double I to equal 1.0, then the rate of infection is equal to 0.5 * (1.0/1.5) * lambda. Then, assuming everything else equal, the relative increase in the rate of infection corresponding to a doubling of the number of infectives is 1.33..., i.e. 4/3, not 2.

By Ethan Romero (not verified) on 24 Mar 2007 #permalink

OK, I wrote a little SIS simulation and got the following results.

Setting 1

S = 0.5
I = 0.5

lambda = 1.0
delta = 0.25 (inverse of the duration of infectivity)

At time 100, when the effective reproduction number is approximately 1, the prevalence is 0.5

Setting 2

S = 0.5
I = 1.0

lambda = 1.0
delta = 0.25 (inverse of the duration of infectivity)

At time 100, when the effective reproduction number is approximately 1, the prevalence is 0.75

So the relative increase in the rate of infection near equilibrium is 133%, not 200%. I believe this is because when the rate of infection is increased (by increasing the number of infectives), the rate of recovery is also increased (by the corresponding increase in the number of infectives). This is a feedback loop which dampens the effect of increasing the number of infectives in the initial state.

By Ethan Romero (not verified) on 24 Mar 2007 #permalink

Ethan: we're only talking about the per capita new infectives produced per day (or week). With Mass Action, (1/S) * dS/dt = -β * I, i.e., the per capita rate is proportional to the number of infectives. If you double the number of infectives you double the per capita rate of new infections (or, for the equation exhibited here, you double the per capita rate of decrease in susceptibles).

Ethan: I just reread my text in the post and I think I see where the confusion starts. I rewrote some of my text before posting and it can certainly be read the way you read it. It should be read the way I wrote it in my previous comment. I should clarify that in the text and I thank you for calling this to my attention. Keep reading. You'll likely find other errors along the way. I actually found two in the paper itself, both very minor, but it happens a lot -- even in this blog :).
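
[To make the numbers in this exchange concrete, here is a quick calculation (my own toy sketch, not the paper's model or Ethan's code). Under the density-dependent mass action term the per capita rate is beta * I, so doubling I doubles it; under a frequency-dependent term, where adding infectives also enlarges the population, doubling I from 0.5 to 1.0 gives Ethan's factor of 4/3:

    beta = 1.0  # illustrative transmission coefficient

    # Density-dependent mass action: per capita rate (1/S)|dS/dt| = beta * I.
    def per_capita_rate(I):
        return beta * I

    print(per_capita_rate(1.0) / per_capita_rate(0.5))  # 2.0: doubling I doubles it

    # Frequency-dependent incidence beta * S * I / N, with N = S + I, so
    # adding infectives also grows the population.
    def freq_dep_rate(S, I):
        return beta * S * I / (S + I)

    print(freq_dep_rate(0.5, 1.0) / freq_dep_rate(0.5, 0.5))  # 1.333..., Ethan's 4/3
]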

I was really looking forward to following this series of explanatory posts, delivered in the excellent manner of 'The Revere' and I have been enjoying them - until now.

I have just read that the whole model is based on the homogeneous population concept. Such a model is just about excusable in chemistry, but it does not even make it through the world of bacteria, let alone into the world of humans. As a microbiologist, if I did not base my sampling and assessment protocols on the populations being NON-homogeneous, I would fail to assess and correctly interpret microbiological situations.

With people, the importance of non-homogeneity, or more specifically the importance of choke points and networkers, is even more critical. You live in a nation shaped by individuals; you even take your name from one of the most influential networkers ever to change the face of your society. It is no wonder their (Marc Lipsitch's) variant of age group classification had little impact if they subsequently ignored the specifics within those age groups of geographically static meeting points (choke points) and geographically mobile networkers (those individuals who meet hundreds of others every day).

In the past few years a number of researchers have shown repeatedly how a few highly connected individuals (or places) can transform a large world model into a small world model, and how these factors can have a massive effect on a system's tipping points.

What is the point in constructing a model, when key aspects of disease spread are left out of the model? Did these expert Harvard modellers find it just too hard to include the concepts of 'Small World Networks' and 'Tipping Points' or do they really believe that these mechanisms which pervade every aspect of our lives will somehow fail to play any part in the process of disease spread and subsequent development of resistance?

Yes we have to make some rationalisations in order to create a model, but if that model loses contact with reality, then the chances are that the lessons we draw from that model will likewise belong to the realm of fantasy.

I will continue reading the series with interest - hoping to see that these aspects do not have any impact on the findings from the model.

By DerekSmith (not verified) on 24 Mar 2007 #permalink

My nit to pick. Both in the second paragraph of the Law of Mass Action for chemistry, and in the only paragraph of the Law of Mass Action for epidemiology, when you say "double the number of people", you must also specify "while keeping your town the same size".

It is unusual to double the number of people without building new suburbs or slums, which (at least in the case of suburbs) discourages the new people from finding the corner of Lake and Spruce Streets.

Derek: I think you don't give them enough credit. Everyone in the epi modeling world struggles with the underlying assumptions. The Colizza model uses a scale free topology for the air system but retains a random mixing model for the cities it connects. The question is not whether the model is right or wrong. It is wrong, because all models are wrong. The question is whether it is useful and for what. Random mixing models are very useful for certain purposes, just as exponential increase models are useful. As a microbiologist you do this when you model bacterial growth into lag phases, log phases, plateaus, and declines. The log phase is exponential growth, but only for a restricted domain. As I pointed out, continued exponential growth, even for short periods, is impossible. But when we do food safety, we are concerned with the lag and log phases and they are the basis for the "Keep it hot or keep it cool or don't keep it" adage.

The topologies you cite are also quite imperfect. Small world topologies are not the same as scale free ones, and neither is a random graph. All three have threshold effects, so it isn't a question of "tipping points." But for some purposes random graphs work just fine, just as exponential models do. This paper is one of those instances. Part of the art and science of modeling is knowing what you are doing. These folks know what they are doing and can't be accused of being ignorant of the consequences of adopting a homogeneous population model for the purposes they are using it for. I know this because I have discussed it with Lipsitch. So my advice is to read the paper and/or the series and cut them some slack. Then decide what you think. At the very least, you would have to ask, as modelers now do habitually, what difference it would make if you used a different contact topology. If you can't answer that satisfactorily, or can answer it in a way that suggests (as here) that the important influenza control implications aren't changed, then I think you might view this differently. But that will be up to you.

I am quite familiar with the work of Barabasi and Watts and Strogatz and think highly of it. I don't worry it negates the value of this work. Remember, ferrets are not people but we learn a lot from experimenting on them with influenza virus. Some people think that letting a small rodent-like animal stand in for a human is losing touch with reality, but I don't agree. I'll return to the assumptions at the end of the series.

Greg: About your nit. That was bad wording on my part (see my clarification and comment in reply to Ethan), but when we talk about doubling the number, we are talking about the number in one of the compartments or boxes. When you double the number of infectives in a population of fixed size, you are also decreasing the number of susceptibles by an equal amount, so the total population remains the same (in an SIR model, people are also leaking out to the Removed box). Because what happens in one box affects the others, the equations are "coupled." That's what makes them hard. The fact that they are nonlinear makes them really hard.
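
[For the code-minded, here is a minimal sketch of the coupled bookkeeping. It is my own toy illustration, not the authors' Berkeley Madonna code: whatever leaves one box enters another, so the total never changes.

    # Euler-stepped SIR model with mass-action transmission; the population
    # is normalized so that S + I + R = 1 throughout.
    def sir_step(S, I, R, beta, gamma, dt):
        new_infections = beta * S * I * dt  # flow from the S box to the I box
        new_recoveries = gamma * I * dt     # flow from the I box to the Removed box
        return (S - new_infections,
                I + new_infections - new_recoveries,
                R + new_recoveries)

    S, I, R = 0.99, 0.01, 0.0
    beta, gamma, dt = 0.5, 0.25, 0.1
    for _ in range(1000):                   # 100 time units
        S, I, R = sir_step(S, I, R, beta, gamma, dt)
    print(f"S={S:.3f} I={I:.3f} R={R:.3f} total={S + I + R:.3f}")  # total stays 1.000
]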

Yes, Revere, that makes sense. Thank you.

Revere: Of course I will continue to follow the series. One of the problems of presenting an explanation in 'bite sized chunks' as you are doing in this series, is that it creates a tendency for folks like me to comment on the 'chunk in hand' in light of the 'series so far'. This means that comments are likely to be made without knowledge of the content of future articles within the series and therefore could well be jumping the gun with respect to future explanations. If this is what I have done, then please bear with me while I absorb the rest of the series.

However, you say that you "do not worry if it [small world topology] negates the value of this work" If indeed this is the case that the value of this work is negated, then what is its purpose? Is this nothing more than an elaborate academic exercise with no extrapolative value to a real world situation?

You comment that you are in the enviable position of being in active communication with Lipsitch. Could I ask, then, why Lipsitch et al. feel that it is unnecessary to use [even an imperfect] small world model to map infection? Why is it unnecessary to map infection in a manner which approximates to reality? If not approximating reality, why is it necessary to map infection at all?

Finally, you state that "it isn't a question of 'Tipping Points'" because 'Small World', 'Scale Free' and 'Random Graph' all have threshold effects. I would disagree: R0 = 1.00 is a tipping point, and a model which acknowledges the factors that influence R0 will influence and track that tipping point.

You suggest that I have not given these folk enough credit. Please rest assured my questions and criticisms are made in order to fill in holes in my perception and understanding, the ignorance is mine, not theirs and I hope to learn a lot from this series [their paper is sadly mostly beyond my comprehension].

By DerekSmith (not verified) on 25 Mar 2007 #permalink

'' you say that you "do not worry if it [small world topology] negates the value of this work" ''

Revere did not use the word "if" between "... worry" and "it ...". The rest of the paragraph makes his meaning clear.

Derek: No problem about jumping the gun. The full explanation took much longer to set out than I thought at the outset, and I truly wouldn't have done it if I'd known. My hope was that when we finished you wouldn't feel that most of the paper is beyond your comprehension. It isn't. But I now think the blog format just isn't suited to this. Now to your points.

If you read my comment again you'll see I didn't say I don't worry if it negates it. I said I didn't worry that it did negate it, because I don't think it does. The non-random topologies tend to predict less spread, so at the least we are talking about an upper bound, which is very useful to know. Moreover the qualitative issues this paper discusses are not necessarily strongly affected by the topology. They are calculating eventual attack rates (long run behavior). There is also no reason to think that some of the other measures, such as time to peak, are qualitatively affected. It would be wrong, as they say and I repeat often, to rely on quantitative results. We are looking at general behavior. That is why I said the threshold concept wasn't germane. All the topologies you discuss (actually you only discuss small world but I assume you meant to include power law topologies as well) have thresholds like the kind of Kermack-McKendrick model we have here. You are right to say R0 is a threshold in this case. So feel free to consider effective R's up or down as you see appropriate to account for differences you might encounter in another topology. The scheme they present allows you to do this and in fact they use it to account for non-drug interventions.
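
[To see the threshold concretely, here is a quick numerical experiment, my own toy sketch of a simple Kermack-McKendrick SIR model rather than anything from the paper. Below R0 = 1 an introduced infection fizzles out; above it, the eventual attack rate takes off.

    # Eventual attack rate (fraction ever infected) in a simple SIR model,
    # integrated by Euler steps until the epidemic has run its course.
    def attack_rate(R0, gamma=0.25, dt=0.05, steps=40_000):
        beta = R0 * gamma            # population normalized to 1, so R0 = beta / gamma
        S, I = 0.999, 0.001
        for _ in range(steps):
            dS = -beta * S * I * dt
            dI = (beta * S * I - gamma * I) * dt
            S, I = S + dS, I + dI
        return 1.0 - S - I           # everyone who ever left the susceptible box

    for R0 in (0.8, 1.0, 1.5, 2.0):
        print(f"R0 = {R0}: eventual attack rate about {attack_rate(R0):.3f}")
]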

Your question about why they didn't use another topology (I note that the evidence doesn't suggest small world as the appropriate one; scale free is more likely, but we can argue about that in another context): the answer is that they used the tool they thought most appropriate in the circumstances. It's like asking one of the mouse or ferret scientists why they didn't use humans. There are practical and other reasons to use the tools they used. Colizza et al. used a combination to include the air transport system, to see what difference that made. In this instance it isn't obvious how to use another topology (we don't have the data for disease transmission on a global scale, nor do we know what that topology might be; for air transport they had passenger data from IATA, so they used it for that segment) or, more importantly, what you would buy by using it. You can look at the Colizza paper and decide if you think it was worth it. This is research and you don't do everything at once. The only way to eat an elephant is one bite at a time.

All infectious disease modelers are struggling with the contact topology problem. They all recognize it and are working to understand its role, if any. IMO, we will probably wind up using hybrid designs in those instances where we have some evidence it makes a difference and we have the data to allow us to do it. When there is uncertainty, you try to figure out what effects your assumptions have on your results. We do this routinely in epidemiology when we have unknown biases. We try to get a handle on the extent and direction of possible bias, and that can be done in modeling, too.

Meanwhile, you work a little at a time, pushing back the boundaries of knowledge. Science is slow. That gives us all plenty of time to understand the methods and techniques. Think of this as understanding how a mass spectrometer works, one of the workhorses of proteomics. Someday we will have something better. We use it now for very good reasons and it serves us well. Similarly, random mixing has been very useful and is the workhorse of a great deal of infectious disease modeling. It has been for a long time and we have a lot of experience with it. Someday we may have something better, especially as we begin to understand the role of contact topology, in what contexts we need it, and when we don't. And that's assuming we have the data to employ it in modeling, which in most instances we don't, at least not now.

I once had a graduate student spend several years including a lot of extra information about personal habits in modeling exposure in an epi study. In the end it turned out all the complications we added into our simple model made hardly any difference. In the Lipsitch paper you will also see that adding in age structure made hardly any difference, even though a priori you would have thought it would. That's why science is interesting. You usually don't know the answer ahead of time.

Revere: Sorry, I read an 'if' that was not there and totally reversed the meaning of your statement - mea culpa.

I am surprised that you feel that the blog format does not suit this exercise. Can I say from the 'Student' side of the screen that I am finding it excellent. I can follow your post at a pace and time suitable to me, I can read and re-read to refine understanding and nuance. Then, through the comments facility, I can follow other 'Students' questions and ask my own both of yourself, other 'Students' and other 'Experts' should they join the discussion.

Why do you feel the blog format is less than ideal?

By DerekSmith (not verified) on 25 Mar 2007 #permalink

Derek: No problem. Anyway, blog readers are used to shortish posts (something I violate regularly anyway), frequently updated, and they are multitasking and surfing many sites, often daily. Thus they don't concentrate on a continuing story as easily, and the chronological, serial format scrolls things off the page; if they miss a few days or a week, it's gone.

There will be some (I hope) who stay with it, but I am not optimistic. I get over a thousand unique visits a day and if 10 are still with me at the end of 16 posts I'll be astonished.

Revere: So if I may play Devil's Advocate and paraphrase for the sake of my own understanding, the model was based on a homogeneous population (or, to use the chemistry analogy you coined, a second-order rate reaction) because:-

a) it is a model they had easy access to
b) they had experience of using it before
c) it has been used before in similar analyses

and because:-

d) they did not have data to support the construction of a small world population
e) the extra cost of running a small world model might not be justified (and when a student incorporated large numbers of what later turned out to be trivial factors into a model, they had no influence)

and despite:-

f) there is no way people behave in any way similar to molecules under Brownian motion
g) there is no conceivable way in which a real human population will be seeded in a random distribution
h) there is no way that diffusion describes human movement/interaction/contact (the checkout girl is more than a slight anomaly in the diffusion model)
i) ample evidence that age plays a key part in moderating pandemic response, yet when the homogeneous population model showed little age-related effect it was simply ignored

Yes I agree that over time the bits of the jigsaw are drawn together, but it helps a lot if the bits all belong to the same jigsaw, i.e. we don't take time out trying to understand something that does not conform to reality. Yes, modellers are "looking for patterns and inferences ...... just like wet bench researchers". The difference is that the 'wet bench researchers' are nailed by reality, while modellers have to impose it as a matter of self-control.

Modellers have a moral responsibility to the public not to allow 'imaginative exercises' to escape into the hands of the media and Politicians. While a modeller can, in hindsight, say, 'Oh yes, now I see why that was wrong', the world might meanwhile be reeling from the incorrect use of a model whose potential invalidity was misunderstood.

I look forward to when this topic is covered in more detail in the later posts.

By DerekSmith (not verified) on 25 Mar 2007 #permalink

Derek: First, a correction. Age wasn't ignored. The model is age structured. The presentation of the model isn't, but the numerical runs were, so all the results take age into account.

As to your other points, I would say them much differently, but most of them are factually correct. They are just like the assumptions made in physics and chemistry. They are known to work in certain restricted domains and probably not outside them. But you seem to hold modelers to a different standard than you do bench scientists. Microbiologists use model organisms, after all (E. coli K12), and flu scientists use A/PR8. Are they behaving irresponsibly when they publish their results?

Think, also, about what I said about gauging the effect of the random model. It is likely to make things spread faster and farther than a small world model because the distant jumps are more frequent. BTW, you keep saying "small worlds" but there is no evidence diseases spread via small world topologies (if you have it, I'd like a cite). The airline system is not a small world topology, nor is the internet. It's not even clear the original small world Milgram experiment was really a small world topology as defined by Watts and Strogatz. Should they (or you) not talk about this because people might misunderstand and take that possibility seriously?

The Lipsitch paper is replete with cautionary statements and is clearly written for a scientific audience. Unless you want to stop scientific publishing because journalists read it and talk about it, I don't understand your point.

Regarding whether people act like molecules, they do well enough in certain circumstances, like classrooms or offices or other places where a random mixing model works. There is plenty of evidence that compartmental models like this also work in practice. You may think they shouldn't, but the natural world doesn't much care what you or I think.

While I am a bit puzzled by your vigorous reaction to this, I certainly have achieved one of my purposes. By letting readers see what is in the box, they are starting to form opinions they didn't have before they knew what was in it. So I'm satisfied to that point. Now I'd like you to see more of what's in there. I'm glad you are interested enough to follow along and comment and I am not at all put out by your disagreement. I'm engaging you in it.

Revere: May I take the liberty of correcting your last correction. I did not suggest that age had been ignored. I was trying to highlight the anomaly that, in the model, age had little effect, yet in reality age has a significant impact, and it was this anomaly that had been ignored. However, I take fully your point that wet bench researchers are just as able to mislead the public (deliberately or otherwise), both in the selection of their test criteria and the interpretation of their results, but I still hold that modellers have an obligation to press home the 'frailty' of their work. You and I will both have seen very limited research being grasped (in the absence of anything else) and being formulated into government action or legislation. I feel that part of the responsibility for any misunderstanding or misuse lies with the modeller, even though they may be creating a test bed purely for a scientific audience cognisant of any limitations. Scientists in general can be stunningly naive when it comes to the depth of public incomprehension and the opportunity for misuse. Oppie said it best: "In some sort of crude sense, which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin; and this is a knowledge which they cannot lose."
J. Robert Oppenheimer

When a man feels the cold shiver in his spine and is compelled to say "I have become death" then you know he has accepted responsibility for his work. Of course I do not suggest that important work like this be withheld from publication, just that its potential to cause damage should not be ignored because it is only a 'mental exercise - a model'.

As to my 'vigorous reaction', please do not misconstrue my words and 'vigor' in any negative way. I have for most of my scientific career been 'bitten by the modelling bug'. The 'vigor' of my response is from interest and pleasure. By bringing this model into the blogosphere you are doing the field of modelling a great service and I genuinely cannot thank you enough. A real and important issue, a state of the art model, an expert tutor and a medium ideal for progressive discussion - Christmas has come early!! The only thing more I could have wished for is to have Lipsitch et al occasionally drop by and fill in some background reasoning. I would go so far as to say that your initiative is so potentially valuable that it warrants its own Blogette and wide publication to all students of model creation.

By DerekSmith (not verified) on 25 Mar 2007 #permalink