Okay, I'm back. Prison Break seems to be off to a rip-roaring, if extremely implausible, start. Guess I'll keep watching. I've followed it for this long, I suppose I should hang around to find out what happens to these guys. Really, though, I'm just marking time to the season premiere of House next week.
Towards that end, have a look at this interesting article from The New York Times. It discusses The Happiness Hypothesis, a new book by University of Virginia psychologist Jonathan Haidt on the evolution of morality:
Where do moral rules come from? From reason, some philosophers say. From God, say believers. Seldom considered is a source now being advocated by some biologists, that of evolution.
At first glance, natural selection and the survival of the fittest may seem to reward only the most selfish values. But for animals that live in groups, selfishness must be strictly curbed or there will be no advantage to social living. Could the behaviors evolved by social animals to make societies work be the foundation from which human morality evolved?
In a series of recent articles and a book, “The Happiness Hypothesis,” Jonathan Haidt, a moral psychologist at the University of Virginia, has been constructing a broad evolutionary view of morality that traces its connections both to religion and to politics.
Sounds like interesting stuff, guess I'll have to go read the book.
The article discusses a number of issues, many of which will be familiar to people who follow this subject. This caught my eye:
The emotion of disgust probably evolved when people became meat eaters and had to learn which foods might be contaminated with bacteria, a problem not presented by plant foods. Disgust was then extended to many other categories, he argues, to people who were unclean, to unacceptable sexual practices and to a wide class of bodily functions and behaviors that were seen as separating humans from animals.
“Imagine visiting a town,” Dr. Haidt writes, “where people wear no clothes, never bathe, have sex 'doggie style' in public, and eat raw meat by biting off pieces directly from the carcass.”
Reminds me of college. Frankly, as long as the naked, filthy, horny, carcass-eaters were content to leave me alone to do the things I enjoy doing, I think I could be persuaded to look the other way.
I recommend browsing through the whole article. The reason I linked to it, however, is for the moral dilemma it presents at the beginning:
Many people will say it is morally acceptable to pull a switch that diverts a train, killing just one person instead of the five on the other track. But if asked to save the same five lives by throwing a person in the train's path, people will say the action is wrong. This may be evidence for an ancient subconscious morality that deters causing direct physical harm to someone else. An equally strong moral sanction has not yet evolved for harming someone indirectly.
I love moral dilemmas (as long as they are only theoretical exercises, and not real-life situations). In that example, my knee-jerk reaction is precisely what is described above. It seems acceptable to pull the switch, but not to actually throw someone in front of the train. Alas, the only difference I can find is that the latter requires direct physical contact with the person I am killing whereas the switch pulling does not. That doesn't seem like it should be morally significant.
There are several others I have come across that have struck me as interesting. There's the old chestnut where you are on a sinking ship and you can either save a total stranger or your dog. What do you do? The interesting thing about that one is that everyone “knows” the right answer is to save the stranger, and that is the response most people will give when asked. Secretly, though, I think a lot of people would choose to save the dog. Especially if you could be sure that nobody would ever find out you could have saved the person instead. The reasoning is probably something like, “I know my dog is sweet and loving and has never done anything to anyone. On the other hand, statistically speaking there's a very good chance that stranger is a total prick. Why take the chance?”
Recently at another blog (I sadly do not remember which one) I saw a challenge to all those who would claim that a frozen embryo is the moral equivalent of a human being. If you were in a fertility clinic that was burning down, and you could either save a human baby or a tray containing a dozen frozen embryos, which would you save? If they are really morally equivalent, then you should obviously save the tray. In reality, however, no one, either in theory or in practice, would let the baby die. For that matter, most people would save a dog over the tray of embryos. That looks like a pretty strong argument to me, though as I recall the pro-lifers at the other blog got very indignant about it in the comments.
Then there was the episode of The Twilight Zone (not the original, occasionally brilliant, Rod Serling version, but one of the several failed attempts to revive the series later on) in which a man shows up at the door of a couple in serious financial distress. He offers them an opaque box with a button on top. They are told that if they press the button two things will happen immediately. First, a man will show up at their door with a million dollars. Second, someone they don't know will be killed. The couple is given a week to decide. They agonize over the decision. They really need the money, you see. At one point they disassemble the box and discover it is completely empty inside. The button isn't hooked up to anything. So they persuade themselves that it is all a joke, and finally decide to press the button. Why not? As soon as they do so the doorbell rings. It's a man with a suitcase containing a million dollars. The next day the original man returns and asks for his button back. As the couple gives it to him they ask, “What will happen to the button now?” “Don't worry,” says the man with an evil laugh, “I'll be giving it to somebody you don't know!”
Anyway, feel free to contribute your own favorites, or to offer any thoughts on the ones given above.
I don't have any moral dilemmas, though I've read enough of them to have seen some doozies. I just wanted to say that House is incredible, and combined with your wonderful defense of the "New Atheist" position has made you one of my favorite bloggers. Plus this quote is great:
And the problem with explaining morality through biological evolution is, as always, something like:
*When vacationing in India, why should I help an Indian woman who is being raped?
Helping people from another group, one to which your own group has no real links, is only ever a risk to your chances of having more children (not so for groups that have some relation to your own, which is what is needed for selflessness to evolve).
*Why does it appear that the moral view that it is a good thing not to have any children is culturally invariant?
The selfish gene is looking very unselfish there.
etc.
---
I find that evolution is good for explaining the existence of morality and some of its apparent universality (universal moral grammar and so on are good models), but it's horrible at explaining specific moral rules. For example, the claim that disgust evolved through meat eating: how on earth are you supposed to falsify something like that? Is it even science?
The thing is, there's a big difference between "pulling the switch" that will save the five people instead of the one, and "throwing someone on the track" to save five people. There's a big difference between not saving someone and killing someone who would otherwise have been perfectly safe.
I do understand that in both cases it's a choice between five people and one person, but I don't think it's a valid or very useful comparison because it's quite a different thing to let a person in harm's way die as opposed to seeking out and killing a person.
Humans can be indifferent to the suffering of others and have no strong bad feelings, but they don't like to openly *cause* the suffering of others.
Take the five people out of the equation and the results would still be similar. Many (most?) people would fail to act to save someone who was on the tracks with a train barreling down at them. It's something they can rationalize and live with. But would any average person (non-criminal type) just throw someone in front of a train for no reason?
The embryo dilemma is a great one though, and one I'll be using.
Fred, it's more difficult than that, because flicking the switch does kill someone who would otherwise have been safe.
The answer, I think, is that in the first case the victims are going to die in an accident either way; it's just a question of substituting one accident for another. (At least, we can easily choose to see it that way, though we wouldn't see it as an accident if someone flicked a switch that caused a fatal crash when no crash would otherwise have occurred.)
On the other hand, throwing a person onto the railway line cannot possibly be construed as substituting one accident for another. It is a deliberate act of homicide, whether justified or not.
It's surprisingly difficult to get any principle or coherent group of principles that explains our reactions to all the trolley cases. It really does look as if we are just inhibited against doing things that are up-close-and-personal, or which a chimpanzee (say) could understand, while not being so inhibited doing something that seems to be exactly the same in every other way. I assume that is the kind of point Haidt wants to make (I've read various things by him, but haven't yet read his book).
One way to deal with the actual cases given is to say that if we throw someone in front of the train then we are using a human being purely as a means to our end of stopping the train. In the case where we divert the train, by contrast, we are not using the person on the other track as a mere means to our end - at least not in such an obvious way - since we would have diverted the train whether he was there or not. There's no sense of taking a human being and turning him into a tool, which is perhaps something we all fear others doing to us and are likely to want there to be a moral rule against.
However, this is actually an easy one to "solve". Far more ingenious trolley cases have been devised that can't be solved with such a neat quasi-Kantian principle. Ethical philosophers try quasi-contractual analyses and other stuff, but there's no consensus that anything works perfectly to solve them all.
I don't like moral dilemmas like that, at least not most of them. When that very one was first posed to me, my instinct was to say that in both cases you should save the five, assuming for the sake of argument that killing the one is guaranteed to save the five and that you couldn't stop the train by throwing yourself in front of it. Apparently this makes me a freak. That's not why I have a problem with them, though. It's that usually the strings attached to the choices (as above, you're not allowed to throw yourself in front of the train) detach the problems from the social/human context in which morality operates so much that they're no longer useful.
Goes to show how much I live in my own little world. I thought evolution was the obvious explanation to where we get the inclination to behave morally - didn't realise this was in any way new or surprising. But I guess the NYT is right when they say it's "seldom considered" - most people don't think about evolution all the time after all. Ah, if only everyone were an EvoGeek!
About the fertility clinic; I think it clearly demonstrates what Dawkins points out in TGD and which I completely agree with: An embryo does not suffer, therefore it isn't morally wrong to do as we like with it. An infant or a dog would suffer if burned to death, and that's why people instinctively would want to save them. Another point is that most people probably wouldn't even realise that there were embryos on the trays, it's not like you stoop to look carefully at labels on random petri dishes when there's a fire on, eh?
Though I only saw it once maybe 20 years ago, that 'new' Twilight Zone is emblazoned in my memory. I can still picture the woman staring at the box, chain smoking cigarettes, as well as the disassembled box on the table. Thanks for the reminder.
The argument here ignores the core of evolution. Evolution involves the continuity of one's own genes, not the genes of other people. When facing the dilemma of switching the train, I would prioritize the relationship between that person on the rail and me. Say, if that person were my child, I would one hundred percent switch the train to the other rail, no matter how many people were on it. You can call me a selfish pig. I don't care.
"Where do moral rules come from? From reason, some philosophers say. From God, say believers. Seldom considered is a source now being advocated by some biologists, that of evolution."
Sigh. E.O. Wilson published Sociobiology over 30 years ago. The whole field of evolutionary psychology is predicated on the idea of evolutionary forces molding human choice, including the basis of morality. Evolution isn't a source that is just "now" being considered, or that is seldom considered. Go, go, NY Times science writers.
My view of evolution is that humans are an extremely tribal species that permitted little gene flow between groups. I cite the New Guinea highlanders as examples. This tribalism, as a source of the genetic isolation requisite for evolution, together with the high murder rates from intergroup hostility, is a prime factor in our evolution. It is not surprising that human beings have historically expressed two sets of morality, in-group and out-group: altruism and genocide.
The work by Steven Pinker and that of Dr. Haidt outlined in the NYT article are bringing these truths to light. The challenge for Homo atomicus is to overcome our genetic legacy and find a way to survive.
The trolley problem and similar ethical dilemmas are exhaustively explored in the book "Intricate Ethics: Rights, Responsibilities, and Permissible Harm" by F. M. Kamm, Oxford 2006, ISBN 0195189698.
I recommend it to anyone who wants to explore their own moral intuitions.
The idea that morality is affected by evolution is neither new nor revolutionary. From reading the excerpts describing Haidt's work, he's only got part of the story.
When explaining things like morality and altruism, the selfish gene principle still applies. Think of it this way: if you act in a way that helps yourself, that action is adaptive because you share 100% of your genes with yourself, and therefore the action benefits all of your genes. If you act in such a way that benefits your child, such an action would be adaptive if the benefit for the child outweighs your loss, because you share 50% of your genes with your children. As you consider people who are more and more genetically dissimilar to yourself (grandchildren, cousins, complete strangers, etc.), the ratio of their benefit to your loss has to increase in order for the action to be adaptive. This is simple game theory.
In practice, this is why people would allow ten strangers to be killed by a train instead of pushing their child in front to stop it. This is why it's easier to loan money to your brother than some random person on the street. It is beneficial to our genes to help those who share them. This view helps to explain things like racism and sports team fanaticism (although the latter is really an irrational extension of the adaptive process described above).
However, we are rational beings, and while a moral sentiment is adaptive and can evolve, the specific moral choices must be individual. Giving money to the poor is not adaptive: from a gene's perspective, those resources are better used obtaining food. Many people, though, feel a rational sense of obligation to others. It's rationality taking over where evolution leaves off.
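The benefit-to-cost reasoning in the comment above is essentially Hamilton's rule: an altruistic act is favored when r × B > C, where r is the coefficient of relatedness, B the benefit to the recipient, and C the cost to the actor. Here is a minimal sketch in Python; the relatedness coefficients are the standard population-genetics values, but the function name and the benefit/cost figures are purely illustrative assumptions, not anything taken from Haidt or the article:

```python
# A minimal sketch of Hamilton's rule (r * B > C), illustrating the benefit-to-cost
# reasoning in the comment above. The benefit and cost numbers are made up for illustration.

def act_is_adaptive(relatedness: float, benefit: float, cost: float) -> bool:
    """An altruistic act is favored by kin selection when the benefit to the
    recipient, discounted by relatedness, exceeds the cost to the actor."""
    return relatedness * benefit > cost

# Standard coefficients of relatedness to the actor:
relatives = {
    "self": 1.0,
    "child": 0.5,
    "full sibling": 0.5,
    "grandchild": 0.25,
    "first cousin": 0.125,
    "stranger": 0.0,
}

# The same act (benefit 10, cost 2, in arbitrary fitness units) is favored toward
# close kin but not toward distant kin or strangers: as relatedness shrinks, the
# benefit-to-cost ratio required for the act to be adaptive grows.
for who, r in relatives.items():
    print(f"{who:12s} r = {r:5.3f}  adaptive: {act_is_adaptive(r, 10, 2)}")
```

The point of the toy numbers is just that the same act stops being favored as relatedness falls, which is the "ratio has to increase" claim above.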
I think this ignores the fact that in social groups, those who do not walk the line are killed, and rival groups are also fought off and killed. Survival entails killing 80% of the creatures you come into contact with. Naturalism only supplies subjective social morality, not universals. Morality is of the mind, and the mind in the atheistic system is a reactive product of the environment. How in any way can that provide a universal idea of morality, other than the fact that all would have evolved and have the same subjective morality? It isn't really right and wrong to murder someone or to become a Nazi. It's just an illusion we have based on religious assumptions that I should actually love my fellow man rather than myself (rather than loving for my own benefit). Does anyone disagree with this and why?
"When explaining things like morality and altruism, the selfish gene principle still applies. Think of it this way: if you act in a way that helps yourself, that action is adaptive because you share 100% of your genes with yourself, and therefore the action benefits all of your genes. If you act in such a way that benefits your child, such an action would be adaptive if the benefit for the child outweighs your loss, because you share 50% of your genes with your children. As you consider people who are more and more genetically dissimilar to yourself (grandchildren, cousins, complete strangers, etc.), the ratio of their benefit to your loss has to increase in order for the action to be adaptive. This is simple game theory."
This doesn't really explain why one would be more likely to save his or her genetically distant best friend than a distant brother. And this is more than individual, in that most, if not all, people would do the same.
For Ginger Yellow and Sacred Beef...don't mean to be pedantic, but throwing a human being in front of a train will not stop it. It'll just kill you first, then the people you thought you were going to save. Bad move. ;^}
And Jason, shouldn't Prison Break be called Prison Broke at this point? ;^}
Bryan,
Yes I disagree with it because it really is absolutely wrong to, for example, walk up to a random woman carrying a baby, snatch the baby from her arms, douse it with gasoline, and set it on fire. It is not simply an illusion that it is wrong, it is a moral absolute.
Consider these two points:
First, we're savannah-optimized creatures. The environment in which we live is not the environment in which our ancestors evolved. Second, evolution can easily have unexpected consequences and byproducts. Supposing, for example, that our ancestors lived for many generations in small groups which had a high degree of family interrelationships, then they might well have developed a general predisposition to altruistic behavior. You see a hungry child, so you give up your food to help out, because odds are, they're related to you. You don't have any finer discrimination than that, because (a) it's not really necessary in your society, and (b) it's hard to code that extreme kind of specificity in your DNA.
Therefore, when your distant descendants are living in much larger social groups, they're predisposed to respond to any hungry child. The glorious mistakes of evolution!
A similar hypothesis could be formulated for the case of helping a genetically distant friend; the subroutines were coded by natural selection so that they could be invoked by close relatives, but that same code can also be called by other individuals.
Unfortunately, you're considering the set of natural responses to the train scenarios under a completely unwarranted presumption: You're assuming that they "ought to be" optimized for your own preferred goal as the viewer, which is to save as many people as possible.
The problem here is that natural morality and moral response isn't a matter of figuring absolute values for outcomes; real people's responses depend heavily on relationships and perspectives. Thus, "kin alliance" doesn't involve calculating a percentage of genetic overlap -- you imprinted on your family early on, giving you a basic instinctive bond, and then you built relationships with them over the course of your life. In the best case, those will have established them as strong, permanent allies with a proven commitment to your welfare. (There are other options, regularly visible from classic literature to the Jerry Springer Show.) Your cradle friend has a similar relationship with you; the rarely-seen distant cousin does not. And someone standing next to you has a literally closer (if fainter) relationship than someone standing far away on a train track.
This pattern extends outward to clan and tribal relationships as well... people always seem so surprised at the viciousness of clashes between two religious groups, both of whom preach about "loving your fellow man". But there's a simple explanation: Their love is, has always been, and pretty much always will be, for their fellow tribesman. Yeah, there are some people who managed to extend goodwill to "humanity in general"... but frankly, that's not as easy as it sounds.
"This pattern extends outward to clan and tribal relationships as well... people always seem so surprised at the viciousness of clashes between two soccer fan groups,"
"This pattern extends outward to clan and tribal relationships as well... people always seem so surprised at the viciousness of clashes between two political groups,"
"This pattern extends outward to clan and tribal relationships as well... people always seem so surprised at the viciousness of clashes between two groups of stamp collectors,"
There, fixed it a little for you :)
Suppose we reproduced like reptiles but maintained our level of intelligence: Two random individuals meet, mate, and then part ways. The mother lays eggs and then the babies are born as fully capable beings. They don't need to be nurtured or taught; they just need to be born in sufficient numbers so that some outlast predators. But we don't develop that way. We develop over time and for a large part of our early lives we depend totally upon our social group. That makes it essential to develop some level of morality. You need to work for the group or your genes don't get passed on. If we were born as fully realized individuals our sense of what is right and wrong would probably be a lot different.
The way a human being develops depends on having a social group to take care of it for several years. For most of history that group was small. I don't think our genes have caught up with the size of modern societies.
Are we about to see a new re-examination of the insights of Lamarck?
Can genetic modification, induced or fabricated, become sustainably permanent in organic replication? Lamarck would probably have said, certainly.
That conclusion is distinguishable from the conclusions imposed by Lysenkoism, despite the latter's origins in Lamarckian views.
It has been difficult for "Creationists" and many other religious absolutists to consider that "Creation" may be an ongoing process, or that after resting on that seventh day, work began again, incorporating the facilities of both the living and non-living (to use Aristotelian classifications).
R.Richard Schweitzer
s24rrs@aol.com
“We develop over time and for a large part of our early lives we depend totally upon our social group. That makes it essential to develop some level of morality. You need to work for the group or your genes don't get passed on. If we were born as fully realized individuals our sense of what is right and wrong would probably be a lot different.”
But once the elite have developed, they no longer need the social group to survive (at least, not in the sense that they need to be moral toward them in order to survive). It seems more natural that a creature tries to get to the top and then subjects all the other creatures of his "tribe" to himself in order to have them do his bidding. I see a need, therefore, for the social group, but not for the purpose of morality. Even so, this morality wouldn't be a matter of good or evil, but instead an agreed-upon preference. That preference can vary in its specifics from culture to culture. After all, sometimes it's beneficial to love your neighbor and sometimes it's beneficial to eat them.
Christoffer Green: Your examples are all classic ur-tribes, but they don't go around claiming/commanding "goodwill to all", much less proclaim allegiance to similar or identical gods. Accordingly, it's little surprise when they squabble.
On the contrary, soccer clubs very often try to proclaim goodwill to all participants of the game. Yet some do violence to some members of the game (other supporters). They all claim allegiance to one set of rules for one game (one abstract notion here exchanged for another).
Peaceful people don't talk about peace. People who feel goodwill to all don't talk about the importance of feeling goodwill to all.
If you want to know what people are doing, look at their laws - what they are forbidden and encouraged to do. What is forbidden is what people would do spontaneously if they weren't discouraged, and what they are encouraged to do is what they wouldn't do on their own without social pressure.
Soccer clubs feel obligated to talk about being part of the community of fans because there is no community, just groups of people who support various teams and don't view the others as fellow tribespeople - in fact, they're viewed as rival tribes.
Ginger, if I knew that pushing a person in front of a train would save 5 others, I think that I would make the same choice you would. What's more, if I had the option of sacrificing myself or pushing a stranger, I think I would opt for the stranger.
oh. my. dog.
http://www.huffingtonpost.com/2007/09/18/new-view-cohost-sherri_n_64864…
need to get whole thing...
Caledonian: Writing the statement "Peaceful people don't talk about peace" in a random blog doesn't make it true. Please provide a peer-reviewed study that makes this claim and backs it up with evidence; until you do I will consider it ludicrous. All kinds of people talk about all kinds of things, and I see no reason to make such a claim.
Talking about a thing over and over again is a sign people are trying to create/maintain a belief or behavioral pattern in the face of pressure. People who are peaceful have no need to reinforce their peacefulness - only people who are inclined towards violence need to stress the importance of peace.
Religions like Buddhism talk about peace, compassion, and lack of attachment because they're meant to be practiced by people who are in furor, uncaring about others, and emotionally attached to all kinds of things. If the natural state of people were to be what Buddhism teaches, it wouldn't need to be taught, would it?
You just switched from talking about how people are to talking about how the natural state of people is. You just made things a whole lot less concrete and at the same time offered no supporting evidence.
The question that comes to my mind about such morality thought experiments is: how strongly correlated is what people say they would do in such and such a circumstance with what they really do? I'm not sure how you could study this question rigorously; obviously you're not going to endanger people to test the reactions of onlookers. But my guess is that people react differently in an actual situation than they do in the BS realm of blog discussions.
There are two different sources of error in reporting what you would do in such and such a circumstance. Some people will overestimate their own ability to do the "morally correct" thing (or what is generally thought of as such in society). Other people will overestimate their own ability to do the "rational" thing (either by a utilitarian criterion, or by a purely "selfish gene" criterion). Others, with less self-confidence, may underestimate both.
So frankly, I don't believe many of the responses to this thought experiment. I'd have to look at some kind of study about what people actually do in extreme circumstances, rather than what they say they would do.
Another point about this thought experiment. Even when it is possible to rationally justify the claim that action A should be morally equivalent to action B, that doesn't mean that the consequences would be the same.
If you push a stranger in front of a train (okay, make the threat something where it is more plausible that a human body could stop it; maybe a knife-wielding maniac) in order to save your friends, it is very likely that you will be charged with murder and/or that the relatives of the stranger will hunt you down and kill you.
I think that moral principles can be thought of as coming from the give-and-take that happens in a society. You know the way that you want other people to treat you. You'd prefer it if total strangers risked their lives to save you from the slightest discomfort. You'd prefer it if people threw themselves on railroad tracks to prevent you from getting your shoes muddy.
But you also know that anything you would demand of others, it's likely that they will demand the same of you. So you scale back what you demand from others to the point where you would agree to the reciprocal demand. So rather than demanding that others risk their lives for you, you must be content with the promise that others won't use your body as a disposable tool for their own purposes (however noble those purposes are).
Daryl, I think you're missing the point. I agree that what people say, or even what they *think* doesn't necessarily reflect what would happen in the actual event, but that's beside the point. Moral questions like this are interesting to ponder. They are, like any other hypothetical question, interesting ways of looking at yourself.
If, regarding the train question, someone thought, "heck, I'd kill all of them if it meant my survival," that would be pretty revealing, even if the person might not actually do that if the situation actually arose.
So if statistics said that 99% of people would let the one person die in order to save the other five, it's still interesting even if in reality only 10% of the people would let that one person die. In fact, the discrepancy between what people think they'd do and what they'd actually do is in itself interesting.
Bryan,
It isn't really right and wrong to murder someone or to become a Nazi. It's just an illusion we have based on religious assumptions that I should actually love my fellow man rather than myself (rather than loving for my own benefit). Does anyone disagree with this and why?
Yes I disagree with it because it really is absolutely wrong to, for example, walk up to a random woman carrying a baby, snatch the baby from her arms, douse it with gasoline, and set it on fire. It is not simply an illusion that it is wrong, it is a moral absolute.
Posted by: heddle | September 18, 2007 12:40 PM
What if it was not random? What if every baby born in the last year was infected with a horrible plague that would eliminate all mankind if it was allowed to mature in their bodies?
Would killing the baby be a moral imperative?
1. What are those idiots doing on the tracks in the first place?
2. Are they deaf? Can't they hear the train coming?
3. Are their legs broken? Are they capable of getting their imbecilic butts off the tracks and out of danger?
4. Do you really want me to save people so stupid they'd go stand on railway tracks in constant use?