Robot Ethics and Self-Driving Cars

How Ethical Determinations in Software Will Require a New Legal Framework
by Nick Belay
I. Introduction
Automated decision making in vehicles has played an increasing role in transportation as technology has yielded improvements to machine learning, sensing, and processing.1 Today, cars perform complex tasks related to braking, steering, and object detection, often without the awareness of the driver.2 Multiple major automotive companies already plan on releasing technologies that allow for hands-free driving assistance in the next couple of model years.3 Indeed, Google—one of the leading companies in self-driving cars—has publicly stated its intention to bring entirely autonomous cars to consumer markets within the next five years.4
Once considered science fiction, self-driving cars are becoming more of a reality every day.5 However, along with the numerous benefits to convenience and safety,6 these new technologies pose major ethical dilemmas.7 Perhaps most notably, machines will have to make decisions regarding whom to save or protect in the event of a collision or unforeseen obstacle.8 Inseparable from these ethical considerations is the issue of legal liability,9 for whoever dictates the car’s behavior in these situations will also most likely be subject to the liability surrounding the outcome.10 This article aims to survey the various approaches to the legal and ethical aspects of self-driving cars and offer the best strategy going forward to meet these considerations without deterring innovation in the market.

II. The Trolley Problem Comparison
Consider the following classic thought experiment in ethics: A runaway trolley is barreling down the tracks towards five unsuspecting railroad workers and will kill them if nothing is done.11 Watching from a distance, you see a lever positioned next to you.12 If you pull this lever, the trolley will switch to a separate set of tracks.13 You notice, however, that the alternative tracks have a single railroad worker on them.14 Your options are to either: 1) do nothing and allow the trolley to kill the five workers; or 2) pull the lever to divert the trolley and kill the one worker.15 The experiment illustrates the difficult distinction between taking affirmative action that causes one death and “letting circumstance lie,” which allows five.16
A number of answers and justifications exist for this dilemma, known as “The Trolley Problem,”17 depending on one’s personal moral values. According to a psychology study conducted at Michigan State University, roughly 90% of individuals would choose to kill the one worker instead of the five.18 However, altering the scenario slightly (i.e., instead of switching the track, you would have to push a bystander in front of the train to save the five people) yields a far less confident response, despite the end result being the same.19 This variability makes it difficult to determine a consistent ethically “correct” course of action. Certainly, from a utilitarian perspective, saving five people outweighs the cost of losing one.20 However, what if, for example, the one is a child and the others are adults?21 Five adults might provide a higher net utility than one child, but Western society often places a high moral value on saving the latter.22 Moreover, at what point does general welfare impinge on notions of personal liberty? The deaths of the five men can be characterized as a product of external factors (the trolley);23 pulling the lever, however, would directly cause a person to die where he otherwise would not have.24
These are the sorts of considerations that both companies developing self-driving cars and their stakeholders will have to resolve in order to curb liability and remain ethically sound. In fact, manufacturers of self-driving cars may even face a more difficult situation than that of the Trolley Problem due to the decision being premeditated.25 In the case of a human, tort law provides a malleable scale of accountability for negligence cases (the Reasonable Person Standard)26 to determine whether an individual fell short of his duty to others.27 This test takes into account limitations in a human’s ability to make the best decision given the specific circumstances (e.g., stress, time to react, etc.).28 In the case of self-driving cars, however, the machine makes decisions based on the algorithms coded into its software.29 Which is to say, the car will react in accordance with how the manufacturer predetermined it should react in those circumstances.30
Imagine a scenario where a child runs in front of a car approaching a tunnel.31 The options are to either hit and kill the child or swerve into the wall and kill the driver.32 Or perhaps there is a scenario where a dog runs in front of the car.33 To what degree should the car attempt to swerve (and potentially endanger the driver or others) in order to avoid the dog? Does it make a difference if it is a squirrel?34 Or perhaps there is a scenario where a human driver would ethically be justified in breaking the law, like a husband rushing his wife who is in labor to the hospital.35 Should the car take such situations into account when determining its behavior? It may be tempting to conclude that self-driving cars will encounter these situations so infrequently that they hardly pose an issue.36 However, by nature of operating in imperfect systems filled with human drivers, pedestrians, and animals that behave unpredictably, autonomous vehicles encountering these ethical calculations is all but guaranteed.37 Thus, as long as there exists even the slightest possibility that a self-driving car will have to make an ethical decision, programmers will have to account for the various choices and moral reasoning in the car’s software.38
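To make the point concrete, the following is a minimal, purely illustrative sketch of how such predetermined reasoning might be encoded. The outcome categories, the numeric weights, and the cost comparison are assumptions invented for this example; they do not describe any manufacturer’s actual software.

```python
# Illustrative sketch only: hypothetical hard-coded collision priorities.
# Every category name and weight below is an invented assumption, not a
# description of any real vehicle's decision logic.

COLLISION_COST = {        # lower total cost = preferred maneuver
    "occupant_injury": 100,
    "pedestrian_injury": 100,
    "animal_strike": 10,
    "property_damage": 1,
}

def choose_maneuver(options):
    """Return the maneuver whose predicted outcomes carry the lowest total cost.

    `options` maps a maneuver name (e.g. "brake_straight") to the list of
    outcome categories the planner predicts for that maneuver.
    """
    def total_cost(outcomes):
        return sum(COLLISION_COST[o] for o in outcomes)
    return min(options, key=lambda name: total_cost(options[name]))

# The dog-in-the-road scenario from above, reduced to two options:
options = {
    "brake_straight": ["animal_strike"],
    "swerve_into_wall": ["occupant_injury", "property_damage"],
}
print(choose_maneuver(options))  # -> "brake_straight" under these weights
```

Whatever numbers fill such a table, choosing them is itself an ethical judgment made long before any emergency arises, which is precisely why the question of who sets them matters.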
On a systemic level, this raises the question of who exactly should have the power to determine who lives and who dies, or who will suffer injury to person or property. Should it lie with the legislature in the form of laws and policy that detail whom exactly to save? Should it be left up to the manufacturer of the machine in question? Should it minimize damage from an insurer’s point of view? Or, ultimately, perhaps it should rest with the individual. While laws regarding automated cars are currently scarce,39 an application of legal ethics from established areas of law, like tort law, provides a framework to guide early law and policy as we move into the inevitable future of AI/human interaction.

III. Examining the Assignment of Responsibility (Legal and Ethical) for the Decisions Made by Machines
A. The Manufacturer
Perhaps the most obvious choice to determine the behavior of self-driving cars in ethical situations is the manufacturer. This designation would be consistent with traditional product liability notions under which the manufacturer is “ultimately responsible for the final product.”40 That is, if there is a design defect within the control of the manufacturer that leads to some sort of harm, and the manufacturer knew or should have known of the defect, then the manufacturer will be liable for the harm.41 This raises the issue, however, of whether an ethical determination that very well could have been made by a human driver in the same situation can be considered a “defect” so as to impose product liability. While tort law varies by state,42 a majority of courts follow a similar two-part test for design defects as laid out by the California Supreme Court:43
First, a product may be found defective in design if the plaintiff establishes that the product failed to perform as safely as an ordinary consumer would expect when used in an intended or reasonably foreseeable manner. Second, a product may alternatively be found defective in design if the plaintiff demonstrates that the product’s design proximately caused his injury and the defendant fails to establish, in light of the relevant factors, that, on balance the benefits of the challenged design outweigh the risk of danger inherent in such design.44
Under this reasoning, a plaintiff would have a difficult time prevailing under the second prong given the benefits detailed previously in this article;45 however, a plaintiff could have a case under the first prong depending on the circumstances. For example, if the manufacturer programmed the car to minimize overall damage, which resulted in the car injuring the driver instead of multiple pedestrians, this result might be contrary to an ordinary consumer’s expectation that a product would protect the owner first and foremost. In fact, according to a survey conducted by the Open Roboethics Initiative,46 roughly 64% of people polled would prefer the car to protect their lives and those of their passengers before a pedestrian’s.47
From an ethical standpoint, a manufacturer would likely have to apply a one-size-fits-all set of behaviors that may be inconsistent with those of the user.48 For example, the manufacturer might program the car to always try to protect the driver’s seat,49 but one can imagine a scenario where the driver would rather protect their significant other or child in the passenger seat. Or alternatively, the car might be programmed to save a pedestrian over a passenger when a human might value the opposite.50 Such a system would subject the user to the values of the manufacturer, creating a situation where “cars [would] not respect drivers’ autonomous preferences in . . . deeply personal moral situations.”51
Ultimately, however, the reasoning against making the manufacturer responsible might be much more grounded: if the manufacturer were responsible for all the ethical decisions of a self-driving car, “the liability burden on the manufacturer may be prohibitive of further development.”52 This would potentially deter manufacturers from developing the autonomous vehicle altogether—a socially undesirable result.53

B. The Individual
If not the manufacturer, perhaps the next most intuitive party to hold responsibility for the vehicle’s actions is the individual owner/user.54 This designation would be consistent with the already well-established concept of liability resting with the driver.55 However, because the “driver” of a self-driving car will theoretically have no role in the decision-making process,56 assigning liability would move beyond the traditional negligence standard associated with vehicles in favor of strict liability.57 Such a system would remove significant ambiguity from the legal side, but is it too much to ask a driver to potentially face full liability for the moral decisions of the car?
Holding the driver responsible creates two major issues that could cripple the self-driving car from ever taking hold. First, a strict liability standard would create a strong disincentive against individuals adopting the new technology. How many people would consistently agree to be at the mercy of liability they do not control, especially when said liability could potentially deal with significant damages to life or property? Strict liability operates best as a deterrent against a specific behavior,58 whereas negligence encourages a greater level of care when conducting that behavior.59 Assuming that self-driving cars are a societally desirable change, as this article does, strict liability would not make sense as the controlling standard. Moreover, strict liability for the driver would (at least in part) remove incentives for the manufacturer to program smart decisions, as the manufacturer would share none of the risk associated with those decisions.60
One counterargument might be that driving is already essentially a strict liability activity.61 Statistically, the average driver is likely to have a collision roughly once every 17.9 years.62 Thus, just by engaging in the activity a driver is agreeing to be liable at some point. Under this rationale, the assignment of liability would not be based on the end result but rather the risk created merely by entering a car (driverless or otherwise).63 Under such a model, owners of self-driving cars would share the responsibility of the risks the car creates.64 This result could be achieved through some sort of tax or mandatory insurance.65 The problem with this position, however, is that it ignores the idea of being morally “blameworthy” currently attributed to driving liability.66 Even if a traditional driver’s fault in an accident is inevitable,67 the reprehensible conduct that led to that specific accident still would exist.68
The other issue is that the driver would be liable for the decisions of the manufacturer while having no role in determining the ethical values behind those decisions.69 One possible solution would be to allow the driver to determine the ethical priorities of the car through a system of adjustable ethics.70 Thus, the users of self-driving cars would be able to customize their cars to reflect their own personal moral values.71 In a poll by the Open Roboethics Initiative, 44% of respondents said the passengers in the vehicle should control how it responds in ethical situations.72 Moreover, adjustable ethics might carry the added bonus of making consumers feel more comfortable holding end liability. Still, such a system would not be without drawbacks: it would create a level of unpredictability among self-driving cars, as each would behave uniquely depending on the specific ethics of the user. This might mirror more traditional driving today, but it would potentially lessen the safety and efficiency benefits that come with self-driving cars being predictable to both other cars and the environment.73
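As a rough illustration of what an “adjustable ethics” profile might look like, the sketch below exposes a handful of owner-set preferences that a planning system could consult. The setting names, defaults, and weighting scheme are hypothetical and invented for this example; they are not drawn from any proposed or existing product.

```python
# Hypothetical "adjustable ethics" profile. The field names, defaults, and
# weighting are invented for illustration; no real product's settings are implied.

from dataclasses import dataclass

@dataclass
class EthicsProfile:
    protect_occupants_first: bool = True  # prioritize the people inside the car
    swerve_for_animals: bool = False      # accept some occupant risk to avoid animals
    max_self_risk: float = 0.2            # occupant-injury probability the owner accepts
                                          # when avoiding harm to others

def occupant_weight(profile: EthicsProfile) -> float:
    """Translate the owner's toggle into a weight a planner's cost function could use."""
    return 2.0 if profile.protect_occupants_first else 1.0

# Two owners, two cars, two behaviors: the unpredictability discussed above.
default_owner = EthicsProfile()
altruistic_owner = EthicsProfile(protect_occupants_first=False, max_self_risk=0.5)
print(occupant_weight(default_owner), occupant_weight(altruistic_owner))  # 2.0 1.0
```

The point of the sketch is simply that once such settings exist, identical vehicles in identical emergencies would behave differently, which is the source of both the appeal and the unpredictability described above.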
The simplest solution to both the ethical and legal side of individual responsibility might be to require that a driver always be behind the wheel and ready to take over in emergency situations.74 Under this “duty to intervene” model, the liability would be based on the driver’s failure to pay attention and take over when necessary.75 This model would mirror the traditional decision making process made currently by drivers, thus both making liability clear and removing the need for machines to make ethical determinations in place of a human driver.76 In fact, such a requirement is already consistent with current legislation regarding self-driving cars requiring an operator present in the driver’s seat.77
However, this model poses multiple practical issues. First, requiring an operator would eliminate much of the consumer appeal of a self-driving car.78 Not only would this make impossible the comfortable notion of reading or browsing the internet while your car drives you to a destination,79 but it would also prevent self-driving cars from performing one of their largest selling points: being controlled remotely.80 For example, a consumer would not be able to use the car for tasks like sending it to pick up a child from school81 or bringing someone home from a bar.82 Second, a duty to intervene assumes the capability of humans to properly recognize dangers and react in time—something that may not be possible given the split-second in which a collision can present itself.83 Further, even if a person could react in time, there would be no guarantee that the reaction would be desirable.84 After all, approximately 90% of all accidents are caused by human error.85 Moreover, users may overreact to avoid liability and create risk where there otherwise would have been none;86 for example, if an operator mistakenly believes the car to be nearing a collision and swerves into traffic in response. For these reasons, requiring a duty to intervene may serve as a functional legal tool while the technology behind self-driving cars is still being explored but does not offer a long-term solution.

C. The Insurer
If the aim is to maximize total welfare for society, then attributing responsibility to the insurer of a self-driving car seems to effectively produce that prima facie result. One of the fundamental functions of an insurance provider is to pool risk and minimize loss.87 This goal falls in line with traditional utilitarian theory,88 which finds that actions that increase total utility are morally justified.89 Thus, a self-driving car under an insurer’s influence will always choose the “lesser of two evils” from an economic standpoint. Moreover, strong statistical evidence and a repeat presence in litigation place insurance providers in an advantageous position to justify a car’s behavior from a liability standpoint.90
Two main issues, however, surround the insurer as the responsible party—one moral and one practical. The moral issue remains the same as previously discussed: why should the owner of a vehicle be subject to the ethical values of some other entity when there exists no morally “right” answer?91 Or from a pedestrian’s perspective, why should a working class individual be targeted over a corporate executive in the event the car has to hit one? Certainly, the latter provides greater overall economic utility, but does this not infringe upon the rights of the individual?92
One does not have to entirely rely on the moralistic argument, for there is a practical reason a utilitarian perspective does not work for self-driving cars. Namely, it would create improper incentives.93 An automated car that aims to minimize overall damage will target people and objects that are less likely to suffer costly injuries. Thus, the self-driving car would choose to swerve into a car with high safety ratings rather than one with low safety ratings. Or the car would choose to hit the cyclist wearing a helmet over one without. In effect, this would create an environment where people were placed at greater risk of personal or economic harm because they took more responsible safety measures—the opposite of a societally desired effect.
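A toy worked example makes the incentive problem explicit. The probabilities and dollar figures below are invented solely for illustration; the point is only that a planner minimizing expected damages will systematically prefer the better-protected target.

```python
# Toy illustration of the perverse incentive: with invented numbers, a planner
# that minimizes expected damages prefers to strike the better-protected cyclist.

candidates = {
    # target: (probability of serious injury if struck, damages if injury occurs)
    "cyclist_with_helmet": (0.30, 200_000),
    "cyclist_without_helmet": (0.60, 200_000),
}

expected_cost = {name: p * loss for name, (p, loss) in candidates.items()}
target = min(expected_cost, key=expected_cost.get)

print(expected_cost)  # {'cyclist_with_helmet': 60000.0, 'cyclist_without_helmet': 120000.0}
print(target)         # 'cyclist_with_helmet': the safer rider becomes the preferred target
```

Under these assumed numbers, wearing the helmet lowers the expected cost of striking that rider and therefore raises the rider’s exposure, which is exactly the socially undesirable incentive described above.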

D. The Legislature
Ultimately, the legislature may be in the best position to meet the legal and ethical demands of self-driving cars. Indeed, self-driving cars are not entirely unique in posing new issues on these fronts. The shift from horse and buggy to cars, for example, posed its own set of legal and ethical challenges.94 Without transitionary laws, liability would have been too great for automobiles to take hold,95 thus highlighting an additional consideration when assigning responsibility: the existence of a transitional period.96 Many of the ethical and legal issues surrounding self-driving cars will become significantly less pressing as more and more people adopt self-driving cars.97 According to the Eno Center for Transportation,98 as many as 4.2 million accidents could be avoided if 90% of vehicles in the U.S. were self-driving.99 Moreover, roughly $450 billion could be saved in related costs.100 While unpredictable behavior from pedestrians and animals would still exist, accidents among passenger vehicles (estimated at 65% of all automobile related deaths)101 pose the largest issue to safety going forward.102 Therefore, the focus at present should be on minimizing liability for manufacturers and consumers to incentivize early adopters and allow the market to grow to the level ideal for safety and utility.

IV. Framing a Legislative Solution
The key to accomplishing these goals will be consistency in behavior, so the legislature needs to determine a consistent code by which all self-driving cars abide. The idea is that uniformity will relieve the manufacturer and consumer from large lawsuits contingent on how their one car in particular behaved.103 In line with this reasoning, the legislature should determine that all self-driving cars must act in the interest of their passengers over anything else. The idea of self-preservation is both ethically neutral and societally accepted.104 Moreover, it is consistent with current tort law, which does not favor an affirmative duty to risk one’s own well-being for others.105 Further, the legislature should consider the application of a “reasonableness standard” to machines making decisions. While people certainly expect machines to act perfectly according to their programming, it is unrealistic given the current limitations in computer science and sensory hardware to expect a self-driving car to always execute the best decision in a complex environment.106 The application of a reasonableness standard will allow for situational flexibility as a way of limiting liability as the technology improves.
Overall, an ideal, painless solution to the ethical and legal issues posed by self-driving cars may not exist. If we are to see this future become a reality, however, consistent behavior and limited liability are necessary as we transition away from human-controlled vehicles.
The Journal of the Legal Profession, 40 (Fall 2015), pp. 119–130
Reprinted with permission of the publisher


Endnotes
1 See Noah J. Goodall, “Machine Ethics and Automated Vehicles,” Road Vehicle Automation, (Gereon Meyer & Sven Beiker, eds., Springer International Publishing, 2014), p. 93.
2 See Russ Heaps, “8 Great New Advances in Auto Technology,” Bankrate (May 27, 2009), http://www.bankrate.com/finance/money-guides/8-great-new-advances-in-auto-technology-1.aspx.
3 See, e.g., C.C. Weiss, “Cadillac to Introduce Automated Driving and Vehicle-to-Vehicle Tech in 2016,” Gizmag (Sept. 12, 2014), http://www.gizmag.com/cadillac-super-cruise-v2v-2016/33769/.
4 Donna Tam, “Google’s Sergey Brin: You’ll Ride in Robot Cars within 5 Years,” CNET (Sept. 25, 2012, 2:01 PM), http://www.cnet.com/news/googles-sergey-brin-youll-ride-in-robot-cars-within-5-years/.
5 See “Self-Driving Cars Coming to a Street near You,” The Economist (Sept. 18, 2014), http://www.economist.com/news/business-and-finance/21618531-making-autonomous-vehicles-reality-coming-street-near-you.
6 See Don Howard, “Robots on the Road: The Moral Imperative of the Driverless Car,” Sci. Matters (Nov. 13, 2014), http://donhoward-blog.nd.edu/2013/11/07/robots-on-the-road-the-moral-imperative-of-the-driverless-car/#.VGVY6FfF8QS; See also Christopher Mims, “The Potential Benefits of Driverless Cars Are Stunning,” Quartz (Oct. 22, 2013), http://qz.com/138367/the-potential-benefits-of-driverless-cars-are-stunning/.
7 See Adam Gopnik, “A Point of View: The Ethics of the Driverless Car,” BBC News Magazine, http://www.bbc.com/news/magazine-25861214 (last updated Jan. 24, 2014).
8 Id., supra note 7.
9 See Alexis C. Madrigal, “If a Self-Driving Car Gets in an Accident, Who—or What—Is Liable?,” The Atlantic (Aug. 13, 2014), http://www.theatlantic.com/technology/archive/2014/08/if-a-self-driving-car-gets-in-an-accident-who-is-legally-liable/375569/ [hereinafter “If a Self-Driving Car Gets in an Accident”].
10 Id., supra note 9.
11 Judith J. Thomson, “The Trolley Problem,” 94 Yale Law Journal, 1395–96 (1985).
12 Id., supra note 11.
13 Id.
14 Id.
15 Id.
16 Id. at 1396–97.
17 Id., supra note 11, at 1395.
18 C. David Navarrete et al., “Virtual Morality: Emotion and Action in a Simulated Three-Dimensional ‘Trolley Problem,’” 12 Emotion 364, 367 (2012).
19 See Thomson, supra note 11, at 1409–10.
20 Id. at 1408.
21 Id. at 1405.
22 See generally Viviana A. Zelizer, Pricing the Priceless Child: The Changing Social Value of Children 22–58 (Princeton U. Press, 1994).
23 See Thomson, supra note 11, at 1397.
24 Id. at 1395–96.
25 See Patrick Lin, “The Ethics of Autonomous Cars,” The Atlantic (Oct. 8, 2013), http://www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/?single_page=true.
26 Restatement (Second) of Torts, § 283 (1965).
27 See Restatement (Third) of Torts, § 7 (2010).
28 See Restatement (Second) of Torts, § 283 (1965).
29 Alexis C. Madrigal, “The Trick That Makes Google’s Self-Driving Cars Work,” The Atlantic (May 15, 2014), http://www.theatlantic.com/technology/archive/2014/05/all-the-world-atrack-the-trick-that-makes-googles-self-driving-cars-work/370871/ [hereinafter “The Trick”].
30 See “The Trick,” supra note 29.
31 See Jason Millar, “Should Your Robot Driver Kill You to Save a Child’s Life?,” The Conversation (Aug. 1, 2014), http://theconversation.com/should-your-robot-driver-kill-you-to-save-a-childs-life-29926.
32 Id., supra note 31.
33 See Gopnik, supra note 7.
34 See id.
35 Dave Dickinson, “5 Issues Concerning Driverless Cars,” Listosaur (Nov. 27, 2014), http://listosaur.com/science-a-technology/5-issues-concerning-driverless-cars/.
36 See Goodall, supra note 1, at 94–98.
37 See id.
38 See id.
39 See id. at 97.
40 Gary E. Marchant and Rachel A. Lindor, “The Coming Collision between Autonomous Vehicles and the Liability System,” 52 Santa Clara Law Review 1321, 1329 (2012).
41 See id., supra note 40, at 1329.
42 See Thomson Reuters, 50 State Statutory Surveys: Civil Laws: Torts, 0020 Surveys 29 (Westlaw) (2015).
43 See id., supra note 42.
44 Barker v. Lull Engineering Co., 573 P.2d 443, 455-56 (1978).
45 See “Self-Driving Cars Coming to a Street near You,” supra note 5.
46 Open Roboethics Initiative, http://robohub.org/author/ori/ (last visited Nov. 13, 2014).
47 Open Roboethics Initiative, “If Death by Autonomous Car Is Unavoidable, Who Should Die? Reader Poll Results,” Robohub (June 23, 2014), http://robohub.org/if-a-death-by-an-autonomous-car-is-unavoidable-who-should-die-results-from-our-reader-poll/.
48 See Millar, supra note 31.
49 See Kyle Stock, “The Problem with Self-Driving Cars: They Don’t Cry,” Bloomberg Business Week (April 03, 2014), http://www.businessweek.com/articles/2014-04-03/the-problem-with-self-driving-cars-they-dont-cry.
50 See id., supra note 49.
51 Millar, supra note 31.
52 Marchant and Lindor, supra note 40, at 1334.
53 See id.
54 Alexander Hevelke and Julian Nida-Rümelin, “Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis,” Science and Engineering Ethics 620, 623–627 (June 11, 2014), available at http://link.springer.com/article/10.1007%2Fs11948-014-9565-5.
55 Tim Worstall, “When Should Your Driverless Car from Google Be Allowed to Kill You?,” Forbes (June 18, 2014), http://www.forbes.com/sites/timworstall/2014/06/18/when-should-your-driverless-car-from-google-be-allowed-to-kill-you/.
56 See Samuel Gibbs, “Google’s Self-Driving Car: How Does It Work and When Can We Drive One?,” The Guardian (May 29, 2014), http://www.theguardian.com/technology/2014/may/28/google-self-driving-car-how-does-it-work.
57 See generally Hans-Bernd Schäfer and Andreas Schönenberger, Strict Liability Versus Negligence (Munich Pers. RePEc Archive, Working Paper No. 5, 2008), available at http://mpra.ub.uni-muenchen.de/40195/1/MPRA_paper_40195.pdf.
58 See id., supra note 57, at 6–8.
59 See id. at 6.
60 See Lin, supra note 25.
61 See Worstall, supra note 55.
62 Des Toups, “How Many Times Will You Crash Your Car?,” Forbes (July 27, 2011), http://www.forbes.com/sites/moneybuilder/2011/07/27/how-many-times-will-you-crash-your-car/.
63 Hevelke and Nida-Rümelin, supra note 54, at 626–627.
64 Id. at 626.
65 Id. at 626–627.
66 Id. at 627.
67 See Toups, supra note 62.
68 Hevelke and Nida-Rümelin, supra note 54, at 627.
69 See id. at 626–627.
70 David Tuffley, “Self-Driving Cars Need ‘Adjustable Ethics’ Set by Owners,” The Conversation (Aug. 24, 2014), http://theconversation.com/self-driving-cars-need-adjustable-ethics-set-by-owners-30656.
71 Id., supra note 70.
72 Open Roboethics Initiative, supra note 46.
73 See Howard, supra note 6.
74 See Hevelke and Nida-Rümelin, supra note 54, at 623–624.
75 Id.
76 See id.
77 Cal. Veh. Code § 38750(b)(2) (West 2015).
78 Hevelke and Nida-Rümelin, supra note 54, at 624.
79 See Sherry Stokes, Consumers Expect to Use Mobile Devices, Read and Eat in Self-Driving Cars of Tomorrow, Carnegie Mellon University (Jan. 22, 2015), http://engineering.cmu.edu/media/press/2015/01_22_autonomous_vehicle_survey.html.
80 See Kevin Maney, “Google Has Shown That Self-Driving Cars Are Inevitable—and the Possibilities Are Endless,” The Independent (June 18, 2014), http://www.independent.co.uk/lifestyle/motoring/features/google-has-shown-that-self-driving-cars-are-inevitable--and-the-possibilities-are-endless-9547231.html.
81 Hevelke & Nida-Rümelin, supra note 54, at 624.
82 Id.
83 Id.
84 See Bryant W. Smith, “Human Error as a Cause of Vehicle Crashes,” The Center for Internet and Society (Dec. 18, 2013), http://cyberlaw.stanford.edu/blog/2013/12/human-error-cause-vehicle-crashes.
85 Id., supra note 84.
86 See Hevelke and Nida-Rümelin, supra note 54.
87 Brian Boone, “How Auto Insurance Companies Work,” Howstuffworks (May 30, 2012), http://money.howstuffworks.com/personal-finance/auto-insurance/auto-insurance-company2.htm.
88 “Utilitarianism.” BusinessDictionary.com., WebFinance, Inc., http://www.businessdictionary.com/definition/utilitarianism.html (last visited Nov. 13, 2014) [defined as “(a)n ethical philosophy in which the happiness of the greatest number of people in the society is considered the greatest good. According to this philosophy, an action is morally right if its consequences lead to happiness (absence of pain), and wrong if it ends in unhappiness (pain)”].
89 See generally John C. Harsanyi, “Morality and the Theory of Rational Behavior,” in Utilitarianism and Beyond 39, 62 (Amartya Sen and Bernard Williams, Cambridge U. Press, 1982).
90 “Self-Driving Cars and Insurance,” Insurance Information Institute (Feb. 2015), http://www.iii.org/issue-update/self-driving-cars-and-insurance.
91 See Tuffley, supra note 70.
92 See id.
93 See id.
94 See generally Eric Morris, From Horse to Horsepower: The External Costs of Transportation in the 19th Century City (2006) (M.A. Thesis, UCLA), available at http://www.uctc.net/access/30/Access%2030%20-%2002%20-%20Horse%20Power.pdf.
95 Morris, supra note 94.
96 See id.
97 Phil LeBeau, “Take the Wheel Please, I’m Done Driving,” CNBC (Aug. 18, 2014), http://www.cnbc.com/id/101913796#.
98 Eno Center for Transportation, https://www.enotrans.org/ (last visited Nov. 13, 2014).
99 “Preparing a Nation for Autonomous Vehicles,” Eno Center for Transportation 8 (Oct. 2013), https://www.enotrans.org/wp-content/uploads/wpsc/downloadables/AV-paper.pdf.
100 “Preparing a Nation for Autonomous Vehicles,” supra note 99, at 17.
101 NHTSA, 2012 Motor Vehicle Crashes: Overview, U.S. Department of Transportation (2012), http://www-nrd.nhtsa.dot.gov/Pubs/811856.pdf.
102 NHTSA, supra note 101, at 8.
103 See Hevelke and Nida-Rümelin, supra note 54, at 629.
104 See Erich Fromm, Man for Himself: An Inquiry into the Psychology of Ethics 19 (Open Road Media, 2013).
105 Restatement (Third) of Torts, § 7 (2010).
106 See Lin, supra note 25; See also Stock, supra note 49.

Sacrificing the Few to Save the Many
Rabbi J. David Bleich
…for one life is not set aside for another
Oholot 7:6

The date 9/11 is indelibly imprinted upon the national consciousness of America. The horrific loss of life in the terrorist attacks upon the Twin Towers of the World Trade Center was an unforgettable tragedy; the attack upon the Pentagon, the nerve-center of military security, exposed the vulnerability of the nation’s defense apparatus. But it was the fourth, thwarted, attack that is remembered for the heroism of the victims.
A domestic passenger flight flying from Newark International Airport to San Francisco, United Airlines Flight 93, was hijacked by terrorists some forty-five minutes into the flight. The hijackers breached the cockpit, overpowered the pilots and, taking control of the aircraft, directed it towards Washington, D.C. The hijackers’ intended target is thought to have been the White House or possibly the Capitol.
The terrorists’ master plan apparently called for carrying out that attack simultaneously with the attacks upon the World Trade Center and the Pentagon. However, due to airport congestion, the airplane was delayed on the ground more than half an hour. During the course of the hijacking, flight attendants and passengers, using GTE air phones and cell phones, succeeded in making numerous calls to family and friends, as a result of which they learned of the other terrorist attacks.
The passengers, apparently on the basis of a vote, determined to seize the controls of the plane from the hijackers. Of the ensuing events little is known with certainty. Early reports conjectured that the passengers were successful in overtaking the plane and that they knowingly caused the plane to crash in order to prevent greater loss of life. Strikingly, the mother of one of the passengers, herself a United Airlines flight attendant, left a message on her son’s cell phone urging an attempt to take over the aircraft. Much later, a report issued by a government investigating commission gave no indication that the passengers broke through the cockpit door but made it clear that the passengers’ actions thwarted the plans of the terrorists. Recordings of the cockpit conversations reveal that the terrorists feared that they would imminently lose control and debated whether to crash the plane immediately or whether to delay such action. The passengers’ death certificates state the cause as homicide and those of the hijackers list suicide as a cause of death. It is unclear whether the hijackers ultimately did crash the plane deliberately or whether they simply lost control. Had the passengers been successful in gaining control of the plane, the ending might well have been much happier. Among the passengers was an aviation executive who had extensive experience in a cockpit as a private pilot. Another passenger was experienced as an air traffic controller with the Air National Guard. Given those facts, there is scant reason to question the halakhic propriety of the course of action taken by the passengers.
Far more complex is the issue of purposely shooting down the plane and thereby causing the death of the innocent passengers.1 Air Force and Air National Guard fighter jets were unable to intercept the planes headed to the World Trade Center and the Pentagon but indications are that they would have reached the fourth plane in time to prevent it from reaching Washington. The option of shooting down the commercial jet was certainly given serious consideration and a decision to do so may actually have been reached.
The propriety of purposely causing the immediate death of the passengers aboard the plane in order to prevent further loss of life hinges upon the applicability of a provision of Halakha recorded by Rambam, Hilkhot Yesodei ha-Torah 5:5:
…if the heathen said to them, “Give us one of your company and we shall kill him; if not we will kill all of you,” let them all be killed but let them not deliver to them [the heathens] a single Jewish soul. But if they specified [the victim] to them and said, “Give us so and so or we shall kill all of you,” if he had incurred the death penalty as Sheba the son of Bichri, they may deliver him to them…but if he has not incurred the death penalty let them all be killed but let them not deliver a single Jewish soul.
Rambam’s ruling is based upon the explication of the narrative of II Samuel 20:4-22 found in the Palestinian Talmud, Terumot 8:12. Joab, commander of King David’s troops, had pursued Sheba the son of Bichri and besieged the town of Abel in which Sheba sought refuge. Thereupon Joab demanded that Sheba be delivered to the king’s forces; otherwise, Joab threatened to destroy the entire city. On the basis of the verse, “Sheba the son of Bichri has lifted up his hand against the king, against David” (II Samuel 20:21), Resh Lakish infers that acquiescence to a demand of such nature can be sanctioned only in instances in which the victim’s life is lawfully forfeit, as was the case with regard to Sheba the son of Bichri who is described as being guilty of lèse majesté. However, in instances in which the designated victim is guiltless, all must suffer death rather than become accomplices to murder. R. Yohanan maintains that the question of guilt is irrelevant; rather, the crucial factor is the singling out of a specific individual by the oppressor. Members of a group have no right to select one of their number and deliver him to death in order to save their own lives. Since the life of each individual is of inestimable value there is no basis for preferring one life over another. However, once a specific person has been marked for death in any event, either alone if surrendered by his companions or together with the entire group if they refuse to comply, those who deliver him are not to be regarded as accessories. Rambam’s ruling is in accordance with the opinion of Resh Lakish. Both opinions are cited by Rema, Shulhan Arukh, Yoreh De’ah 157:1.2
Sheba ben Bichri was doomed to die in any event. Since Sheba would perish together with the other inhabitants of the besieged city, refusal to deliver him to the hands of the enemy would have served to spare his life for only a brief period. It is evident that the discussion in the Palestinian Talmud is predicated upon the premise that it is forbidden to cause the loss of even hayyei sha’ah, i.e., a brief or limited period of longevity anticipation, of a particular individual in order to preserve the normal longevity anticipation of a multitude of individuals.
Hazon Ish, Gilyonot le-Hiddushei Rabbenu Hayyim ha-Levi, Hilkhot Yesodei ha-Torah 5:1, s.v. u-mi-kol makom, demonstrates the same principle on the basis of his analysis of the well-known controversy between R. Akiva and Ben Petura recorded by the Gemara, Bava Mezi’a 62a. The case involves two people who are stranded in a desert with but a single container of water. There is sufficient water to sustain one person until he reaches safety; however, if the water is shared, neither will survive. Ben Petura declares that they should share the water and “let not one witness the death of his fellow.” R. Akiva rules that the owner of the water should drink it himself in order to save his own life. In support of that ruling R. Akiva cites the verse, “that your brother may live with you” (Leviticus 25:36). That biblical command requires that a person enable his brother to live with him but not that he prefer his brother over himself. If so, the life of another person should not be preferred over one’s own life, with the result, as announced by R. Akiva, that “your life takes precedence.”
Hazon Ish, as were others before him, was troubled by the fact that Ben Petura’s conflicting ruling is apparently refuted by R. Akiva’s quite cogent inference drawn from the verse, “that your brother may live with you”—but not at the expense of your life.3 Accordingly, Hazon Ish asserts that Ben Petura is actually in agreement with the basic principle enunciated by R. Akiva with the result that, in a situation in which only one person can receive any longevity benefit, one’s own life takes precedence over that of another. Thus, for example, if two individuals have been poisoned and one of the two is in possession of a sufficient quantity of an antidote to save one person but, if divided, the quantity available will prolong the life of neither, Ben Petura would agree that the owner of the antidote must administer it to himself. Ben Petura would also concede that, if the antidote belongs to a third party, the halakhic rules of triage would apply. Ben Petura, asserts Hazon Ish, disagrees only in the case of the container of water because, if the water is shared, the life of each of the stranded persons will be prolonged at least minimally, whereas administering less than the requisite dose of the antidote would be entirely without purpose. Ben Petura, in disagreeing with R. Akiva, does so because he recognizes a duty of rescue with regard to even hayyei sha’ah. Since sharing the water serves to prolong the life of both at least minimally, such sharing is an act of loving one’s fellow as oneself, i.e., both lives are rendered equal. Put in somewhat different terms, since every moment of life is of infinite value and all infinities are equal, the fact that if one person drinks the entire quantity of water he will survive and enjoy a normal life-span is of no consequence. R. Akiva disagrees in maintaining that loving one’s fellow as oneself, but not more than oneself, requires preservation of one’s own normal longevity anticipation even if such rescue precludes prolongation of the hayyei sha’ah of another.
Consequently, argues Hazon Ish, even according to R. Akiva, it is only self-preservation that can excuse ignoring the hayyei sha’ah of another. It would then follow that, if the container of water belongs to a third party who is not in danger of perishing as a result of dehydration, that person, even according to R. Akiva, must divide the water equally between the two persons at risk. The principle that emerges is that a person dare not ignore the hayyei sha’ah of one putative victim even to carry out the complete rescue of another victim or even of many such victims.4 A fortiori, an overt act having the effect of extinguishing even an ephemeral period of life-anticipation of even a single individual cannot be countenanced in order to save the lives of many.
The only situation in which the life of another individual may be sacrificed in order to rescue a putative victim is the case of rodef, or pursuer. In such cases, as codified by Rambam, Hilkhot Rozeah 1:9, intervention in order to preserve the life of the victim is mandatory. In the case under discussion, the airplane pilot certainly must be categorized as a rodef even though he acts under duress. However, the passengers inside the airplane are in no way complicit in the potential death of the innocent people in the building targeted for destruction. Although the life of the rodef is forfeit, provided that taking the life of the rodef is necessary in order to rescue the victim, it is not permissible to cause collateral damage in the nature of the death of an innocent third party in eliminating the threat posed by the rodef. The pilot, who is intent upon using the airplane to bring death upon innocent victims, may be prevented from doing so even if it is necessary to kill him in order to accomplish that end5 but it is not permissible to cause the death of innocent passengers who are no more than passive bystanders even for the purpose of preserving the lives of a greater number of people.6
Some time ago, Philippa Foot presented a moral dilemma in formulating the “Trolley Problem.”7 The situation involved the driver of a trolley rounding a bend. Five track workmen are seen to be engaged in repairing the track. The track is surrounded on two sides by a steep mountain and the trolley is travelling much faster than the workmen could possibly run. The driver steps on the brake in order to avoid striking the five men. Tragically, the brakes fail. The driver sees a spur of track in front of him leading off to one side. The driver can quite easily steer the trolley so that it will travel down the spur and thus save the lives of the five men on the track straight ahead. Unfortunately, there is a single workman who is repairing that spur. The workman cannot possibly get off the track before the trolley hits him. If the driver does nothing, five men will perish. If he turns the trolley onto the spur, only one person will die, but that person is at this moment in no danger whatsoever. Is it morally permissible to turn the trolley so that it claims the life of a single person in order to rescue five individuals?8
A situation quite similar to that of the runaway trolley is described by Hazon Ish, Sanhedrin, no. 25, s.v. ve-zeh le-ayyein. Hazon Ish describes a situation in which a bystander witnesses the release of an arrow aimed at a large group of people. The bystander has the ability to rescue the intended victims by deflecting the arrow; however, if he does so, the arrow will claim a single victim who heretofore was endangered in no way whatsoever. Hazon Ish expresses doubt with regard to the permissibility of such intervention. If the hypothetical is changed from an arrow to a hand grenade, the moral dilemma acquires contemporary relevance.
Hazon Ish’s perplexity seems to be based upon the possibility of considering the situation to be analogous to triage. A person who comes upon multiple innocent victims must perforce choose which of the many victims he will attend. Triage for the purpose of rescue is quite different from selection for death. A person does not have license to designate another individual for death even if his motive is the rescue of a far larger number of lives. However, if the same person can save one, but not all, of the victims, he is required to intervene. In doing so, he is in no way complicit in the death of others. His selection is for rescue rather than for death: acts of rescue are profoundly different from acts of homicide. A person capable of doing so may, and indeed must, save as many lives as possible even if such rescue entails abandoning one or more victims to their fate.
To be sure, in the case of the runaway trolley, the flying arrow, or the hand grenade, the original act to which the ultimate death of the victims is to be attributed was already completed before the moral agent confronts a choice between intervening for the purpose of deflecting a lethal weapon and passive non-intervention. The motive for intervention is certainly the rescue of those who will be saved, not the death of the person who will actually die in their stead. It is, however, difficult to understand how intervention in the form of deflecting a lethal weapon can be regarded as a simple act of rescue governed by principles of triage. In the given hypothetical, it is the act of deflection that endangers a previously unendangered person. The intervener has not simply rescued a larger group of victims; he has in an active, overt manner caused the death of an innocent person. Hazon Ish himself emphasizes that even R. Yohanan, who maintains that in the event that the victim has already been designated for execution he may be delivered to the enemy, would agree that one may not actually kill the designated victim in order to spare other victims. Hazon Ish argues that R. Yohanan permits only indirectly hastening the death of a designated victim by delivering him to the enemy but does not sanction a direct act of homicide.9 It seems to this writer that deflection of a lethal weapon constitutes a direct rather than an indirect act. That objection has been raised by R. Benjamin Rabinowitz-Te’umim, No’am, VII (5724), 375.10 R. Eli’ezer Waldenberg, Ziz Eli’ezer, XV, no. 70, addresses Hazon Ish’s hypothetical and concludes that passive non-intervention is the only acceptable mode of conduct.
Putting that point aside, any argument in support of deflecting the arrow must be based on the premise that a distinction can be drawn between intervention for purposes of neutralizing a direct danger and overt delivery to death in order to ward off future deaths.
There are authorities who, in certain limited cases, permit sacrificing one life to save another when failure to intervene would result in the death of both persons. The question of killing a neonate whose forehead has emerged from the birth canal in order to save the life of the mother when, otherwise, the lives of both would be forfeit is raised by R. Akiva Eger, Tosafot R. Akiva Eger, Oholot 7:6, no. 16, but left unresolved. R. Akiva Eger does however cite Teshuvot Panim Me’irot, III, no.8, who rules that such a course of action is permissible. R. Israel Lipschutz, Tiferet Yisra’el, Oholot 7:6, Bo’az, sec. 10, similarly comments that, “perhaps it is permissible to destroy the infant in such circumstances in order to rescue the mother.”11
R. Saul Mordecai Arieli, Or Yisra’el, vol. 8, no.3 (Adar Sheni 5763), compares the case of the airplane seized by terrorists planning to crash it into the Pentagon to the situation of the mother in childbirth in arguing that it is permissible to cause the death of innocent passengers by shooting down the plane since the latter are doomed in any event. Nevertheless, it is virtually self-evident that such a comparison is inapt. If correct, that reasoning should apply as well to the situation described in the Palestinian Talmud in which one person has already been designated for death and failure to deliver him to the enemy will result in the death of all. R. Yohanan and Resh Lakish disagree with regard to whether it is permissible to become even indirectly complicit in the death of the doomed victim by delivering him for execution, but all agree that he may not actively be put to death in order to spare the lives of others.12 The case of the woman in childbirth is quite different because both mother and child are reciprocal pursuers, i.e., each is in the process of causing the death of the other. Non-intervention is mandated as explained by the Babylonian Talmud, Sanhedrin 72b, because “heaven is pursuing her” or, in the words of the Palestinian Talmud, because “you do not know who is pursuing whom.” The rationale for non-intervention is either that a natural physiological process is an “act of God,” and hence the person who is the instrument of danger cannot be categorized as a pursuer, or that intervention in cases of pursuit is warranted only in order to preserve the life of the victim but in a case of mutual pursuit each party is both victim and aggressor and, consequently, the rule is non-intervention.
The authorities who permit intervention in childbirth do so only because they regard the exception to the law of the pursuer formulated by the Gemara to be limited to activities of mutual aggression in which, barring intervention, one will prevail while the other will perish but the bystander is in no position to determine which of the two will live and which will die. In such situations the understanding that the bystander may not choose between the two is on the basis of the law of pursuit. However, they contend, if in the absence of intervention, both will die, each of the two must be regarded with certainty as being a rodef. Accordingly, those authorities regard intervention in order to rescue one of the parties to be permissible since, fundamentally, intervention is for the purpose of eliminating a rodef whose life is forfeit. Consequently, they regard intervention as a permissible act of rescue rather than as an act of selection for death. However, it is quite evident that this line of reasoning is not applicable in the case of innocent airplane passengers who are not at all engaged in pursuit. There are no grounds actively to cause the death of passengers in order to save others despite the fact that they, too, are doomed.
Me’iri, Sanhedrin 72b, expresses the novel view that, in the case of a woman in “hard travail,” although a third party is barred from destroying the infant in order to save the mother because of inapplicability of the law of pursuit, nevertheless the mother herself may exercise the right of self-defense to save her own life. A similar view is held by R. Joseph Saul Nathanson, Teshuvot Sho’el u-Meshiv, I, no. 22.13
Nor is the situation comparable to that described by the Gemara, Bava Kamma 26b, in which a child is thrown from a roof but, before landing and perishing from the fall, he is stabbed to death by another person. The majority view is that neither party incurs capital punishment. As explained by the medieval commentaries, the first performed a lethal act but did not actually kill. The second administered a coup de grace, but only after a lethal act had already been initiated and thus his act is comparable to extinguishing the life of a treifah. Causing the death of a treifah is an act of non-capital homicide because the cause of death has already been set in motion and capital punishment is administered only if the perpetrator has extinguished kol nefesh, i.e., “an entire life.” The airplane passengers cannot be considered as being in that category because no lethal act that would result in the loss of their lives has actually been initiated.14 Moreover, in terms of normative Halakhah, a treifah may not be killed to preserve the life of another individual.15
Rescue of human life is a divine mandate, but that imperative does not constitute license to commit an overt act of homicide. At times, passive non-intervention is the only morally acceptable option.
Contemporary Halakhic Problems, Volume VI (Jersey City: Ktav Publishing, 2012), ch. 2, pp. 39–50
Reprinted with permission of the author

Endnotes
1 Voluntary self-sacrifice in order to rescue the community of Israel is sanctioned by Yeshu’ot Ya’akov, Yoreh De’ah 157:1, on the basis of a report of such an act recorded in Bava Batra 10b and the approbation lavished upon those individuals. Cf., however, R. Abraham I. Kook, Mishpat Kohen, no. 143, who states that the incident described in Bava Batra demonstrates only that persons who would themselves perish together with the community may act in that manner. Nevertheless, Mishpat Kohen, nos. 143–144, presents another argument in sanctioning martyrdom for the purpose of rescuing the community of Israel. See also R. Meir Don Plocki, Hemdat Yisra’el, Parashat Pinhas, sec. 1. Cf., however, R. Meir Simcha ha-Kohen of Dvinsk, Or Sameach, Hilkhot Rozeah 7:8 and idem, Meshekh Hakhmah, Parashat Shemot, s.v. lekh shuv mizraymah. Nevertheless, voluntary self-sacrifice is not sanctioned simply to rescue a larger number of people. Cf., Ve-Ha’arev Na, 106–107.
2 In the case of the hijacked airplane the passengers may be regarded as specified because, in the absence of intervention, they would all die. If intervention were sanctioned, sacrificing their lives would save others but there was no possibility of sacrificing others in order to save them. Nevertheless, even if victims are specified, R. Yohanan does not sanction actively hastening death in order to save others.
3 Cf., Ramban, Commentary on the Bible, Leviticus 19:18, who observes that, according to R. Akiva, “and you shall love your neighbor as yourself” should be understood in a similar manner.
4 Cf., however, Hazon Ish’s apparently contradictory comments, Hoshen Mishpat Nezikin, Likkutim, no. 20, Bava Mezi’a 62a.
5 The plane itself also has the status of a rodef despite the absence of any element of moral culpability because it is the weight and velocity of the plane that will cause the victims to perish. However, it is unlikely that the incremental weight of the passengers presents a danger that would otherwise not exist; hence, none of the passengers can be considered a rodef. See R. Shlomoh Zefrani, Shimru Mishpat (January 5, 5764), no. 128.
6 A case of such nature arose during World War II. A German “spy ring” consisting of double agents supplied the German government with information concerning the areas in which V1 and V2 rockets were falling. It was proposed that the double agent transmitting information to the German military report that most aimed a number of miles to the south. The purpose was to assure that future rockets would fall in Kent, Surrey or Sussex where there would be far fewer casualties than in London. The proposal was reportedly placed before the Cabinet by Herbert Morrison, the Home Secretary, but with the negative comment that the report would soon be known by the British populace to be untrue and “doubts would be cast upon the accuracy of Government statements generally.” Churchill was abroad at the time, but the Cabinet rejected the proposal on the grounds that the British government was not justified in choosing to sacrifice unendangered citizens in order to save others. See Sefton Delmer, The Counterfeit Spy (London, 1971), p. 209. Despite the Cabinet veto, the deception team continued with their efforts to trick the Germans into correcting the aim and range of the German rockets. Morrison, with furious indignation, once again brought the matter before the Cabinet. He is reported to have exclaimed, “Who are we to act as God? Who are we to decide that one man shall die because he lives on the South Coast while another survives because he lives in London?” Morrison’s view ultimately prevailed. See ibid., p. 214.
7 The problem was first advanced by Philippa Foot, “The Problem of Abortion and the Doctrine of the Double Effect,” Oxford Review, no. 5 (Oxford, 1967), pp. 5–15, reprinted in Philippa Foot, Virtues and Vices and Other Essays in Moral Philosophy, 2nd ed. (Oxford, 2002), pp. 19–32.
8 Philippa Foot’s example is somewhat complicated by the fact that, if the five were already on the track before the locomotive was set in motion, it is the driver who set in motion the chain of events that will lead to loss of life. Can he “remedy” an act that would cost five lives by modifying the act so that it will result in the loss of a different, but single, life? The hypothetical could just as readily involve a situation in which the driver has had a seizure and it is now a passenger who must take over the controls. The difference is that the passenger has heretofore been in no way involved in the performance of an act that will lead to fatality. May he now cause the death of one person in order to save five others? That dilemma is discussed by Professor Foot in “Killing and Letting Die,” Abortion: Moral and Legal Perspectives, J. Garfield, ed. (Amherst, 1985), pp. 177–85, reprinted in Philippa Foot, Moral Dilemmas and Other Topics in Moral Philosophy (Oxford, 2002), pp. 78–87, seemingly without an awareness of the difference between the two hypotheticals. See Moral Dilemmas, p. 85. It is not at all clear that a distinction should be made between the two cases. If a distinction is to be made it would be on the basis of the following consideration: Although overtly causing the death of one person to save a group of individuals is forbidden because each life is of infinite value, nevertheless, if the potential intervener himself set the chain of events in motion, even unintentionally, he has a duty to minimize his own transgression. That consideration may be sufficient grounds to deflect the arrow or trolley so that he is responsible for fewer acts of homicide. A discussion of that issue is beyond the scope of this endeavor but should focus upon an analysis of Shabbat 4a.
9 Absent designation, even indirectly causing the death of another in order to save one’s own life is included in the category of yehareg ve-al ya-avor. See R. Yeruchem Yehuda Perilman, Or Gadol (Vilna, 5684), no. 1, Hilkhot Yesodei ha-Torah 5:2, s.v. veh-ha-nireh (p. 5a) and s.v. ve-ha-iker (p. 10b) and Hazon Ish, Gilyonot le-Hiddushei Rabbenu Hayyim ha-Levi, Hilkhot Yesodei ha-Torah 5:1, s.v. ve-yesh lomar. Cf., however, Galya Masekhta (Vilna, 5605), Yoreh De’ah, no. 5, s.v. u-ke-ein he’arah (p. 92a).
10 Capital culpability is an entirely different matter. The arrow, for example, reaches its goal only because of the combination of the force imparted by the archer and the channeling of that force by the intervener. The situation seems analogous to a case of an arsonist who lights a fire that is carried by the wind and causes death. In that case, it is the act of lighting a fire together with the force of wind that causes death. Capital culpability in such circumstances is dependent upon the Talmudic controversy regarding isho mishum hizav or isho mishum memmono.
11 For fuller discussion of this question see this writer’s Contemporary Halakhic Problems, Vol. 1 (New York, 1977), 355–361.
12 See also Ve’Ha’arev Na, I, 106.
13 That view is dismissed by R. Moshe Yonah Zweig, No’am, VII, 55, as “the opinion of an individual” having no standing in the determination of Halakhah.
14 Depiction of such a victim as a gavra ketila, or “dead man,” is equally specious. That classification is applicable to a transgressor actually under sentence of death pronounced by a bet din. Causing a death of such an individual is no more than giving effect to the sentence of the bet din.
15 See R. Ezekiel Landau, Teshuvot Noda bi-Yehuda, Mahadura Tinyana, Hoshen Mishpat, no. 59 and R. Schneur Zalman of Lublin, Teshuvot Torat Hesed, Even ha-Ezer, no. 42. Cf., however, Minhat Hinnukh, no. 296, R. Judah Rosannes, Parashat Derakhim, Derush 17, and R. Jacob Emden, Migdal Oz, Even Bohen 1:79, and R. Meir Arak, Tel Torah, Yerushalem, Terumot 8:3.
