The Moral Machine and the Trolley Problem

The normative and descriptive Trolley Problems are closely related: any attempt to solve the normative problem begins with an attempt to solve the descriptive one, that is, to identify the features of actions that elicit our moral intuitions. There are many different versions of the trolley problem, all aimed at a moral dilemma (Foot, 1967; Thomson, 1985), and since its invention by Philippa Foot it has become perhaps the most fecund thought experiment of the past several decades. As Rakowski observes of the exchange between the two leading contributors to this moral philosophical debate, the lectures, commentaries, and replies are "absolutely invaluable for future work on the trolley problem" (p. 5).

The Moral Machine Experiment is a multilingual online "game" for gathering human perspectives on moral dilemmas, specifically trolley-style problems in the context of autonomous vehicles. In addition to reprising the most famous trolley problem (all of the Moral Machine scenarios involve a car experiencing sudden brake failure on a two-lane road), the scenarios introduce variations such as the number of people involved, their age, sex, and social status, and whether they are passengers or pedestrians. We are entering an age in which machines are tasked not only to promote well-being and minimize harm, but also to distribute the well-being they create and the harm they cannot eliminate. Critics, however, have argued that the Moral Machine is bad news for AI ethics.
Both of these grave dilemmas constitute the trolley problem, a moral paradox first posed by Philippa Foot in her 1967 paper on abortion and the doctrine of double effect, and later expanded by Judith Jarvis Thomson. Thomson's "bystander" and "footbridge" versions of the problem induce different intuitive judgments about what to choose in the ethical dilemma. It is a thought experiment analyzed and dissected in ethics classes, and what your answer tells us about right and wrong is precisely the point.

Is there a solution? The answer, in my view, is that there is no definitive one. Unavoidable harm dilemmas, such as the classic trolley problems, are fundamentally hard to solve to everyone's satisfaction. Self-driving cars present new technological quagmires for this old moral quandary: a driverless car is speeding down a road and cannot stop. A few years ago, MIT released an online tool, the Moral Machine, intended to "crowdsource" ethical decisions of exactly this kind. Researchers have also proposed solving the single-vehicle self-driving car trolley problem using risk theory and vehicle dynamics, while others suggest putting the moral decisions related to crash-avoidance and self-driving features into the hands of the consumer; after all, it is your vehicle and you will be behind the wheel.

One way to structure the discussion is to first say more about the trolley problem and the main issues discussed in the literature on it, and then explain the three main differences between the ethics of accident management in self-driving cars, on the one hand, and the trolley problem and "trolleyology" on the other.
Against this background, one can investigate the permissibility of the trade-offs involved. In total, the Moral Machine collected nearly 40 million individual decisions, which the researchers then analyzed as a whole, or in groups defined by respondents' age, education, gender, and income. First, the researchers summarize global moral preferences; second, they document individual variations in preferences based on respondents' demographics. Some commentators, however, point to a dark side of the Moral Machine and a fallacy in computational ethical decision-making, defending the need to consider the more immediate, realistic, and important ethical questions that machine perception and decision-making present.

The traditional trolley problem in philosophy became famous through a debate between Philippa Foot and Judith Jarvis Thomson, beginning with Foot's paper on the problem of abortion and the doctrine of double effect. Unlike humans, self-driving cars do not have an internal moral code, which is why a group at MIT built the Moral Machine, "a platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars". Autonomous vehicles represent a crucial challenge for artificial intelligence ethics. The platform ultimately collected 39.61 million decisions in 233 countries, dependencies, or territories, and in 2021 Germany passed the first law worldwide that regulates dilemma situations with autonomous cars.
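The kind of aggregation described above, grouping millions of binary choices by respondent demographics, can be sketched in a few lines. This is an illustrative reconstruction, not the researchers' actual pipeline; the record layout, the field names, and the "spare the larger group" preference metric are all assumptions for the sake of the example.

```python
from collections import defaultdict

# Each record is one decision: which outcome the respondent chose,
# plus demographic fields (all field names here are hypothetical).
decisions = [
    {"country": "DE", "age_band": "18-29", "spared_larger_group": True},
    {"country": "DE", "age_band": "30-49", "spared_larger_group": True},
    {"country": "JP", "age_band": "18-29", "spared_larger_group": False},
    {"country": "JP", "age_band": "30-49", "spared_larger_group": True},
]

def preference_by(decisions, key):
    """Share of decisions that spared the larger group, split by one demographic key."""
    counts = defaultdict(lambda: [0, 0])  # group -> [spared, total]
    for d in decisions:
        group = counts[d[key]]
        group[0] += d["spared_larger_group"]  # True counts as 1
        group[1] += 1
    return {g: spared / total for g, (spared, total) in counts.items()}

print(preference_by(decisions, "country"))   # e.g. {'DE': 1.0, 'JP': 0.5}
print(preference_by(decisions, "age_band"))
```

The same grouping function can be reused for any demographic column, which is essentially how a single pool of decisions yields both the global summary and the per-group breakdowns the researchers report.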
The Moral Machine is an interactive online platform by MIT that presents users with different variations of the trolley problem and collects data on people's ethical preferences. It attracted worldwide attention, posing trolley-style problems in which an autonomous vehicle must "choose" between two undesirable alternatives. In the main interface, users are shown unavoidable accident scenarios with two possible outcomes, depending on what the car does. For this purpose, studies like the Moral Machine Experiment (Awad et al., 2019) have collected data about human decision-making in trolley-like traffic dilemmas, and the Moral Machine has revealed how attitudes differ across the world. It is now up to the research and industrial sectors to enhance the development of autonomous vehicles based on such guidelines.

The Oxford philosopher Philippa Foot published an article in 1967 which had an enormous impact on philosophical debates in the following decades worldwide. The classic trolley problem goes like this: you see a runaway, out-of-control trolley speeding down the tracks, about to hit and kill five people. In the switch version, the direct effect of diverting the trolley seems to be saving lives, while the sacrifice of one life is only a secondary effect, which is why the doctrine of double effect is often invoked. Discussions of the topic typically begin with a brief overview of these standard Trolley Problem Cases.
The term is often used more loosely with regard to any choice that seemingly involves a trade-off between what is good and what sacrifices are acceptable. The survey has global reach and a unique scale, with over two million online participants from over 200 countries weighing in on versions of the classic ethical conundrum: an out-of-control trolley approaches a branch; to the left, one person is trapped on the tracks, and to the right, five people. If you pull a lever, the trolley will be diverted onto the track with the single person. Psychologists use variations on this hypothetical situation to gauge people's gut reactions, and the more intricate situations posed by the Moral Machine experiment have even been used to probe the moral reasoning of large language models. In the footbridge variant, our moral intuitions rebel against the action of pushing a man off the bridge, even though the arithmetic of lives saved is the same.

The Moral Machine is the largest project of its kind; the dilemmas studied in research on the moral programming of autonomous vehicles are adaptations of the original trolley problem. By the time of publication, the experiment had collected nearly 40 million data points from around the world, with the problem presented as multiple variations.
The platform is the idea of Iyad Rahwan and social psychologists Azim Shariff and Jean-Francois Bonnefon. The normative Trolley Problem begins with the assumption that our natural responses to these cases are generally, if not uniformly, correct. A simple and widely used version runs as follows: imagine you are standing at a switch and a trolley is speeding towards five people tied up on the rails; if you pull the lever, it is diverted onto a side track where it will kill one person instead.

Inspired by the trolley problem, Bonnefon and colleagues conducted the now-famous Moral Machine Experiment. The experiment defied all expectations, receiving tens of millions of responses from 233 countries, dependencies, or territories, making it one of the largest moral surveys ever conducted. The trolley problem, as it came to be known, was first identified as such by the American philosopher Judith Jarvis Thomson, whose essay "Killing, Letting Die, and the Trolley Problem" (1976) spawned a vast academic literature on the topic. Some scholars hope to use crowdsourced data of this kind to program autonomous vehicles; we caution against this by identifying four problems.
At this point, it is worth discussing the relationship between the doing/allowing distinction and the famous Trolley Problem. The Moral Machine game poses essentially the same question (would you kill one person to save five?), but calibrated for the autonomous car. The researchers based their survey queries largely on the trolley problem, according to Azim Shariff, a psychologist and co-author of the study. Through the Moral Machine experiment, researchers posed self-driving car scenarios that compelled participants to decide, for instance, whether to kill a homeless pedestrian or an executive pedestrian; results revealed that participants' choices depended on the level of economic inequality in their country. Related work considers three competing explanations of the empirical finding that people's causal attributions are responsive to normative details, such as whether an agent's action violated a norm.

The machine uses modified forms of the Trolley Problem, a faux ethical dilemma in which a rail switch operator must decide whom an oncoming train will kill. Weighing up whom a self-driving car should kill is thus a modern twist on an old ethical dilemma. Like most philosophical problems, the Trolley Problem is not designed to have a solution; at first glance, though, it looks like a pretty straightforward bit of moral math.
The problem immediately suggested a broader application of the doctrine of double effect beyond its original context. The Trolley Problem arises from a set of moral dilemmas, most of which involve trade-offs between causing one death and preventing several more deaths; however, the Moral Machine Experiment cannot tell the underlying preferences apart.

Conceptualized by Iyad Rahwan, Jean-Francois Bonnefon, and Azim Shariff, MIT's Moral Machine is an interactive website reimagining the classic trolley problem as a driverless dilemma. The scenarios have the flavor of the trolley problem, the philosophical thought experiment that highlights a fundamental tension between two schools of moral thought, the utilitarian and the deontological. In the platform's words: "We generate moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians." The problem involves scenarios in which an accident is imminent and the vehicle must opt for one of two potentially fatal options. Prior research relying on trolley-like, moral-impersonal dilemmas suggests that people might apply similar norms to humans and machines but judge their identical decisions differently. Since the trolley problem is purely hypothetical, it can be easily adapted for other contexts; the Moral Machine project, of course, adapts it for the road.
However, the comprehensive application of this evaluative framework remains underrepresented in contemporary studies, making it a pivotal subject for future research (for the different uses of trolley cases and their limitations, see Himmelreich, 2018). The original problem was posed by Philippa Foot in a philosophical paper about abortion and the doctrine of double effect, and the MIT Moral Machine study has prompted much recent coverage. The Moral Machine is interesting precisely because there is no objectively correct response to any of the scenarios, so your own judgement may differ greatly from that of other people, or of autonomous cars. The utilitarian perspective dictates that the most appropriate action is the one that achieves the greatest good for the greatest number. Users are presented with approximately 13 scenarios and asked to choose one of two outcomes in each. Most of us would sacrifice one person to save five, and the success of autonomous vehicles may depend, in part, upon transparency about how much of the trolley problem continues to exist in their design.

The Moral Machine has revealed how attitudes differ across the world. In the current state of the art, we find studies on how ethical theories can be integrated, and attempts to develop the public discourse towards the actual problems of autonomous vehicles. In essence, the Moral Machine project seeks to crowdsource guidelines for the programming of autonomous vehicles. The trolley problem is all about how far you would be willing to go to save lives in an emergency, even if it meant killing somebody; when Philippa Foot came up with the thought experiment in 1967, part of the point was that we cannot know the exact outcomes of the ethical decisions we make.
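The mechanics just described, roughly 13 forced choices between two outcomes, can be made concrete with a small sketch. The data layout and the purely utilitarian decision rule below are illustrative assumptions for this article, not the Moral Machine's actual implementation; the names `Outcome`, `Dilemma`, and `utilitarian_choice` are invented here.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    deaths: int  # number of characters killed in this outcome

@dataclass
class Dilemma:
    stay: Outcome    # outcome if the car stays in its lane
    swerve: Outcome  # outcome if the car swerves

def utilitarian_choice(dilemma: Dilemma) -> str:
    """A strictly utilitarian rule: pick whichever outcome kills fewer people.
    Ties default to inaction (staying in the lane)."""
    if dilemma.swerve.deaths < dilemma.stay.deaths:
        return "swerve"
    return "stay"

# One trolley-style scenario: five pedestrians ahead, one passenger aboard.
d = Dilemma(
    stay=Outcome("hit five pedestrians", deaths=5),
    swerve=Outcome("hit a barrier, killing the passenger", deaths=1),
)
print(utilitarian_choice(d))  # "swerve"
```

A real respondent's choices, of course, also weigh the age, social status, and lawfulness of the characters, which is exactly the variation the platform was built to measure; a pure body count is only one of the moral theories at stake.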
(1) Trolley cases, given technical limitations, rest on assumptions that are in tension with one another. (2) Trolley cases illuminate only a limited range of ethical issues, insofar as they cohere with a certain design framework. The problem is, rather, intended to provoke thought and create an intellectual discourse in which the difficulty of resolving moral dilemmas is appreciated and our limitations as moral agents are acknowledged. The many variations of the problem can help people explore whom they might choose to live or die. But should this dramatic edge case be the model for autonomous vehicles' decision-making?

Imagine this: five people are tied to a trolley track. As an outside observer, people judge which outcome they think is right. Most of us would divert the trolley; but if we have to actually kill that person ourselves, the math gets fuzzy. A related phenomenon was observed in studies that examined the moral norms people impose on humans and robots (Study 1) and their judgments of those agents' decisions (Study 2) in Trolley and Footbridge dilemmas. The Trolley Problem dates back to Philippa Foot's (1978) discussion of a pair of examples, in the first of which a judge must choose between framing and killing an innocent man and allowing a riot in which many more would die.

Moral Machine is an online platform, developed by Iyad Rahwan's Scalable Cooperation group at the Massachusetts Institute of Technology, that generates moral dilemmas and collects information on the decisions that people make between two destructive outcomes. There are many versions, but here is one: a driverless car either hits an elderly woman crossing the street, or it swerves out of the way and kills its passenger, a young child. Strictly speaking, this is not the trolley problem, but the classic thought experiment it evokes poses the same ethical dilemma.
By the time of publication, the Moral Machine Experiment had collected nearly 40 million data points from around the world. The paper in Nature attempts to come up with a working solution to the trolley problem by crowdsourcing it from millions of volunteers; indeed, it could be the sine qua non ethical issue for philosophers, lawyers, and engineers alike. Notably, the driver "should" (moral obligation) turn the trolley for Foot, whereas he "may" (moral permission) for Thomson.

While the Moral Machine experiment relies on numerous trolley-type problems with the aim of measuring societal expectations in moral dilemmas of autonomous vehicles, other researchers deny the relevance of trolley problems for AVs (see, e.g., Goodall, 2016b; Nyholm and Smids, 2016; Himmelreich, 2018; Trussell, 2018; Winfield et al., 2019). But the trolley problem endures because it allows us to ask the why and how of moral decisions. The imminent proliferation of autonomous vehicles raises a host of ethical questions, and since Foot, moral philosophers have applied the trolley problem as a thought experiment to study many different ethical conflicts, chief among them the programming of autonomous vehicles. To the public eye, trolley problems may look like a good reflection of real-life accidents.
It asked users to make a series of ethical choices. Scientists have since tested the famous thought experiment in real life for the first time, with almost 200 human participants, caged mice, and electric shocks. The Moral Machine game is similar to the infamous trolley problem (a.k.a. "would you kill one person to save five?"). The normative Trolley Problem begins with the assumption that our natural responses to these cases are generally, if not uniformly, correct; the "trolley problem" proper is the conceptual problem for moral theory of identifying the principle that underlies our moral responses to different variants of trolley cases (Konigs 2022). However, it can be questioned how robust these intuitive judgments are.

For autonomous driving, the dilemma becomes a binary choice of intentionally killing one person to avoid the deaths of multiple individuals, or of yourself, while driving. Furthermore, the sensors in today's cars are not good enough to make the kind of judgements called for in trolley problem formulations. Researchers nonetheless want to see how humans solve trolley problem scenarios: if you do nothing, the trolley will hit and kill five people. Applied to autonomous cars, at first glance, the Trolley Problem seems like a natural fit. Third, the Moral Machine researchers report cross-cultural ethical variation and uncover three major clusters of countries. The so-called Trolley Problem was first discussed by Philippa Foot in 1967 as a way to test moral intuitions regarding the doctrine of double effect, Kantian principles, and utilitarianism.
This problem, one of the most famous thought experiments in all of philosophy, was proposed by the British philosopher Philippa Foot in 1967 as a way to consider tough ethical choices in many fields. Examples of work on machine morality include Wendell Wallach and Colin Allen's Moral Machines: Teaching Robots Right from Wrong (Oxford University Press, 2008), Anders Sandberg and Heather Bradshaw-Martin's "What Do Cars Think of Trolley Problems: Ethics for Autonomous Cars?" (2013), and Noah J. Goodall's "Machine Ethics and Automated Vehicles". Autonomous machines and their decision-making can lead to potentially fatal errors, and far from solving the dilemma, the trolley problem launched a wave of further investigation into the philosophical quandary it raises. Ever since, a great number of philosophers and psychologists have come up with alternative scenarios to further test intuitions and their relevance.

The Moral Machine adds new variations to the trolley problem: do you plow into a criminal or swerve and hit an executive? Seven pregnant women (who are jaywalking) or five elderly men? Critics respond that "the trolley case is an inadequate experimental paradigm for informing AVs' ethical settings because it is reductively binary and lacks ecological validity." Ecological validity refers to the fact that the trolley scenario is rare, if it occurs at all, making the findings of the Moral Machine less generalizable to real-world situations. The Moral Machine study polled respondents from over one hundred countries, gathering their opinions on self-driving cars. Trolley cases are nonetheless widely considered central to the ethics of autonomous vehicles.
Fourth, the researchers show that these cross-country differences correlate with modern institutions and deep cultural traits. It is hard to ignore that the trolley problem, to an extent, applies to self-driving cars; the scenarios brought up by the original trolley problem and the Moral Machine do seem like events that could plausibly happen on the roads. Bonnefon et al. (2015) investigated the psychology of individuals faced with autonomous-vehicle moral dilemmas: an initial survey completed on Amazon's Mechanical Turk platform found that although a large majority of participants were in favor of AVs programmed to minimize the number of total deaths, significantly fewer were willing to ride in such vehicles themselves. We often encounter the trolley problem in popular media as shorthand for talking about the unintended effects of artificial intelligence, to the point that it has become a cliche for the oversimplification of machine ethics.

More recent work is designed to capture a more realistic array of moral challenges in traffic than the widely discussed life-and-death scenario inspired by the trolley problem. To assess basic moral preferences for the behavior of these vehicles, Awad et al. built on the dilemma Foot introduced in "The Problem of Abortion and the Doctrine of Double Effect" (1967): a trolley is hurtling down a track, and if nobody intervenes it will hit and kill five people. Some scholars in the ethics of autonomous vehicles hope to align such intelligent systems with human moral judgment. Trolley problems can also be used to introduce Kantian ethics, the ethical theory developed by Immanuel Kant (1724-1804), and deontological ethical theories in general.
In earlier work, some of the Moral Machine experiment's authors acknowledged the issues with trolley problems as simplified scenarios compared to reality, noting that real life instead provides statistical problems that should be solved. Whose life should be spared? As driverless cars become a reality, the answer to the famed trolley problem becomes increasingly pressing. A well-known use case of trolley-style problems in machine ethics is the Moral Machine Experiment itself: the researchers asked participants how an autonomous car should behave when a brake failure occurs and both swerving and staying in the lane would result in fatalities.

The trolley problem is a thought experiment in ethics about a fictional scenario in which an onlooker has the choice to save five people in danger of being hit by a trolley by diverting the trolley to kill just one person. There is a case that many lives could be saved if society embraces machine learning and commits itself to deploying robotics technologies responsibly; deaths occurring due to robotic errors nonetheless produce moral dilemmas much like the trolley problem. But the trolley problem as imagined by the Moral Machine Experiment and its critics alike is not simply about quantitative judgments, but qualitative ones. And here is where we return to the question in the intro.
Experts on both AI and ethics agree that while the Moral Machine experiment can serve as a starting point, it is not the last word. The trolley problem has become so popular in autonomous-vehicle circles that MIT engineers built a crowdsourced version of it, the Moral Machine, which purports to catalog human moral intuitions at scale. The model view, by contrast, is typically used in a way that suggests a grounding in ethical intuitionism, with the Moral Machine being one example (Awad et al., 2019). In 2017, the German ethics commission for automated and connected driving released 20 ethical guidelines for autonomous vehicles, and the imminent deployment of autonomous vehicles requires algorithms capable of making moral decisions in relevant traffic situations. It is also worth pointing out that many cases presented as Trolley Problem Cases in the AI ethics literature in fact raise moral issues distinct from those raised by standard Trolley Problem Cases.

The Moral Machine asks us, for example, whether we would be more willing to kill an old person or a young one, a law-abider or a criminal, a rich person or a poor person. Analysis of the data collected through the Moral Machine showed broad differences in relative preferences among different countries. In the end, though, the trolley problem has nothing to do with deciding the fates of hypothetical people; it endures because it lets us ask the why and how of our moral decisions.