{"id":9399,"date":"2020-02-20T09:21:19","date_gmt":"2020-02-20T09:21:19","guid":{"rendered":"https:\/\/revistaidees.cat\/?p=9399"},"modified":"2020-03-05T09:55:25","modified_gmt":"2020-03-05T09:55:25","slug":"thinking-about-ethics-in-the-ethics-of-ai","status":"publish","type":"post","link":"https:\/\/revistaidees.cat\/en\/thinking-about-ethics-in-the-ethics-of-ai\/","title":{"rendered":"Thinking About \u2018Ethics\u2019 in the Ethics of AI"},"content":{"rendered":"\n<p>At the start of 2019, a major international consultancy firm identified \u2018AI ethicist\u2019 as an essential position for companies seeking to successfully implement artificial intelligence (AI). It declared that AI ethicists are needed to help companies navigate the ethical and social issues raised by the use of AI <span class=\"note-item\"><a href=\"#note-01\" class=\"scroll-to\">[1]<\/a><span class=\"note-item-tooltip\">1 \u2014 KPMG (2019) Top 5 AI hires companies need to succeed in 2019.\n<\/span><\/span>. The view that AI is beneficial but nonetheless potentially harmful to individuals and society is widely shared by industry, academia, governments, and civil society organizations. 
Accordingly, and in order to realize its benefits while avoiding ethical pitfalls and harmful consequences, numerous initiatives have been established to a) examine the ethical, social, legal and political dimensions of AI and b) develop ethical guidelines and recommendations for the design and implementation of AI <span class=\"note-item\"><a href=\"#note-02\" class=\"scroll-to\">[2]<\/a><span class=\"note-item-tooltip\">2 \u2014 AlgorithmWatch has compiled a list of ethical frameworks and guidelines available at:<br \/>\n<a href=\"https:\/\/algorithmwatch.org\/en\/project\/ai-ethics-guidelines-global-inventory\/\" rel=\"nofollow\">https:\/\/algorithmwatch.org\/en\/project\/ai-ethics-guidelines-global-inventory\/<\/a>.\n<\/span><\/span>.<\/p>\n\n\n\n<p>However, terminological issues sometimes hinder the sound examination of ethical issues of AI. The definitions of \u2018intelligence\u2019 and \u2018artificial intelligence\u2019 often remain elusive, and different understandings of these terms foreground different concerns. To avoid confusion and the risk of people talking past each other, any meaningful discussion of AI Ethics requires the explication of the definition of AI that is being employed as well as a specification of the type of AI being discussed. Regarding the definition, we refer to the European Commission High-Level Expert Group on Artificial Intelligence, which defines AI as \u201csoftware (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected [\u2026] data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. 
AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions\u201d <span class=\"note-item\"><a href=\"#note-03\" class=\"scroll-to\">[3]<\/a><span class=\"note-item-tooltip\">3 \u2014 European Commission High-Level Expert Group on Artificial Intelligence [AI HLEG] (2019) Ethics guidelines for trustworthy AI. European Commission.\n<\/span><\/span>.<\/p>\n\n\n\n<p> To provide specific guidance and recommendations, the ethical analysis of AI further needs to specify the <em>technology<\/em>, e.g. autonomous vehicles, recommender systems, etc., the <em>methods<\/em>, e.g. deep learning, reinforcement learning, etc., and the sector(s) of application, e.g. healthcare, finance, news, etc. In this article, we shall focus on the ethical issues related to <em>autonomous AI<\/em>, i.e. artificial agents that can decide and act independently of human intervention, and we shall illustrate the ethical questions of autonomous AI with concrete examples.<\/p>\n\n\n\n<p>Consider first the case of autonomous vehicles (AVs). The possibility of accident scenarios involving AVs, in which they would unavoidably harm either the passengers or pedestrians, has forced researchers and developers to consider questions about the ethical acceptability of the decisions made by AVs, e.g. what decisions should AVs make in those scenarios, how can those decisions be justified, which values are reflected by AVs and their choices, etc. <span class=\"note-item\"><a href=\"#note-04\" class=\"scroll-to\">[4]<\/a><span class=\"note-item-tooltip\">4 \u2014 This type of accident scenario is known as \u2018the trolley problem\u2019. It is only one of the topics discussed in the ethics of autonomous vehicles, and we only use it as an example to illustrate one of the many ethical issues autonomous AI could raise. See:\n\nLin, P. (2016) Why ethics matters for autonomous cars. In M. Maurer, J. 
Gerdes, B. Lenz, &amp; H. Winner (Eds.), Autonomous Driving: Technical, Legal and Social Aspects (pp. 69-85). Berlin: Springer.\nKeeling, G. (2019) Why trolley problems matter for the ethics of automated vehicles. Science and Engineering Ethics.\n\n<\/span><\/span>.<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<blockquote class=\"wp-block-quote is-style-large is-layout-flow wp-block-quote-is-layout-flow\"><p>Hiring algorithms typically function by using the criteria they learned from a training dataset. Unfortunately, such training data can be biased, leading to potentially discriminatory models<\/p><\/blockquote>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<p>Or, consider the case of hiring algorithms, which have been introduced to automate the process of recommending, shortlisting, and possibly even selecting job candidates. Hiring algorithms typically function by using the criteria they learned from a training dataset. Unfortunately, such training data can be biased, leading to potentially discriminatory models <span class=\"note-item\"><a href=\"#note-05\" class=\"scroll-to\">[5]<\/a><span class=\"note-item-tooltip\">5 \u2014 Bogen, M. (2019) All the ways hiring algorithms can introduce bias. Harvard Business Review, May 6, 2019.\n<\/span><\/span>.<\/p>\n\n\n\n<p>In order to ensure protection from discrimination, which is not only a human right, but also part of many countries\u2019 constitutions, we have to make sure that such algorithms are at least non-discriminatory but ideally also <em>fair<\/em>. There are, however, different understandings of fairness: people disagree not only about what fairness means; the adequate conception of fairness may also depend upon the context. 
Moreover, it has also been shown that different fairness metrics cannot be attained simultaneously <span class=\"note-item\"><a href=\"#note-06\" class=\"scroll-to\">[6]<\/a><span class=\"note-item-tooltip\">6 \u2014 See:\n\nFriedler, S., Scheidegger, C., &amp; Venkatasubramanian, S. (2016) On the (Im)possibility of fairness. arXiv:1609.07236.\nChouldechova, A. (2017) Fair prediction with disparate impact: a study of bias in recidivism prediction instruments. Big Data 5(2): 153-163.\nWong, P.-H. (2019) Democratizing algorithmic fairness. Philosophy &amp; Technology.\n\n<\/span><\/span>. This raises the question of how values such as fairness should be conceived in a given context and how they can be implemented.<\/p>\n<\/div>\n<\/div>\n\n\n\n<p>One of the fundamental questions in the ethics of AI, therefore, can be formulated as a problem of value alignment: how can we build autonomous AI that is aligned with societally held values <span class=\"note-item\"><a href=\"#note-07\" class=\"scroll-to\">[7]<\/a><span class=\"note-item-tooltip\">7 \u2014 The AI alignment problem was first explicitly formulated by Stuart Russell in 2014, see: Peterson, M. (2019) The value alignment problem: a geometric approach. Ethics and Information Technology 21 (1): 19-28.\n<\/span><\/span>. Virginia Dignum has characterized three dimensions of AI Ethics, namely \u201cEthics by Design\u201d, \u201cEthics in Design\u201d, and \u201cEthics for Design\u201d <span class=\"note-item\"><a href=\"#note-08\" class=\"scroll-to\">[8]<\/a><span class=\"note-item-tooltip\">8 \u2014 Dignum, V. (2018) Ethics in artificial intelligence: introduction to the special issue. Ethics and Information Technology 20 (1): 1-3.\n<\/span><\/span>, and these dimensions are useful in identifying two different responses to the value alignment problem. 
We shall structure the following discussion along the three dimensions above and explore the two different directions for answering the value alignment problem in more detail.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\">Building Ethical AI: Prospects and Limitations<\/h5>\n\n\n\n<p>Ethics by Design is \u201cthe technical\/algorithmic integration of reasoning capabilities as part of the behavior of [autonomous AI]\u201d <span class=\"note-item\"><a href=\"#note-09\" class=\"scroll-to\">[9]<\/a><span class=\"note-item-tooltip\"><\/span><\/span>. This line of research is also known as \u2018machine ethics\u2019. The aspiration of machine ethics is to build artificial moral agents, which are artificial agents with ethical capacities and can thus make ethical decisions without human intervention <span class=\"note-item\"><a href=\"#note-010\" class=\"scroll-to\">[10]<\/a><span class=\"note-item-tooltip\">10 \u2014 See:\n\nWinfield, A., Michael, K., Pitt, J., &amp; Evers, V. (2019) Machine ethics: the design and governance of ethical AI and autonomous systems. Proceedings of the IEEE 107 (3): 509-517.\nWallach, W., &amp; Allen, C. (2009) Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press.\nMisselhorn, C. (2018) Artificial morality: concepts, issues and challenges. Society 55 (2): 161-169.\n\n<\/span><\/span>. Machine ethics thus answers the value alignment problem by building autonomous AI that by itself aligns with human values. To illustrate this perspective with the examples of AVs and hiring algorithms: researchers and developers would strive to create AVs that can reason about the ethically right decision and act accordingly in scenarios of unavoidable harm. Similarly, hiring algorithms would be expected to make non-discriminatory decisions without human intervention. 
<\/p>\n\n\n\n<p>Wendell Wallach and Colin Allen classified three types of approaches to machine ethics in their seminal book <em>Moral Machines<\/em> <span class=\"note-item\"><a href=\"#note-011\" class=\"scroll-to\">[11]<\/a><span class=\"note-item-tooltip\">11 \u2014 Wallach, W., &amp; Allen, C. (2009) Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press.\n<\/span><\/span>. The three types of approaches are (i) top-down approaches, (ii) bottom-up approaches, and (iii) hybrid approaches that merge the top-down and bottom-up approaches. In its simplest form, the top-down approach attempts to formalize and implement a specific ethical theory in autonomous AI, whereas the bottom-up approach aims to create autonomous AI that can learn from the environment or from a set of examples what is ethically right and wrong; finally, the hybrid approach combines techniques and strategies of both the top-down and bottom-up approaches <span class=\"note-item\"><a href=\"#note-012\" class=\"scroll-to\">[12]<\/a><span class=\"note-item-tooltip\">12 \u2014 Ibid., pp. 79-81.\n<\/span><\/span>.<\/p>\n\n\n\n<p>These approaches, however, are subject to various <em>theoretical<\/em> and <em>technical<\/em> limitations. For instance, top-down approaches need to overcome the challenge of finding and defending an uncontroversial ethical theory among <em>conflicting<\/em> philosophical traditions. Otherwise the ethical AI risks being built on an <em>inadequate<\/em>, or even <em>false<\/em>, foundation. Bottom-up approaches, on the other hand, infer what is ethical from what is <em>popular<\/em>, or from what is <em>commonly held as<\/em> being ethical, in the environment or among examples. 
Yet such inferences do not ensure that autonomous AI acquires <em>genuine<\/em> ethical principles or rules, because neither popularity nor being considered ethical offers an appropriate ethical <em>justification<\/em> <span class=\"note-item\"><a href=\"#note-013\" class=\"scroll-to\">[13]<\/a><span class=\"note-item-tooltip\">13 \u2014 For a review of the difficulty of machine ethics, see: Cave, S., Nyrup, R., Vold, K., &amp; Weller, A. (2019) Motivations and risks of machine ethics. Proceedings of the IEEE 107 (3): 562-74.\n<\/span><\/span>. Furthermore, there is the <em>technical<\/em> challenge of building an ethical AI that can effectively discern <em>ethically relevant <\/em>from <em>ethically irrelevant<\/em> information among the multitude of information available within a given context. This capacity would be required for the successful application of ethical principles in top-down approaches as well as for the successful acquisition of ethical principles in bottom-up approaches <span class=\"note-item\"><a href=\"#note-014\" class=\"scroll-to\">[14]<\/a><span class=\"note-item-tooltip\">14 \u2014 This is also known as the moral frame problem, see: Horgan, T., &amp; Timmons, M. (2009) What does the frame problem tell us about moral normativity? 
Ethical Theory and Moral Practice 12 (1): 25-51.\n<\/span><\/span>.<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<blockquote class=\"wp-block-quote is-style-large is-layout-flow wp-block-quote-is-layout-flow\"><p>Autonomous AI in general, and ethical AI in particular, may significantly undermine human autonomy because the decisions made by them for us or about us will be beyond our control<\/p><\/blockquote>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<p>Besides the theoretical and technical challenges, several <em>ethical <\/em>criticisms have been leveled at building autonomous AI with ethical capacities. First, autonomous AI in general, and ethical AI in particular, may significantly undermine human autonomy because the decisions made by them <em>for us<\/em> or <em>about us<\/em> will be beyond our control, thereby reducing our independence from external influences <span class=\"note-item\"><a href=\"#note-015\" class=\"scroll-to\">[15]<\/a><span class=\"note-item-tooltip\">15 \u2014 Danaher, J. (2018) Toward an ethics of AI assistants: an initial framework. Philosophy &amp; Technology 31 (4): 629-653.\n<\/span><\/span>. Second, it remains unclear who or what should be responsible for wrongful decisions of autonomous AI, leading to concerns over their impacts on our moral responsibility practices <span class=\"note-item\"><a href=\"#note-016\" class=\"scroll-to\">[16]<\/a><span class=\"note-item-tooltip\">16 \u2014 Matthias, A. (2004) The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics and Information Technology 6 (3): 175-83.\n<\/span><\/span>. 
Finally, researchers have argued that turning autonomous AI into moral agents or moral patients unnecessarily complicates our moral world by introducing into it unfamiliar things that are foreign to our moral understanding, thereby imposing an unnecessary ethical burden on human beings by requiring us to pay undue moral attention to autonomous AI <span class=\"note-item\"><a href=\"#note-017\" class=\"scroll-to\">[17]<\/a><span class=\"note-item-tooltip\">17 \u2014 Bryson, J. J. (2018) Patiency is not a virtue: the design of intelligent systems and systems of ethics. Ethics and Information Technology 20 (1): 15-26.\n<\/span><\/span>.<\/p>\n<\/div>\n<\/div>\n\n\n\n<h5 class=\"wp-block-heading\">Machine Ethics, Truncated Ethics<\/h5>\n\n\n\n<p>Our review of the theoretical, technical, and ethical challenges to machine ethics is not intended to be exhaustive or conclusive, and these challenges could indeed be overcome in future research and development of autonomous AI. However, we think that these challenges do warrant a pause and a reconsideration of the prospects of building ethical AI. In fact, we want to advance a more fundamental critique of machine ethics before exploring another path for answering the value alignment problem.<\/p>\n\n\n\n<p>Recall that the objective of machine ethics is to build an autonomous AI that can make ethical decisions and act ethically without human intervention. It zooms in on imbuing autonomous AI with the capacities to make ethical decisions and perform ethical actions, which reflects a peculiar understanding of \u2018ethics\u2019 that we want to problematize. More specifically, by focusing <em>only <\/em>on capacities for ethical decision-making and action, machine ethics is susceptible to a <em>truncated <\/em>view of ethics that sees ethical decisions and actions as separable from their social and relational contexts. 
Philosopher and novelist Iris Murdoch, for example, argued long ago that morality is not about \u201ca series of overt choices which take place in a series of specifiable situations\u201d <span class=\"note-item\"><a href=\"#note-018\" class=\"scroll-to\">[18]<\/a><span class=\"note-item-tooltip\">18 \u2014 Murdoch, I. (1956) Vision and choice in morality. Proceedings of the Aristotelian Society, Supplementary 30: 32-58. p. 34\n<\/span><\/span>, but about \u201cself-reflection or complex attitudes to life which are continuously displayed and elaborated in overt and inward speech but are not separable temporally into situations\u201d <span class=\"note-item\"><a href=\"#note-019\" class=\"scroll-to\">[19]<\/a><span class=\"note-item-tooltip\">19 \u2014 Ibid., p. 40\n<\/span><\/span>. For Murdoch, what is ethical is inherently tied to a background of values. Therefore, it is essential, in thinking about \u2018ethics\u2019, to look <em>beyond<\/em> the capacities for ethical decision-making and action and the moments of ethical choice and action, and <em>into<\/em> the background of values and the stories behind the choice and action. Similar arguments have been made to affirm the role of social and relational contexts in limiting ethical choices and shaping moral outcomes, and thus the importance of accounting for them in our ethical reflection <span class=\"note-item\"><a href=\"#note-020\" class=\"scroll-to\">[20]<\/a><span class=\"note-item-tooltip\">20 \u2014 Walker, M. U. (2007) Moral Understandings: A Feminist Study in Ethics. 
Oxford: Oxford University Press.\n<\/span><\/span>.<\/p>\n\n\n\n<p>Following this line of criticism, machine ethics\u2019 emphasis on imbuing autonomous AI with ethical capacities can be viewed as wrongheaded insofar as it overshadows the fact that ethical outcomes from autonomous AI are shaped by multiple, interconnected factors external to its ethical reasoning capacities, and that there is an extended process of social and political negotiation on the criteria for rightness and wrongness underlying the eventual ethical decisions and actions made by autonomous AI. \u2018The Moral Machine experiment\u2019 conducted by researchers at the MIT Media Lab is a case in point <span class=\"note-item\"><a href=\"#note-021\" class=\"scroll-to\">[21]<\/a><span class=\"note-item-tooltip\">21 \u2014 Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., &amp; Rahwan, I. (2018) The Moral Machine experiment. Nature 563: 59-64.\n<\/span><\/span>. In the experiment, the MIT researchers attempt to crowdsource ethical decisions in different accident scenarios involving AVs, and the results are intended to inform the ethical design of AVs. What is missing, however, are the social, cultural, and political backgrounds and the personal stories involved in <em>real<\/em> accidents, which the accident scenarios in the experiment do not, and often cannot, properly describe <span class=\"note-item\"><a href=\"#note-022\" class=\"scroll-to\">[22]<\/a><span class=\"note-item-tooltip\"><\/span><\/span>. 
In this respect, \u2018The Moral Machine\u2019 experiment is also based on a truncated view of ethics, which <em>only<\/em> considers the choice to be made in specific situations and neglects the background of values and contextual details that are essential for making ethical judgments.<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<blockquote class=\"wp-block-quote is-style-large is-layout-flow wp-block-quote-is-layout-flow\"><p>In thinking about \u2018ethics\u2019, it is essential to look beyond the capacities for ethical decision-making and action and the moments of ethical choice and action, and into the background of values and the stories behind the choice and action<\/p><\/blockquote>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<p>Indeed, social and relational contexts matter to the ethical analysis of autonomous AI both <em>before<\/em> and <em>after<\/em> its implementation. For example, one can devise an impartial hiring algorithm, which assesses job candidates <em>only<\/em> on the basis of the qualities required by an opening. This impartial hiring algorithm could nonetheless remain discriminatory, and therefore ethically dubious, if the specific qualities required by the opening are inadvertently linked to race, gender, and social class. In this case, care must be taken not to reproduce the <em>pre-existing social bias<\/em> in the hiring algorithm. 
Moreover, even the best-intended technologies can bring serious adverse impacts to their (non-)users, as bias and harm could <em>emerge<\/em> from the interaction between technology and the users and society <span class=\"note-item\"><a href=\"#note-023\" class=\"scroll-to\">[23]<\/a><span class=\"note-item-tooltip\">23 \u2014 Friedman, B., &amp; Nissenbaum, H. (1996) Bias in computer systems. ACM Transactions on Information Systems 14 (3): 330-347.\n<\/span><\/span>. Imagine an app which residents can use to report incidents, such as road damage, to the local city council, which then uses an algorithm to sort and rank local problems based on those reports. If we assume that access to smartphones and thus to the app is unequally distributed, this may lead to underreporting of problems in areas with poorer residents. If not taken into account in the algorithmic sorting and ranking, this bias in the input data could then further increase inequalities between more and less affluent areas in the city <span class=\"note-item\"><a href=\"#note-024\" class=\"scroll-to\">[24]<\/a><span class=\"note-item-tooltip\">24 \u2014 Simon, J. (2012) E-Democracy and Values in Design. Proceedings of the XXV World Congress of IVR 2012.\n<\/span><\/span>.<\/p>\n<\/div>\n<\/div>\n\n\n\n<p>The key lesson from the two examples is that having some ethical principles or rules inscribed in autonomous AI is insufficient to resolve the value alignment problem, because the backgrounds and contexts <em>do<\/em> contribute to our overall judgment of what is ethical. We should remind ourselves that autonomous AI is <em>always<\/em> situated in some broader social and relational contexts, and so we cannot <em>only<\/em> focus on its <em>capacities<\/em> for moral decision-making and action. 
We need to consider not only <em>what<\/em> decisions and actions autonomous AI should produce, but also (i) <em>why<\/em> we\u2014or society\u2014think those decisions and actions are ethical, (ii) <em>how<\/em> we arrive at such views, and (iii) <em>whether<\/em> we are justified in thinking so. Accordingly, \u2018The Moral Machine\u2019 experiment is objectionable as it unjustifiably assumes that the most <em>intuitive<\/em> or <em>popular<\/em> response to the accident scenarios is the <em>ethical<\/em> response. Indeed, this reframing of the questions gives us two advantages. First, we can now easily include other parties and factors <em>beyond<\/em> the autonomous AI in our ethical reflection. Second, it also makes explicit the possibility of (re-)negotiating which ethical principles or rules should be inscribed in autonomous AI (or even questioning the use of autonomous AI in a specific context altogether).<\/p>\n\n\n\n<h5 class=\"wp-block-heading\">A Distributed Ethics of AI<\/h5>\n\n\n\n<p>To be clear, we do not deny the need to examine the values embedded in technology and the importance of designing and building technology with values that are aligned with human interests <span class=\"note-item\"><a href=\"#note-025\" class=\"scroll-to\">[25]<\/a><span class=\"note-item-tooltip\"><\/span><\/span>. As the examples in this article show, autonomous AI can play a role in ethical decision-making and may lead to ethically relevant outcomes, so it is necessary both to examine the values embedded in it and to use shared societal values to guide its design and development. We do, however, want to question the aspiration of <em>delegating<\/em> ethical reasoning and judgment to machines, thereby stripping such reasoning and judgment from their social and relational contexts. 
A proper account of the ethics of AI should expand its scope of reflection and include other parties and factors that are relevant to the ethical decision-making and have contributed to the ethical outcomes of autonomous AI. To this end, it is essential for the ethics of AI to include various stakeholders, e.g. policy-makers, company leaders, designers, engineers, users, non-users, and the general public, in the ethical reflection on autonomous AI. Indeed, only by doing so can we sufficiently address the questions: (i) <em>why<\/em> we think the decisions and outcomes of AI are ethical, (ii) <em>how<\/em> we arrive at such views, and (iii) <em>whether<\/em> we are justified in our judgements.<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<blockquote class=\"wp-block-quote is-style-large is-layout-flow wp-block-quote-is-layout-flow\"><p>the design and implementation of AI should take existing societal inequalities and injustices into consideration, account for them, and at best even aim at alleviating them through their design decisions<\/p><\/blockquote>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<p> We shall call this expanded AI Ethics a <em>distributed ethics of AI<\/em>. The term \u2018distributed\u2019 aims to capture the fact that multiple parties and factors are relevant to and have contributed to the ethical outcomes of autonomous AI, and thus the responsibility for them is \u2018distributed\u2019 among the relevant and contributing parties and factors <span class=\"note-item\"><a href=\"#note-026\" class=\"scroll-to\">[26]<\/a><span class=\"note-item-tooltip\">26 \u2014 See:\n\nFloridi, L. (2013) Distributed morality in an information society. 
Science and Engineering Ethics 19 (3): 727-743.\nSimon, J. (2015) Distributed epistemic responsibility in a hyperconnected era. In L. Floridi (Ed.), The Onlife Manifesto (pp. 145-159). Cham: Springer.\n\n<\/span><\/span>. To use the examples of AVs and hiring algorithms: poor urban planning and road facilities should be legitimate concerns in the ethics of AVs, in the same way as existing social and cultural biases are valid considerations for ethical hiring algorithms. Hence, the design and implementation of AI should take <em>existing <\/em>societal inequalities and injustices into consideration, account for them, and at best even aim at alleviating them through their design decisions.<\/p>\n<\/div>\n<\/div>\n\n\n\n<p> The distributed ethics of AI needs what Dignum has labeled \u201cEthics <em>in<\/em> Design\u201d, i.e. \u201cthe regulatory and engineering methods that support the analysis and evaluation of the ethical implications of AI systems as these integrate or replace traditional social structures\u201d, as well as \u201cEthics <em>for<\/em> Design\u201d, i.e. \u201cthe codes of conduct, standards and certification processes that ensure the integrity of developers and users as they research, design, construct, employ and manage artificial intelligent systems\u201d <span class=\"note-item\"><a href=\"#note-027\" class=\"scroll-to\">[27]<\/a><span class=\"note-item-tooltip\">27 \u2014 Dignum, V. (2018) Ethics in artificial intelligence: introduction to the special issue. Ethics and Information Technology 20 (1): p. 2.\n<\/span><\/span>. Ethical questions of autonomous AI cannot be solved by \u2018better\u2019 <em>individual(istic)<\/em> ethical capacities but only through <em>collective efforts<\/em>. 
To guide such collective efforts, <em>ethical guidelines<\/em> offer useful means to steer value- and principle-based reflection on autonomous AI and to effectively coordinate the efforts among the different relevant and contributing parties <span class=\"note-item\"><a href=\"#note-028\" class=\"scroll-to\">[28]<\/a><span class=\"note-item-tooltip\">28 \u2014 Floridi, L. (2019) Establishing the rules for building trustworthy AI. Nature Machine Intelligence 1: 261-262.\n<\/span><\/span>.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\">Conclusions: On the EU\u2019s Trustworthy AI<\/h5>\n\n\n\n<p> In April 2019, the High-Level Expert Group released the \u2018<a rel=\"noreferrer noopener\" aria-label=\"Ethics Guidelines for Trustworthy AI (opens in a new tab)\" href=\"https:\/\/ec.europa.eu\/digital-single-market\/en\/news\/ethics-guidelines-trustworthy-ai\" target=\"_blank\">Ethics Guidelines for Trustworthy AI<\/a>\u2019, which concretize Europe\u2019s vision of AI. According to these Guidelines, Europe should research and develop <em>Trustworthy AI<\/em>, which is <em>lawful<\/em>, <em>ethical<\/em>, and <em>robust<\/em>.<\/p>\n\n\n\n<p> There are two points in the Guidelines that deserve special mention in the present discussion. First, it is interesting to note that the concerns for trust in the Guidelines are about \u201cnot only the technology\u2019s inherent properties, but also the qualities of the socio-technical systems involving AI applications [\u2026]. Striving towards Trustworthy AI hence concerns not only the trustworthiness of the AI system itself, but requires a holistic and systemic approach, encompassing the trustworthiness of all actors and processes that are part of the system\u2019s socio-technical context throughout its entire life cycle.\u201d In this respect, the vision of Trustworthy AI clearly matches the distributed ethics of AI as previously described. 
Second, it is also interesting to note that the four ethical principles identified in the Guidelines are <em>mid-level principles<\/em>, namely:<\/p>\n\n\n\n<ol class=\"wp-block-list\"><li> The principle of respect for human autonomy.<\/li><li> The principle of prevention of harm.<\/li><li> The principle of fairness.<\/li><li> The principle of explicability.<\/li><\/ol>\n\n\n\n<p>The formulation of ethical principles as <em>mid-level principles<\/em> is particularly illuminating, because mid-level principles <em>require<\/em> human interpretation and ordering in their application, and they are not intended to\u2014and, indeed, cannot\u2014be implemented within autonomous AI. The need for interpretation and ordering also points to the social and relational contexts, where the resources for interpretation and ordering lie. <\/p>\n\n\n\n<p>While the European vision of Trustworthy AI and the Guidelines have a conceptually sound foundation, there are a number of open problems with them. For instance, the use of mid-level principles in the Guidelines allows considerable room for interpretation, which, in turn, can be misused by malevolent actors to cherry-pick interpretations and excuse themselves from their responsibility. This problem is further compounded by the Guidelines\u2019 emphasis on self-regulation, whereby politicians and companies can pay lip service to the European vision with <em>cheap<\/em> and <em>superficial <\/em>measures, such as propaganda and setting up symbolic advisory boards, without <em>substantively<\/em> addressing the negative impacts of AI. Hence, there are significant issues concerning the <em>actual<\/em> regulatory and institutional framework for AI Ethics and for realizing this European vision. 
In particular, there is a need for a clear framework to <em>fairly<\/em> distribute the benefits and risks of AI, and a need to introduce \u2018hard\u2019 laws and regulations against the violation of basic ethical values and human rights.<\/p>\n\n\n\n<p>Notwithstanding these problems, the Guidelines&#8217; focus <em>on humans<\/em> and <em>beyond technology<\/em> should be taken as an appropriate <em>normative<\/em> standpoint for AI Ethics and the European vision. To end this article, we want to emphasize that the ethical questions about autonomous AI are distributed in nature, and that we\u2014or, rather, society\u2014should have a voice in the design and deployment of such systems. <\/p>