Institute of Electrical and Electronics Engineers (IEEE), Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2 (2018)
The Institute of Electrical and Electronics Engineers, Inc. (IEEE) is a globally active professional association for engineers, primarily in electrical engineering and information technology, with more than 400,000 members in over 160 countries. It founded the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and, in a multi-year process with broad participation of its members, published the revised second version of its detailed 266-page working paper Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems in 2018 and opened it for discussion.
A final version has been announced for 2019. It is intended to become a handbook of agreed “recommendations”, for which educational materials are also to be developed. The overarching task is “to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.” (p. 3)
At the outset, an important terminological distinction is attempted: “There is no need to use the term artificial intelligence in order to conceptualize and speak of technologies and systems that are meant to extend our human intelligence or be used in robotics applications. For this reason, we use the term, autonomous and intelligent systems (or A/IS).” (p. 12)
Taking into account numerous working and position papers from other organizations, the paper formulates overarching and topic-specific ethical concerns and questions in need of clarification. Each is justified in detail, supplemented with pointers to further literature, and accompanied by preliminary recommendations of the working groups that have yet to be adopted. These are meant to apply “to all types of autonomous and intelligent systems (A/IS), regardless of whether they are physical robots (such as care robots or driverless cars) or software systems (such as medical diagnosis systems, intelligent personal assistants, or algorithmic chat bots).” (p. 20)
The differentiated treatment of this complex topic across several IEEE working groups, taking many different aspects into account, becomes clear in the following overview:
General Principles
Principle 1 — Human Rights
- Issue: How can we ensure that A/IS do not infringe upon human rights?
Principle 2 — Prioritizing Well-being
- Issue: Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being.
Principle 3 — Accountability
- Issue: How can we assure that designers, manufacturers, owners, and operators of A/IS are responsible and accountable?
Principle 4 — Transparency
- Issue: How can we ensure that A/IS are transparent?
Principle 5 — A/IS Technology Misuse and Awareness of It
- Issue: How can we extend the benefits and minimize the risks of A/IS technology being misused?
Embedding Values into Autonomous Intelligent Systems
Section 1 — Identifying Norms for Autonomous Intelligent Systems
- Issue 1: Which norms should be identified?
- Issue 2: The need for norm updating.
- Issue 3: A/IS will face norm conflicts and need methods to resolve them.
Section 2 — Implementing Norms in Autonomous Intelligent Systems
- Issue 1: Many approaches to norm implementation are currently available, and new ones are being developed.
- Issue 2: The need for transparency from implementation to deployment.
- Issue 3: Failures will occur.
Section 3 — Evaluating the Implementation of A/IS
- Issue 1: Not all norms of a target community apply equally to human and artificial agents.
- Issue 2: A/IS can have biases that disadvantage specific groups.
- Issue 3: Challenges to evaluation by third parties.
Methodologies to Guide Ethical Research and Design
Section 1 — Interdisciplinary Education and Research
- Issue: Inadequate integration of ethics in A/IS-related degree programs.
- Issue: The need for more constructive and sustained interdisciplinary collaborations to address ethical issues concerning autonomous and intelligent systems (A/IS).
- Issue: The need to differentiate culturally distinctive values embedded in AI design.
Section 2 — Corporate Practices and A/IS
- Issue: Lack of value-based ethical culture and practices for industry.
- Issue: Lack of values-aware leadership.
- Issue: Lack of empowerment to raise ethical concerns.
- Issue: Organizations should examine their cultures to determine how to flexibly implement value-based design.
- Issue: Lack of ownership or responsibility from the tech community.
- Issue: Need to include stakeholders for adequate ethical perspective on A/IS.
Section 3 — Research Ethics for Development and Testing of A/IS Technologies
- Issue: Institutional ethics committees are under-resourced to address the ethics of R&D in the A/IS fields.
Section 4 — Lack of Transparency
- Issue: Poor documentation hinders ethical design.
- Issue: Inconsistent or lacking oversight for algorithms.
- Issue: Lack of an independent review organization.
- Issue: Use of black-box components.
Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI)
Section 1 — Technical
- Issue: As A/IS become more capable, as measured by the ability to perform with greater autonomy across a wider variety of domains, unanticipated or unintended behavior becomes increasingly dangerous.
- Issue: Designing for safety may be much more difficult later in the design lifecycle than earlier.
Section 2 — General Principles
- Issue: Researchers and developers will confront a progressively more complex set of ethical and technical safety issues in the development and deployment of increasingly capable A/IS.
- Issue: Future A/IS may have the capacity to impact the world on a scale not seen since the Industrial Revolution.
Personal Data and Individual Access Control
Section 1 — Digital Personas
- Issue: Individuals do not understand that their digital personas and identity function differently than in real life. This is a concern when personal data is not accessible to an individual and future iterations of their personas or identity are controlled not by them but by the creators of the A/IS they use.
- Issue: How can an individual define and organize his/her personal data and identity in the algorithmic era?
Section 2 — Regional Jurisdiction
- Issue: Country-wide, regional, or local legislation may contradict an individual’s values or access and control of their personal data.
Section 3 — Agency and Control
- Issue: To understand the role of agency and control within A/IS, it is critical to have a definition and scope of personally identifiable information (PII).
- Issue: What is the definition of control regarding personal data, and how can it be meaningfully expressed?
Section 4 — Transparency and Access
- Issue: It is often difficult for users to determine what information a service provider or A/IS application collects about them at the time of such aggregation/collection (at the time of installation, during usage, even when not in use, after deletion). It is difficult for users to correct, amend, or manage this information.
- Issue: How do we create privacy impact assessments related to A/IS?
- Issue: How can AI interact with government authorities to facilitate law enforcement and intelligence collection while respecting rule of law and transparency for users?
Section 5 — Symmetry and Consent
- Issue: Could a person have a personalized privacy AI or algorithmic agent or guardian?
- Issue: Consent is vital to information exchange and innovation in the algorithmic age. How can we redefine consent regarding personal data so it respects individual autonomy and dignity?
- Issue: Data that is shared easily or haphazardly via A/IS can be used to make inferences that an individual may not wish to share.
- Issue: Many A/IS will collect data from individuals they do not have a direct relationship with, or the systems are not interacting directly with the individuals. How can meaningful consent be provided in these situations?
- Issue: How do we make better user experience and consent education available to consumers as a standard, so that they can express meaningful consent?
- Issue: In most corporate settings, employees do not have clear consent on how their personal information (including health and other data) is used by employers. Given the power differential between employees and employers, this is an area in need of clear best practices.
- Issue: People may be losing their ability to understand what kinds of processing are done by A/IS on their private data, and thus may be becoming unable to meaningfully consent to online terms. The elderly and mentally impaired adults are vulnerable in terms of consent, with consequences for data privacy.
Reframing Autonomous Weapons Systems
- Issue 1: Confusions about definitions regarding important concepts in artificial intelligence (AI), autonomous systems (AS), and autonomous weapons systems (AWS) stymie more substantive discussions about crucial issues.
- Issue 2: The addition of automated targeting and firing functions to an existing weapon system, or the integration of components with such functionality, or system upgrades that impact targeting and automated weapon release should be considered for review under Article 36 of Additional Protocol I of the Geneva Conventions.
- Issue 3: Engineering work should conform to individual and professional organization codes of ethics and conduct. However, existing codes of ethics may fail to properly address ethical responsibility for autonomous systems, or clarify ethical obligations of engineers with respect to AWS. Professional organizations should undertake reviews and possible revisions or extensions of their codes of ethics with respect to AWS.
- Issue 4: The development of AWS by states is likely to cause geopolitical instability and could lead to arms races.
- Issue 5: The automated reactions of an AWS could result in the initiation or escalation of conflicts outside of decisions by political and military leadership. AWS that engage with other AWS could escalate a conflict rapidly, before humans are able to intervene.
- Issue 6: There are multiple ways in which accountability for the actions of AWS can be compromised.
- Issue 7: AWS offer the potential for severe human rights abuses. Exclusion of human oversight from the battlespace can too easily lead to inadvertent violation of human rights. AWS could be used for deliberate violations of human rights.
- Issue 8: AWS could be used for covert, obfuscated, and non-attributable attacks.
- Issue 9: The development of AWS will lead to a complex and troubling landscape of proliferation and abuse.
- Issue 10: AWS could be deployed by domestic police forces and threaten lives and safety. AWS could also be deployed for private security. Such AWS may have very different design and safety requirements than military AWS.
- Issue 11: An automated weapons system might not be predictable (depending upon its design and operational use). Learning systems compound the problem of predictable use.
Economics and Humanitarian Issues
Section 1 — Economics
- Issue: A/IS should contribute to achieving the UN Sustainable Development Goals.
- Issue: It is unclear how developing nations can best implement A/IS via existing resources.
- Issue: The complexities of employment are being neglected regarding A/IS.
- Issue: Automation is often viewed only within market contexts.
- Issue: Technological change is happening too fast for existing methods of (re)training the workforce.
Section 2 — Privacy and Safety
- Issue: There is a lack of access and understanding regarding personal information.
Section 3 — Education
- Issue: How best to incorporate the “global dimension of engineering” approach in undergraduate and postgraduate education in A/IS.
Section 4 — Equal Availability
- Issue: AI and autonomous technologies are not equally available worldwide.
Law
Section 1 — Legal Status of A/IS
- Issue: What type of legal status (or other legal analytical framework) is appropriate for application to A/IS, given the legal issues raised by deployment of such technologies?
Section 2 — Governmental Use of A/IS: Transparency and Individual Rights
- Issue: International, national, and local governments are using A/IS. How can we ensure the A/IS that governments employ do not infringe on citizens’ rights?
Section 3 — Legal Accountability for Harm Caused by A/IS
- Issue: How can A/IS be designed to guarantee legal accountability for harms caused by these systems?
Section 4 — Transparency, Accountability, and Verifiability in A/IS
- Issue: How can we improve the accountability and verifiability in autonomous and intelligent systems?
Affective Computing
Systems Across Cultures
- Issue: Should affective systems interact using the norms appropriate for verbal and nonverbal communication consistent with the societal norms where they are located?
- Issue: Long-term interaction with affective artifacts lacking cultural sensitivity could alter the way people interact in society.
- Issue: When affective systems are introduced across cultures, they could negatively affect the cultural, social, and religious values of the community where they are deployed.
When Systems Become Intimate
- Issue: Are moral and ethical boundaries crossed when the design of affective systems allows them to develop intimate relationships with their users?
- Issue: Can and should a ban or strict regulations be placed on the development of sex robots for private use or in the sex industry?
System Manipulation/Nudging/Deception
- Issue: Should affective systems be designed to nudge people for the user’s personal benefit and/or for the benefit of someone else?
- Issue: Governmental entities often use nudging strategies, for example to promote the performance of charitable acts. But the practice of nudging for the benefit of society, including through the use of affective systems, raises a range of ethical concerns.
- Issue: A nudging system that does not fully understand the context in which it is operating may lead to unintended consequences.
- Issue: When, if ever, and under which circumstances is deception performed by affective systems acceptable?
Systems Supporting Human Potential (Flourishing)
- Issue: Extensive use of artificial intelligence in society may make our organizations more brittle by reducing human autonomy within organizations, and by replacing creative, affective, empathetic components of management chains.
- Issue: The increased access to personal information about other members of our society, facilitated by artificial intelligence, may alter the human affective experience fundamentally, potentially leading to a severe and possibly rapid loss in individual autonomy.
- Issue: A/IS may negatively affect human psychological and emotional well-being in ways not otherwise foreseen.
Systems With Their Own Emotions
- Issue: Synthetic emotions may increase accessibility of AI, but may deceive humans into false identification with AI, leading to overinvestment of time, money, trust, and human emotion.
Policy
- Objective: Ensure that A/IS support, promote, and enable internationally recognized legal norms.
- Objective: Develop and make available to government, industry, and academia a workforce of well-qualified A/IS personnel.
- Objective: Support research and development needed to ensure continued leadership in A/IS.
- Objective: Provide effective regulation of A/IS to ensure public safety and responsibility while fostering a robust AI industry.
- Objective: Facilitate public understanding of the rewards and risks of A/IS.
Classical Ethics in A/IS
Section 1 — Definitions for Classical Ethics in Autonomous and Intelligent Systems Research
- Issue: Assigning foundations for morality, autonomy, and intelligence.
- Issue: Distinguishing between agents and patients.
- Issue: There is a need for an accessible classical ethics vocabulary.
- Issue: Presenting ethics to the creators of autonomous and intelligent systems.
- Issue: Access to classical ethics by corporations and companies.
- Issue: Impact of automated systems on the workplace.
Section 2 — Classical Ethics From Globally Diverse Traditions
- Issue: The monopoly on ethics by Western ethical traditions.
- Issue: The application of classical Buddhist ethical traditions to AI design.
- Issue: The application of Ubuntu ethical traditions to A/IS design.
- Issue: The application of Shinto-influenced traditions to A/IS design.
Section 3 — Classical Ethics for a Technical World
- Issue: Maintaining human autonomy.
- Issue: Applying goal-directed behavior (virtue ethics) to autonomous and intelligent systems.
- Issue: A requirement for rule-based ethics in practical programming.
Mixed Reality in Information and Communications
Section 1 — Social Interactions
- Issue: Within the realm of A/IS-enhanced mixed reality, how can we evolve, harness, and not eradicate the positive effects of serendipity?
- Issue: What happens to cultural institutions in a mixed reality, AI-enabled world of illusion, where geography is largely eliminated, tribe-like entities and identities could spring up spontaneously, and the notion of identity morphs from physical certainty to virtuality?
- Issue: With alternative realities at reach, we will have alternative ways of behaving individually and collectively, and of perceiving ourselves and the world around us. These new orientations regarding reality could enhance an already observed tendency toward social reclusiveness that detaches many from our common reality. Could such a situation lead to individuals opting out of “societal engagements”?
- Issue: The way we experience (and define) physical reality on a daily basis will soon change.
- Issue: We may never have to say goodbye to those who have graduated to a newer dimension (i.e., death).
- Issue: Mixed reality changes the way we interact with society and can also lead to complete disengagement.
- Issue: A/IS, artificial consciousness, and augmented/mixed reality have the potential to create a parallel set of social norms.
- Issue: An MR/A/IS environment could fail to take into account the neurodiversity of the population.
Section 2 — Mental Health
- Issue: How can AI-enhanced mixed reality explore the connections between the physical and the psychological, the body and mind for therapeutic and other purposes? What are the risks for when an AI-based mixed-reality system presents stimuli that a user can interact with in an embodied, experiential activity? Can such MR experiences influence and/or control the senses or the mind in a fashion that is detrimental and enduring? What are the short- and long-term effects and implications of giving over one’s senses to software? Moreover, what are the implications for the ethical development and use of MR applications designed for mental health assessment and treatment in view of the potential potency of this media format compared to traditional methodologies?
- Issue: Mixed reality creates opportunities for generated experiences and high levels of user control that may lead certain individuals to choose virtual life over the physical world. What are the clinical implications?
Section 3 — Education and Training
- Issue: How can we protect worker rights and mental well-being with the onset of automation-oriented, immersive systems?
- Issue: AR/VR/MR in training/operations can be an effective learning tool, but will alter workplace relationships and the nature of work in general.
- Issue: How can we keep the safety and development of children and minors in mind?
- Issue: Mixed reality will usher in a new phase of specialized job automation.
- Issue: A combination of mixed reality and A/IS will inevitably replace many current jobs. How will governments adapt policy, and how will society change both expectations and the nature of education and training?
Section 4 — The Arts
- Issue: There is the possibility that commercial actors will create pervasive AR/VR environments that are prioritized in users’ eyes/vision/experience.
- Issue: There is the possibility that AR/VR realities could copy, emulate, or hijack creative authorship and intellectual and creative property with regard to both human- and AI-created works.
Section 5 — Privacy Access and Control
- Issue: Data collection and control issues within mixed realities combined with A/IS present multiple ethical and legal challenges that ought to be addressed before these realities pervade society.
- Issue: Like other emerging technologies, AR/VR will force society to rethink notions of privacy in public and may require new laws or regulations regarding data ownership in these environments.
- Issue: Users of AI-informed mixed-reality systems need to understand the known effects and consequences of using those systems in order to trust them.
Well-being
Section 1 — An Introduction to Well-being Metrics
- Issue: There is ample and robust science behind well-being metrics and use by international and national institutions, yet many people in the A/IS field and corporate communities are unaware that well-being metrics exist, or what entities are using them.
(see also The State of Well-being Metrics)
Section 2 — The Value of Well-being Metrics for A/IS
- Issue: Many people in the A/IS field and corporate communities are not aware of the value well-being metrics offer.
- Issue: By leveraging existing work in computational sustainability or using existing indicators to model unintended consequences of specific systems or applications, well-being could be better understood and increased by the A/IS community and society at large.
- Issue: Well-being indicators provide an opportunity for modeling scenarios and impacts that could improve the ability of A/IS to frame specific societal benefits for their use.
Section 3 — Adaptation of Well-being Metrics for A/IS
- Issue: How can creators of A/IS incorporate measures of well-being into their systems?
- Issue: A/IS technologies designed to replicate human tasks, behavior, or emotion have the potential to either increase or decrease well-being.
- Issue: Human rights law is sometimes conflated with human well-being, leading to a concern that a focus on human well-being will lead to a situation that minimizes the protection of inalienable human rights, or lowers the standard of existing legal human rights guidelines for non-state actors.
- Issue: A/IS represents opportunities for stewardship and restoration of natural systems and securing access to nature for humans, but could be used instead to distract attention and divert innovation until the planetary ecological condition is beyond repair.
- Issue: The well-being impacts of A/IS applied to human genomes are not well understood. (“There is an urgent need to concurrently discuss how the convergence of A/IS and genomic data interpretation will challenge the purpose and content of relevant legislation that preserve well-being ...” p. 263)
In the full paper, each “Issue” is accompanied by its own “Background”, “Candidate Recommendation”, and “Further Resources” sections.
The complete paper can be downloaded as a PDF file.
This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 United States License.