  Implementing ethics into artificial intelligence: a contribution, from a legal perspective, to the development of an AI governance regime

Walz, A., & Firth-Butterfield, K. (2019). Implementing ethics into artificial intelligence: a contribution, from a legal perspective, to the development of an AI governance regime. Duke Law & Technology Review, 18(1), 180-231.


Basic data

Genre: Journal article

External references

Description:
-
OA status:

Creators

Creators:
Walz, Axel (1), Author
Firth-Butterfield, Kay (2), Author
Affiliations:
(1) MPI for Innovation and Competition, Max Planck Society, ou_2035291
(2) External Organizations, ou_persistent22

Content

Keywords: -
Abstract: The increasing use of AI and autonomous systems will have revolutionary impacts on society. Despite many benefits, AI and autonomous systems involve considerable risks that need to be managed. Minimizing these risks will emphasize the respective benefits while at the same time protecting the ethical values defined by fundamental rights and basic constitutional principles, thereby preserving a human-centric society. This Article advocates the need to conduct in-depth risk-benefit assessments with regard to the use of AI and autonomous systems. It points out major concerns in relation to AI and autonomous systems, such as likely job losses, causation of damages, lack of transparency, increasing loss of humanity in social relationships, loss of privacy and personal autonomy, potential information biases, and the error proneness and susceptibility to manipulation of AI and autonomous systems. This critical analysis aims to raise awareness among policy-makers so that they sufficiently address these concerns and design an appropriate AI governance regime focused on preserving a human-centric society. Raising awareness of potential risks and concerns should, however, not be misunderstood as an anti-innovative approach. Rather, risks and concerns must be considered adequately and sufficiently in order to ensure that new technologies such as AI and autonomous systems are constructed and operate in a way that is acceptable to individual users and society as a whole. To this end, this Article develops a graded governance model for the implementation of ethical concerns in AI systems, reflecting the often misjudged fact that there is a variety of policy-making instruments available to policy-makers. In particular, ethical concerns need not be addressed only by legislation or international conventions; depending on the ethical concern at hand, alternative regulatory measures such as technical standardization or certification may even be preferable. To illustrate the practical impact of this graded governance model, two concrete global approaches are additionally presented, which regulators, governments, and industry could refer to as a basis for regulating ethical concerns associated with the use of AI and autonomous systems.

Details

Language(s): eng - English
Date: 2019
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: -
Type of degree: -

Event


Decision


Project information


Source 1

Title: Duke Law & Technology Review
Source genre: Journal
Creators:
Affiliations:
Place, publisher, edition: -
Pages: -
Volume / Issue: 18 (1)
Article number: -
Start / End page: 180 - 231
Identifier: ZDB: 2126778-9