Journal Article

Implementing ethics into artificial intelligence: a contribution, from a legal perspective, to the development of an AI governance regime

MPS-Authors

Walz, Axel
MPI for Innovation and Competition, Max Planck Society

Citation

Walz, A., & Firth-Butterfield, K. (2019). Implementing ethics into artificial intelligence: a contribution, from a legal perspective, to the development of an AI governance regime. Duke Law & Technology Review, 18(1), 180-231.


Cite as: https://hdl.handle.net/21.11116/0000-0005-7901-2
Abstract
The increasing use of AI and autonomous systems will have revolutionary impacts on society. Despite many benefits, AI and autonomous systems involve considerable risks that need to be managed. Minimizing these risks will reinforce the respective benefits while at the same time protecting the ethical values defined by fundamental rights and basic constitutional principles, thereby preserving a human-centric society. This Article advocates the need to conduct in-depth risk-benefit assessments of the use of AI and autonomous systems. It points out major concerns, such as likely job losses, the causation of damages, lack of transparency, an increasing loss of humanity in social relationships, loss of privacy and personal autonomy, potential information biases, and the error-proneness and susceptibility to manipulation of AI and autonomous systems. This critical analysis aims to raise awareness among policy-makers so that they sufficiently address these concerns and design an appropriate AI governance regime focused on preserving a human-centric society. Raising awareness of potential risks and concerns should not, however, be misunderstood as an anti-innovation approach. Rather, risks and concerns must be considered adequately and sufficiently to ensure that new technologies such as AI and autonomous systems are constructed and operate in a way that is acceptable to individual users and to society as a whole. To this end, this Article develops a graded governance model for implementing ethical concerns in AI systems, reflecting the often-misjudged fact that policy-makers actually have a variety of policy-making instruments at their disposal. In particular, ethical concerns need not be addressed solely by legislation or international conventions; depending on the ethical concern at hand, alternative regulatory measures such as technical standardization or certification may even be preferable. To illustrate the practical impact of this graded governance model, the Article additionally presents two concrete global approaches that regulators, governments, and industry could use as a basis for regulating the ethical concerns associated with the use of AI and autonomous systems.