  Rethinking open source generative AI: open-washing and the EU AI Act

Liesenfeld, A., & Dingemanse, M. (2024). Rethinking open source generative AI: open-washing and the EU AI Act. In The 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24) (pp. 1774-1784). ACM.


Basic data

Genre: Conference paper

Files

liesenfeld_dingemanse_2024_FAccT_generative_AI_open-washing_EU_AI_Act.pdf (publisher version), 906KB
Name:
liesenfeld_dingemanse_2024_FAccT_generative_AI_open-washing_EU_AI_Act.pdf
Description:
-
OA status:
Not specified
Visibility:
Public
MIME type / checksum:
application/pdf / [MD5]
Technical metadata:
Copyright date:
-
Copyright info:
-
License:
CC-BY-4.0


Creators

Creators:
Liesenfeld, Andreas 1, Author
Dingemanse, Mark 1, Author
Affiliations:
1 Center for Language Studies, External Organizations, ou_55238

Content

Keywords: -
Abstract: The past year has seen a steep rise in generative AI systems that claim to be open. But how open are they really? The question of what counts as open source in generative AI is poised to take on particular importance in light of the upcoming EU AI Act that regulates open source systems differently, creating an urgent need for practical openness assessment. Here we use an evidence-based framework that distinguishes 14 dimensions of openness, from training datasets to scientific and technical documentation and from licensing to access methods. Surveying over 45 generative AI systems (both text and text-to-image), we find that while the term open source is widely used, many models are 'open weight' at best and many providers seek to evade scientific, legal and regulatory scrutiny by withholding information on training and fine-tuning data. We argue that openness in generative AI is necessarily composite (consisting of multiple elements) and gradient (coming in degrees), and point out the risk of relying on single features like access or licensing to declare models open or not. Evidence-based openness assessment can help foster a generative AI landscape in which models can be effectively regulated, model providers can be held accountable, scientists can scrutinise generative AI, and end users can make informed decisions.

Details

Language(s): eng - English
Dates: 2024-01-14, 2024-05-05, 2024-06-05
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Review type: Peer review
Identifiers: DOI: 10.1145/3630106.3659005
Degree type: -

Event

Title: Seventh Annual ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2024)
Venue: Rio de Janeiro, Brazil
Start/end date: 2024-06-03 - 2024-06-06


Source 1

Title: The 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24)
Source genre: Conference proceedings
Place, publisher, edition: ACM
Pages: -
Volume / Issue: -
Article number: -
Start / end page: 1774 - 1784
Identifier: -