DC Element | Value | Language
dc.contributor.advisor | Usbeck, Ricardo | -
dc.contributor.advisor | Simon, Judith | -
dc.contributor.author | Kraft, Angelie | -
dc.date.accessioned | 2026-02-26T15:30:16Z | -
dc.date.available | 2026-02-26T15:30:16Z | -
dc.date.issued | 2025 | -
dc.identifier.uri | https://ediss.sub.uni-hamburg.de/handle/ediss/12231 | -
dc.description.abstract | This thesis critically investigates whether AI-based knowledge technology built on language models, knowledge graphs, and/or knowledge-enhanced language models deserves the epistemic authority it receives, and analyzes its epistemic and ethical "goodness". To this end, the thesis combines computer science approaches with philosophical analysis and discusses technical features, as well as engineering and research practices in the field of AI, drawing in particular from feminist epistemological accounts. The core of this cumulative dissertation comprises three articles that address the following sets of questions: (RQ1) What types of social bias are embedded in knowledge graphs? How are they measured? And what do we know about their causes? (RQ2) Can knowledge enhancement make language models less biased with regard to their knowledge content? Can it help make language models more objective? (RQ3) How are the measures created that are used to determine a language model's accuracy in reproducing knowledge? And what are the quality and representativeness of these measures? This thesis finds that AI-based knowledge technology has several epistemically and ethically problematic characteristics, which cannot be solved through technological means alone. AI development and evaluation must be conducted in a contextualized manner and account for the situatedness of knowledge processes. Drawing from feminist epistemologies, this thesis argues that the AI community must promote emancipatory values and foreground marginalized standpoints to facilitate epistemically and ethically better systems. | en
dc.language.iso | en | de_DE
dc.publisher | Staats- und Universitätsbibliothek Hamburg Carl von Ossietzky | de
dc.relation.haspart | doi:10.18653/v1/2022.aacl-main.49 | de_DE
dc.relation.haspart | doi:10.1145/3630106.3658981 | de_DE
dc.relation.haspart | https://aclanthology.org/2025.ijcnlp-long.79/ | de_DE
dc.rights | http://purl.org/coar/access_right/c_abf2 | de_DE
dc.subject | Language Models | en
dc.subject | Knowledge Graphs | en
dc.subject | Knowledge-enhanced Language Modeling | en
dc.subject | Algorithmic Bias | en
dc.subject | Epistemic Injustice | en
dc.subject | Feminist Epistemology | en
dc.subject.ddc | 004: Computer science | de_DE
dc.title | On Knowledge in AI: Epistemic and Ethical Limitations of Language Models and Knowledge Graphs | en
dc.type | doctoralThesis | en
dcterms.dateAccepted | 2026-02-04 | -
dc.rights.cc | https://creativecommons.org/licenses/by/4.0/ | de_DE
dc.rights.rs | http://rightsstatements.org/vocab/InC/1.0/ | -
dc.subject.bcl | 54.08: Computer science in relation to people and society | de_DE
dc.subject.bcl | 54.72: Artificial intelligence | de_DE
dc.subject.bcl | 54.82: Text processing | de_DE
dc.subject.gnd | Künstliche Intelligenz | de_DE
dc.subject.gnd | Großes Sprachmodell | de_DE
dc.subject.gnd | Wissensgraph | de_DE
dc.subject.gnd | Bias | de_DE
dc.subject.gnd | Feministische Philosophie | de_DE
dc.type.casrai | Dissertation | -
dc.type.dini | doctoralThesis | -
dc.type.driver | doctoralThesis | -
dc.type.status | info:eu-repo/semantics/publishedVersion | de_DE
dc.type.thesis | doctoralThesis | de_DE
tuhh.type.opus | Dissertation | -
thesis.grantor.department | Informatik | de_DE
thesis.grantor.place | Hamburg | -
thesis.grantor.universityOrInstitution | Universität Hamburg | de_DE
dcterms.DCMIType | Text | -
dc.identifier.urn | urn:nbn:de:gbv:18-ediss-135459 | -
item.fulltext | With Fulltext | -
item.advisorGND | Usbeck, Ricardo | -
item.advisorGND | Simon, Judith | -
item.creatorGND | Kraft, Angelie | -
item.grantfulltext | open | -
item.creatorOrcid | Kraft, Angelie | -
item.languageiso639-1 | other | -
Appears in collections: Elektronische Dissertationen und Habilitationen

Files in this item:
File | Description | Checksum | Size | Format
dissertation_angelie_kraft.pdf | - | 0c821d846af3e3dae78246c2c50d5caf | 3.02 MB | Adobe PDF