DC Element | Value | Language
dc.contributor.advisor | Wermter, Stefan | -
dc.contributor.author | Zhao, Xufeng | -
dc.date.accessioned | 2025-10-17T07:53:17Z | -
dc.date.available | 2025-10-17T07:53:17Z | -
dc.date.issued | 2025 | -
dc.identifier.uri | https://ediss.sub.uni-hamburg.de/handle/ediss/11966 | -
dc.description.abstract | With the rapid advancement of Artificial Intelligence (AI), autonomous systems have gained increasing attention due to their growing potential across both virtual and real-world applications. Developing embodied agents that can follow human instructions requires not only semantic understanding but also efficient policy learning. To achieve further autonomy, an agent must explore its environment and adapt its capabilities beyond its initial design, which motivates research into world modeling and robotic self-determination. This thesis begins by presenting a unified conceptual foundation for autonomous embodiment, followed by contributions that integrate multiple aspects of this foundation. First, the thesis introduces multimodal cues as intrinsic motivation, enabling reinforcement learning agents to engage in self-determined exploration and representation learning and to warm up their policies beyond immediate task demands. Second, the thesis proposes a decision-level interactive perception approach based on Large Language Models (LLMs), enabling agents to semantically reason about multimodal inputs for improved exploration and environmental understanding. Third, to strengthen the reasoning abilities of LLMs, the thesis explores logic-guided inference exploration to enhance performance on complex reasoning tasks without requiring additional fine-tuning. Fourth, the thesis addresses long-term embodied autonomy by enabling agents to reason about affordances in their environment and discover novel skills through self-determined policy learning. Finally, the thesis concludes with collaborative research on object-centric planning, bimanual coordination, and explainability in embodied systems, further extending and contextualizing the contributions within broader research on embodied intelligence. | en
dc.language.iso | en | de_DE
dc.publisher | Staats- und Universitätsbibliothek Hamburg Carl von Ossietzky | de
dc.rights | http://purl.org/coar/access_right/c_abf2 | de_DE
dc.subject.ddc | 004: Informatik | de_DE
dc.title | Environment Exploration and Autonomous Adaptation in Embodied Agents | en
dc.type | doctoralThesis | en
dcterms.dateAccepted | 2025-10-08 | -
dc.rights.cc | https://creativecommons.org/licenses/by/4.0/ | de_DE
dc.rights.rs | http://rightsstatements.org/vocab/InC/1.0/ | -
dc.subject.bcl | 54.72: Künstliche Intelligenz | de_DE
dc.subject.gnd | Robotik | de_DE
dc.subject.gnd | Künstliche Intelligenz | de_DE
dc.subject.gnd | Deep learning | de_DE
dc.subject.gnd | Großes Sprachmodell | de_DE
dc.subject.gnd | Bestärkendes Lernen <Künstliche Intelligenz> | de_DE
dc.type.casrai | Dissertation | -
dc.type.dini | doctoralThesis | -
dc.type.driver | doctoralThesis | -
dc.type.status | info:eu-repo/semantics/publishedVersion | de_DE
dc.type.thesis | doctoralThesis | de_DE
tuhh.type.opus | Dissertation | -
thesis.grantor.department | Informatik | de_DE
thesis.grantor.place | Hamburg | -
thesis.grantor.universityOrInstitution | Universität Hamburg | de_DE
dcterms.DCMIType | Text | -
dc.identifier.urn | urn:nbn:de:gbv:18-ediss-131949 | -
item.creatorOrcid | Zhao, Xufeng | -
item.fulltext | With Fulltext | -
item.creatorGND | Zhao, Xufeng | -
item.grantfulltext | open | -
item.languageiso639-1 | other | -
item.advisorGND | Wermter, Stefan | -
Appears in Collections: Elektronische Dissertationen und Habilitationen
Files in This Item:
File | Description | Checksum | Size | Format
Xufeng-Thesis-V6-signed.pdf | Signed Xufeng Zhao's thesis upload. | 0dde7660001f2e40bb5ffca8f2504113 | 21.48 MB | Adobe PDF