Daksitha Withanage Don, M.Sc.

Research Associate
Lehrstuhl für Menschzentrierte Künstliche Intelligenz
Phone: +49 821 – 598 2305
E-Mail:
Room: 2044 (N)
Address: Universitätsstraße 6a, 86159 Augsburg

Research Interests

  • Affective Computing
  • Artificial Emotional Intelligence
  • Socially Interactive Agents
  • Generative AI
  • Self-supervised Learning

Bachelor/Master Thesis or Project Module

Thesis Guidelines for Prospective Students

If you are interested in writing a Bachelor’s or Master’s thesis with me, please read the guidelines below before applying.

How to Apply

Open Topics

Please first check the Open Topics section below. If a topic matches your interests, send me:

  • the topic you are interested in,
  • a short motivation explaining why it fits your interests and background,
  • your planned thesis timeframe, including start and end dates.

Supervision depends on my current capacity and the relevance of the topic.

Own Topic or External Proposal

You may also propose your own topic or an external/company thesis. In this case, please include:

  • a short topic description,
  • how it aligns with my research,
  • your motivation and planned timeframe,
  • for company topics: the original topic description and any requirements, such as NDAs or company-side supervision.

If the topic is suitable, I will guide you through the next steps.

Next Steps

If accepted, we will refine the topic, define the research question, discuss the methodology, and clarify the expected outcome. You are responsible for checking university requirements such as registration, submission deadlines, and defense or presentation rules.

If the topic is not suitable, you may revise your proposal or contact another supervisor.

Evaluation Criteria

Your thesis will be evaluated based on:

  • literature review,
  • scientific approach and methodology,
  • structure and documentation,
  • novelty and significance, especially for Master’s theses,
  • quality of implementation, experiment, or study design,
  • critical reflection on results and limitations.

Contact
Feel free to contact me if you have questions or would like to apply.

I look forward to working with motivated students on exciting research projects.

Open Topics

Evaluation of Speech-Driven 3D Gesture Generation
Bachelor

Short description
This thesis focuses on training a lightweight model that generates upper-body gestures from speech input. The model can use speech audio, voice activity, and optional transcript information to predict 3D motion for a virtual character.

Research focus
The main goal is to build a reproducible baseline pipeline for speech-driven gesture generation.

Possible tasks

  • Review speech-driven gesture generation literature
  • Preprocess audio and 3D motion data
  • Train a simple temporal model such as LSTM, GRU, CNN, or Transformer
  • Generate upper-body gesture sequences
  • Evaluate motion smoothness, diversity, and speech alignment
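
Of the evaluation steps above, motion smoothness is commonly approximated by average jerk, the third derivative of joint position over time (lower is smoother). A minimal sketch under that assumption, using plain per-frame 3D positions (all names are illustrative, not part of any fixed pipeline):

```python
def mean_jerk(positions, fps=30.0):
    """Average jerk magnitude of a motion trajectory.

    positions: per-frame 3D coordinates of one joint, e.g. [[x, y, z], ...].
    Lower values indicate smoother motion.
    """
    dt = 1.0 / fps
    jerks = []
    for t in range(3, len(positions)):
        # third-order finite difference of position approximates jerk
        jerk = [
            (positions[t][d] - 3 * positions[t - 1][d]
             + 3 * positions[t - 2][d] - positions[t - 3][d]) / dt ** 3
            for d in range(3)
        ]
        jerks.append(sum(j * j for j in jerk) ** 0.5)
    return sum(jerks) / len(jerks)

# A constant-velocity trajectory has zero jerk.
linear = [[float(t), 0.0, 0.0] for t in range(10)]
print(mean_jerk(linear))  # → 0.0
```

In practice this would be averaged over all joints and combined with diversity and speech-alignment metrics, but the same finite-difference idea carries over.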

Related work areas

  • Co-speech gesture generation
  • Speech-to-motion learning
  • Motion representation and SMPL-H
  • Automatic evaluation of generated gestures

Expected outcome
A working baseline model and a documented training pipeline for generating 3D gestures from speech.

Related Links to Read

Retargeting Generated Gestures to Virtual Characters
Bachelor

Short description
This thesis develops a tool for visually inspecting generated gestures together with speech audio, transcript timing, voice activity, and character animation.

Research focus
Support researchers in debugging and comparing generated gestures across different virtual characters.

Possible tasks

  • Review visual analytics and gesture-evaluation literature
  • Design a simple inspection interface
  • Visualise audio, transcript, VAD, and motion timelines
  • Show generated gestures on one or more avatars
  • Add simple rating or comparison features
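
Visualising audio, transcript, VAD, and motion on one timeline usually starts by rasterising time-stamped segments onto a common frame grid. A small sketch of that step (the function name and segment format are assumptions, not a fixed interface):

```python
def segments_to_frames(segments, num_frames, fps=30.0):
    """Rasterise (start_sec, end_sec) segments onto a per-frame boolean track,
    so VAD, word spans, and motion clips can be drawn on one shared timeline."""
    track = [False] * num_frames
    for start, end in segments:
        first = max(0, int(start * fps))
        last = min(num_frames, int(end * fps) + 1)
        for i in range(first, last):
            track[i] = True
    return track

# one VAD segment from 0.0 s to 0.1 s on a 1-second timeline at 10 fps
vad_track = segments_to_frames([(0.0, 0.1)], num_frames=10, fps=10.0)
print(vad_track)  # frames 0 and 1 are active
```

Once every modality lives on the same frame grid, the inspection interface only has to stack and scroll the resulting tracks.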

Related work areas

  • Gesture-generation evaluation
  • Visual analytics for motion data
  • Speech–gesture alignment
  • Human-centered AI tools

Related Links to Read

Expected outcome
A lightweight visual inspection tool for analysing generated gestures and avatar animations.

Multimodal Gesture Generation for MetaHuman-Based Virtual Agents
Master

Short description
This thesis focuses on training a multimodal gesture-generation model for realistic virtual characters such as MetaHumans. The model can use speech audio, transcripts, word-level timestamps, and voice activity to generate upper-body gestures.

Research focus
Investigate how different input modalities improve gesture quality and speech alignment.

Possible tasks

  • Review multimodal speech-driven gesture-generation literature
  • Preprocess audio, transcript, VAD, and 3D motion data
  • Train and compare several model variants
  • Retarget generated gestures to a MetaHuman or similar character
  • Evaluate naturalness, smoothness, and speech alignment
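
The preprocessing task above amounts to aligning word-level timestamps and VAD with per-frame audio features, so every modality can be fused into one input vector per frame. A minimal sketch, assuming the word timestamps are (word, start, end) tuples (the function and feature layout are illustrative):

```python
def build_frame_features(audio_feats, word_stamps, vad, fps=30.0):
    """Concatenate per-frame audio features with a word-presence flag and a
    VAD flag, yielding one fused multimodal vector per frame."""
    frames = []
    for i, a in enumerate(audio_feats):
        t = i / fps
        in_word = any(s <= t < e for _, s, e in word_stamps)
        frames.append(list(a) + [1.0 if in_word else 0.0,
                                 1.0 if vad[i] else 0.0])
    return frames

feats = build_frame_features(
    audio_feats=[[0.5], [0.6], [0.7]],       # e.g. one energy value per frame
    word_stamps=[("hello", 0.0, 0.05)],      # word-level timestamps in seconds
    vad=[True, True, False],
    fps=30.0,
)
print(feats)  # → [[0.5, 1.0, 1.0], [0.6, 1.0, 1.0], [0.7, 0.0, 0.0]]
```

Real pipelines would use richer audio features and word embeddings instead of binary flags, but the frame-level alignment logic is the same.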

Related work areas

  • Multimodal gesture generation
  • Speech-driven animation
  • Transformer and diffusion models for motion generation
  • MetaHumans and embodied conversational agents

Related Links to Read

Expected outcome
A complete pipeline from multimodal input to generated and animated gestures on a realistic virtual character.

Socially Aware Gesture Generation Using Interlocutor Context
Master

Short description
This thesis investigates gesture generation in dyadic interaction. Instead of using only the speaker’s own speech, the model also considers the interlocutor’s speech, voice activity, or motion.

Research focus
Study whether interlocutor context improves the timing, naturalness, and social appropriateness of generated gestures.

Possible tasks

  • Review dyadic gesture generation and social signal processing literature
  • Prepare speaker and interlocutor input features
  • Train speaker-only and dyadic-context models
  • Compare generated gestures across different input settings
  • Evaluate motion quality and social appropriateness
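
The speaker-only versus dyadic-context comparison above can be set up by keeping the model fixed and only switching the input construction. A minimal sketch of that contrast, assuming frame-aligned feature lists for both participants (names are hypothetical):

```python
def dyadic_input(speaker_feats, interloc_feats, use_context=True):
    """Per-frame model input: the speaker's own features alone, or
    concatenated with the interlocutor's features when dyadic context
    is enabled."""
    if not use_context:
        return [list(s) for s in speaker_feats]
    return [list(s) + list(i) for s, i in zip(speaker_feats, interloc_feats)]

speaker = [[0.1], [0.2]]   # e.g. speaker audio energy per frame
listener = [[1.0], [0.0]]  # e.g. interlocutor VAD per frame
print(dyadic_input(speaker, listener))         # → [[0.1, 1.0], [0.2, 0.0]]
print(dyadic_input(speaker, listener, False))  # → [[0.1], [0.2]]
```

Training the same architecture on both input variants isolates the contribution of interlocutor context in the evaluation.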

Related work areas

  • Dyadic interaction modelling
  • Listener-aware behaviour generation
  • Turn-taking and backchannel behaviour
  • Socially aware virtual agents

Related Links to Read

Expected outcome
A gesture-generation model that considers conversational context and supports more socially responsive virtual characters.

Supervised Theses

  • Automated ICEP-R Annotation of Infant-Caregiver Interactions Using V-JEPA Self-Supervised Learning (Ahmed, 2024)
  • Augmenting Social Interactive Agents: Integrating Long-Term Memory in Large Language Models (Lama, 2024)
  • Interactive Agent Realism: MediaPipe 3D Blendshapes for Low Resource-Intensive Listening Behavior Modeling (Sarah, 2024)
  • Development and Evaluation of a Retrieval-Augmented Generation System for an Interactive Virtual Assistant in a Museum (Seefeld, 2025)
  • Design and Development of an Open-Source Platform for Synthesizing Virtual Listener and Speaker Behaviors in Conversational Agents (Hawater, 2025)
  • SyncSight: Measuring Dyadic Non-Verbal Synchrony with Foundation Vision Models (Ibrahim, 2025)

Projects

DEEP: Multi-Layered Processing of Emotions for Social Agents. Combining an interpretation of social signals with a computational model of dialogue partners' emotions
Effects of the COVID-19 Pandemic on Parenting and Child Development (SCHWAN)