
I am Payam Jome-Yazdian, a Ph.D. candidate and research & teaching assistant in Computing Science at Simon Fraser University. I work with Dr. Angelica Lim in the ROSIE Lab, collaborate with the MARS Lab under the supervision of Dr. Mo Chen, and conduct joint Ph.D. research with HUAWEI on generative and multimodal methods that ground expressive human behavior in AI systems. My work spans computer vision and machine learning, from self-supervised representations and diffusion models to multimodal language alignment, applied to digital humans, robotics, and interactive media.

During my Ph.D., I interned as a machine learning researcher with the Digital Humans team at Eyeline Studios (powered by Netflix), working with Jim Su, Ahmet Levent Taşel, and Paul Debevec on controllable motion and video generation. I later joined Electronic Arts SEED as a machine learning research intern, collaborating with Konrad Tollmar, Han Liu, Hau Nghiep Phan, and Ray Phan to advance panoramic video diffusion pipelines.

Research interests include:

  • Video diffusion and generative models for immersive media
  • Multimodal learning for human motion, gestures, and style transfer
  • LLM-driven pipelines that connect language, perception, and action
Payam Jome-Yazdian


pjomeyaz@sfu.ca
(778) 251-4174

Ph.D. Candidate · Research & Teaching Assistant
Simon Fraser University
Vancouver, Canada

LinkedIn · Email · Google Scholar · GitHub

News

Aug 2025
  • Wrapping up a panoramic video generation internship at Electronic Arts SEED, delivering a 360° diffusion fine-tuning pipeline.
Jul 2025
  • MotionScript accepted to IROS 2025, showcasing text-to-motion descriptions for expressive digital humans.
Jun 2025
  • Co-authored Tokenizing Nonverbal Communication in Salsa Dance for the ICML 2025 Tokenization Workshop.
Nov 2024
  • Extended internship at Eyeline Studios to lead controllable AI video generation initiatives.
Oct 2022
  • 🏆 Received the IROS 2022 Cognitive Robotics Best Paper Award for Gesture2Vec.
Jun 2022
  • Gesture2Vec accepted to IROS 2022, presenting our representation learning framework for co-speech gestures.