Seokhyeon Hong

Hello, world! I am a Ph.D. student at the Visual Media Lab, KAIST, advised by Prof. Junyong Noh. My research interests lie in computer graphics and character animation, including generation, editing, in-betweening, retargeting, rigging, and all the fun stuff that makes characters move.

Email  /  CV  /  Scholar  /  Github  /  Blog  /  YouTube

profile photo

Publications

SALAD: Skeleton-aware Latent Diffusion for Text-driven Motion Generation and Editing
Seokhyeon Hong, Chaelin Kim, Serin Yoon, Junghyun Nam, Sihun Cha, Junyong Noh
CVPR 2025
project page / paper / code / video

A skeleton-aware latent diffusion model that incorporates interactions among skeletal joints, motion frames, and textual words for text-driven motion generation and zero-shot editing.

AnyMoLe: Any Character Motion In-betweening Leveraging Video Diffusion Models
Kwan Yun, Seokhyeon Hong, Chaelin Kim, Junyong Noh
CVPR 2025
project page / paper / code

Motion in-betweening for arbitrary characters using video diffusion models fine-tuned with ICAdapt and motion-video mimicking, reducing dependency on character-specific datasets.

ASMR: Adaptive Skeleton-Mesh Rigging and Skinning via 2D Generative Prior
Seokhyeon Hong*, Soojin Choi*, Chaelin Kim, Sihun Cha, Junyong Noh
(* equal contribution)
Eurographics 2025; CGF
project page / paper / code / video

Rigging and skinning 3D character meshes by leveraging cross-attention modules and 2D generative priors for robust generalization across diverse skeletal and mesh configurations.

Geometry-Aware Retargeting for Two Skinned Characters Interaction
Inseo Jang, Soojin Choi, Seokhyeon Hong, Chaelin Kim, Junyong Noh
SIGGRAPH Asia 2024; TOG
project page / paper / video

Retargeting interaction motions of two skinned characters via sparse mesh-agnostic geometry representations and spatio-cooperative transformers.

Long-term Motion In-betweening via Keyframe Prediction
Seokhyeon Hong, Haemin Kim, Kyungmin Cho, Junyong Noh
SCA 2024; CGF
paper / code / video

A two-stage hierarchical transformer that predicts keyframes for long-term motion in-betweening.

Recurrent Motion Refiner for Locomotion Stitching
Haemin Kim, Kyungmin Cho, Seokhyeon Hong, Junyong Noh
Eurographics 2024; CGF
paper / video

A recurrent motion refiner trained for neural locomotion stitching.