Dongki Jung

jdk9405@umd.edu   |   jdk9405@gmail.com

I am a Computer Science Ph.D. student at the University of Maryland, College Park (UMD), working with Prof. Dinesh Manocha at the GAMMA Lab. At UMD, I have worked on 3D reconstruction and neural rendering.

Previously, I was a research scientist at NAVER LABS, where I worked on visual localization and mapping for robotics.
I received my M.S. from KAIST, where I was advised by Prof. Changick Kim.
I completed my bachelor's degree at Korea University.

Email  /  CV  /  Google Scholar

profile photo
Research

My research interests lie in combining classical geometry with recent deep learning methods for 3D vision.

Preprints
UAV4D: Dynamic Neural Rendering of Human-Centric UAV Imagery using Gaussian Splatting
Jaehoon Choi, Dongki Jung, Christopher Maxey, Sungmin Eum, Yonghan Lee, Dinesh Manocha, and Heesung Kwon
[arXiv] [Project]
We introduce UAV4D, a framework that enables photorealistic rendering of dynamic real-world scenes captured by UAVs.

UAVTwin: Neural Digital Twins for UAVs using Gaussian Splatting
Jaehoon Choi, Dongki Jung, Yonghan Lee, Sungmin Eum, Dinesh Manocha, and Heesung Kwon
[arXiv] [Project]
We present a method for creating digital twins from real-world environments and facilitating data augmentation for training downstream models embedded in unmanned aerial vehicles (UAVs).

Mode-GS: Monocular Depth Guided Anchored 3D Gaussian Splatting for Robust Ground-View Scene Rendering
Yonghan Lee, Jaehoon Choi, Dongki Jung, Jaeseong Yun, Soohyun Ryu, Dinesh Manocha, and Suyong Yeon
[arXiv]

We propose a novel 3D Gaussian Splatting algorithm that integrates a monocular depth network with anchored Gaussian Splatting, enabling robust rendering performance on sparse-view datasets.

Publications
IM360: Large-scale Indoor Mapping with 360 Cameras
Dongki Jung*, Jaehoon Choi*, Yonghan Lee, Dinesh Manocha
ICCV, 2025
[Project]
We propose a complete pipeline for indoor mapping using omnidirectional images, consisting of three key stages: (1) Spherical SfM, (2) Neural Surface Reconstruction, and (3) Texture Optimization.

EDM: Equirectangular Projection-Oriented Dense Kernelized Feature Matching
Dongki Jung, Jaehoon Choi, Yonghan Lee, Somi Jeong, Taejae Lee, Dinesh Manocha, Suyong Yeon
CVPR, 2025
[Project]
We propose the first learning-based dense matching algorithm for omnidirectional images.

WayIL: Image-based Indoor Localization with Wayfinding Maps
Obin Kwon, Dongki Jung, Youngji Kim, Soohyun Ryu, Suyong Yeon, Songhwai Oh, Donghwan Lee
ICRA, 2024

We address robot localization in large-scale indoor environments using wayfinding maps.

TMO: Textured Mesh Acquisition of Objects with a Mobile Device by using Differentiable Rendering
Jaehoon Choi, Dongki Jung, Taejae Lee, Sangwook Kim, Youngdong Jung, Dinesh Manocha, Donghwan Lee
CVPR, 2023
[Project]
We present a new pipeline for acquiring a textured mesh in the wild with a mobile device.


SelfTune: Metrically Scaled Monocular Depth Estimation through Self-Supervised Learning
Jaehoon Choi*, Dongki Jung*, Yonghan Lee, Deokhwa Kim, Dinesh Manocha, Donghwan Lee
ICRA, 2022

We develop a fine-tuning method for metrically accurate depth estimation in a self-supervised manner.



DnD: Dense Depth Estimation in Crowded Dynamic Indoor Scenes
Dongki Jung*, Jaehoon Choi*, Yonghan Lee, Deokhwa Kim, Changick Kim, Dinesh Manocha, Donghwan Lee
ICCV, 2021

We present a novel approach for estimating depth from a monocular camera as it moves through complex and crowded indoor environments.

Just a Few Points are All You Need for Multi-view Stereo: A Novel Semi-supervised Learning Method for Multi-view Stereo
Taekyung Kim, Jaehoon Choi, Seokeon Choi, Dongki Jung, Changick Kim
ICCV, 2021

We introduce the first semi-supervised learning framework for multi-view stereo.



SelfDeco: Self-Supervised Monocular Depth Completion in Challenging Indoor Environments
Jaehoon Choi, Dongki Jung, Yonghan Lee, Deokhwa Kim, Dinesh Manocha, Donghwan Lee
ICRA, 2021

We present a novel algorithm for self-supervised monocular depth completion in challenging indoor environments.



SAFENet: Self-Supervised Monocular Depth Estimation with Semantic-Aware Feature Extraction
Jaehoon Choi*, Dongki Jung*, Donghwan Lee, Changick Kim
NeurIPS Workshop on Machine Learning for Autonomous Driving, 2020

We propose SAFENet, which leverages semantic information to overcome the limitations of the photometric loss.

Arbitrary Style Transfer Using Graph Instance Normalization
Dongki Jung, Seunghan Yang, Jaehoon Choi, Changick Kim
ICIP, 2020

We present a novel learnable normalization technique for style transfer using graph convolutional networks.

Partial Domain Adaptation Using Graph Convolutional Networks
Seunghan Yang, Youngeun Kim, Dongki Jung, Changick Kim
arXiv, 2020

We propose a graph partial domain adaptation network that exploits graph convolutional networks.


Thanks to Jon Barron!