iLRM

An Iterative Large 3D Reconstruction Model

Sungkyunkwan University, Yonsei University, Rembrand

iLRM Overview

Abstract

Feed-forward 3D modeling has emerged as a promising approach for rapid and high-quality 3D reconstruction. In particular, directly generating explicit 3D representations, such as 3D Gaussian splatting, has attracted significant attention due to its fast and high-quality rendering, as well as numerous applications. However, many state-of-the-art methods, primarily based on transformer architectures, suffer from severe scalability issues because they rely on full attention across image tokens from multiple input views, resulting in prohibitive computational costs as the number of views or image resolution increases.

Toward scalable and efficient feed-forward 3D reconstruction, we introduce an iterative Large 3D Reconstruction Model (iLRM) that generates 3D Gaussian representations through an iterative refinement mechanism, guided by three core principles: (1) decoupling the scene representation from input-view images to enable compact 3D representations; (2) decomposing fully-attentional multi-view interactions into a two-stage attention scheme to reduce computational costs; and (3) injecting high-resolution information at every layer to achieve high-fidelity reconstruction. Experimental results on widely used datasets, such as RE10K and DL3DV, demonstrate that iLRM outperforms existing methods in both reconstruction quality and speed.

Notably, iLRM exhibits superior scalability, delivering significantly higher reconstruction quality under comparable computational cost by efficiently leveraging a larger number of input views.
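To make the three principles concrete, below is a minimal, hypothetical PyTorch sketch of a single refinement block, written under our own assumptions rather than taken from the released code: a compact set of learned scene tokens (decoupled from the input views) first cross-attends to each view's image tokens separately, then self-attends to exchange multi-view information, and the high-resolution image tokens are re-injected at every block rather than only at the input. The module name, dimensions, and the mean-over-views aggregation are illustrative choices, not the official implementation.

# Hypothetical sketch of one iLRM-style refinement block (not the official code).
# Assumptions: "scene tokens" are learned latents decoupled from the input views;
# stage 1 = per-view cross-attention (scene tokens <- image tokens of one view),
# stage 2 = self-attention among scene tokens; high-resolution view tokens are
# passed into every block rather than only the first.
import torch
import torch.nn as nn

class IterativeRefinementBlock(nn.Module):
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)  # stage 1
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)   # stage 2
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(dim) for _ in range(3))
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, scene_tokens, view_tokens):
        # scene_tokens: (B, S, D) latents decoupled from the input views
        # view_tokens:  (B, V, T, D) image tokens, re-injected at every layer
        B, V, T, D = view_tokens.shape
        # Stage 1: each view is attended to separately, avoiding full (V*T)^2 attention.
        q = self.norm1(scene_tokens).unsqueeze(1).expand(B, V, -1, D).reshape(B * V, -1, D)
        kv = view_tokens.reshape(B * V, T, D)
        per_view, _ = self.cross_attn(q, kv, kv)
        scene_tokens = scene_tokens + per_view.reshape(B, V, -1, D).mean(dim=1)
        # Stage 2: scene tokens exchange multi-view information among themselves.
        x = self.norm2(scene_tokens)
        scene_tokens = scene_tokens + self.self_attn(x, x, x)[0]
        return scene_tokens + self.mlp(self.norm3(scene_tokens))

# Toy usage: 1,024 scene tokens refined against 4 views of 480 tokens each.
block = IterativeRefinementBlock()
scene = torch.randn(1, 1024, 768)
views = torch.randn(1, 4, 480, 768)
out = block(scene, views)  # (1, 1024, 768)

The key cost saving is that attention is either scene-tokens-to-one-view (stage 1) or scene-tokens-to-scene-tokens (stage 2); no attention map ever spans all image tokens of all views jointly.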

Results on the DL3DV dataset using 32 input images with a resolution of 540×960.

Core architectural design

Our method decouples the scene representation from the input-view images, enabling efficient computation and compact 3D reconstruction. The example above uses half-resolution views, which significantly reduces the attention cost while maintaining high-quality reconstruction.
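As a back-of-the-envelope illustration of why this helps (the patch size of 16 and the budget of 4,096 scene tokens below are our own assumptions, not numbers from the paper), the snippet compares the number of query-key pairs in full multi-view attention against the two-stage scheme with half-resolution views:

# Back-of-the-envelope attention-cost comparison (illustrative numbers only).
# Assumptions: 32 views at 540x960, patch size 16, 4,096 scene tokens, and that
# attention cost scales with (#queries x #keys); real costs depend on the model.
views, H, W, patch = 32, 540, 960, 16
tokens_per_view = (H // patch) * (W // patch)             # 33 x 60 = 1,980 tokens per full-res view
full_attention = (views * tokens_per_view) ** 2           # all image tokens attend to all

scene_tokens = 4096
half_res_tokens = (H // 2 // patch) * (W // 2 // patch)   # 16 x 30 = 480 tokens per half-res view
stage1 = scene_tokens * half_res_tokens * views           # per-view cross-attention
stage2 = scene_tokens ** 2                                 # self-attention over scene tokens
print(f"full attention pairs: {full_attention:,}")         # ~4.0e9
print(f"two-stage pairs:      {stage1 + stage2:,}")        # ~8.0e7

Under these assumed numbers the two-stage scheme touches roughly 50x fewer query-key pairs, and the gap widens with more views because the quadratic term is over the fixed scene-token budget rather than over all image tokens.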

Comparison

RealEstate10K

We compare our method with state-of-the-art methods on the RealEstate10K dataset using various numbers of input images.

DL3DV low-resolution (256×448)

We compare our method with state-of-the-art methods on the DL3DV dataset. The quantitative results, shown in the table below, are evaluated over 50-frame coverage following the DepthSplat protocol. The qualitative comparisons are presented in the accompanying video, with both methods using 24 input images and evaluated over full-frame coverage. We also report the encoding time and memory consumption of each method. Note that our method generates only 1/4 as many Gaussians as the baseline method.

DL3DV high-resolution (512×960)

We compare our method with state-of-the-art methods on the DL3DV dataset. In the qualitative comparisons, both methods use 12 input images at a resolution of 512×960, covering a 100-frame interval. Note that our method generates only 1/4 as many Gaussians as the baseline method.

Qualitative comparison: iLRM (Ours) vs. DepthSplat.

DL3DV high-resolution (540×960, Undistorted)

We compare our method against the current state-of-the-art wide-coverage feed-forward 3D reconstruction model, LongLRM, as well as the optimization-based methods 3D-GS and Mip-Splatting. Undistorted refers to the undistorted version of the DL3DV dataset based on this script. The quantitative results are presented in the table below, evaluated with 32 input images under full-frame coverage following the LongLRM protocol. Both 3D-GS and Mip-Splatting are trained for 30k iterations. LongLRM10 denotes fine-tuning for 10 epochs, initialized from LongLRM's generated Gaussians. Since we generate a more compact 3D Gaussian representation, our fine-tuning converges much faster than LongLRM's. We use FlashAttention-3 for zero-shot inference.

BibTeX


@article{kang2025ilrm,
  title={iLRM: An Iterative Large 3D Reconstruction Model},
  author={Kang, Gyeongjin and Nam, Seungtae and Sun, Xiangyu and Khamis, Sameh and Mohamed, Abdelrahman and Park, Eunbyung},
  journal={arXiv preprint arXiv:2507.23277},
  year={2025},
}