Scalable Object Relation Encoding for Better 3D Spatial Reasoning in Large Language Models

Shengli Zhou1* Minghang Zheng2 Feng Zheng1 Yang Liu2,3✉
1Department of Computer Science and Engineering, Southern University of Science and Technology
2Wangxuan Institute of Computer Technology, Peking University
3State Key Laboratory of General Artificial Intelligence, Peking University
*Work done as an intern at Peking University. ✉Corresponding author.

Overview

Spatial reasoning, i.e., locating target objects based on spatial relations in 3D scenes, plays a crucial role in developing intelligent embodied agents. Because 3D scene-language paired data is scarce, it is challenging to train models with strong reasoning ability from scratch. Previous approaches inject 3D scene representations into the input space of Large Language Models (LLMs) and leverage their pretrained comprehension and reasoning abilities for spatial reasoning. However, models that encode absolute positions struggle to extract spatial relations from prematurely fused features, while methods that explicitly encode all pairwise spatial relations as input tokens scale quadratically with the object count and thus suffer from poor scalability.

To address these limitations, we propose QuatRoPE, a novel positional embedding whose input length is linear in the number of objects and which computes pairwise spatial relations explicitly through the dot product in attention layers. QuatRoPE encodes each 3D coordinate as a holistic quaternion vector, which ensures a high degree of spatial consistency and stays faithful to the scene's geometry. Additionally, we introduce the Isolated Gated RoPE Extension (IGRE), which restricts QuatRoPE's influence to object-related tokens, minimizing interference with the LLM's existing positional embeddings and preserving its original capabilities.

We further construct the Attribute-free Spatial Reasoning (ASR) benchmark, which evaluates 3D spatial reasoning in isolation by eliminating object attribute cues. Extensive experiments on ScanRefer, Multi3DRef, SQA3D, and the ASR benchmark show consistent, large-margin gains over strong baselines, validating the effectiveness of QuatRoPE and IGRE.

Method Overview

QuatRoPE

  • Encodes each object's 3D bounding box center as holistic quaternion vectors (avoids single-axis bias)
  • Applies quaternion rotation to query/key vectors in attention layers based on 3D coordinates
  • Converts absolute coordinates to pairwise relative positions via dot product in attention score calculation
  • Linear input complexity O(n) while preserving all O(n²) spatial relations
  • Aligns with human cognitive "Maxim of Relation" (higher attention for spatially proximate objects)
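The paper's exact quaternion construction is not reproduced here, but the core mechanism above (absolute coordinates in, relative positions out via the dot product) can be illustrated with a simplified axis-wise 3D RoPE sketch. The function name, dimension layout, and frequency choices below are illustrative assumptions, not the authors' implementation; the sketch only verifies numerically that the attention score between two rotated vectors is invariant to a global translation of the scene, i.e., it depends only on the relative offset:

```python
import numpy as np

def rotate_3d_rope(vec, pos, freqs):
    """Rotate 2D chunks of `vec` by angles freqs[axis, k] * pos[axis].

    vec:   (d,) head vector, d = 6 * n_freq (2 dims per axis per frequency)
    pos:   (3,) 3D coordinate (e.g. a bounding-box center)
    freqs: (3, n_freq) rotation frequencies, one row per spatial axis
    """
    out = vec.astype(float).copy()
    n_freq = freqs.shape[1]
    idx = 0
    for axis in range(3):
        for k in range(n_freq):
            theta = freqs[axis, k] * pos[axis]
            c, s = np.cos(theta), np.sin(theta)
            x, y = out[idx], out[idx + 1]
            out[idx], out[idx + 1] = c * x - s * y, s * x + c * y
            idx += 2
    return out

rng = np.random.default_rng(0)
d, n_freq = 12, 2
q, k = rng.normal(size=d), rng.normal(size=d)
freqs = (1.0 / 10.0 ** np.arange(n_freq))[None, :].repeat(3, axis=0)

p_q = np.array([1.0, 2.0, 0.5])
p_k = np.array([3.0, -1.0, 2.5])
shift = np.array([0.7, 0.3, -0.2])  # translate the whole scene

score = rotate_3d_rope(q, p_q, freqs) @ rotate_3d_rope(k, p_k, freqs)
score_shifted = rotate_3d_rope(q, p_q + shift, freqs) @ rotate_3d_rope(k, p_k + shift, freqs)
assert np.isclose(score, score_shifted)  # score depends only on p_k - p_q
```

Each 2D chunk is rotated by an angle proportional to one coordinate, so the per-chunk inner product depends only on the angle difference, which is why the translation cancels; the input stays O(n) while all O(n²) pairwise relations emerge inside attention.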

IGRE & ASR Benchmark

Isolated Gated RoPE Extension (IGRE)

  • Isolates QuatRoPE to object-related tokens via dedicated dimensions (zero-padding for non-object tokens)
  • Gates QuatRoPE's effect to only object-object token interactions
  • Preserves LLM's original language understanding and reasoning capabilities
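As a minimal sketch of the isolation idea (dedicated dimensions, zero-padded for non-object tokens), the toy example below appends extra dimensions to a query/key vector. All names and the two-dimension, single-axis setup are illustrative assumptions; the point is that text-text and object-text attention scores are provably unchanged, while object-object pairs gain a relative-position term:

```python
import numpy as np

def apply_rope_2d(vec2, theta):
    """Standard 2D rotary rotation by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * vec2[0] - s * vec2[1], s * vec2[0] + c * vec2[1]])

def igre_extend(qk, extra, pos_x, is_object):
    """Append dedicated spatial dimensions to a query/key vector.

    qk:        (d,) original head vector (keeps the LLM's own RoPE)
    extra:     (2,) features for the dedicated spatial dimensions
    pos_x:     scalar coordinate (toy setting: one axis, one frequency)
    is_object: whether this token is an object token
    """
    if not is_object:
        # zero-padding: non-object tokens contribute nothing in these dims
        return np.concatenate([qk, np.zeros(2)])
    return np.concatenate([qk, apply_rope_2d(extra, pos_x)])

rng = np.random.default_rng(1)
q, k = rng.normal(size=4), rng.normal(size=4)
eq, ek = rng.normal(size=2), rng.normal(size=2)

s_text = igre_extend(q, eq, 0.0, False) @ igre_extend(k, ek, 0.0, False)
assert np.isclose(s_text, q @ k)   # text-text attention unchanged

s_mixed = igre_extend(q, eq, 1.3, True) @ igre_extend(k, ek, 0.0, False)
assert np.isclose(s_mixed, q @ k)  # object-text attention also unchanged

# only object-object scores pick up a relative-position-dependent term
s_obj = igre_extend(q, eq, 1.3, True) @ igre_extend(k, ek, 2.0, True)
```

Because the dedicated dimensions of non-object tokens are all zeros, any dot product involving them vanishes in those dimensions, which is exactly the gating behavior described above.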

Attribute-free Spatial Reasoning (ASR) Benchmark

  • Eliminates object attribute cues (category/color/shape) to force pure spatial reasoning
  • Questions are converted to the 3D visual grounding (VG) format so that evaluation is not biased by free-form language generation


ASR Benchmark Construction Pipeline


Figure 1: The construction pipeline of the ASR benchmark.

Excluded Cases

Figure 2: Examples of questions that are excluded when filtering the ScanQA dataset.

① The question "What is the object surrounding the table?" is excluded because it has multiple correct answers.

② The question "What is the grey object next to the table under the TV?" is excluded because it reveals that the target object is grey, so the model can identify it by color alone without any spatial reasoning.
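The two exclusion rules illustrated above could be sketched as a simple filter. The function name, the attribute word list, and the answer-count check below are illustrative assumptions, not the paper's actual pipeline:

```python
# Hypothetical sketch of the two exclusion rules illustrated above;
# the real ASR pipeline's rules and word lists are defined in the paper.
ATTRIBUTE_WORDS = {"grey", "gray", "red", "blue", "brown", "round", "square", "wooden"}

def keep_question(question: str, num_valid_answers: int) -> bool:
    """Keep a ScanQA question only if it has a unique answer and
    reveals no attribute cue (e.g. color or shape) about the target."""
    if num_valid_answers != 1:          # rule 1: ambiguous questions out
        return False
    tokens = question.lower().replace("?", " ").split()
    if any(t in ATTRIBUTE_WORDS for t in tokens):
        return False                    # rule 2: attribute cues out
    return True

assert not keep_question("What is the object surrounding the table?", 3)
assert not keep_question("What is the grey object next to the table under the TV?", 1)
assert keep_question("What is the object next to the table under the TV?", 1)
```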

Experimental Results

We evaluate QuatRoPE on strong 3D LLM baselines (Chat-Scene, 3DGraphLLM) across 3D VG (ScanRefer, Multi3DRef) and 3D VQA (SQA3D) benchmarks, with additional validation on our ASR benchmark. All results show consistent gains from QuatRoPE.

Main Benchmark Results (GT Segmentation, 1B)

| Model | ScanRefer Acc@0.25 | ScanRefer Acc@0.5 | Multi3DRef F1@0.25 | Multi3DRef F1@0.5 | SQA3D EM@1 |
|---|---|---|---|---|---|
| Chat-Scene-1B | 50.7 | 50.3 | 53.3 | 52.9 | 50.7 |
| Chat-Scene-1B + QuatRoPE (Ours) | 55.4 | 55.0 | 58.1 | 57.7 | 53.1 |
| 3DGraphLLM-1B | 55.9 | 55.8 | 58.6 | 58.4 | 51.1 |
| 3DGraphLLM-1B + QuatRoPE (Ours) | 58.3 | 58.2 | 60.7 | 60.5 | 53.2 |

Main Benchmark Results (Mask3D Segmentation, 7B)

| Model | ScanRefer Acc@0.25 | ScanRefer Acc@0.5 | Multi3DRef F1@0.25 | Multi3DRef F1@0.5 | SQA3D EM@1 |
|---|---|---|---|---|---|
| Chat-Scene-7B | 55.5 | 50.2 | 57.1 | 52.4 | 54.6 |
| Chat-Scene-7B + QuatRoPE (Ours) | 57.8 | 52.2 | 59.5 | 54.8 | 54.7 |
| 3DGraphLLM-7B | 57.0 | 51.3 | 60.1 | 55.4 | 53.1 |
| 3DGraphLLM-7B + QuatRoPE (Ours) | 58.2 | 52.5 | 60.6 | 56.0 | 55.2 |

Results on the ASR Benchmark

| Model | Acc@0.25 | Gain@0.25 | Acc@0.5 | Gain@0.5 |
|---|---|---|---|---|
| Chat-Scene-1B | 22.92 | -- | 22.92 | -- |
| Chat-Scene-1B + QuatRoPE (Ours) | 27.38 | 4.46 (19.48%) | 27.38 | 4.46 (19.48%) |
| 3DGraphLLM-1B | 25.89 | -- | 25.60 | -- |
| 3DGraphLLM-1B + QuatRoPE (Ours) | 29.76 | 3.87 (14.94%) | 29.76 | 4.17 (16.28%) |
| 3DGraphLLM-8B | 37.50 | -- | 36.90 | -- |
| 3DGraphLLM-8B + QuatRoPE (Ours) | 41.96 | 4.46 (11.90%) | 41.96 | 5.06 (13.71%) |

Key Insight: QuatRoPE achieves consistent gains across all metrics on the ASR benchmark, demonstrating its strong ability to enhance pure 3D spatial reasoning in the absence of object attribute cues.

Code & Resources

  • QuatRoPE & IGRE implementation code
  • ASR benchmark dataset and evaluation code
  • Pre-trained model checkpoints
  • Replication scripts for experiments
  • Data processing pipelines for the ASR benchmark

Citation

If you find our work useful in your research, please cite:

@misc{zhou2026scalableobjectrelationencoding,
      title={Scalable Object Relation Encoding for Better 3D Spatial Reasoning in Large Language Models}, 
      author={Shengli Zhou and Minghang Zheng and Feng Zheng and Yang Liu},
      year={2026},
      eprint={2603.24721},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2603.24721}, 
}