Can MLLMs Guide Me Home? A Benchmark Study on Fine-Grained Visual Reasoning from Transit Maps

2025

1Westlake University, Hangzhou, China 2National University of Singapore, Singapore 3Zhejiang University, Hangzhou, China 4Huazhong University of Science and Technology, Wuhan, China
*Corresponding author: wanghuan@westlake.edu.cn

Overview of ReasonMap. ReasonMap is a benchmark dataset designed to evaluate fine-grained visual reasoning abilities of MLLMs, encompassing 1,008 question–answer pairs constructed over high-resolution transit maps from 30 cities, spanning two question types and three templates.

Abstract

Multimodal large language models (MLLMs) have recently achieved significant progress in visual tasks, including semantic scene understanding and text-image alignment, with reasoning variants enhancing performance on complex tasks involving mathematics and logic. However, their capacity for reasoning tasks involving fine-grained visual understanding remains insufficiently evaluated. To address this gap, we introduce ReasonMap, a benchmark designed to assess the fine-grained visual understanding and spatial reasoning abilities of MLLMs. ReasonMap encompasses high-resolution transit maps from 30 cities across 13 countries and includes 1,008 question-answer pairs spanning two question types and three templates. Furthermore, we design a two-level evaluation pipeline that properly assesses answer correctness and quality. Comprehensive evaluations of 15 popular MLLMs, including both base and reasoning variants, reveal a counterintuitive pattern: among open-source models, base models outperform reasoning ones, while the opposite trend is observed in closed-source models. Additionally, performance generally degrades when visual inputs are masked, indicating that while MLLMs can leverage prior knowledge to answer some questions, fine-grained visual reasoning tasks still require genuine visual perception for strong performance. Our benchmark study offers new insights into visual reasoning and contributes to investigating the gap between open-source and closed-source models.
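For intuition, the sketch below shows one way a two-level check of this kind could be organized: a correctness level that verifies the predicted route, and a quality level that scores it. This is a simplified illustration under assumed conventions (routes as lists of line/stop legs, a transfer-based quality penalty), not the evaluation pipeline used in the paper.

    # Illustrative two-level check (not ReasonMap's actual scoring code).
    # A route is represented as a list of (line_name, board_stop, alight_stop) legs.
    from typing import List, Tuple

    Leg = Tuple[str, str, str]

    def evaluate_route(pred: List[Leg], valid_routes: List[List[Leg]]) -> dict:
        # Level 1 (correctness): the predicted route must match one valid route exactly.
        if not any(pred == route for route in valid_routes):
            return {"correct": False, "quality": 0.0}
        # Level 2 (quality): penalize transfers beyond the best valid route.
        best_transfers = min(len(route) - 1 for route in valid_routes)
        extra_transfers = max(0, (len(pred) - 1) - best_transfers)
        return {"correct": True, "quality": max(0.0, 1.0 - 0.25 * extra_transfers)}

    if __name__ == "__main__":
        valid = [[("Line 1", "A", "C"), ("Line 3", "C", "F")]]
        pred = [("Line 1", "A", "C"), ("Line 3", "C", "F")]
        print(evaluate_route(pred, valid))  # {'correct': True, 'quality': 1.0}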

Experimental Results

Table 1: Evaluations of various MLLMs on ReasonMap. S. represents short questions (max map score = 20), L. denotes long questions (max map score = 40). Bold is best per group; Underline is second best.

| Model | Type | Acc. (S.) | #Tokens (S.) | Acc. (L.) | #Tokens (L.) | Map Score (S. / L.) |
| --- | --- | --- | --- | --- | --- | --- |
| **Open-source Models** | | | | | | |
| Qwen2.5-VL-3B-Instruct | Base | 8.68% | 42 | 7.99% | 151 | 2.75 / 3.70 |
| Qwen2.5-VL-32B-Instruct | Base | 16.49% | 36 | 15.71% | 112 | 3.88 / 6.84 |
| Qwen2.5-VL-72B-Instruct | Base | 26.65% | 33 | 24.22% | 104 | 5.09 / 8.80 |
| InternVL3-38B | Base | 14.84% | 43 | 13.45% | 68 | 3.48 / 6.31 |
| InternVL3-78B | Base | 25.35% | 33 | 19.62% | 62 | 4.80 / 7.50 |
| Kimi-VL-A3B-Instruct | Base | 12.76% | 41 | 12.33% | 41 | 3.30 / 5.37 |
| Kimi-VL-A3B-Thinking | Reasoning | 5.47% | 754 | 5.47% | 1,287 | 2.44 / 3.17 |
| Skywork-R1V-38B | Reasoning | 6.86% | 645 | 3.21% | 842 | 2.11 / 3.11 |
| QvQ-72B-Preview | Reasoning | 9.03% | 1,279 | 4.25% | 1,619 | 1.59 / 1.55 |
| **Closed-source Models** | | | | | | |
| Doubao-115 | Base | 34.20% | 32 | 38.02% | 118 | 5.25 / 11.96 |
| OpenAI 4o | Base | 41.15% | 34 | 42.80% | 58 | 6.84 / 13.57 |
| Doubao-415 | Reasoning | 43.14% | 536 | 46.09% | 1,796 | 7.33 / 14.67 |
| Doubao-428 | Reasoning | 37.15% | 532 | 37.85% | 2,167 | 5.52 / 11.73 |
| Gemini-2.5-Flash | Reasoning | 46.09% | 806 | 29.86% | 1,419 | 7.64 / 9.98 |
| OpenAI o3 | Reasoning | 63.02% | 1,236 | 59.11% | 2,372 | 9.53 / 17.96 |

Table 2: Evaluations of various MLLMs on ReasonMap without visual inputs. S. represents short questions (max map score = 20), L. denotes long questions (max map score = 40). Bold is best per group; Underline is second best. A green ↑ indicates an improvement and a red ↓ a drop relative to the full-input results in Table 1.

| Model | Type | Acc. (S.) | #Tokens (S.) | Acc. (L.) | #Tokens (L.) | Map Score (S. / L.) |
| --- | --- | --- | --- | --- | --- | --- |
| **Open-source Models** | | | | | | |
| Qwen2.5-VL-3B-Instruct | Base | 9.38% (↑ 0.7%) | 47 | 9.72% (↑ 1.73%) | 147 | 2.93 (↑ 0.18) / 4.51 (↑ 0.81) |
| Qwen2.5-VL-72B-Instruct | Base | 16.41% (↓ 10.24%) | 28 | 15.71% (↓ 8.51%) | 108 | 4.03 (↓ 1.06) / 6.49 (↓ 2.31) |
| Kimi-VL-A3B-Instruct | Base | 11.81% (↓ 0.95%) | 41 | 9.81% (↓ 2.52%) | 49 | 3.37 (↑ 0.07) / 5.32 (↓ 0.05) |
| Kimi-VL-A3B-Thinking | Reasoning | 4.17% (↓ 1.30%) | 1,039 | 2.08% (↓ 3.39%) | 1,755 | 2.06 (↓ 0.38) / 1.64 (↓ 1.53) |
| **Closed-source Models** | | | | | | |
| Doubao-115 | Base | 13.72% (↓ 20.48%) | 34 | 13.98% (↓ 24.04%) | 99 | 3.50 (↓ 1.75) / 6.48 (↓ 5.48) |
| Doubao-415 | Reasoning | 21.53% (↓ 21.61%) | 352 | 17.19% (↓ 28.90%) | 1,047 | 4.85 (↓ 2.48) / 7.68 (↓ 6.99) |

Figure 1: Accuracy across different cities for four representative MLLMs (Qwen2.5-VL-72B-I, InternVL3-78B, OpenAI o3, and Doubao-415; left: short questions, right: long questions). Each city is marked with its map difficulty and country flag. The number of test samples per city (per model) is: 32 for Auckland, 34 for Los Angeles, 7 for Miami, 35 for Lisboa, 18 for Geneva, 40 for Beijing, 39 for Hangzhou, 17 for Budapest, 39 for Singapore, 40 for Rome, and 11 for Toronto.

How to use ReasonMap to evaluate your model?

ReasonMap is designed to evaluate the fine-grained visual reasoning abilities of MLLMs. To use ReasonMap for evaluation, follow these steps:

  1. Download the ReasonMap dataset from the Hugging Face repository: ReasonMap.
    from datasets import load_dataset
    
    ds = load_dataset("FSCCS/ReasonMap")
  2. Download our evaluation code from the GitHub repository: Evaluation Code.
    git clone https://github.com/fscdc/ReasonMap.git
  3. Set up the conda environment.
    conda create -n reasonmap python=3.10
    conda activate reasonmap
    pip install torch==2.2.2 torchvision==0.17.2
    pip install numpy==1.24.3
    pip install transformers datasets
    pip install flash-attn # if this fails, install flash-attn from source (LINK)
  4. Run the evaluation (for closed-source models, fill the 'to-add-your-api-key' placeholder with your API keys); a minimal query sketch is shown after this list.
  5. If you encounter any issues, please open an issue at ReasonMap Issues, and we will assist you as soon as possible.
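For reference, here is a minimal sketch of querying one closed-source model on a few ReasonMap samples with the official OpenAI Python client. The split name and the 'image'/'question' field names are assumptions (check the dataset card), and reported results should come from the evaluation scripts in the GitHub repository.

    import base64
    import io

    from datasets import load_dataset
    from openai import OpenAI

    client = OpenAI(api_key="to-add-your-api-key")  # fill in your API key
    # Assumed split and field names; verify them on the dataset card.
    ds = load_dataset("FSCCS/ReasonMap", split="test")

    def to_base64_png(img) -> str:
        # Encode a PIL image as a base64 PNG string for the API request.
        buf = io.BytesIO()
        img.save(buf, format="PNG")
        return base64.b64encode(buf.getvalue()).decode("utf-8")

    for sample in ds.select(range(3)):  # quick smoke test on three samples
        image_b64 = to_base64_png(sample["image"])
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": sample["question"]},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                ],
            }],
        )
        print(response.choices[0].message.content)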

BibTeX

@article{feng2025reasonmap,
  title={Can MLLMs Guide Me Home? A Benchmark Study on Fine-Grained Visual Reasoning from Transit Maps},
  author={Feng, Sicheng and Wang, Song and Ouyang, Shuyi and Kong, Lingdong and Song, Zikai and Zhu, Jianke and Wang, Huan and Wang, Xinchao},
  journal={arXiv preprint arXiv:2505.18675},
  year={2025},
}