---
dataset_info:
  features:
    - name: image
      dtype: image
    - name: question
      dtype: string
    - name: choices
      list: string
    - name: answer
      dtype: int32
    - name: meta_info
      struct:
        - name: title
          dtype: string
        - name: journal
          dtype: string
        - name: doi
          dtype: string
        - name: url
          dtype: string
    - name: question_type
      dtype: string
  splits:
    - name: en
      num_bytes: 546653187.125
      num_examples: 1525
    - name: zh
      num_bytes: 546319847.125
      num_examples: 1525
  download_size: 218606009
  dataset_size: 1092973034.25
configs:
  - config_name: RxnBench-VQA
    data_files:
      - split: en
        path: data/en-*
      - split: zh
        path: data/zh-*
license: cc-by-nc-sa-4.0
task_categories:
  - visual-question-answering
language:
  - en
  - zh
tags:
  - chemistry
---

# RxnBench: A Benchmark for Chemical Reaction Figure Understanding

## 📘 Benchmark Summary

RxnBench (SF-QA) is a visual question answering (VQA) benchmark comprising 1,525 multiple-choice questions (MCQs) that test PhD-level understanding of organic chemistry reactions.

The benchmark is built from 305 scientific figures drawn from high-impact open-access journals. For each figure, domain experts carefully designed five multiple-choice VQA questions targeting the interpretation of organic reaction diagrams; these questions were then refined through multiple rounds of rigorous review and revision to ensure both clarity and scientific accuracy. The questions span a variety of types, including describing chemical reaction images, extracting reaction content, recognizing molecules or Markush structures, and determining mechanisms. The benchmark challenges vision-language models on their foundational knowledge of organic chemistry, multimodal contextual reasoning, and chemical reasoning skills.

The benchmark is released in both English and Chinese versions.
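
A minimal loading sketch with the 🤗 `datasets` library is shown below. The config name `RxnBench-VQA`, the `en`/`zh` splits, and the field names come from the metadata above; the repository id is an assumption, so substitute this dataset's actual Hub path:

```python
from datasets import load_dataset

# NOTE: the repo id below is assumed -- replace it with this dataset's actual Hub path.
ds = load_dataset("UniParser/RxnBench", "RxnBench-VQA", split="en")

example = ds[0]
print(example["question"])          # question text
print(example["choices"])           # list of four answer options
print(example["answer"])            # int32 index of the correct option
print(example["question_type"])     # task type label (see "Task Types" below)
print(example["meta_info"]["doi"])  # provenance struct: title, journal, doi, url
example["image"].show()             # reaction figure decoded as a PIL image
```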

## 📑 Task Types

We categorize chemical reaction visual question answering tasks into six types:

- **Type 0 (Fact Extraction):** Direct retrieval of textual or numerical information from reaction schemes.
- **Type 1 (Reagent Role and Function Identification):** Identification of reagents and their functional roles, requiring chemical knowledge and reaction-type awareness.
- **Type 2 (Reaction Mechanism and Process Understanding):** Interpretation of reaction progression, including intermediates, catalytic cycles, and mechanistic steps.
- **Type 3 (Comparative Analysis and Reasoning):** Comparative evaluation, causal explanation, or outcome prediction under varying conditions.
- **Type 4 (Multi-step Synthesis and Global Understanding):** Comprehension of multi-step pathways, step-to-step coherence, and overall synthetic design.
- **Type 5 (Chemical Structure Recognition):** Extraction and reasoning-based parsing of chemical structures in SMILES or E-SMILES (as defined in the MolParser paper).

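Since every record carries a `question_type` string, the per-type breakdown can be tallied directly. A small sketch, reusing the `ds` object loaded above (the exact label strings are an assumption and depend on the released data):

```python
from collections import Counter

# Tally how many questions fall into each of the six task types.
type_counts = Counter(ds["question_type"])
for qtype, count in sorted(type_counts.items()):
    print(f"{qtype}: {count} questions")
```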

## 🎯 Benchmark Evaluation

This benchmark evaluates model performance on multiple-choice question (MCQ) answering tasks.

We provide two versions of the prompt template, depending on the language setting.

### English Prompt

```text
Question: {question}
Choices:
A. {choice_A}
B. {choice_B}
C. {choice_C}
D. {choice_D}
Based on the image and the question, choose the most appropriate answer.
**Only output a single letter (A, B, C, or D)**. Do NOT output any other text or explanation.
```

### Chinese Prompt

```text
问题: {question}
选项:
A. {choice_A}
B. {choice_B}
C. {choice_C}
D. {choice_D}

请根据图像和问题，从以上四个选项中选择最合适的答案。
只输出单个字母 (A, B, C 或 D)，不要输出选项内容，也不要输出任何解释。
```

(In English: based on the image and the question, choose the most appropriate answer from the four options above; output only a single letter (A, B, C, or D), not the option content or any explanation.)

### Evaluation Protocol

If the model's output is not one of A, B, C, or D, we use GPT-4o to map the output to A-D based on the option content. The final evaluation reports the absolute accuracy of the benchmark in both the English and Chinese versions.

Code Example: https://github.com/uni-parser/RxnBench
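
For reference, the sketch below shows the shape of such an evaluation loop under my own assumptions: `query_model(image, prompt) -> str` is a placeholder you supply, answers are parsed with a simple regex, `answer` is treated as a 0-based index into `choices`, and the GPT-4o remapping step is reduced to scoring unparseable outputs as wrong. The official implementation is in the repository linked above.

```python
import re

def parse_letter(output: str) -> str | None:
    """Pull a single A-D letter out of raw model output, if one is present."""
    match = re.search(r"\b([A-D])\b", output.strip())
    return match.group(1) if match else None

def evaluate(ds, query_model) -> float:
    """Absolute accuracy of query_model(image, prompt) -> str over one split."""
    correct = 0
    for example in ds:
        raw = query_model(example["image"], build_prompt(example))
        letter = parse_letter(raw)
        # Official protocol: unparseable outputs are remapped to A-D by GPT-4o.
        # In this sketch they are simply counted as incorrect.
        if letter is not None and "ABCD".index(letter) == example["answer"]:
            correct += 1
    return correct / len(ds)
```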

## 📊 Benchmark Leaderboard

We evaluated a range of recent popular MLLMs, including both closed-source and open-source models.

| Model | Think | Weights | API Version | RxnBench-En | RxnBench-Zh | Mean Score |
| --- | --- | --- | --- | --- | --- | --- |
| Gemini-3-Pro-preview | √ | Proprietary | 20251119 | 0.9318 | 0.9403 | 0.9361 |
| GPT-5 (high) | √ | Proprietary | 20250807 | 0.9279 | 0.9246 | 0.9263 |
| Gemini-2.5-Pro | √ | Proprietary | 20250617 | 0.9095 | 0.9423 | 0.9259 |
| GPT-5.1 (high) | √ | Proprietary | 20251113 | 0.9213 | 0.9220 | 0.9216 |
| GPT-5 (medium) | √ | Proprietary | 20250807 | 0.9207 | 0.9226 | 0.9216 |
| Qwen3-VL-235BA22B-Think | √ | Open | - | 0.9220 | 0.9134 | 0.9177 |
| Qwen3-VL-32B-Think | √ | Open | - | 0.9128 | 0.9161 | 0.9144 |
| GPT-5.1 (medium) | √ | Proprietary | 20251113 | 0.9108 | 0.9141 | 0.9125 |
| GPT-5-mini | √ | Proprietary | 20250807 | 0.9108 | 0.9128 | 0.9118 |
| Seed1.5-VL-Think | √ | Proprietary | 20250428 | 0.9056 | 0.9161 | 0.9109 |
| GPT o3 | √ | Proprietary | 20250416 | 0.9056 | 0.9115 | 0.9086 |
| GPT o4 mini | √ | Proprietary | 20250416 | 0.9062 | 0.9075 | 0.9069 |
| InternVL3.5-241B-A28B | √ | Open | - | 0.9003 | 0.9062 | 0.9033 |
| Intern-S1 | √ | Open | - | 0.8938 | 0.8944 | 0.8941 |
| Qwen3-VL-30BA3B-Think | √ | Open | - | 0.8689 | 0.8590 | 0.8640 |
| Qwen3-VL-Plus | × | Proprietary | 20250923 | 0.8551 | 0.8656 | 0.8604 |
| Qwen3-VL-8B-Think | √ | Open | - | 0.8636 | 0.8564 | 0.8600 |
| Seed1.5-VL | × | Proprietary | 20250328 | 0.8518 | 0.8669 | 0.8594 |
| Qwen3-VL-235BA22B-Instruct | × | Open | - | 0.8492 | 0.8675 | 0.8584 |
| InternVL3-78B | × | Open | - | 0.8531 | 0.8308 | 0.8420 |
| Qwen3-VL-4B-Think | √ | Open | - | 0.8577 | 0.8256 | 0.8416 |
| Intern-S1-mini | √ | Open | - | 0.8521 | 0.8282 | 0.8402 |
| GLM-4.1V-9B-Thinking | √ | Open | - | 0.8392 | 0.8341 | 0.8367 |
| Qwen3-VL-32B-Instruct | × | Open | - | 0.8315 | 0.8407 | 0.8361 |
| Qwen2.5-VL-72B | × | Open | - | 0.8341 | 0.8308 | 0.8325 |
| Qwen2.5-VL-Max | × | Proprietary | 20250813 | 0.8192 | 0.8262 | 0.8227 |
| GPT-5-nano | √ | Proprietary | 20250807 | 0.7980 | 0.7941 | 0.7961 |
| Qwen2.5-VL-32B | × | Open | - | 0.7980 | 0.7908 | 0.7944 |
| Gemini-2.5-Flash | √ | Proprietary | 20250617 | 0.6925 | 0.8557 | 0.7741 |
| Qwen3-VL-8B-Instruct | × | Open | - | 0.7548 | 0.7495 | 0.7521 |
| GPT-4o | × | Proprietary | 20240806 | 0.7462 | 0.7436 | 0.7449 |
| Qwen3-VL-30BA3B-Instruct | × | Open | - | 0.7456 | 0.7436 | 0.7446 |
| Qwen2.5-VL-7B | × | Open | - | 0.7082 | 0.7233 | 0.7158 |
| Qwen3-VL-4B-Instruct | × | Open | - | 0.7023 | 0.7023 | 0.7023 |
| Qwen3-VL-2B-Think | √ | Open | - | 0.6780 | 0.6708 | 0.6744 |
| Qwen2.5-VL-3B | × | Open | - | 0.6748 | 0.6643 | 0.6696 |
| GPT-4o mini | × | Proprietary | 20240718 | 0.6636 | 0.6066 | 0.6351 |
| Qwen3-VL-2B-Instruct | × | Open | - | 0.5711 | 0.5928 | 0.5820 |
| Choose longest answer | - | - | - | 0.4262 | 0.4525 | 0.4394 |
| DeepSeek-VL2 | × | Open | - | 0.4426 | 0.4216 | 0.4321 |
| Random | - | - | - | 0.2500 | 0.2500 | 0.2500 |
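
The "Choose longest answer" row above is a no-vision baseline that always picks the option with the most characters. A minimal sketch, under the same 0-based `answer` assumption as before:

```python
def longest_answer_accuracy(ds) -> float:
    """Accuracy of always choosing the longest option, ignoring the image."""
    hits = sum(
        max(range(len(ex["choices"])), key=lambda i: len(ex["choices"][i])) == ex["answer"]
        for ex in ds
    )
    return hits / len(ds)
```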

We also conducted separate evaluations for each task type (on RxnBench-En).

| Model | Think | Weights | API Version | Type 0 | Type 1 | Type 2 | Type 3 | Type 4 | Type 5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Gemini-3-Pro-preview | √ | Proprietary | 20251119 | 0.9648 | 0.9246 | 0.9527 | 0.9398 | 0.9322 | 0.7463 |
| GPT-5 (high) | √ | Proprietary | 20250807 | 0.9313 | 0.9444 | 0.9527 | 0.9167 | 0.9661 | 0.8358 |
| Gemini-2.5-Pro | √ | Proprietary | 20250617 | 0.9331 | 0.9246 | 0.9459 | 0.9491 | 0.9322 | 0.6343 |
| GPT-5.1 (high) | √ | Proprietary | 20251113 | 0.9243 | 0.9524 | 0.9426 | 0.9167 | 0.9661 | 0.7910 |
| GPT-5 (medium) | √ | Proprietary | 20250807 | 0.9349 | 0.9325 | 0.9493 | 0.9167 | 0.9492 | 0.7761 |
| Qwen3-VL-235BA22B-Think | √ | Open | - | 0.9190 | 0.9405 | 0.9459 | 0.9213 | 0.9322 | 0.8433 |
| Qwen3-VL-32B-Think | √ | Open | - | 0.9296 | 0.9405 | 0.9426 | 0.9259 | 0.9153 | 0.7015 |
| GPT-5.1 (medium) | √ | Proprietary | 20251113 | 0.9243 | 0.9365 | 0.9426 | 0.9167 | 0.9492 | 0.7090 |
| GPT-5-mini | √ | Proprietary | 20250807 | 0.9225 | 0.9325 | 0.9257 | 0.9259 | 0.9831 | 0.7388 |
| Seed1.5-VL-Think | √ | Proprietary | 20250428 | 0.8996 | 0.9365 | 0.9358 | 0.9074 | 0.9153 | 0.8060 |
| GPT o3 | √ | Proprietary | 20250416 | 0.9313 | 0.9325 | 0.9223 | 0.8981 | 0.9492 | 0.7090 |
| GPT o4 mini | √ | Proprietary | 20250416 | 0.6391 | 0.7302 | 0.7500 | 0.6667 | 0.6271 | 0.4627 |
| InternVL3.5-241B-A28B | √ | Open | - | 0.8944 | 0.9127 | 0.9291 | 0.9167 | 0.9153 | 0.8134 |
| Intern-S1 | √ | Open | - | 0.9014 | 0.9127 | 0.9223 | 0.9028 | 0.8814 | 0.7463 |
| Qwen3-VL-30BA3B-Think | √ | Open | - | 0.8732 | 0.8810 | 0.9054 | 0.8843 | 0.9322 | 0.6940 |
| Qwen3-VL-Plus | × | Proprietary | 20250923 | 0.8275 | 0.8968 | 0.8986 | 0.8565 | 0.9153 | 0.7687 |
| Qwen3-VL-8B-Think | √ | Open | - | 0.8768 | 0.8730 | 0.8885 | 0.9028 | 0.8983 | 0.6567 |
| Seed1.5-VL | × | Proprietary | 20250328 | 0.9327 | 0.9127 | 0.9122 | 0.8472 | 0.8305 | 0.7015 |
| Qwen3-VL-235BA22B-Instruct | × | Open | - | 0.8204 | 0.8929 | 0.8986 | 0.8426 | 0.8814 | 0.7761 |
| InternVL3-78B | × | Open | - | 0.8556 | 0.8730 | 0.8885 | 0.8981 | 0.9153 | 0.6194 |
| Qwen3-VL-4B-Think | √ | Open | - | 0.8838 | 0.8770 | 0.8615 | 0.9074 | 0.8983 | 0.6045 |
| Intern-S1-mini | √ | Open | - | 0.8239 | 0.8690 | 0.8547 | 0.8611 | 0.8475 | 0.6791 |
| GLM-4.1V-9B-Thinking | √ | Open | - | 0.8433 | 0.8690 | 0.8649 | 0.8657 | 0.8814 | 0.6493 |
| Qwen3-VL-32B-Instruct | × | Open | - | 0.8169 | 0.8571 | 0.8885 | 0.8519 | 0.8305 | 0.6866 |
| Qwen2.5-VL-72B | × | Open | - | 0.8063 | 0.8063 | 0.8770 | 0.9088 | 0.8102 | 0.9322 |
| Qwen2.5-VL-Max | × | Proprietary | 20250813 | 0.7958 | 0.8571 | 0.8885 | 0.8194 | 0.8983 | 0.6642 |
| GPT-5-nano | √ | Proprietary | 20250807 | 0.8063 | 0.8452 | 0.8311 | 0.8241 | 0.7797 | 0.5672 |
| Qwen2.5-VL-32B | × | Open | - | 0.7729 | 0.8413 | 0.8750 | 0.8009 | 0.8305 | 0.6418 |
| Gemini-2.5-Flash | √ | Proprietary | 20250617 | 0.7799 | 0.6111 | 0.6757 | 0.6620 | 0.7627 | 0.5373 |
| Qwen3-VL-8B-Instruct | × | Open | - | 0.7113 | 0.8175 | 0.8446 | 0.8241 | 0.7627 | 0.5075 |
| Qwen3-VL-30BA3B-Instruct | × | Open | - | 0.7042 | 0.7937 | 0.8311 | 0.7824 | 0.7119 | 0.5970 |
| GPT-4o | × | Proprietary | 20240806 | 0.7359 | 0.8175 | 0.7973 | 0.7500 | 0.7627 | 0.5224 |
| Qwen2.5-VL-7B | × | Open | - | 0.6678 | 0.7659 | 0.8041 | 0.7130 | 0.6441 | 0.5373 |
| Qwen3-VL-4B-Instruct | × | Open | - | 0.6708 | 0.7302 | 0.7804 | 0.7222 | 0.6610 | 0.5970 |
| Qwen3-VL-2B-Think | √ | Open | - | 0.7342 | 0.6706 | 0.7128 | 0.7083 | 0.6102 | 0.3657 |
| Qwen2.5-VL-3B | × | Open | - | 0.6426 | 0.7381 | 0.7635 | 0.6898 | 0.6610 | 0.4776 |
| GPT-4o mini | × | Proprietary | 20240718 | 0.6391 | 0.7302 | 0.7500 | 0.6667 | 0.6271 | 0.4627 |
| Qwen3-VL-2B-Instruct | × | Open | - | 0.5405 | 0.6190 | 0.6318 | 0.6250 | 0.6102 | 0.3731 |
| DeepSeek-VL2 | × | Open | - | 0.4120 | 0.5040 | 0.4899 | 0.4907 | 0.3729 | 0.3060 |

## 🆕 RxnBench-Doc

A single reaction image often lacks the information needed for full interpretation and requires contextual text from the surrounding literature. We therefore also provide a companion benchmark for chemical reaction literature understanding:

https://huggingface.co/datasets/UniParser/RxnBench-Doc

## 📖 Citation

Our paper is coming soon.