---
license: mit
task_categories:
  - question-answering
language:
  - en
dataset_info:
  features:
    - name: id
      dtype: string
    - name: task_name
      dtype: string
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: question
      dtype: string
    - name: choice_a
      dtype: string
    - name: choice_b
      dtype: string
    - name: choice_c
      dtype: string
    - name: choice_d
      dtype: string
    - name: answer_gt
      dtype: string
    - name: category
      dtype: string
    - name: sub-category
      dtype: string
    - name: sub-sub-category
      dtype: string
    - name: linguistics_sub_discipline
      dtype: string
  splits:
    - name: train
      num_bytes: 1199569150
      num_examples: 5000
  download_size: 1466894219
  dataset_size: 1199569150
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# MMSU: A Massive Multi-task Spoken Language Understanding and Reasoning Benchmark

[Paper](https://arxiv.org/abs/2506.04779) | Project


## Overview of MMSU

MMSU (Massive Multi-task Spoken Language Understanding and Reasoning Benchmark) is a comprehensive benchmark for evaluating fine-grained spoken language understanding and reasoning in multimodal models.

It systematically covers the variety of real-world linguistic phenomena in everyday speech through 47 sub-tasks, including phonetics, prosody, rhetoric, syntactics, semantics, and paralinguistics, spanning both perceptual and higher-level reasoning capabilities.

The benchmark comprises 5,000 carefully curated audio–question–answer pairs derived from diverse authentic recordings.

## Pipeline

*Figure: MMSU pipeline.*

## Usage

You can load the dataset via the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Single "train" split with 5,000 examples; audio is stored at 16 kHz
ds = load_dataset("ddwang2000/MMSU")
```
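As a quick sanity check, the snippet below inspects one item and tallies examples per sub-task. Field names follow the feature schema in the metadata above; with `datasets`, the `audio` column decodes to a dict with `array` and `sampling_rate` keys.

```python
from collections import Counter

train = ds["train"]

# One item: 16 kHz audio plus a four-way multiple-choice question
ex = train[0]
print(ex["question"])
for k in ("choice_a", "choice_b", "choice_c", "choice_d"):
    print(k, "->", ex[k])
print("ground truth:", ex["answer_gt"])

waveform = ex["audio"]["array"]       # numpy array of audio samples
rate = ex["audio"]["sampling_rate"]   # 16000

# Distribution over the 47 sub-tasks
print(Counter(train["task_name"]).most_common(5))
```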

For the official evaluation code, please refer to the GitHub repository.
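For a rough score outside the official harness, a minimal accuracy loop might look like the sketch below. Here `predict` is a hypothetical stand-in for your own model call, and the sketch assumes `answer_gt` stores the letter of the correct choice; verify this against the data and prefer the official evaluation code for reported results.

```python
def evaluate(dataset, predict):
    """Accuracy of predict(example) -> "A" | "B" | "C" | "D" against answer_gt.

    `predict` is a placeholder for your model; this is not the official
    MMSU evaluation, which lives in the GitHub repository.
    """
    correct = sum(predict(ex) == ex["answer_gt"] for ex in dataset)
    return correct / len(dataset)
```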

## Citation

```bibtex
@article{wang2025mmsu,
  title={MMSU: A Massive Multi-task Spoken Language Understanding and Reasoning Benchmark},
  author={Dingdong Wang and Jincenzi Wu and Junan Li and Dongchao Yang and Xueyuan Chen and Tianhua Zhang and Helen Meng},
  journal={arXiv preprint arXiv:2506.04779},
  year={2025}
}
```