Dataset Card for Tokenization Robustness

TokSuite Benchmark (STEM Collection)

Dataset Description

This dataset is the STEM subset of the TokSuite benchmark, designed to evaluate how tokenizer choice affects model behavior under realistic formatting, notation, and surface-form perturbations in technical text. TokSuite includes specialized benchmarks for mathematics and STEM, with the STEM subset containing 44 canonical technical questions paired with a diverse set of targeted perturbations.

  • Curated by: R3 Research Team
  • License: MIT License

Dataset Summary

TokSuite addresses a fundamental challenge in language model research: understanding how tokenization choices impact model behavior in isolation, independent of architecture, training data, or optimization.

The STEM subset specifically measures model performance on technical and scientific questions under perturbations that commonly arise in real-world STEM communication, including Unicode formatting, mathematical notation, LaTeX representations, spacing changes, and visual styling variants.

Key Features:

  • 44 canonical STEM questions covering science, mathematics, engineering, and technical reasoning
  • A wide range of formatting- and notation-based perturbations reflecting real-world technical text
  • Parallel structure with other TokSuite benchmark subsets
  • Designed for controlled robustness evaluation with high baseline accuracy

Supported Tasks

  • Multiple-Choice Question Answering: Text completion format with 4 answer choices (see the scoring sketch after this list)
  • Tokenizer Robustness Evaluation: Measuring performance degradation under surface-form and formatting perturbations
  • Technical Text Understanding: Evaluating model robustness on STEM-style content
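
To make the completion format concrete, the sketch below shows one way a record could be scored under this scheme. The field names (question, choices, answer) match the schema documented under Data Fields; the prompt template, the `log_likelihood` callable, and the example record are illustrative assumptions, not part of the benchmark.

```python
def format_prompt(example: dict, choice: str) -> str:
    # Illustrative template; the benchmark fixes the multiple-choice
    # completion format, not this exact wording.
    return f"Question: {example['question']}\nAnswer: {choice}"

def predict(example: dict, log_likelihood) -> int:
    # `log_likelihood` is a hypothetical callable returning a model's
    # log-probability for a full prompt string.
    scores = [log_likelihood(format_prompt(example, c))
              for c in example["choices"]]
    return max(range(len(scores)), key=scores.__getitem__)

# A record shaped like the documented fields (values are made up):
example = {
    "question": "Which gas makes up most of Earth's atmosphere?",
    "choices": ["Oxygen", "Nitrogen", "Carbon dioxide", "Argon"],
    "answer": 1,
}
# Dummy scorer for demonstration only (prefers shorter prompts):
print(predict(example, log_likelihood=lambda s: -len(s)))
```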

Languages

The dataset is primarily in English (en), with variations expressed through symbolic, typographic, and Unicode transformations rather than natural-language translation.


Dataset Structure

Data Fields

| Field | Type | Description |
|-------|------|-------------|
| question | string | The STEM question text |
| choices | list[string] | 4 multiple-choice answer options |
| answer | int64 | Index of the correct answer |
| answer_label | string | Letter label of the correct answer |
| split | string | Dataset split identifier |
| subcategories | string | Perturbation category |
| lang | string | Language code (en) |
| second_lang | string | Optional plain-text or alternative representation |
| notes | string | Additional context about the perturbation |
| id | string | Unique question identifier |
| set_id | float64 | Question set grouping identifier |
| variation_id | float64 | Variation number within a question set |
| vanilla_cos_sim_to_canonical | dict[string, float] | Cosine similarity to canonical form (raw tokens) |
| trimmed_cos_sim_to_canonical | dict[string, float] | Cosine similarity after token normalization |
| token_counts | dict[string, integer] | Token counts per tokenizer |
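
For reference, a minimal loading sketch using the Hugging Face datasets library and the repository id shown on this card. Split names are not assumed here, so the snippet simply iterates over whatever splits the repository exposes.

```python
from datasets import load_dataset

# Repository id as shown on this card; split names are not hard-coded.
ds = load_dataset("toksuite/tokenizer_robustness_completion_stem")

for split_name, split in ds.items():
    row = split[0]
    print(split_name, len(split))
    print(row["question"])
    print(row["choices"], "->", row["answer_label"])
    print("perturbation:", row["subcategories"])
    break
```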

Dataset Creation

Curation Rationale

This dataset was created to:

  1. Systematically evaluate how different tokenization strategies handle STEM-style text
  2. Measure robustness to formatting, notation, and Unicode-based perturbations
  3. Isolate tokenizer effects from semantic reasoning difficulty
  4. Provide standardized benchmarks for technical text robustness analysis

The questions were intentionally designed to be conceptually straightforward, ensuring high canonical accuracy and enabling clean measurement of performance degradation due solely to perturbations.

Source Data

Data Collection and Processing

  • Canonical Questions: 44 STEM questions authored in clean, standard technical English
  • Perturbations: Each question was transformed using targeted surface-form and formatting variations
  • Validation: Model-in-the-loop filtering ensured canonical questions are answerable with high accuracy

Perturbation Categories

Each perturbation preserves the underlying semantic intent of the canonical STEM question while modifying its surface form, notation, or formatting to stress tokenizer behavior. All perturbations are paired with the same canonical question and differ only in representation.
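
Because every perturbed item is paired with its canonical counterpart, a simple diagnostic is token-count inflation, mirroring the precomputed token_counts field. Below is a minimal sketch assuming the Hugging Face transformers library; the two tokenizer checkpoints and the example strings are illustrative stand-ins, not the benchmark's own tokenizers.

```python
from transformers import AutoTokenizer

canonical = "The chemical formula of water is H2O."
perturbed = "The chemical formula of water is H₂O."  # Unicode-subscript variant

# Two illustrative tokenizers; any checkpoints could be substituted.
for name in ("gpt2", "bert-base-uncased"):
    tok = AutoTokenizer.from_pretrained(name)
    n_canonical = len(tok.tokenize(canonical))
    n_perturbed = len(tok.tokenize(perturbed))
    print(f"{name}: {n_canonical} -> {n_perturbed} tokens "
          f"({n_perturbed / n_canonical:.2f}x)")
```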

1. Canonical

Clean, standard technical English with conventional notation, spacing, and formatting. This serves as the reference condition for evaluating robustness.

2. Character Deletion

Removes one or more characters from technical terms, symbols, or variables (e.g., markup → markp). These deletions are subtle but often catastrophic for subword tokenization, especially in STEM terminology.

3. Colloquial

Rewrites the question using more informal or descriptive language while preserving technical meaning. This tests robustness to register changes without altering core content.

4. Compounds

Alters compound technical terms by merging or restructuring components (e.g., removing separators or introducing fused forms), changing token boundaries and segmentation behavior.

5. Diacriticized Styling

Introduces decorative or combining diacritics applied to characters in technical text. These perturbations preserve visual similarity but change Unicode code points and normalization behavior.
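
A minimal illustration of the mechanism, using a combining acute accent (U+0301) as one representative mark; the dataset's exact diacritic inventory is not specified here.

```python
import unicodedata

plain = "vector"
# Attach a combining acute accent (U+0301) to every character:
styled = "".join(ch + "\u0301" for ch in plain)

print(styled)                    # renders close to the original, with accents
print(len(plain), len(styled))  # 6 vs 12 code points
# NFC folds some pairs into precomposed characters (e.g., e + U+0301 -> é),
# so normalization changes the string yet again:
print(unicodedata.normalize("NFC", styled))
```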

6. Double-Struck Characters

Replaces standard Latin characters with mathematical double-struck Unicode forms (e.g., R → ℝ, Z → ℤ), commonly used in mathematical notation.
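
As a sketch of how such a mapping can be built from standard Unicode character names: C, H, N, P, Q, R, and Z have double-struck forms in the Letterlike Symbols block, while the remaining capitals live in Mathematical Alphanumeric Symbols.

```python
import unicodedata

def double_struck(ch: str) -> str:
    """Map an uppercase ASCII letter to its double-struck form, if any."""
    # Letterlike Symbols uses names like "DOUBLE-STRUCK CAPITAL R";
    # Mathematical Alphanumeric Symbols prefixes them with "MATHEMATICAL".
    for name in (f"DOUBLE-STRUCK CAPITAL {ch}",
                 f"MATHEMATICAL DOUBLE-STRUCK CAPITAL {ch}"):
        try:
            return unicodedata.lookup(name)
        except KeyError:
            continue
    return ch  # no double-struck counterpart; leave unchanged

print(double_struck("R"), double_struck("Z"), double_struck("A"))  # ℝ ℤ 𝔸
```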

7. Enclosed Characters

Substitutes alphanumeric characters with enclosed Unicode variants (e.g., A → Ⓐ, 1 → ①), which are visually similar but tokenized very differently.

8. Equivalent Expressions

Rewrites the same STEM concept using an alternative but semantically equivalent formulation, such as paraphrasing definitions or reordering explanatory clauses.

9. Fullwidth Characters

Uses fullwidth Unicode forms (e.g., A → Ａ, 1 → １) instead of standard ASCII characters, altering byte-level and subword tokenization.
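
The fullwidth forms of printable ASCII sit at a fixed offset of 0xFEE0 in the Halfwidth and Fullwidth Forms block, so the conversion can be sketched in a few lines:

```python
def to_fullwidth(text: str) -> str:
    """Shift printable ASCII into the Fullwidth Forms block (offset 0xFEE0)."""
    out = []
    for ch in text:
        if "!" <= ch <= "~":
            out.append(chr(ord(ch) + 0xFEE0))
        elif ch == " ":
            out.append("\u3000")  # IDEOGRAPHIC SPACE
        else:
            out.append(ch)
    return "".join(out)

print(to_fullwidth("F = ma"))  # -> Ｆ＝ｍａ (with ideographic spaces)
```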

10. LaTeX

Represents mathematical expressions or symbols using LaTeX-style notation (e.g., $x^2$, $N_2$, \frac{a}{b}), reflecting common technical writing practices.

11. Morpheme Separation

Artificially splits technical terms into smaller morpheme-like units, increasing sequence length and disrupting learned subword patterns.

12. Scripted Text

Uses scripted or calligraphic Unicode variants of characters (e.g., 𝒜𝒷𝒸) in place of standard Latin letters, stressing visual–semantic mismatch handling.

13. Space Removal

Removes or alters whitespace that is normally meaningful in technical text, such as between variables, units, or multi-word terms.

14. Spelled-Out

Replaces numerals, symbols, or abbreviated technical forms with fully spelled-out textual equivalents (e.g., 2 → two, H2O → water molecule).

15. Strikethrough

Applies strikethrough using combining characters or formatting marks to portions of text, preserving content while introducing visual and Unicode noise.

16. Superscript / Subscript

Uses Unicode superscript and subscript characters (e.g., x², N₂) instead of linear text representations, which often fragment tokenization.
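
Subscript digits occupy a contiguous Unicode run (U+2080 through U+2089), so a translation table is enough to sketch the transformation; superscript digits are scattered across blocks (² is U+00B2 while ⁴ is U+2074), which the literal string below absorbs.

```python
# Contiguous subscript digits U+2080..U+2089:
SUBSCRIPTS = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")
# Superscript digits are scattered across blocks, but a literal works:
SUPERSCRIPTS = str.maketrans("0123456789", "⁰¹²³⁴⁵⁶⁷⁸⁹")

print("H2O and N2".translate(SUBSCRIPTS))  # H₂O and N₂
print("E = mc2".translate(SUPERSCRIPTS))   # E = mc²
```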

17. Typographical Errors

Introduces realistic typos such as missing letters, duplicated characters, or minor corruptions common in fast technical writing.

18. Unicode Formatting

Applies Unicode formatting characters that affect text rendering or directionality while leaving the visible content largely unchanged.
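
A minimal sketch with one representative formatting character, the zero width space (U+200B); the dataset's exact character inventory is not listed here.

```python
text = "tokenizer"
# Interleave ZERO WIDTH SPACE (U+200B); the rendered string looks the
# same, but the underlying code points differ.
noisy = "\u200b".join(text)

print(text == noisy)          # False
print(len(text), len(noisy))  # 9 vs 17
```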

19. Unusual Formatting

Introduces nonstandard layout, punctuation, or visual formatting patterns that are uncommon but realistic in technical documents.

20. Upside-Down / Rotated

Uses visually rotated or upside-down Unicode characters that resemble standard characters but differ at the code-point level.


Considerations for Using the Data

Social Impact of Dataset

This dataset supports the development of more robust language models for technical and scientific domains, improving reliability in education, research, and engineering applications.

Discussion of Biases

  • Domain focus: Emphasizes STEM-style technical text rather than natural language discourse
  • Formatting-centric: Perturbations focus on surface form, not conceptual difficulty
  • English-centric: Uses English technical text, though many perturbations are language-agnostic
  • Question simplicity: Designed for robustness evaluation rather than deep problem-solving

Other Known Limitations

  • Evaluation-only dataset (no training split)
  • Multiple-choice format
  • Limited question count per perturbation
  • Results may differ for long-form or open-ended STEM reasoning

Additional Information

Dataset Curators

The dataset was curated by the TokSuite research team at R3.

Licensing Information

MIT License

Citation Information

If you use this dataset in your research, please cite the TokSuite paper:

```bibtex
@inproceedings{toksuite2026,
  title={TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior},
  author={Altıntaş, Gül Sena and Ehghaghi, Malikeh and Lester, Brian and Liu, Fengyuan and Zhao, Wanru and Ciccone, Marco and Raffel, Colin},
  booktitle={Preprint},
  year={2026},
  url={TBD}
}
```

Paper: TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior

Contributions

This dataset is part of TokSuite, which includes:

  • 14 language models with identical architectures but different tokenizers
  • Multilingual benchmark datasets (English, Turkish, Italian, Farsi, Chinese)
  • Comprehensive analysis of tokenization's impact on model behavior

Contact

For questions or issues related to this dataset, please refer to the TokSuite project or contact the authors of the paper.


Part of the TokSuite Project

Understanding Tokenization's Role in Language Model Behavior
