dVoting: Fast Voting for dLLMs

2026

¹National University of Singapore, Singapore
*Corresponding author: xinchao@nus.edu.sg

Overview of dVoting. For each prompt, dVoting preserves tokens that are consistent across previous generations, remasks the remaining tokens to initiate subsequent sampling, and terminates the process early once the candidate answers reach consistency.

Abstract

Diffusion Large Language Models (dLLMs) represent a new paradigm beyond autoregressive modeling, offering competitive performance while naturally enabling a flexible decoding process. Specifically, dLLMs can generate tokens at arbitrary positions in parallel, giving them significant potential for parallel test-time scaling, which was previously constrained by the severe inefficiency of autoregressive decoding. In this work, we introduce dVoting, a fast voting technique that boosts reasoning capability without any training and with only modest extra computational overhead. dVoting is motivated by the observation that, across multiple samples for the same prompt, token predictions remain largely consistent, whereas performance is determined by a small subset of tokens exhibiting cross-sample variability. Leveraging the arbitrary-position generation capability of dLLMs, dVoting performs iterative refinement by sampling multiple candidates, identifying uncertain tokens via consistency analysis, regenerating them through voting, and repeating this process until convergence. Extensive evaluations demonstrate that dVoting consistently improves performance across various benchmarks. It achieves gains of 6.22%-7.66% on GSM8K, 4.40%-7.20% on MATH500, 3.16%-14.84% on ARC-C, and 4.83%-5.74% on MMLU.
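
To make the refinement loop concrete, below is a minimal sketch of the procedure described above. It is written against a hypothetical dLLM interface: sample_from(prompt, template) draws one completion while keeping unmasked template tokens fixed, extract_answer parses a candidate answer from a token sequence, and MASK_ID marks positions to regenerate. These names are placeholders for illustration, not the released implementation.

# Minimal sketch of the dVoting loop (assumed interface, not the official code).
from collections import Counter

MASK_ID = 0  # placeholder mask-token id; model-specific in practice

def dvoting(prompt, sample_from, extract_answer,
            num_samples=4, max_rounds=3, agree_ratio=0.75):
    """Iteratively resample only the tokens that vary across samples."""
    template = None  # start from a fully masked response template
    top_answer = None
    for _ in range(max_rounds):
        # 1) Draw several completions conditioned on the shared template;
        #    preserved tokens stay fixed, masked positions are resampled.
        samples = [sample_from(prompt, template) for _ in range(num_samples)]

        # 2) Early termination: stop once candidate answers agree enough.
        answers = Counter(extract_answer(s) for s in samples)
        top_answer, votes = answers.most_common(1)[0]
        if votes / num_samples >= agree_ratio:
            return top_answer

        # 3) Consistency analysis: keep positions where all samples agree,
        #    remask the rest for the next round (a looser variant could
        #    keep majority tokens instead of requiring unanimity).
        length = min(len(s) for s in samples)
        template = []
        for pos in range(length):
            token, count = Counter(s[pos] for s in samples).most_common(1)[0]
            template.append(token if count == num_samples else MASK_ID)

    # Fall back to a plain majority vote over the final round of samples.
    return top_answer

In this sketch, the early-exit check plays the role of the consistency criterion in the overview figure, and the per-position unanimity test is one simple way to decide which tokens to preserve versus remask.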

Experimental Results

Figures: results of dVoting on LLaDA, results of dVoting on Dream, and a visualization of the dVoting refinement process.

BibTeX

@article{feng2026dvoting,
  title={dVoting: Fast Voting for dLLMs}, 
  author={Feng, Sicheng and Chen, Zigeng and Ma, Xinyin and Fang, Gongfan and Wang, Xinchao},
  journal={arXiv preprint arXiv:2602.12153},
  year={2026},
}