Decoding strategies for large language models (LLMs) are a critical but often underexplored aspect of text generation tasks. Since LLMs produce probability distributions over the entire vocabulary, various decoding methods have been developed to transform these probabilities into coherent and fluent text, each with its own set of hyperparameters.
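To make these hyperparameters concrete, here is a minimal sketch using the Hugging Face transformers `generate` API, contrasting nucleus sampling with contrastive search; the model checkpoint and the specific hyperparameter values are illustrative assumptions, not recommendations derived from this study.

```python
# Illustrative sketch: decoding hyperparameters with Hugging Face transformers.
# Model choice and hyperparameter values are assumptions for demonstration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any causal LM checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("The city council met on Tuesday to", return_tensors="pt")

# Nucleus (top-p) sampling: temperature and top_p reshape the sampled distribution.
sampled = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    max_new_tokens=50,
)

# Contrastive search: top_k candidates re-ranked with a degeneration penalty alpha.
contrastive = model.generate(
    **inputs,
    do_sample=False,
    top_k=4,
    penalty_alpha=0.6,
    max_new_tokens=50,
)

print(tokenizer.decode(sampled[0], skip_special_tokens=True))
print(tokenizer.decode(contrastive[0], skip_special_tokens=True))
```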
In this study, we present a large-scale, comprehensive analysis of how hyperparameter selection affects text quality in open-ended text generation across multiple LLMs, datasets, and evaluation metrics. Through an extensive sensitivity analysis, we provide practical guidelines for hyperparameter tuning and demonstrate the substantial influence of these choices on text quality. Using three established datasets spanning factual (e.g., news) and creative (e.g., fiction) domains, we show that hyperparameter tuning significantly impacts generation quality, though its effects vary across models and tasks. We offer in-depth insights into these effects, supported by both human evaluations and a synthesis of widely used automatic evaluation metrics.
If you are interested in related work, please also take a look at our other papers:
Adaptive Contrastive Search: Uncertainty-Guided Decoding for Open-Ended Text Generation introduces adaptive contrastive search, a novel decoding strategy that extends contrastive search with an adaptive degeneration penalty guided by the model's estimated uncertainty at each generation step (a rough illustrative sketch of the underlying scoring idea follows after this list).
Towards Better Open-Ended Text Generation: A Multicriteria Evaluation Framework.
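For intuition only, the sketch below shows the standard contrastive-search scoring rule (a trade-off between model confidence and a degeneration penalty based on similarity to the existing context), paired with a hypothetical entropy-based scaling of the penalty weight. It is a toy illustration of the general idea, not the adaptive contrastive search algorithm from the paper above; the entropy-based alpha is an assumed stand-in for that paper's uncertainty estimate.

```python
# Toy illustration of contrastive-search scoring with an uncertainty-scaled penalty.
# NOT the adaptive contrastive search method from the paper; the entropy-based
# scaling of alpha is a hypothetical stand-in for a per-step uncertainty estimate.
import torch
import torch.nn.functional as F


def select_next_token(logits, candidate_hidden, context_hidden, alpha_max=0.8):
    """Pick the next token from k candidates.

    logits:           (k,) model logits for the k candidate tokens
    candidate_hidden: (k, d) hidden state each candidate would add
    context_hidden:   (t, d) hidden states of the tokens generated so far
    """
    probs = F.softmax(logits, dim=-1)

    # Hypothetical uncertainty signal: normalized entropy of the candidate distribution.
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
    alpha = alpha_max * entropy / torch.log(torch.tensor(float(len(probs))))

    # Degeneration penalty: maximum cosine similarity to any previous hidden state.
    sim = F.cosine_similarity(
        candidate_hidden.unsqueeze(1), context_hidden.unsqueeze(0), dim=-1
    )  # shape (k, t)
    penalty = sim.max(dim=1).values

    # Contrastive-search score: model confidence minus weighted degeneration penalty.
    scores = (1 - alpha) * probs - alpha * penalty
    return scores.argmax().item()
```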
@article{garcesarias2024decoding,
  author  = {Esteban Garces Arias and Meimingwei Li and Christian Heumann and Matthias A{\ss}enmacher},
  title   = {Decoding Decoded: Understanding Hyperparameter Effects in Open-Ended Text Generation},
  journal = {arXiv preprint},
  year    = {2024},
}