Document Type

Article

Publication Date

2025

DOI

10.36227/techrxiv.174140719.96375390/v1

Publication Title

TechRxiv

Pages

59 pp.

Abstract

Prompt engineering has emerged as a pivotal discipline for optimizing the performance of Large Language Models (LLMs) by structuring inputs to enhance coherence, accuracy, and task alignment. This paper comprehensively surveys prompting techniques, systematically categorizing them by application domain and methodological foundation. Fundamental approaches such as zero-shot and few-shot prompting are examined alongside advanced strategies, including chain-of-thought reasoning, retrieval-augmented generation, and self-consistency mechanisms. A rigorous qualitative analysis evaluates each technique's strengths, limitations, and optimal use cases, offering a structured framework for selecting the most effective prompting strategy. Theoretical insights and empirical findings are consolidated to provide researchers and practitioners with advanced methodologies for refining prompt design and enhancing LLM capabilities in complex reasoning, decision-making, and knowledge synthesis, while improving the reliability and factual accuracy of generated outputs.
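
As an informal illustration of the fundamental and advanced techniques the abstract names, the minimal Python sketch below shows how zero-shot, few-shot, and chain-of-thought prompts are typically assembled, plus a majority-vote step in the spirit of self-consistency. The function names and prompt wordings are hypothetical illustrations, not drawn from the paper itself.

```python
from collections import Counter


def zero_shot(question: str) -> str:
    """Zero-shot: state the task directly, with no worked examples."""
    return f"Answer the following question.\n\nQ: {question}\nA:"


def few_shot(question: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: prepend a handful of (question, answer) demonstrations."""
    demos = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{demos}\n\nQ: {question}\nA:"


def chain_of_thought(question: str) -> str:
    """Chain-of-thought: elicit intermediate reasoning before the answer."""
    return f"Q: {question}\nA: Let's think step by step."


def self_consistency(candidate_answers: list[str]) -> str:
    """Self-consistency: majority-vote over final answers extracted from
    several independently sampled reasoning chains (the sampling itself
    is model-dependent and omitted here)."""
    return Counter(candidate_answers).most_common(1)[0][0]


if __name__ == "__main__":
    demos = [("What is 2 + 2?", "4"), ("What is 3 * 5?", "15")]
    print(zero_shot("What is 7 - 3?"))
    print(few_shot("What is 7 - 3?", demos))
    print(chain_of_thought("What is 7 - 3?"))
    print(self_consistency(["4", "4", "5", "4"]))  # majority answer: "4"
```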

Comments

e-Prints posted on TechRxiv are preliminary reports that have not been peer reviewed. They should not be regarded as conclusive, used to guide clinical practice or health-related behavior, or reported in the media as established information.

Rights

© 2025 The Authors.

Published under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) License.

Original Publication Citation

Debnath, T., Siddiky, M. N. A., Rahman, M. E., Das, P., & Guha, A. K. (2025). A comprehensive survey of prompt engineering techniques in large language models. TechRxiv. https://doi.org/10.36227/techrxiv.174140719.96375390/v1

ORCID

0009-0004-0421-5540 (Rahman)
