Quality Evaluation of Large Language Models Generated Unit Tests: Influence of Structured Output

Abstract

Unit testing is critical in software quality assurance, and large language models (LLMs) offer an approach to automating this process. This paper evaluates the quality of unit tests generated by LLMs using structured output prompts. The study applied six LLMs to generate unit tests for C# focal methods across different cyclomatic-complexity classes. The experimental results show that prompting LLMs to produce output in a strict structure (the Arrange-Act-Assert pattern) significantly influences the quality of the generated unit tests.
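For context, the Arrange-Act-Assert (AAA) pattern divides a unit test into three phases: setting up the object under test, invoking the focal method, and verifying the outcome. The following C# sketch illustrates that structure; it is not taken from the paper, and the Calculator class and the use of xUnit are assumptions made only for this example.

```csharp
using Xunit;

// Hypothetical focal class, assumed for illustration only.
public class Calculator
{
    public int Divide(int dividend, int divisor) => dividend / divisor;
}

public class CalculatorTests
{
    [Fact]
    public void Divide_PositiveOperands_ReturnsQuotient()
    {
        // Arrange: set up the object under test and its inputs.
        var calculator = new Calculator();

        // Act: invoke the focal method.
        int result = calculator.Divide(10, 2);

        // Assert: verify the observed behavior.
        Assert.Equal(5, result);
    }
}
```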

PDF (English)
This work is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License.
