Analyzing Prompt Influence on Automated Method Generation: An Empirical Study with Copilot

Fagadau, Ionut Daniel; Mariani, Leonardo; Micucci, Daniela; Riganelli, Oliviero
2024

Abstract

Generative AI is changing the way developers interact with software systems, providing services that can produce and deliver new content crafted to satisfy the actual needs of developers. For instance, developers can ask for new code directly from within their IDEs by writing natural language prompts, and integrated services based on generative AI, such as Copilot, immediately respond by providing ready-to-use code snippets. Formulating the prompt appropriately, incorporating useful information while avoiding information overload, can be an important factor in obtaining the right piece of code. The task of designing good prompts is known as prompt engineering. In this paper, we systematically investigate how eight prompt features, which shape the style and the content of prompts, influence the correctness, complexity, size, and similarity to the developers’ code of the generated code. Specifically, we use Copilot with 124,800 prompts, obtained by systematically combining the eight prompt features, to generate the implementation of 200 Java methods. Results show that some prompt features, such as the presence of examples and a summary of the purpose of the method, can significantly influence the quality of the result.
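To make the notion of prompt features concrete, the sketch below shows a hypothetical Copilot-style prompt for a single Java method that combines two of the eight features studied: a summary of the method’s purpose and an input/output example. The method, its wording, and the reference body are illustrative assumptions and are not taken from the prompts or subjects used in the study.

// Hypothetical prompt for Copilot, written as a Javadoc comment above a method stub.
// The comment combines two of the eight prompt features analyzed in the paper:
// a purpose summary and an input/output example. Names and wording are assumptions.
public class PromptFeatureExample {

    /**
     * Counts the vowels (a, e, i, o, u) in the given string, ignoring case.   (feature: purpose summary)
     * Example: countVowels("Hello World") returns 3.                          (feature: input/output example)
     */
    public static int countVowels(String text) {
        // Reference body implementing the behavior described above; in the study,
        // Copilot would be asked to generate a body like this from the prompt.
        int count = 0;
        for (char c : text.toLowerCase().toCharArray()) {
            if ("aeiou".indexOf(c) >= 0) {
                count++;
            }
        }
        return count;
    }
}

Varying which of the eight features appear in the prompt, and in which combination, yields the systematic set of prompt variants evaluated in the study.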
Type: paper
Keywords: Prompt engineering, code generation, Copilot
Language: English
Conference: IEEE/ACM International Conference on Program Comprehension (ICPC), 2024
Published in: Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings
ISBN: 9798400705861
Year: 2024
Citation: Fagadau, I., Mariani, L., Micucci, D., Riganelli, O. (2024). Analyzing Prompt Influence on Automated Method Generation: An Empirical Study with Copilot. In Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings [10.1145/3643916.3644409].

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/490720