KAPE: kNN-based Performance Testing for Deep Code Search

Authors

Guo Y., Hu Q., Xie X., Cordy M., Papadakis M., Le Traon Y.

Reference

ACM Transactions on Software Engineering and Methodology, vol. 33, no. 2, art. no. 48, 2023

Description

Code search is a common yet important activity for software developers. An efficient code search model can greatly facilitate the development process and improve programming quality. Given their superb performance in learning contextual representations, deep learning models, especially pre-trained language models, have been widely explored for the code search task. However, existing studies mainly focus on proposing new architectures for ever-better performance on designed test sets while ignoring performance on unseen test data, where only natural language queries are available. The same problem in other domains, e.g., computer vision (CV) and natural language processing (NLP), is usually solved by test input selection, which uses a subset of the unseen set to reduce the labeling effort. However, approaches from these domains are not directly applicable and still require labeling effort. In this article, we propose kNN-based performance testing (KAPE) to efficiently solve the problem without manually matching code snippets to test queries. The main idea is to use semantically similar training data to perform the evaluation. Extensive experiments on six programming-language datasets, three state-of-the-art pre-trained models, and seven baseline methods demonstrate that KAPE can effectively assess model performance (e.g., CodeBERT achieves an MRR of 0.5795 on JavaScript) with only a slight difference from the ground truth (e.g., 0.0261).
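
To make the main idea concrete, the following is a minimal sketch of how a kNN-based proxy evaluation could be computed. It is an illustration under simplifying assumptions, not the authors' implementation: the function knn_proxy_mrr, the choice k=5, and the random embeddings are hypothetical stand-ins for the outputs of a real encoder such as CodeBERT. Each unlabeled test query borrows the paired code snippets of its k nearest labeled training queries (by cosine similarity) as proxy ground truth, and MRR is then scored against that proxy set.

import numpy as np

def knn_proxy_mrr(test_q_emb, train_q_emb, train_code_emb, k=5):
    """Estimate retrieval MRR for unlabeled test queries.

    For each test query, find its k nearest labeled training queries
    (cosine similarity) and treat their paired code snippets as proxy
    ground truth, then score the model's code ranking against them.
    """
    # L2-normalize so dot products equal cosine similarities.
    tq = test_q_emb / np.linalg.norm(test_q_emb, axis=1, keepdims=True)
    rq = train_q_emb / np.linalg.norm(train_q_emb, axis=1, keepdims=True)
    rc = train_code_emb / np.linalg.norm(train_code_emb, axis=1, keepdims=True)

    reciprocal_ranks = []
    for q in tq:
        # k training queries most similar to the unlabeled test query.
        neighbors = np.argsort(q @ rq.T)[::-1][:k]
        # Rank the whole code corpus against the test query.
        ranking = np.argsort(q @ rc.T)[::-1]
        # Best rank achieved by any proxy-relevant snippet (0-indexed).
        best = min(np.where(ranking == n)[0][0] for n in neighbors)
        reciprocal_ranks.append(1.0 / (best + 1))
    return float(np.mean(reciprocal_ranks))

# Toy usage: random vectors stand in for encoder outputs; code embeddings
# are correlated with their paired query embeddings to mimic a trained model.
rng = np.random.default_rng(0)
train_q = rng.normal(size=(1000, 128))
train_code = train_q + 0.1 * rng.normal(size=(1000, 128))
test_q = rng.normal(size=(50, 128))
print(f"Proxy MRR: {knn_proxy_mrr(test_q, train_q, train_code):.4f}")

Because no test query ever needs a manually matched snippet, the labeling effort is shifted entirely onto the already-labeled training set, which is the point of the approach described in the abstract.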

Link

doi:10.1145/3624735
