Using Explicit Feedback to Improve Code Recommenders
Developers are often confronted with new Application Programming Interfaces (APIs), or they need to learn how to use a new framework; both tasks are tedious and exhausting. To reduce this effort, Integrated Development Environments (IDEs) have been extended with various developer-assistance tools.
Such tools are based on knowledge gained from a static analysis of source code. However, since the analyzed code is written not by the framework's designers but by its users, it does not always represent correct usage of the framework. Sometimes users misunderstand the intended usage of the framework's API and write suboptimal or incorrect code. A static analysis cannot detect this and treats these spurious usages the same way as correct ones. If a usage model is learned from such data, incorrect usages are included in the model and proposed to other developers who use the assistance tool. This way, the erroneous usage is propagated.
In this thesis, you will create a review tool that allows framework designers to inspect the information and models built from code examples. The tool should enable the designers to give feedback on the quality of this information. This could, for example, be realized by presenting an incomplete code snippet together with a list of completion recommendations to the designer. The designer could then identify the most or least likely completion in the list, or simply reorder it. This feedback should then be incorporated by the assistance tool to improve its quality.
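One way such reordering feedback could be incorporated is the preference-perceptron update from coactive learning (Shivaswamy and Joachims, listed below): when the designer indicates a better completion than the one the tool ranked first, the ranking weights are shifted toward the preferred item's features. The sketch below is a minimal illustration of that idea; the feature vectors, candidate names, and helper functions are hypothetical assumptions, not part of any existing tool.

```python
# Sketch of a coactive-learning (preference perceptron) update for
# reranking code-completion candidates from designer feedback.
# All names and features here are illustrative assumptions.
import numpy as np

def rank(weights, candidates):
    """Order candidates by descending score w . phi(c)."""
    return sorted(candidates, key=lambda c: -np.dot(weights, c["phi"]))

def preference_update(weights, proposed, preferred, lr=1.0):
    """Shift weights toward the designer's preferred completion:
    w <- w + lr * (phi(preferred) - phi(proposed))."""
    return weights + lr * (preferred["phi"] - proposed["phi"])

# Toy example: two candidate completions with made-up feature vectors.
candidates = [
    {"name": "close()", "phi": np.array([1.0, 0.0, 1.0])},
    {"name": "flush()", "phi": np.array([0.0, 1.0, 0.0])},
]
w = np.zeros(3)
proposed = rank(w, candidates)[0]   # tool's current top suggestion
preferred = candidates[1]           # designer marks flush() as better
w = preference_update(w, proposed, preferred)
print(rank(w, candidates)[0]["name"])  # flush() now ranks first
```

The appeal of this update rule for a review tool is that it only needs the designer's (possibly partial) reordering, not fully labeled training data.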
- Pannaga Shivaswamy, Thorsten Joachims: Online Structured Prediction via Coactive Learning.
- Zitao Liu: Interactive Machine Learning.
- Sebastian Proksch, Johannes Lerch, and Mira Mezini: Intelligent Code Completion with Bayesian Networks.