Faculty of Computer Science
Master's Thesis

Systematic Evaluation of Static Analysis Performance

Context

Program analyses are a common technique for uncovering flaws in software. As software systems grow ever larger, program analyses must cope with increasing scalability demands: an analysis that only uncovers bugs in yesteryear's code hardly provides assistance in contemporary software development processes. While analysis tools developed by the research community can often sidestep this problem by aiming for analysis depth or by prototyping novel analysis techniques, the issue is far more pressing for commercial tools.

Goal

In this thesis, your goal is to develop an approach to systematically evaluate static analysis platforms. As a proof of concept, this systematic evaluation is conducted on Teamscale, a commercial program analysis and software intelligence platform. Teamscale is equipped with a vast array of program analyses targeting software maintainability, covering two dozen programming languages. More than 100 companies and institutions worldwide have integrated Teamscale into their development processes.

Part of the approach will be to run the static analysis platform against extensive software repositories. To facilitate this step, you will integrate Teamscale with the benchmarking platform DelphiHub, a large-scale index of software components searchable by their metrics. Using this integration, it will subsequently be your task to run Teamscale on a software repository that DelphiHub supports. The results of this analysis will reveal potential performance bottlenecks, excessive memory usage, and other issues that inhibit or break large-scale analyses.
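To give a rough idea of the measurement side of this step, the sketch below shows a minimal Java harness that times repeated analysis runs after a warmup phase and records peak heap usage via the JVM's standard memory pool beans, in line with the measurement guidelines of Horký et al. (see Literature). The runAnalysis hook and the repository path are hypothetical placeholders; how Teamscale is actually triggered (for example, through its API) is deliberately left open here.

  import java.lang.management.ManagementFactory;
  import java.lang.management.MemoryPoolMXBean;
  import java.lang.management.MemoryType;
  import java.util.ArrayList;
  import java.util.List;

  /**
   * Minimal sketch of a performance-measurement harness for repeated
   * analysis runs. The analysis invocation itself is a placeholder.
   */
  public class AnalysisBenchmark {

      /** Hypothetical hook: trigger one full analysis of a repository. */
      static void runAnalysis(String repositoryPath) {
          // Placeholder: in practice this would call into the analysis
          // platform, e.g. via its API or command-line interface.
      }

      public static void main(String[] args) {
          String repo = args.length > 0 ? args[0] : "/path/to/repository";
          int warmupRuns = 3;    // runs discarded while the JVM warms up (JIT, caches)
          int measuredRuns = 10; // runs that contribute to the reported numbers

          // Warmup phase: results are discarded, per Java measurement guidelines.
          for (int i = 0; i < warmupRuns; i++) {
              runAnalysis(repo);
          }

          List<Long> durationsMs = new ArrayList<>();
          for (int i = 0; i < measuredRuns; i++) {
              resetPeakUsage();
              long start = System.nanoTime();
              runAnalysis(repo);
              long elapsedMs = (System.nanoTime() - start) / 1_000_000;
              durationsMs.add(elapsedMs);
              System.out.printf("run %d: %d ms, peak heap %d MiB%n",
                      i, elapsedMs, peakHeapBytes() / (1024 * 1024));
          }

          // Report the median, which is robust against outlier runs.
          durationsMs.sort(null);
          System.out.printf("median wall-clock time: %d ms%n",
                  durationsMs.get(durationsMs.size() / 2));
      }

      /** Reset the JVM's per-pool peak-usage counters before a run. */
      static void resetPeakUsage() {
          for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
              pool.resetPeakUsage();
          }
      }

      /** Sum the peak usage of all heap memory pools since the last reset. */
      static long peakHeapBytes() {
          long total = 0;
          for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
              if (pool.getType() == MemoryType.HEAP) {
                  total += pool.getPeakUsage().getUsed();
              }
          }
          return total;
      }
  }

The warmup policy, run counts, and memory sampling would have to be tuned to the platform under test; in particular, a platform that analyzes incrementally may cache results between runs, so cached state may need to be cleared to ensure each run performs comparable work.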

Research Questions

  1. What are the principal components of a well-balanced static analysis performance evaluation?
  2. What observations can be made when scaling a static analysis tool to an ecosystem?

Literature 

  • Lars Heinemann, Benjamin Hummel, and Daniela Steidl. 2014. Teamscale: software quality control in real-time. In Companion Proceedings of the 36th International Conference on Software Engineering (ICSE Companion 2014). Association for Computing Machinery, New York, NY, USA, 592–595.
    DOI: https://doi.org/10.1145/2591062.2591068
     
  • Vojtěch Horký, Peter Libič, Antonin Steinhauser, and Petr Tůma. 2015. DOs and DON'Ts of Conducting Performance Measurements in Java. In Proceedings of the 6th ACM/SPEC International Conference on Performance Engineering (ICPE '15). Association for Computing Machinery, New York, NY, USA, 337–340.
    DOI: https://doi.org/10.1145/2668930.2688820

Collaboration

This thesis is conducted in close collaboration with CQSE GmbH, who develop and maintain Teamscale. CQSE's work mainly revolves around helping customers with software maintenance, both through consulting and through Teamscale itself. Before the thesis starts, you will have one month to familiarize yourself with the Teamscale code base; during this time, you will be supervised by a member of CQSE. Supervision of the thesis itself will be shared by Prof. Hermann and a CQSE employee.