Our research is at the intersection of two fundamental computer science methods. First, we design and implement new programming models and frameworks with the goal of maximizing the automation of technical (non-functional) concerns, so that developers' creativity and engineering efforts are focused on solutions for application domain concerns. Second, we design new methods and techniques for reasoning about software properties. There are obvious synergies between these two areas. One can design programming models dedicated to writing program analyses, so as to facilitate their design and implementation. Conversely, code intelligence methods can be instrumental in ensuring the efficiency and safety of language concepts.

Our areas of expertise are programming models and code intelligence, which we apply in several specialized application areas.

Programming Models

  • Focus creativity and engineering efforts on application domain concerns, not accidental complexity.

Code Intelligence

  • Understanding and reasoning about the properties of existing software.

Also see our list of publications and ongoing and past software projects.

Programming Models

Computing automates the world, programming models automate computing.

The design and implementation of high-level (domain-specific) programming abstractions are key to automating complex “non-functional” concerns, thus enabling desired software system properties such as reliability, security, and privacy “by design”.

Recurring research themes include:

  • trading off expressivity against ease of reasoning
  • formal definitions
  • proving key properties
  • implementing efficient prototypes
  • evaluating expressiveness
  • evaluating effects on program design quality
  • evaluating application performance
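To make the idea of automating a non-functional concern concrete, here is a minimal, purely illustrative sketch of a domain-specific abstraction (the `Query` builder and its methods are hypothetical names, not one of our actual systems): user data can only enter a query as a bound parameter, never as raw SQL text, so injection is impossible by construction and the "secure by design" property holds without any developer discipline.

```python
# Illustrative sketch of an embedded DSL that automates a security concern.
# The names (Query, where, compile) are hypothetical, for exposition only.

class Query:
    def __init__(self, table):
        self._table = table
        self._conds = []    # SQL fragments with placeholders only
        self._params = []   # user-supplied values, kept strictly separate

    def where(self, column, value):
        # User input goes into the parameter list, never into the SQL
        # string itself: injection is ruled out by construction.
        self._conds.append(f"{column} = ?")
        self._params.append(value)
        return self

    def compile(self):
        sql = f"SELECT * FROM {self._table}"
        if self._conds:
            sql += " WHERE " + " AND ".join(self._conds)
        return sql, tuple(self._params)

# Even a malicious input cannot change the query's structure:
sql, params = Query("users").where("name", "alice'; DROP TABLE users;--").compile()
print(sql)     # SELECT * FROM users WHERE name = ?
print(params)  # the attack string stays an inert parameter value
```

The point of the sketch is the division of labor: the abstraction's implementer establishes the property once, and every program written against the abstraction inherits it for free.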

Code Intelligence

In a digitalized world (“software is eating the world”), software quality in general and software security in particular play a central role.

Ensuring high-quality, secure software requires powerful and intelligent program analysis methods that automatically detect problems, rule violations, and security vulnerabilities.

Recurring research themes include:

  • trading off precision against correctness
  • analyzing binary or obfuscated code
  • building a unified knowledge base from a diverse set of analyses
  • learning patterns from existing code bases
  • automating tasks that otherwise require expert knowledge
  • preventing security vulnerabilities

Application Areas

We apply our expertise in several specialized application areas:

  • Reliable and resilient networked software systems for decentralized communication.
  • Secure and privacy-preserving software.
  • Methods and tools for AI software systems.

Reliable and Resilient Networked Software Systems

Today’s software systems are interactive and distributed.

The currently dominant software architecture is centralized, which is unsuited for an age of plentiful mobile devices, autonomous vehicles, and countless embedded devices and sensors. A centralized architecture prevents independent processing, causes latency, and undermines user control and privacy. Decentralized architectures address these problems by enabling offline availability, low latency, and data privacy. However, they require new solutions for trust, fault tolerance, and concurrency control.

We research the resilience and reliability of networked software systems at several levels, using declarative programming platforms:

  • Declarative reactive programming to improve on callback-based solutions by guaranteeing the consistency of complex interactions.
  • Multitier programming to improve decentralized applications that interact according to smart contracts, by preventing novel classes of bugs that arise from a global, untrusted execution environment.
  • Resilient and disruption-tolerant networking protocols to enable message dissemination even in highly unreliable settings.
  • Programmable networking to increase reliability at the networking layer by extending application semantics to in-network execution.
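To illustrate the first point, here is a minimal reactive-programming sketch (purely illustrative; the `Signal`/`Derived` names are hypothetical, not the API of any specific framework of ours): derived values are recomputed automatically by the runtime, so consistency between dependent values is maintained by construction rather than by hand-written callbacks.

```python
# Illustrative sketch of declarative reactive programming.
# Signal holds a mutable value; Derived recomputes automatically
# whenever one of its source signals changes.

class Signal:
    def __init__(self, value):
        self._value = value
        self._observers = []

    @property
    def value(self):
        return self._value

    def set(self, value):
        # Updating a signal transparently refreshes all dependents,
        # replacing the error-prone manual callback wiring.
        self._value = value
        for obs in self._observers:
            obs.refresh()

class Derived:
    def __init__(self, fn, *sources):
        self._fn = fn
        self._sources = sources
        for s in sources:
            s._observers.append(self)
        self.refresh()

    def refresh(self):
        self.value = self._fn(*(s.value for s in self._sources))

celsius = Signal(20)
fahrenheit = Derived(lambda c: c * 9 / 5 + 32, celsius)
print(fahrenheit.value)  # 68.0

celsius.set(100)         # the derived value updates automatically
print(fahrenheit.value)  # 212.0
```

In a callback-based design, forgetting to re-run the conversion after an update would silently leave `fahrenheit` stale; in the declarative version, that class of inconsistency cannot occur.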

Secure and Privacy-preserving Software Systems

As the digitalization of our everyday lives continues, the risk to our data and privacy also increases.

All data is valuable, whether corporate or private, and whether it accumulates on the web, resides in the cloud, or is stored on personal devices. Such data is constantly under attack, from malware and spyware to trackers and ad networks. We cover a broad range of both analytic and constructive methods for enabling secure and privacy-preserving software systems.

In this area, automatic analyses have a major impact on the quality and security of software systems, because even systems that appear secure at first glance may contain insecure code hidden from even the most trained eye. To secure new developments from the outset, we focus on privacy by design using high-level specification languages. Combining these approaches, we have developed an assortment of modular tools:

  • A general-purpose platform for static analysis of programs that enables quick, reliable, and loosely-coupled analysis.
  • Analyses that scan either whole applications or individual libraries for potentially dangerous behavior.
  • Novel methods to comprehend software systems by slicing them into clear modules.
  • Machine learning to further enhance the detection capabilities of many analyses.
  • Specification systems used by domain experts to guide and customize the analysis system.
  • A specification language for domain-specific usage rules of software components – from which correct code is synthesized – simple enough for non-programmers to use.
  • A query language for data-intensive applications that automatically generates and deploys sub-computations to optimize performance while protecting the processed data from unauthorized access.
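To give a flavor of what such analyses do, here is a toy static analysis sketch (illustrative only, not our actual platform): it walks a program's abstract syntax tree and flags calls to known-dangerous functions such as `eval`, reporting the line on which each violation occurs.

```python
import ast

# Toy static analysis: walk the AST of a program and report calls
# to functions known to enable arbitrary code execution.
# Purely illustrative; real analyses track data flow, aliasing, etc.

DANGEROUS = {"eval", "exec"}

def find_dangerous_calls(source):
    """Return (line, name) pairs for calls to known-dangerous functions."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS):
            findings.append((node.lineno, node.func.id))
    return findings

program = """\
x = input()
result = eval(x)
print(result)
"""
print(find_dangerous_calls(program))  # [(2, 'eval')]
```

Even this tiny checker already exhibits the core trade-off mentioned above: it is cheap and sound for direct calls, but an attacker who aliases `eval` under another name evades it, which is where more precise (and more expensive) data-flow analyses come in.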

Trustworthy AI Software Systems

A new paradigm requires new tools and methods.

There is a lack of appropriate development methods and tools for AI. Even though many libraries are available for training models, e.g., TensorFlow, using them correctly poses great challenges for developers without deep knowledge of the methods encoded in them; this calls for automation. At the same time, the tools commonly used in software development to analyze software correctness are hardly available for AI systems. The group therefore researches new methods and tools that support developers in meeting these challenges and help ensure that AI can be used by a larger circle of developers and transferred to a broader set of applications.

The second object of our research is the development of new software engineering methods to enhance the AI itself. For example, we are exploring ways to introduce ideas of reactive programming and modular programming into the data processing pipeline powering the learning of AI models. Respectively, we want to explore new programming languages to support the third wave of AI that integrates learned, modeled and built in knowledge and cognitive models to achieve human-like properties. For example, by continuously extending existing knowledge and “thinking” logically. The von Neumann machine model on which today's programming is based describes calculations on a level too low to be suitable for the complexity of third-wave AI. However, a new suitable model has not yet been systematically researched. Possibly, the same architectures that are used for the pipeline design can be applied to the composition of different components of third wave AI.