Our research sits at the intersection of two fundamental computer science methods. First, we design and implement new programming models and frameworks with the goal of maximising the automation of technical (non-functional) concerns, so that developers' creativity and engineering efforts can focus on solutions for application-domain concerns. Second, we design new methods and techniques for reasoning about software properties. There are obvious synergies between these two areas. One can design programming models dedicated to writing program analyses, so as to facilitate their design and implementation. Conversely, code intelligence methods can be instrumental in ensuring the efficiency and safety of language concepts.
Our areas of expertise are programming models and code intelligence, which we apply in several specialized areas.
Also see our list of publications and ongoing and past software projects.
Computing automates the world, programming models automate computing.
The design and implementation of high-level (domain-specific) programming abstractions are key to automating complex “non-functional” concerns, thus enabling desired software-system properties such as reliability, security, and privacy “by design”.
Recurring research themes include:
In a digitalized world (“software is eating the world”), software quality in general and software security in particular play a central role.
Enforcing high-quality, secure software requires powerful and intelligent methods of program analysis, which automatically detect problems, rule violations, and security vulnerabilities.
Recurring research themes include:
We apply our expertise in several specialized application areas:
Today’s software systems are interactive and distributed.
The currently dominant software architecture is centralized, which is unsuited to an age of ubiquitous mobile devices, autonomous vehicles, and countless embedded devices and sensors. A centralized architecture prevents independent processing, causes latency, and undermines user control and privacy. Decentralized architectures solve these problems by providing offline availability, low latency, and data privacy. However, they require new solutions for trust, fault tolerance, and concurrency control.
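One established answer to coordination-free concurrency control in such settings is the conflict-free replicated data type (CRDT). As a minimal sketch (class and method names are our own choosing, not code from our platforms), a state-based grow-only counter lets replicas update offline and converge on merge:

```python
class GCounter:
    """State-based grow-only counter CRDT: each replica increments
    only its own slot, so merges commute and replicas converge."""
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica id -> local increment count

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Element-wise maximum: idempotent, commutative, associative,
        # so merge order and repetition do not matter.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

# Two replicas update independently (e.g. offline) and later synchronize.
a, b = GCounter("a"), GCounter("b")
a.increment(2)
b.increment(3)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 5
```

Because merging needs no coordination, replicas stay available during partitions, at the price of only eventual (rather than immediate) consistency.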
We research the resiliency and reliability of networked software systems at several levels, using declarative programming platforms.
As the digitalization of our everyday lives continues, the risk to our data and privacy also increases.
All data is valuable, whether it is corporate or private, and whether it accumulates on the web, resides in the cloud, or is stored on personal devices. Such data is constantly under attack, from malware and spyware to trackers and ad networks. We cover a broad range of both analytic and constructive methods for enabling secure and privacy-preserving software systems.
In this area, automatic analyses have a major impact on the quality and security of software systems, because even systems that appear secure at first glance may contain insecure code hidden from even the most trained eye. To improve new developments, we focus on privacy by design using high-level specification languages. Combining both directions, we have developed an assortment of modular tools.
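The kind of automatic analysis meant here can be illustrated with a deliberately tiny, flow-insensitive taint check (an illustrative sketch, not one of our tools): it flags calls where a value assigned from a "source" such as `input()` reaches a "sink" such as `eval()` through a variable.

```python
import ast

SOURCES, SINKS = {"input"}, {"eval", "exec"}

def tainted_sinks(code):
    """Return (sink, variable, line) triples where a source-derived
    variable flows into a dangerous sink. Flow-insensitive sketch."""
    tainted, findings = set(), []
    for node in ast.walk(ast.parse(code)):
        # Mark variables assigned directly from a source call as tainted.
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.Call):
            func = node.value.func
            if isinstance(func, ast.Name) and func.id in SOURCES:
                tainted |= {t.id for t in node.targets
                            if isinstance(t, ast.Name)}
        # Report sink calls that receive a tainted variable.
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name) \
                and node.func.id in SINKS:
            for arg in node.args:
                if isinstance(arg, ast.Name) and arg.id in tainted:
                    findings.append((node.func.id, arg.id, node.lineno))
    return findings

# x flows from input() (source) into eval() (sink) on line 2.
print(tainted_sinks("x = input()\neval(x)"))  # → [('eval', 'x', 2)]
```

Real analyses must of course track flows across assignments, calls, and aliases, which is exactly where the research challenges lie.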
A new paradigm requires new tools and methods.
Appropriate development methods and tools are still lacking. Even though many libraries for training models are available, e.g. TensorFlow, their correct use poses great challenges for developers without deep knowledge of the methods encoded in them, and thus calls for automated support. At the same time, the tools commonly used in software development for the analysis of software correctness are hardly available in this setting. The group therefore researches new methods and tools that support developers in meeting these challenges and help ensure that AI can be used by a larger circle of developers and transferred to a broader set of applications.
The second objective of our research is the development of new software engineering methods to enhance AI itself. For example, we are exploring ways to introduce ideas of reactive programming and modular programming into the data-processing pipelines that power the learning of AI models. Likewise, we want to explore new programming languages to support the third wave of AI, which integrates learned, modelled, and built-in knowledge with cognitive models to achieve human-like properties, for example by continuously extending existing knowledge and "thinking" logically. The von Neumann machine model on which today's programming is based describes computation at a level too low to suit the complexity of third-wave AI, yet a suitable new model has not been systematically researched. Possibly, the same architectures used for pipeline design can be applied to the composition of the different components of third-wave AI.
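The idea of bringing reactive programming into a data-processing pipeline can be sketched in a few lines (the `Signal`/`derive` names are hypothetical, not an existing API of ours): derived values recompute automatically whenever their inputs change, so a preprocessing step never runs on stale data.

```python
class Signal:
    """Minimal push-based reactive value: registered observers are
    re-run whenever the value changes."""
    def __init__(self, value):
        self._value = value
        self._observers = []

    @property
    def value(self):
        return self._value

    def set(self, value):
        self._value = value
        for obs in self._observers:
            obs()

def derive(fn, *deps):
    """A derived signal that recomputes fn over its dependencies
    whenever any of them changes."""
    out = Signal(fn(*(d.value for d in deps)))
    def update():
        out.set(fn(*(d.value for d in deps)))
    for d in deps:
        d._observers.append(update)
    return out

# A pipeline stage expressed reactively: the aggregate updates
# automatically when the raw data changes.
raw = Signal([1.0, 2.0, 3.0])
total = derive(sum, raw)
assert total.value == 6.0
raw.set([2.0, 2.0])
assert total.value == 4.0
```

Scaling this idea to training pipelines, with incremental recomputation and modular composition of stages, is one of the research directions described above.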