If you are interested in a thesis, feel free to contact the staff of the group or Prof. Mezini at any time. It is best to first take a look at the current list of research topics, since the vast majority of theses are assigned in these areas. If you find a topic interesting, simply contact the staff member working on it. If there is a topic in the area of software engineering that you find particularly interesting but that does not directly fit one of the research topics, please approach us anyway; a supervision opportunity may well arise.
Software-based systems already play a major role in industrial production, and this role will only grow in the context of Industrie 4.0. To solve the new challenges that arise in this context, existing software has to be adapted in different directions, e.g., to enable the addition of new sensors or the creation of a digital twin. For this purpose, we want to uncover software product line features and models that are already present implicitly and make them explicit. This requires the development of corresponding analyses and automatic refactorings.
Candidates would work on different topics that enable software reuse of industrial controllers written in C.
These topics include (but are not limited to):
• automatic identification and localization of features
• automatic code slicing of identified features
• adaptation of analyses to the presence of C preprocessor macros
• automatic module extraction
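To make the first two topics concrete, the following is a minimal sketch of feature localization, under the assumption that feature variability in the controller code is expressed through C preprocessor conditionals. It only handles well-formed `#ifdef`/`#endif` pairs (no `#if`, `#else`, or `#elif`), and the macro names and sample code are purely illustrative; a real analysis would need a full preprocessor-aware parser.

```python
import re

# Locate code regions guarded by C preprocessor feature macros
# (#ifdef FEATURE_X ... #endif) and report their line ranges.
# Only #ifdef/#endif pairs are handled; #if/#else/#elif are not.

IFDEF = re.compile(r"^\s*#\s*ifdef\s+(\w+)")
ENDIF = re.compile(r"^\s*#\s*endif")

def locate_features(c_source: str):
    """Map each feature macro to the (start, end) line ranges it guards."""
    features = {}
    stack = []  # (macro name, start line) of currently open #ifdef blocks
    for lineno, line in enumerate(c_source.splitlines(), start=1):
        m = IFDEF.match(line)
        if m:
            stack.append((m.group(1), lineno))
        elif ENDIF.match(line) and stack:
            macro, start = stack.pop()
            features.setdefault(macro, []).append((start, lineno))
    return features

# Hypothetical controller fragment with two optional features:
controller = """\
void read_sensors(void) {
#ifdef FEATURE_TEMP_SENSOR
    poll_temperature();
#endif
#ifdef FEATURE_DIGITAL_TWIN
    publish_state();
#endif
}
"""

print(locate_features(controller))
# → {'FEATURE_TEMP_SENSOR': [(2, 4)], 'FEATURE_DIGITAL_TWIN': [(5, 7)]}
```

The line ranges recovered this way are exactly the slices that a subsequent code-slicing or module-extraction step would operate on.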
If you are interested in any of the above topics or have further questions, please contact: email@example.com (Patrick Müller)
Supervisor: Patrick Müller, M.Sc.
Supervisor: Anna-Katharina Wickert, M.Sc.
Over the last few years, a vast amount of data has become available from a variety of heterogeneous sources, including social networks and cyber-physical systems. This situation has pushed recent research toward computing platforms and programming environments that support processing massive quantities of data. Systems like MapReduce, Spark, Flink, Storm, Hive, Pig Latin, Hadoop, and HDFS have emerged to address the technical challenges posed by the nature of these computations, including parallelism, distribution, network communication, and fault tolerance.
Despite the popularity of such systems, little attention has been paid to aspects of the development process other than programming itself. For example, testing Big Data applications is an area that remains largely unexplored. This is all the more surprising considering that testing has a long tradition in software engineering, both from a research standpoint (e.g., concolic testing, mutation testing) and for practitioners, with established testing techniques and tools that are widespread in industry (e.g., JUnit).
The goal of this thesis is to develop a testing methodology for Big Data applications focusing on the Apache Spark platform. The candidate will apply testing techniques based on symbolic execution to the setting of Big Data. Ideally, the thesis will include a comparison of different approaches as well as the development of a new methodology specifically tailored for Big Data.
Matei Zaharia, Mosharaf Chowdhury, Michael J. Franklin, Scott Shenker, and Ion Stoica. 2010. Spark: Cluster Computing with Working Sets. In Proceedings of the 2nd USENIX Conference on Hot Topics in Cloud Computing (HotCloud '10). USENIX Association, Berkeley, CA, USA.
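As a starting point before more advanced techniques such as symbolic execution, a common baseline for testing Spark applications is to keep the per-record logic in plain functions that can be unit-tested without a cluster, and to wire them into RDD operations only in the driver program. The sketch below illustrates this for a word-count-style job; the function names are illustrative, not part of any Spark API.

```python
# Per-record logic kept as a plain function, testable without Spark.
def tokenize(line: str):
    """Line-level logic that would be passed to rdd.flatMap(tokenize)."""
    return [w.lower() for w in line.split() if w.isalpha()]

def count_words(lines):
    """Reference semantics of flatMap(tokenize) + reduceByKey(add),
    expressed sequentially so it can serve as a test oracle."""
    counts = {}
    for line in lines:
        for word in tokenize(line):
            counts[word] = counts.get(word, 0) + 1
    return counts

# Plain unit tests, no SparkContext required:
assert tokenize("Big Data 4") == ["big", "data"]
assert count_words(["spark spark", "flink"]) == {"spark": 2, "flink": 1}
```

A sequential reference implementation like `count_words` also gives a differential-testing oracle: the distributed job and the oracle must agree on every input, which is one way a testbed for this thesis could check generated test cases.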
Supervisor: Prof. Dr. Guido Salvaneschi
Software-Defined Networks (SDNs) provide a new way to configure computer networks. Special-purpose network devices with tightly coupled data and control planes are replaced by programmable switches managed by a logically centralized controller. The communication between the controller and these programmable switches is carried out using well-defined APIs (e.g., OpenFlow). Instead of configuring devices individually, network policies are implemented on top of the controller, which then instructs the individual network switches.

However, SDN APIs like OpenFlow closely resemble the features provided by the hardware. OpenFlow uses a set of prioritized match-action rules as its abstraction, which makes it difficult to write sophisticated network applications. For example, an application supporting multiple tasks at the same time must merge the switch-level rules required by each task into a single set of rules that can be installed on the switches. To overcome these limitations, different programming languages for SDNs have been proposed that provide higher-level abstractions on top of OpenFlow, including abstractions for querying the network state, basic service composition, and language support for network verification.

Developing new SDN language features requires a comparison with existing languages, usually to show that they increase the expressivity of the language while providing at least the same performance at runtime.
However, it is currently quite cumbersome to compare different SDN languages, since they are implemented on top of different host languages (e.g., Python, OCaml, Java) and usually provide only a small set of simple example applications that are not directly comparable with each other.

The goal of this thesis is to develop an extensible testbed for SDN applications that allows comparing SDN programming languages with respect to expressivity as well as performance, and to provide a set of small- to medium-sized example applications that can be used for benchmarking and comparing the available language abstractions. Based on the results of the experiments, the next step will be to propose new language features that address the limitations of current solutions.
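The rule-merge problem mentioned above can be sketched in a few lines. The toy model below assumes matches are simple field-to-value maps with exact-match semantics and combines two tasks by pairwise intersection of their rules (in the spirit of parallel composition in SDN languages); real OpenFlow matches have many more fields plus wildcard and masking semantics.

```python
# Two tasks (e.g., routing and monitoring) each produce prioritized
# match-action rules; the controller must combine them into one rule set.

def intersect(m1: dict, m2: dict):
    """Intersect two matches given as field->value dicts; None if disjoint."""
    merged = dict(m1)
    for field, value in m2.items():
        if field in merged and merged[field] != value:
            return None  # conflicting values: no packet matches both rules
        merged[field] = value
    return merged

def parallel_compose(rules1, rules2):
    """Cross-product of two prioritized rule lists.
    Each rule is (priority, match, actions); higher priority wins."""
    merged = []
    for p1, m1, a1 in rules1:
        for p2, m2, a2 in rules2:
            m = intersect(m1, m2)
            if m is not None:
                merged.append((p1 + p2, m, a1 + a2))
    return sorted(merged, key=lambda r: -r[0])

routing = [(1, {"dst": "10.0.0.2"}, ["fwd:2"])]
monitoring = [(1, {"proto": "tcp"}, ["count"])]
print(parallel_compose(routing, monitoring))
# One combined rule matching dst 10.0.0.2 AND tcp, performing both actions.
```

Even this toy version shows why hand-merging rules does not scale: the composed rule set grows with the product of the task rule sets, which is exactly the bookkeeping that higher-level SDN languages automate.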
Just-in-time compiler and interpreter optimization
Over the last few years, reactive programming (RP) has gained the attention of researchers and practitioners for its potential to express otherwise complex reactive behavior in an intuitive and declarative way. Implementations of RP have been proposed in several languages, including Scala. Recently, concepts inspired by RP have been applied to production frameworks like Microsoft Reactive Extensions (Rx), which received great attention after the success story of the Netflix streaming media provider. Finally, the growing interest of the frontend developer community is reflected in the increasing number of libraries that implement RP principles, among them React.js, Bacon.js, Knockout, Meteor, and Reactive.coffee.
Performance remains, however, a major limitation of RP. Most RP implementations are based on libraries, where the language runtime is agnostic to reactive abstractions. As a result, aspects like change propagation, dependency tracking, and memory management that could be specifically optimized only benefit from general-purpose optimizations, such as those provided by out-of-the-box just-in-time compilers. Optimization at the virtual machine level has the potential to address these issues.
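To make the propagation overhead concrete, here is a deliberately minimal library-level signal implementation, a sketch under simplifying assumptions (no glitch freedom, no dependency removal). Every change walks the dependency graph and re-invokes user closures through generic calls; these are exactly the dynamic edges and indirect calls that a reactive-aware VM or JIT could specialize, but that a plain library cannot.

```python
# Minimal library-level signals: each derived signal re-runs its user
# function and notifies its observers on every change. The propagation
# loop and closure calls are opaque to a generic JIT.

class Signal:
    def __init__(self, fn, *deps):
        self._fn, self._deps, self._observers = fn, deps, []
        for d in deps:
            d._observers.append(self)  # dynamic dependency edge
        self.value = fn(*(d.value for d in deps)) if deps else fn()

    def update(self):
        self.value = self._fn(*(d.value for d in self._deps))
        for o in self._observers:  # interpreted change propagation
            o.update()

class Var(Signal):
    """A source signal whose value can be set imperatively."""
    def __init__(self, v):
        super().__init__(lambda: v)
    def set(self, v):
        self._fn = lambda: v
        self.update()

a = Var(1)
b = Var(2)
s = Signal(lambda x, y: x + y, a, b)  # s is kept equal to a + b
a.set(10)
print(s.value)  # → 12
```

A VM-level implementation could, for instance, treat the dependency graph as stable and compile the whole propagation path of `a.set` into straight-line code, an optimization that is out of reach as long as the runtime sees only generic objects and closures.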
Supervisor: Prof. Dr. Guido Salvaneschi
Reactive programming is a recent programming paradigm that specifically supports the development of reactive applications. It provides dedicated language abstractions, like signals and events, that overcome the disadvantages of the traditional Observer pattern.
Previous research on reactive programming has greatly improved the abstractions available to developers. Other lines of research have focused on non-functional properties, such as proving safety or time-bounded execution of reactive applications.
Interestingly, supporting reactive applications with dedicated tools and programming environments is a largely unexplored area. Yet the field is extremely promising, since reactive applications exhibit regular patterns that can easily be exploited by the IDE.