The CS department's Talk Series


Have you ever wondered what happens behind all these doors in our department?
Are you curious what is happening around you?
What are all these people busy with?
What research is emerging and which results are actually being produced in the CS department?

The CS department's Talk Series is a forum for researchers from different groups. It offers a great opportunity to learn about cutting-edge research, to talk to members of other research groups and to interconnect your research.

We especially want to invite all PhD students and research staff to the meetings, but interested master's students are also very welcome.

If you want to participate by giving a talk yourself, please contact us at promotionsprogramm@cs.uni-kl.de!

Schedule for Summer 2021

The talks will be presented online this term due to the current Corona-/Covid-19 restrictions.
The access codes for the online meeting are sent via the PhD mailing list. If you didn't receive them, please contact us at promotionsprogramm@cs.uni-kl.de

Monday, 19.04.2021, 15:45

Torben Fetzer: Structured Light Reconstruction - An Entire Pipeline with Improvements in Usability, Accuracy, Stability and Speed

The field of 3D reconstruction is one of the most important areas in computer vision. It is not only of theoretical importance, but also increasingly used in practice, be it in reverse engineering, quality control or robotics. There are already a variety of different 3D scanners available for purchase. However, different accuracies, resolutions and application flexibilities go hand in hand with sometimes drastic price differences. A silver bullet has not yet been found. A system is desirable that makes it possible to reconstruct a wide variety of objects inexpensively, flexibly and fully automatically with high accuracy, density and resolution. As few requirements as possible should be placed on the objects to be reconstructed and the hardware used should be freely available (customer devices) and exchangeable. Thus, resolution and reconstructed detail density could be efficiently controlled via the hardware used. In contrast to most existing consumer systems, it should be self-calibrating so that it can be set up and adapted to new use cases without complicated procedures and user interaction. Important steps and already achieved research results towards this desirable system will be shown in this talk. I present the chosen method for reconstruction and explain our developed procedures and current tasks. In particular, the improvements in the generation of high precision matches, the auto-calibration of the flexible setup and the automatic alignment of the achieved partial point clouds are featured. This results in a complete guide to fully automatic 3D reconstruction based on the structured light method. Furthermore, we also present some possible solutions to the weaknesses of the chosen method.
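
As a side note for readers unfamiliar with the method, the sketch below illustrates, under simplifying assumptions, how a structured light system with Gray-code patterns recovers per-pixel projector correspondences, which are the raw material for the high-precision matches mentioned above. The array names and the contrast threshold are hypothetical; this is a generic illustration of the technique, not the pipeline presented in the talk.

    import numpy as np

    def decode_gray_code(captured, white, black):
        """Decode Gray-code structured light images into projector column indices.

        captured: (N, H, W) array of camera images, one per projected bit pattern
        white, black: (H, W) reference images under full-on / full-off projection
        Returns an (H, W) map of projector columns and a validity mask.
        """
        threshold = (white.astype(np.float32) + black.astype(np.float32)) / 2.0
        bits = (captured.astype(np.float32) > threshold).astype(np.uint32)  # (N, H, W)

        # Convert Gray code to binary: b[0] = g[0], b[i] = b[i-1] XOR g[i]
        binary = np.zeros_like(bits)
        binary[0] = bits[0]
        for i in range(1, bits.shape[0]):
            binary[i] = np.bitwise_xor(binary[i - 1], bits[i])

        # Accumulate bits (most significant first) into a column index per pixel
        columns = np.zeros(bits.shape[1:], dtype=np.uint32)
        for i in range(binary.shape[0]):
            columns = (columns << 1) | binary[i]

        # Pixels with too little contrast between white and black are unreliable
        mask = (white.astype(np.float32) - black.astype(np.float32)) > 20
        return columns, mask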

Monday, 03.05.2021, 15:45

Lovro Bosnar (Computer Graphics and HCI Group): Material modeling and rendering for surface inspection

Industry 4.0 introduced the automation of customized product manufacturing. To ensure the quality standards, automated production must be followed by an automated inspection. Visual surface inspection is a common quality inspection process, therefore automation of surface inspection planning is required. Surface inspection planning aims to solve two important tasks: finding optimal hardware placement for complete coverage of inspected objects and development of image processing algorithms for surface analysis. Both tasks can greatly benefit from the simulation of the inspection environment, which consists of the inspected object, light and camera, using computer graphics modeling and rendering techniques. The focus of our work is the simulation of the inspected object material. Our aim is to model physically based material using microfacet based surface scattering models in order to achieve the realism required for surface inspection planning. Our approach is based on procedural texturing methods which are used to build models for generating microstructure over the inspected object surface. Alongside procedurally generated texture, the aim of our models is usage on arbitrarily complex inspected object geometry.
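
To make the term "microfacet based surface scattering model" concrete, here is a minimal sketch of the GGX normal distribution term commonly used in such models, together with a toy procedural variation of roughness over the surface. Function and parameter names are illustrative assumptions, not the models developed in this work.

    import numpy as np

    def ggx_ndf(n_dot_h, roughness):
        """GGX/Trowbridge-Reitz normal distribution function D(h).

        n_dot_h: cosine between surface normal and half vector
        roughness: perceptual roughness in (0, 1]; alpha = roughness^2
        """
        alpha2 = roughness ** 4  # alpha = roughness^2, squared again inside D
        denom = n_dot_h ** 2 * (alpha2 - 1.0) + 1.0
        return alpha2 / (np.pi * denom ** 2)

    def procedural_roughness(u, v, base=0.3, amplitude=0.1, frequency=40.0):
        """Toy procedural texture: vary roughness over (u, v) with a sine pattern
        to mimic a machined microstructure (purely illustrative)."""
        return base + amplitude * 0.5 * (1.0 + np.sin(frequency * u) * np.sin(frequency * v))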

Monday, 17.05.2021, 15:45

Gajendra Doniparthi: Indexing Methods for Interactive Exploration of Large-Scale Bio-Science Research Data

The advancement of high-throughput technologies has considerably increased the amount of research data generated from bio-science experiments. The integrated analysis of these large datasets provides opportunities to understand complex biological systems better. Our research begins with structuring and standardizing the experimental bio-science metadata from individual -omics studies. We develop a novel research data management framework that uses a Polystore model for interactively querying and exploring the standardised metadata along with huge volumes of contextual data (raw data). It also offers a multi-step interactive exploration process for integrated bio-science data analysis, enabling scientists to explore combined research data augmented with schema-less contextual information progressively. We also develop novel indexing algorithms to speed up the interactive exploration when dealing with large-scale cross-omics datasets. The research goal is to help develop a standard application to curate, annotate, maintain and explore integrated experimental meta-data from cross-omics studies.

Mahta Bakhshizadeh: Context-Aware Recommender Systems for Personal Knowledge Assistants

One of the ways to assist knowledge workers in their daily tasks and improve their productivity is to provide them with relevant, helpful information based on their current situation. There are many challenges in developing such a recommender system, which should be capable of recommending the right information at the right time to the right person. Understanding the contextual state of the users as precisely as possible, along with detecting their activities and information needs, plays a significant role in enhancing context-aware recommender systems for personal knowledge assistants.

Monday, 07.06.2021, 15:45

Zai Müller-Zhang: Integrated Planning and Scheduling for Customized Production using Digital Twins and Reinforcement Learning

For customized production in small lot sizes, traditional production plants have to be reconfigured manually multiple times to adapt to variable order changes, which significantly increases production costs. One of the goals of Industry 4.0 is to enable flexible production, allowing for customer-specific production or even production with lot size 1 in order to react dynamically to changes in production orders. All of this comes with increased quality parameters such as optimized use of machines, conveyor belts and raw materials, which ultimately leads to optimized resource utilization and cost-efficiency. To address this challenge, in this talk I present a digital twin based self-learning process planning approach using Deep-Q-Network that is capable of identifying optimized process plans and workflows for the simultaneous production of personalized products. I have evaluated my approach on a virtual aluminum cold milling factory from the SMS Group, in the context of the BaSys 4 project. The goal of the evaluation was to provide evidence that the proposed approach is able to handle large problem spaces effectively. My approach ensures the efficiency of the personalized production and the adaptivity of the production system.
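
For readers unfamiliar with Deep-Q-Networks, the following sketch shows the standard DQN ingredients (a Q-network and the temporal-difference loss on a replay batch) in PyTorch. The state encoding of the production scenario and all dimensions are hypothetical; the sketch only illustrates the learning rule, not the planning approach presented in the talk.

    import torch
    import torch.nn as nn

    class QNetwork(nn.Module):
        """Small MLP mapping a plant/order state vector to Q-values, one per discrete
        action (e.g. which process step to schedule on which machine next)."""
        def __init__(self, state_dim, n_actions):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, n_actions),
            )

        def forward(self, state):
            return self.net(state)

    def dqn_loss(q_net, target_net, batch, gamma=0.99):
        """Standard DQN temporal-difference loss on a replay batch.
        `done` is 1.0 for terminal transitions, 0.0 otherwise."""
        states, actions, rewards, next_states, done = batch
        q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            next_q = target_net(next_states).max(dim=1).values
            targets = rewards + gamma * (1.0 - done) * next_q
        return nn.functional.mse_loss(q_values, targets)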

Nishanth Laxman: "Safe" handling of uncertainties at runtime

One of the most well-established facts that presents challenges to the design and engineering of today's systems is that "Uncertainty is certain". Embedded systems have evolved over time and have become smart, complex, dynamic, adaptive, and increasingly accommodated into our daily life. The embedded systems of today are envisioned to dynamically interact and collaborate with each other to achieve a common goal and go by the term Cyber-Physical Systems (CPS). Through rapid changes in system types and system boundaries, one thing which has remained certain is the uncertainty associated with them, because the physical world is inherently uncertain. Traditional safety assurance techniques assume that the complete set of system specifications and possible configurations at runtime is available during system design. Such assumptions cannot be made for the development of CPS due to their dynamic nature at runtime. The runtime uncertainties, both in their epistemic and aleatory form, might put the system into a hazardous state, which in turn might result in an accident or harm. Early detection of such hazards can ensure that appropriate measures are taken to avoid catastrophic events.

Monday, 21.06.2021, 15:45

Tim Dellmann: Robust object recognition for agricultural robots through augmentation with simulated data sets

Agricultural robots increasingly have to rely on artificial intelligence. As the tasks become more and more complex, such as the recognition of certain attributes of plants, the requirements on the training data also grow. Robust recognition at a given time of day or of a specific plant disease demands a large diversity of such data, which is furthermore only available for a limited time of the year. Mature game engines allow highly realistic modeling of environments, which gives the ability to create different light and weather conditions. A virtual robot can then collect datasets that replace or enhance existing ones. Furthermore, Generative Adversarial Networks (GANs) can be applied to transform such simulated images into photo-realistic ones. A combination of both approaches is to be developed, which enables the training and testing of agricultural object recognition algorithms all year round.

Christian Kötting: FPGAs in Robot Control Architectures

On the higher abstraction layers of a mobile robot's control architecture there are many computationally intensive tasks. Since these have to be handled at runtime during operation, it is common to execute them on fast PC CPUs or graphics cards. The results in terms of processing time are usually satisfactory, but load situations frequently arise in which the hardware is pushed to the limits of its capacity. True real-time capability is therefore often hard to guarantee. This is particularly problematic for large machines: if real-time behavior cannot be guaranteed, the operation of large automated robot vehicles poses a safety risk. There are recognition algorithms that can be used to evaluate sensor data, perceive the robot's environment, and initiate or influence actions, and which can in principle prevent damage to property and persons. For these to work correctly, however, one must be able to give formal guarantees such as real-time capability, deadlock freedom or formal correctness. A promising perspective is the use of FPGAs. It has been shown repeatedly that, thanks to their high degree of parallelism, they are an adequate option for handling many computationally intensive tasks. Safety-relevant structures can be integrated into dedicated modules that run fully in parallel with the rest of the control architecture and thus make it possible to guarantee correct operation. This applies not only to lower abstraction layers, such as the connection of peripheral modules; sensor data processing with subsequent decision making can also take place independently of the CPU. Complex, safety-relevant algorithms can thus operate decoupled from the rest of the control architecture. The FPGA implementation of many typical high-level robotics algorithms, such as behavior networks, "classical" image and point cloud processing algorithms or neural networks, can significantly relieve the CPU of the robot system and thus provide a faster and at the same time more energy-efficient realization. In many cases this also comes with reduced weight, since the PC can be replaced by embedded hardware, which makes the approach equally suitable for building small robots. A supporting technology to be investigated is high-level synthesis, which, compared to classical hardware description languages, allows control structures to be implemented and modified quickly. Another technique of interest in this context is the dynamic reconfiguration of FPGAs, which allows the available hardware to be utilized better: modules can be exchanged or reconfigured context-sensitively during operation, while safety-relevant modules remain permanently active.

Monday, 05.07.2021, 15:45

Dennis Meckel: Concepts and Tooling for the Ecosystem of the Behavior-Based Control Architecture iB2C

Autonomous robot systems consist of hundreds of interacting software and hardware components. In the case of the behavior-based control architecture iB2C, these systems are controlled by hierarchically partitioned behavior networks. iB2C modules implement a standardized interface, are usually parameterizable, and can be interconnected with other components through data and meta signals. Although the individual components are ideally compact and highly specialized, composed networks are often complex due to the number of interconnections and parametrization options. Therefore, additional concepts and tools are needed to support the discovery, development, parametrization, and inspection of individual and composed components. New findings, processes, and solutions are integrated into and tested with the tool iB2C-Designer.

Schedule for Winter 2020/21

The talks will be presented online this term due to the current Corona-/Covid-19 restrictions.
The access codes for the online meeting are sent via the PhD mailing list. If you didn't receive them, please contact us at promotionsprogramm@cs.uni-kl.de

Monday, 02.11.2020, 15:30

Muhammad Nabeel Asim (DFKI): Deep dive into genomic analysis using machine learning

With the advancement of high-throughput sequencing technologies and ultra-modern bioinformatics tools, data related to genome sequencing is increasing. Compared to the previous decade, humongous genome-wide assays of gene expression are now publicly available, whose deep analysis and biological interpretation can facilitate a profound comprehension of multifarious areas, including sub-cellular location prediction of non-coding RNA, nucleosome position detection, acetylation and methylation prediction in DNA sequences, enhancer discrimination, and prediction of human-virus protein-protein interactions. Over time, a number of machine learning based methodologies have been proposed to improve the predictive performance on the diverse biomedical tasks mentioned earlier. However, there is still a big gap between the genome analysis and machine learning communities. The aim of my thesis is to bridge this gap by developing machine learning based methodologies that substantially raise the predictive performance, overall efficiency, and adaptability for genome analysis datasets of different scales.

Monday, 16.11.2020, 15:45

Sizhen Bian (DFKI): ML-applied novel capacitive sensing technology

Capacitive sensing technology is one of the primary sensing modalities to perceive the physical world, covering quantities such as pressure, distance, proximity, humidity and acceleration. The present work focuses on advanced, novel capacitive-based sensing techniques that have seldom been explored before, and on machine learning solutions in different application scenarios. Two types of capacitive-based sensing systems have been built, including the hardware, firmware, and applications. The first one is an oscillating magnetic field system composed of a traditional ceramic capacitance and a customized coil-based inductance. The system shows robust, accurate navigation functionality indoors/outdoors/underwater with an ML solution. After miniaturizing the system, we developed a wearable magnetic field system to efficiently monitor social distancing, aiming to decrease the risk of virus infection. For the second advanced capacitive sensing system, we focused on a ubiquitous concept, the human body capacitance (HBC). Previous work on the HBC mainly focused on electrostatic discharge protection, especially in healthcare institutions. We designed different wearable, low-cost, low-power prototypes that can measure the value of the HBC continuously in real time and utilize this concept for individual/cooperative activity recognition. The activity classification based on HBC shows significant improvement over IMU-only solutions. Some unique motion-sensing abilities were found based on HBC that are beyond the capabilities of traditional motion sensors like accelerometers and gyroscopes.

Monday, 30.11.2020, 15:45

Albert Schimpf (AG Programming languages, Prof. Hinze): Rapid prototyping using declarative specifications for concurrent and composable workflows

Data science tasks usually involve composing individual tasks into complex workflows and running them in a safe and efficient manner. To help build and understand data-flow heavy tasks, we propose the concept of flows. Flows result from annotating directed graphs with additional arrow types. Arrow types describe data flow, scopes, and concurrency and are used to reason about which parts of the graph are independent of each other. Furthermore, to enable rapid prototyping, a declarative specification is used to build, visualize, and statically type-check a workflow graph. Our work can be used to prototype and run typical data science tasks in an efficient manner.

Peter Neigel (AG Augmented Vision, Prof. Stricker): Multi-Camera Based Recognition of Outdoor Environments with Learning Based Approaches

The understanding of the environment is a key factor for all advanced driver assistance systems (ADAS), and a large part of the advances in many computer vision tasks comes from the use of deep learning techniques. From the beginning of this recent trend, research has focused heavily on urban areas with the intention of use in private passenger cars or cargo trucks. A large percentage of all vehicles worldwide, however, are industrial vehicles and mobile working machines like tractors, excavators or harvesters used in dozens of industries, from coarse earthwork operations and mining to agriculture. In this talk we want to have a look at how computer vision tasks like person detection or semantic segmentation can be transferred from urban domains to the outdoor environments in which these industrial vehicles might operate.

Monday, 14.12.2020, 15:45

Stanislav Frolov (Wissensbasierte Systeme (Prof. Andreas Dengel)): Text-to-Image Synthesis

With the advent of generative adversarial networks, synthesising images has recently become an active research area. Given an input text description, Text-to-Image (T2I) synthesis is the task of generating an image that correctly reflects the meaning of that description. It is a flexible and very intuitive way of conditional image synthesis. Although significant progress has been achieved in the last few years, generating images with multiple interacting objects is still very difficult. In this talk I will introduce the basic architecture of a T2I model, discuss challenges, and present a way to improve T2I models by leveraging Visual Question Answering.

Markus Anders (AG Algorithmics and Complexity, Prof. Schweitzer): Search Problems in Trees with Symmetries: near optimal traversal strategies for individualization-refinement algorithms

The graph isomorphism problem captures the essence of symmetry detection in combinatorial structures. Since exploitation of symmetry can have a dramatic impact on the efficiency of algorithms in various fields, practical solvers have been developed for over 50 years.
In this talk, I present a search problem on trees that closely captures the backtracking behavior of all current practical graph isomorphism algorithms. We derive novel probabilistic algorithms for which the running time is sublinear in the size of the trees, improving upon known backtracking techniques which incur linear cost. We prove that these algorithms are optimal up to logarithmic factors. Furthermore, we give tight linear lower bounds for deterministic algorithms.

Monday, 11.01.2021, 15:45

Eric Jedermann (AG DISCO, Prof. Schmitt): space-DISCO - An Introduction To Satellite Security

Satellite-based communication has become more popular since projects such as Starlink, OneWeb and Kuiper Systems started. Some of them have already deployed satellites in low earth orbits. The satellites will send their signals down to the end users. But how can you ensure that your signals are coming from the desired satellite and not from a malicious neighbor or from a drone flying above your head? Currently, the technologies and protocols used by the mentioned organizations are not publicly known. My talk gives a brief introduction to this topic, with a focus on signal source verification based on physical properties and its challenges.

Thomas Schneider: Classification of Finite Highly Regular Vertex-Coloured Graphs

In the literature, there are two concepts describing the "high regularity" of graphs: First, a graph is k-ultrahomogeneous if every isomorphism between two induced subgraphs of order at most k extends to an automorphism. Second, a graph is k-tuple regular if for any vertex set S of order at most k the number of common neighbours depends only on the isomorphism type of the subgraph induced by S.
In this talk, I give an overview of the existing classification results of undirected uncoloured graphs. After extending both properties above to coloured graphs, I present our classification of finite vertex-coloured k-ultrahomogeneous graphs and finite vertex-coloured k-tuple regular graphs for k >= 4.
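
Stated formally (a paraphrase of the definitions above, not necessarily the speaker's notation), the two notions read:

    % k-ultrahomogeneous: every isomorphism between small induced subgraphs
    % extends to an automorphism of the whole graph
    \forall S, T \subseteq V(G),\ |S|, |T| \le k:\quad
      \text{every isomorphism } \varphi\colon G[S] \to G[T]
      \text{ extends to some } \hat{\varphi} \in \operatorname{Aut}(G)
      \text{ with } \hat{\varphi}|_S = \varphi.

    % k-tuple regular: the number of common neighbours depends only on the
    % isomorphism type of the induced subgraph
    \forall S, T \subseteq V(G),\ |S|, |T| \le k:\quad
      G[S] \cong G[T] \ \Longrightarrow\
      \Bigl|\, \bigcap_{v \in S} N(v) \Bigr| = \Bigl|\, \bigcap_{v \in T} N(v) \Bigr|.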

Monday, 25.01.2021, 15:45

Jan-Tobias Sohns (AG Visual Information Analysis, Prof. Leitte): Decision Boundaries: Feature-Space Exploration of Black Box Classifiers

For critical processes such as cancer diagnosis or loan approval that rely on machine learning, the trust in a classifier's decision depends on its transparency. As data typically spans many dimensions and the reasoning of contemporary models is incomprehensible, new methods of model inspection need to be developed. Specifically, counterfactuals are an upcoming black box explanation approach, where possible 'What if' scenarios are presented that overturn the result to a desired one. However, the mutability of features depends on the situation, thus current algorithms struggle to find expressive scenarios. Integrating the user into the search process can improve this shortcoming.
I developed a visual analytics tool to interactively analyze the mapping of data to decisions and hence explore the feature space for patterns and actionable counterfactuals. Model interpretability is increased by introducing the concept of decision boundaries, i.e. hyper-surfaces that separate the high-dimensional feature space by predicted class, and collating them with human domain knowledge.
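
As a rough illustration of the counterfactual idea described above, the following sketch searches for a nearby point that a black-box classifier maps to a desired class, restricted to features the user marks as mutable. The classifier interface and all parameters are hypothetical assumptions; the interactive visual analytics tool from the talk is not shown here.

    import numpy as np

    def find_counterfactual(predict, x, desired_class, mutable_idx,
                            step=0.05, max_radius=2.0, n_samples=2000, rng=None):
        """Random search for a nearby point that `predict` maps to `desired_class`.

        predict: function mapping an array of shape (n, d) to predicted class labels
        x: original instance, shape (d,)
        mutable_idx: indices of features the user considers changeable
        Returns the closest counterfactual found, or None.
        """
        rng = rng or np.random.default_rng(0)
        best, best_dist = None, np.inf
        radius = step
        while radius <= max_radius:
            candidates = np.tile(x, (n_samples, 1))
            noise = rng.uniform(-radius, radius, size=(n_samples, len(mutable_idx)))
            candidates[:, mutable_idx] += noise
            hits = candidates[predict(candidates) == desired_class]
            if len(hits) > 0:
                dists = np.linalg.norm(hits - x, axis=1)
                i = np.argmin(dists)
                best, best_dist = hits[i], dists[i]
                break  # the first radius with hits yields the closest scenarios
            radius += step
        return best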

Jayasankar Santhosh (SDS-DFKI, Prof. Dengel): Deep Learning Based Learning Analytics and Augmentation

The research is focused on investigating how Deep Learning approaches could benefit the analysis of learning in the educational domain. Due to several limitations, including the difficulty of labeling and the insufficient amount of data, many researchers in the education domain and cognitive science are still utilizing traditional machine learning approaches, such as Support Vector Machines and Random Forests, for learning analysis. The main goal of my research is to resolve this problem by investigating state-of-the-art deep neural networks and proposing a new method adapted to the educational domain for effectively determining student engagement in learning.
As a use case, a student engagement monitoring system is planned to be developed with a feedback mechanism, which could further be used to optimize the learning environment. In addition, a long-term intervention with the feedback mechanism could be used to determine the variation of students' learning performance.

Monday, 08.02.2021, 15:45

Markus Schröder (DFKI GmbH, Smart Data & Knowledge Services (SDS) Group, Prof. Andreas Dengel): Building Knowledge Graphs from Messy Enterprise Data

In absence of a data management strategy, undocumented enterprise data piles up and becomes increasingly difficult for companies to use to its full potential. Because of the data's messiness, there are significant challenges in making the data usable again. In my approach I intend to build enterprise knowledge graphs (KG) by semantically enriching messy data with meaning, using semantic technologies. Such graphs serve as a semantic bridge between domain conceptualization (mostly in people's minds) and raw data (e.g. in storage systems).

Shraddha Gupta (AG Embedded Intelligence, Prof. Lukowicz): Explainable and informed machine learning applications in hot staking resistance welding process

Machine learning applications in manufacturing have several advantages, such as optimizing processes and reducing monitoring and process development costs. The use case in this thesis is focused on the hot staking resistance welding process at Robert Bosch Manufacturing Solutions, a thermo-compacting process that joins materials using pressure and heat. This is typically done in a resistance welding machine with a pair of electrodes, but without any filler material, to achieve electrical as well as mechanical joints. The aim of this thesis is to provide insights into the prediction of quality parameters, the analysis of electrode lifetime prediction, and the prediction of scrap parts using a combination of domain knowledge and machine learning techniques. It also investigates how domain knowledge can be incorporated and represented for the application of ML models.

Monday, 22.02.2021, 15:45

Constantin Seebach (AG Algorithms and Complexity, Prof. Schweitzer): Exponential Time Algorithms for Easy Problems

Algorithms with exponential running times are the best we currently have for NP-complete problems like the satisfiability problem SAT. Analyzing and improving such algorithms is an ongoing area of theoretical research, with important implications for practice, since many interesting problems are NP-complete. This analysis can be taken in a different direction, by taking common exponential algorithm paradigms, like branching on local structures, and applying them to problems which are known to be solvable in polynomial time, i.e. easy problems. This counter-intuitive approach leads to new insights about the power of exponential time algorithms. Upper bounds as well as lower bounds can be shown for various easy problems in this algorithmic framework.

Iuliia Brishtel (DFKI): User's ongoing physiological and mental state recognition using a multimodal sensor approach

The work is focused on the investigation and automatic recognition of the user's cognitive state by measuring its physiological markers.
With the expanding number of low-cost and non-invasive physiological sensors, access to physiological data is becoming easier and the amount of available data is increasing. Previous studies demonstrated that the combination of this data with machine learning algorithms significantly enhances the development of automatic, user-independent models. However, the focus of these studies is heavily directed toward the classification results of the built models, neglecting the meaningfulness of the used physiological features and the appropriateness of the used sensors. This practice raises serious issues for the future reproducibility of reported results.
To overcome this gap, in my first field study I investigated the phenomenon of mind wandering using data from an electrodermal activity sensor and an eye tracker. I not only developed an ML model for mind wandering detection but also used behavioral data to increase the explanatory power of the developed model. The physiological markers are also discussed extensively.

Jack D. Martin (AG Cyber-Physical Systems): Constraint Net Design and Implementation Utilizing Affine Arithmetic Decision Diagrams and Integer Decision Diagrams

In the design and configuration of automotive systems, system-wide dependencies exist between individual components and their respective properties and constraints. These dependencies between properties, along with their constraints, define a Constraint Satisfaction Problem (CSP). Using a CSP, it is possible to analyze the impact of a change in one component and its properties upon other properties in the system. A Constraint Net is designed around two data types, Affine Arithmetic Decision Diagrams (AADDs) and Integer Decision Diagrams (IDDs). Beginning with a model or configuration design, one allows each element of the model to have an associated property. Properties are interrelated by dependency (mathematical) expressions. From the dependency expressions, it is possible to create a CSP, using one or more parsed dependency expressions as a constraint net. This talk will describe how the CSP is solved by a specific Constraint Satisfaction Algorithm (CSA). The CSA is used to check the consistency of system specifications for engineering models and configuration designs. Using the CSA, it is possible to perform bi-directional evaluation, using variables, their domains, and the constraints that must be satisfied. In addition, iterative procedures refine the values of the variables.

Schedule for Summer 2020

The talks will be presented online this term due to the current Corona-/Covid-19 restrictions.
The access codes for the online meeting are sent via the PhD mailing list. If you didn't receive them, please contact us at promotionsprogramm@cs.uni-kl.de!

Monday, 04.05.2020, 15:30

Shailza Jolly (DFKI, Prof. Dengel): Understanding of Vision and Language Systems

In today's era of artificial intelligence, we are surrounded by agents who accompany us in carrying out our daily tasks. The tasks can range from asking about the weather (from Alexa, Siri) to a question about the scene in front of our eyes. However, there are problems, such as biases in the training data and the understanding of linguistic variations, that raise trust issues and lower the chances of deployment in real-world settings. In the first year of my Ph.D. I worked on understanding dataset biases in Visual Question Answering systems and making them robust towards linguistic variations. Please join my presentation, where I will talk about my latest findings; I look forward to your feedback.

Monday, 18.05.2020, 15:30

Jendrik Brachter (AG Algorithms and Complexity): On the Weisfeiler-Leman Dimension for Finite Groups

The group isomorphism problem (GrI) is one of the most fundamental problems in group theory for which we do not have efficient algorithmic tools. In fact, its complexity is not well understood at all and, despite decades of active research, even for very limited classes of groups the best known bounds are only slightly better than the basic n^O(log(n)) bound obtained from guessing generating sets. As GrI is polynomial-time reducible to graph isomorphism it furthermore constitutes another natural candidate for an NP-intermediate problem.

In comparison to graphs, combinatorial tools for the group isomorphism problem are much less developed. For graphs, an important tool in this scope are the Weisfeiler-Leman algorithms. These provide simple but universal and effective combinatorial methods for distinguishing non-isomorphic graphs. They are strongly linked to the descriptive complexity of graphs and, while their limits have been firmly established, they can be very successful in many situations. In this talk, parts of our work on defining and investigating Weisfeiler-Leman algorithms for groups will be presented. I will first give a brief overview of the group isomorphism problem and its connections to graph isomorphism. Afterwards, I will introduce Weisfeiler-Leman algorithms for groups and compare them with their classical analogues. Finally, I will discuss first results on the Weisfeiler-Leman dimension of finite groups. More explicitly, I will construct an infinite family of pairs of groups which agree in many traditional isomorphism invariants but can be distinguished from all other groups by 3-dimensional WL-refinement.
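
For readers who have not seen the classical algorithm, here is a minimal sketch of 1-dimensional Weisfeiler-Leman (colour refinement) on an undirected graph given as an adjacency list. The group-theoretic variants discussed in the talk generalize this refinement idea; the sketch is only meant as background.

    from collections import Counter

    def wl_colour_refinement(adj):
        """1-dimensional Weisfeiler-Leman (colour refinement).

        adj: dict mapping each vertex to a list of its neighbours.
        Returns a stable colouring (dict: vertex -> colour id).
        """
        colours = {v: 0 for v in adj}  # start with a uniform colouring
        while True:
            # New signature = old colour plus multiset of neighbours' colours
            signatures = {
                v: (colours[v], tuple(sorted(Counter(colours[u] for u in adj[v]).items())))
                for v in adj
            }
            # Canonically rename signatures to small integers
            palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
            new_colours = {v: palette[signatures[v]] for v in adj}
            if new_colours == colours:  # partition did not refine further
                return colours
            colours = new_colours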

Monday, 25.05.2020, 15:30

Marc Hauer (Algorithm Accountability Lab): Reduction of negative social long-term consequences of ADM systems in software development processes

Algorithmic decision making systems are finding their way into our society on an ever-increasing scale. It is not uncommon for non-technical errors or side effects to occur that are difficult to foresee in advance and that exceed moral or even legal limits. In the context of my PhD I develop concepts for software development processes that help to eliminate such problems as reliably as possible. Subsequently, I want to test and improve these concepts together with industrial partners and develop materials which allow an efficient training of the concepts.

Monday, 08.06.2020, 15:30

Sebastian Palacio (SDS group at DFKI (Prof. Dengel)): Towards Interpretable Machine Learning Models for Computer Vision Problems

The rising demand for machine learning (ML) models has become a key concern for stakeholders in diverse scenarios, as black-box solutions are continuously being implemented and relied upon. Consequently, an emergent field of ML has focused on intuitive notions of Explainable Artificial Intelligence (XAI), in an effort to fulfill requirements mostly related to safety and legal applications. In this work, current limitations in the field of XAI are being addressed, starting with the establishment of a framework that contextualizes, among others, the notions of “explainability” and “interpretability” for AI. Next, this thesis proposes a new method to generate visual explanations for state-of-the-art image classifiers, such that global patterns existing between the whole dataset and the model (deep convolutional neural networks) can be quantified. Finally, new model architectures are proposed that are “explainable by design”. These models explicitly convey low-level priors, providing richer, more structured predictions while maintaining or outperforming their black-box counterparts.

Monday, 15.06.2020, 15:30

Xiao Wang (AG Embedded Systems): From Synchrony to Asynchrony in Model-Based Design of Embedded Systems

The design of safe and efficient embedded systems is an extremely demanding task. It comprises a variety of activities such as formal verification, simulation, synthesis, consistency checking, etc. Model-based design is a methodology that seeks to incorporate all these design tasks by considering models rather than specific implementations. These models act as intermediate representations and are specified by a plethora of languages and formalisms, which can be classified by their underlying models of computation (MoC). The synchronous MoC is one such class. It is successfully used in the design of embedded systems, mostly because it is well suited for verification and simulation. Single-threaded software can also be easily synthesized from synchronous models. Additionally, some systems require the synchronous MoC because the timing information is crucial to them and thus cannot be removed. For other systems, however, synchronization is extra overhead, and a synchronous model might suffer from over-synchronization. Moreover, the synchronous MoC has its limitations when it comes to multi-threaded software, which is more compatible with the asynchronous MoC. On the other hand, checking whether a synchronous system can be desynchronized is a daunting task, and potential solutions on how to achieve that will be discussed in this talk.

Monday, 22.06.2020, 15:30

Avraam Chatzimichailidis (AG Scientific Computing (Prof. Gauger) / Fraunhofer ITWM): GradVis: Visualization and Second Order Analysis of Optimization Surfaces during the Training of Deep Neural Networks

Current training methods for deep neural networks boil down to very high dimensional and non-convex optimization problems which are usually solved by a wide range of stochastic gradient descent methods. While these approaches tend to work in practice, there are still many gaps in the theoretical understanding of key aspects like convergence and generalization guarantees, which are induced by the properties of the optimization surface. In order to gain deeper insights, a number of recent publications proposed methods to visualize and analyze the optimization surfaces. However, the computational cost of these methods is very high, making it hardly possible to use them on larger networks.
In this talk, I present the GradVis Toolbox, an open source library for efficient and scalable visualization and analysis of deep neural network loss landscapes in Tensorflow and PyTorch. Introducing more efficient mathematical formulations and a novel parallelization scheme, GradVis makes it possible to plot 2D and 3D projections of optimization surfaces and trajectories, as well as high resolution second order gradient information for large networks.
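
The sketch below shows the basic idea behind such visualizations (evaluating the loss on a 2D plane spanned by two random directions around the current parameters) in PyTorch. It is a plain, unoptimized illustration and not the GradVis implementation or its API.

    import torch

    def loss_surface_2d(model, loss_fn, data, span=1.0, steps=21):
        """Evaluate the loss on a 2D slice through parameter space.

        The slice is spanned by two random directions d1, d2 around the current
        parameters; returns a (steps, steps) tensor of loss values.
        """
        inputs, targets = data
        theta = [p.detach().clone() for p in model.parameters()]
        d1 = [torch.randn_like(p) for p in theta]
        d2 = [torch.randn_like(p) for p in theta]
        alphas = torch.linspace(-span, span, steps)
        surface = torch.zeros(steps, steps)
        with torch.no_grad():
            for i, a in enumerate(alphas):
                for j, b in enumerate(alphas):
                    for p, t, u, v in zip(model.parameters(), theta, d1, d2):
                        p.copy_(t + a * u + b * v)  # move to the grid point theta + a*d1 + b*d2
                    surface[i, j] = loss_fn(model(inputs), targets).item()
            for p, t in zip(model.parameters(), theta):  # restore original weights
                p.copy_(t)
        return surface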

Damjan Gjurovski (AG Database and Information Systems, Prof. Michel): Query processing over massive schema-free data

The past years have witnessed a major shift from traditional data management over mostly relational data toward various application-tailored data formats without a fixed schema. Thus, this thesis firstly focuses on computing natural joins over massive streams of JSON documents that do not adhere to a specific schema. By proposing an efficient and scalable partitioning algorithm that uses the main principles of association analysis, patterns of co-occurrence of the attribute-value pairs within the documents are identified. Data is accordingly forwarded and joined using a novel FP-tree–based join algorithm, allowing compact storage and efficient traversal. In this talk a broadening of the future research area is proposed, including an extension of query processing approaches to local joins and other data formats, such as knowledge graphs. Extensive experiments show the purpose and performance of the created algorithms, and finally, a discussion is conducted touching on the topics for the future of the thesis.

Monday, 29.06.2020, 15:30

Mhd Rashed Al Koutayni (AG Augmented Vision, Prof. Stricker): Hardware Acceleration of Deep Neural Networks

Deep learning plays an important role in the field of computer vision, where deep neural network (DNN) based methods are replacing the traditional vision algorithms. Due to their extensive computational requirements, these methods are implemented on Graphics Processing Units (GPUs). However, GPUs are not suitable for practical application scenarios where low power consumption is crucial. Furthermore, the difficulty of embedding a bulky GPU into a small device prevents the portability of such applications to mobile devices. Our main goal is to provide energy-efficient solutions for existing computer vision algorithms. The FPGA is considered a powerful candidate, as it is highly customizable in terms of pipelining, hardware architecture and memory hierarchy. Our experiments have shown so far that our efficient FPGA implementations outweigh their GPU counterparts in terms of runtime speed and energy efficiency. The talk will present the results achieved so far as well as tools and workflows that have been developed to speed up the design process.

Angjela Davitkova (AG Database and Information Systems, Prof. Michel): Optimizing Data Management using Machine Learning Approaches

Recently, the usage of machine learning has expanded considerably, strongly impacting research on the improvement or replacement of database concepts. Firstly focusing on the enhancement of traditional database indexes, this thesis proposes the ML-Index, a memory-efficient Multidimensional Learned (ML) structure for processing point, KNN, and range queries. Using data-dependent reference points, the ML-Index partitions the data and transforms it into one-dimensional values relative to the distance to their closest reference point. Once scaled, the ML-Index utilizes a learned model to efficiently approximate the order of the scaled values in combination with a novel offset scaling method. The future scope of the thesis will include a further extension of the ML-Index, as well as an expansion of the research area towards different data formats and learning-enhanced optimization of query processing. Through a thorough experimental performance comparison and a discussion regarding the future research area, the feasibility and superiority of the ML-Index, as well as a promising direction of this thesis, are shown.
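
A toy sketch of the scaling idea described above: points are mapped to one-dimensional keys via their closest reference point and the distance to it, and a simple learned model approximates the position of a key in the sorted order. All names are made-up assumptions, and the real ML-Index (offset scaling, KNN and range query processing) is considerably more elaborate.

    import numpy as np

    class ToyLearnedIndex:
        """Toy 'learned index' over multidimensional points (illustration only)."""

        def __init__(self, points, ref_points):
            points = np.asarray(points, dtype=float)
            self.refs = np.asarray(ref_points, dtype=float)  # data-dependent reference points
            dists = np.linalg.norm(points[:, None, :] - self.refs[None, :, :], axis=2)
            nearest = dists.argmin(axis=1)                    # partition = closest reference point
            # 1D key: partition id plus distance offset (assumes distances < 1, a toy simplification)
            keys = nearest + dists.min(axis=1)
            order = np.argsort(keys)
            self.keys, self.data = keys[order], points[order]
            # "Learned model": approximate key -> rank with a simple linear fit
            self.slope, self.intercept = np.polyfit(self.keys, np.arange(len(self.keys)), 1)

        def lookup(self, point, window=32):
            """Point query: predict a rank from the key, then scan a small window around it.
            May miss if the linear model is too far off (the window would need to grow)."""
            point = np.asarray(point, dtype=float)
            d = np.linalg.norm(self.refs - point, axis=1)
            key = d.argmin() + d.min()
            pos = int(self.slope * key + self.intercept)
            lo, hi = max(0, pos - window), min(len(self.data), pos + window)
            hits = np.where((self.data[lo:hi] == point).all(axis=1))[0]
            return lo + hits[0] if len(hits) else None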

Monday, 06.07.2020, 15:30

Jan Reich (AG Robotik; Prof. Berns; Fraunhofer IESE): SINADRA – Situation-Aware Dynamic Risk Assessment of Autonomous Vehicles

Assuring an adequate level of safety is the key challenge for the approval of autonomous vehicles (AV). The full performance potential of AV cannot be exploited at present because traditional assurance methods at design time are based on a risk assessment involving worst-case assumptions about the operating environment. Dynamic Risk Assessment (DRA) is a novel technique that shifts this activity to runtime and enables the system itself to assess the risk of the current situation. However, existing DRA approaches neither consider environmental knowledge for risk assessments, as humans do, nor are they based on systematic design-time assurance methods. To overcome these issues, we introduce the model-based SINADRA framework for situation-aware dynamic risk assessment. It aims at the systematic synthesis of probabilistic runtime risk monitors employing tactical situational knowledge to imitate human risk reasoning with uncertain knowledge. To that end, a Bayesian network synthesis and assurance process is outlined for DRA in different operational design domains and integrated in an adaptive safety management architecture. The SINADRA monitor intends to provide an information basis at runtime to optimally balance residual risk and driving performance, in particular in non-worst-case situations. In this talk, the building blocks of the SINADRA framework are presented along with the current state of my PhD project.

Monday, 13.07.2020, 15:30

Qazi Hamza Jan: Safe and Efficient Navigation of Autonomous Shuttle in Pedestrian Path (Robotics Research Lab, Prof. Berns)

Autonomous shuttles traversing pedestrian areas have become popular in the transport industry. Due to the limited width of pedestrian areas, only shuttles can be used for such purposes, which makes self-driving shuttles a meaningful option here. The major concern while driving in a pedestrian area is the safety of the people, due to the fact that people behave unpredictably. For autonomous shuttles moving in such environments, it is efficient to keep their pace while avoiding any collision with pedestrians, who can move irregularly. For this, we propose to use a behavior-based architecture with a Pedestrian Interaction System (PIS). The PIS determines the scheme of interaction with pedestrians. Also, based on the evasion cost, it decides which behaviors to prioritize. It uses tentacle-based evasion to evade pedestrians. It also keeps the pace of the shuttle by recognizing safe conditions in a pedestrian zone. The experiments are performed in a simulated environment.

Monday, 20.07.2020, 15:30

Michael Hohenstein (Heterogenous Information Systems): Leveraging Approximate Query Processing to Realize Progressive Visual Analytics

Progressive Visual Analytics is a relatively new paradigm in the realm of visualization. Its main objective is to develop algorithms and infrastructure to support analysts in exploratory ad hoc data analysis. This means each query should return an (approximate) result within an upper time bound, so that the data exploration can be considered a real-time process. Additionally, the analyst should be able to steer the query by tuning parameters of the computation. The keystone of the progressive paradigm is to instantly return an approximate result which is (progressively) updated in the background. Ideally, some notion of (partial) re-use of earlier results that intersect with a live query should be in place to further reduce the response time of later queries. PVA is closely related to approximate query processing and streaming applications with a focus on real-time interactivity, which is usually reached by (partial) re-use of prior results, data or process chunking, sampling and using fast algorithms that replace exact computations with approximate results. We want to observe the paradigm of progressive data science through the lens of database systems, trying to improve and tailor existing approaches to create an efficient infrastructure for PVA systems.
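
A minimal sketch of the progressive principle: return an approximate aggregate after the first chunk of data and refine it as further chunks arrive, so a frontend can render and update the result in real time. The chunking and the aggregate (a mean) are deliberately simplistic assumptions, not the infrastructure discussed in the talk.

    import numpy as np

    def progressive_mean(chunks):
        """Yield progressively refined estimates of the mean of a column.

        chunks: list of 1D numpy arrays (e.g. partitions of a large table).
        After each chunk, yields (estimate, fraction_of_data_seen) so a frontend
        can show the approximate result immediately and update it over time.
        """
        total_sum, total_count = 0.0, 0
        total_expected = sum(len(c) for c in chunks)
        for chunk in chunks:
            total_sum += float(np.sum(chunk))
            total_count += len(chunk)
            yield total_sum / total_count, total_count / total_expected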

Monday, 27.07.2020, 15:30

Andreas Kölsch (AG Augmented Vision, Prof. Stricker): Monocular Human Pose Estimation In The Wild

Monocular human pose estimation is a fundamental problem in computer vision with a multitude of application areas, such as autonomous driving, medicine, gaming and more. This makes it an active field of research. While most modern approaches are based on deep-learning models which are trained on large-scale datasets, the specific implementations are very different. This is mostly owed to the many distinct dimensions of the problem which depend on the use case or application area. The approaches can be classified into single-person/multi-person, 2D/3D, skeleton/shape, single image/video, constrained/unconstrained environment and online/offline pose estimation methods. The specific subproblems which are given by combinations of aforementioned dimensions pose unique challenges which require a careful algorithm design to address these challenges. In this talk, I will give an overview of some popular methods which were designed for particular use cases and I will present our own approach for single-person in-the-wild 2D pose estimation which combines the usage of convolutional neural networks and graph networks to exploit the inherent graph structure of the human skeleton. Lastly, I will provide an outlook on planned future works.

Monday, 24.08.2020, 15:30

Ahmad Adee (AG Software Engineering: Dependability; Prof. Liggesmeyer): Model-based System Analysis Techniques to determine propagation paths of functional insufficiencies in software-intensive systems

The research focuses on the application of model-based system analysis techniques to address functional insufficiencies in software-intensive systems used in open context environments and on determining probabilistic ways to model the uncertainties. Open context systems are those in which a complete perception of the environment is not possible and which therefore may operate beyond their original design intent. Functional insufficiencies denote deviations from the nominal behaviour of a system that do not stem from a malfunction of one or multiple components, i.e. a classical fault. A typical example is a camera in a highly automated vehicle that should prevent a collision with a human being, but that correctly detects a human person in only 99.9% of the cases. Model-based system analysis techniques include component fault trees as well as other modeling artifacts, which extend the components of a functional or technical architecture model of the system with error propagation information. Whereas a classic model-based safety analysis often limits itself to failures of one or multiple components, the open-context nature of autonomous systems forces us to also consider the safety implications of functional insufficiencies.

Naghmeh Ghanooni (AG Prof. Kloft): Deep Extreme SNP prediction

Single Nucleotide Polymorphisms (SNPs) are the most common and simplest sequence variants in single bases of DNA in humans. A DNA sequence consists of a chain of four nucleotide bases: A, C, G, and T. An SNP consists of a difference in a nucleotide of paired chromosomes in an individual. For example, a cytosine (C) nucleotide may be replaced by the nucleotide thymine (T) in a certain position of the DNA. SNPs occur on average once in every 1000 nucleotides. Thus, one's genome contains an average of 4 to 5 million SNPs, which can be unique or shared between individuals. Some SNPs are associated with certain diseases such as diabetes or cancer, or merely with certain genetic differences or traits. Measuring and studying the SNPs of individuals is of huge importance to the study of human health. Indeed, this might help us find the disease-inducing genes that are inherited within families, or even predict a person's response to certain medicines.
In this project, we analyse how to predict the outcome of thousands of specific SNPs starting only from phenotypical image data: we implement a deep learning approach to extract features from retinal fundus images from the UK Biobank to predict all the SNPs on a chromosome. Since the number of SNPs can be counted in millions, our model can be categorized as a novel application of extreme multi-label classification. However, there are some differences: the SNP prediction problem deals with both multi-label and multi-class classification. The multi-class aspect comes from the fact that each SNP can have two possible alleles and each chromosome has two copies in the human genome, resulting in 3 classes: 0, 1, and 2 (e.g. CC, CT-TC, or TT). The problem is also multi-label because for each sample we can get more than one SNP occurrence. With a view to providing a better qualitative understanding of the genetic factors driving disease and other biological phenomena, we are also interested in mining all the genetic differences (in terms of SNPs) between specific pairs of individuals, beyond the well-known ones that give rise to well-understood inheritable traits such as the colour of the skin or the eyes.
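
A minimal sketch of the output structure this implies: a shared image encoder followed by a per-SNP three-class head over the genotypes 0/1/2, trained with cross-entropy over all SNPs. The encoder, the dimensions, and the data handling are placeholders, not the project's actual model.

    import torch
    import torch.nn as nn

    class SNPPredictor(nn.Module):
        """Predict a genotype class (0, 1, or 2) for each of n_snps SNPs from an image embedding."""
        def __init__(self, encoder, embedding_dim, n_snps):
            super().__init__()
            self.encoder = encoder                      # e.g. a CNN over retinal fundus images
            self.head = nn.Linear(embedding_dim, n_snps * 3)
            self.n_snps = n_snps

        def forward(self, images):
            features = self.encoder(images)             # (batch, embedding_dim)
            logits = self.head(features)                # (batch, n_snps * 3)
            return logits.view(-1, self.n_snps, 3)      # one 3-class distribution per SNP

    def snp_loss(logits, genotypes):
        """Cross-entropy averaged over all SNPs; genotypes: (batch, n_snps) with values in {0, 1, 2}."""
        return nn.functional.cross_entropy(logits.reshape(-1, 3), genotypes.reshape(-1))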

Schedule for Winter 2019/20

Monday, 18.11.2019, 15:30 in 34-420 (!!!)

Peter Zeller (AG Software Technology): Tool Supported Specification and Verification of Highly Available Applications

Today, information systems are often distributed to achieve high availability and low latency. These systems can be realized by building on a highly available database to manage the distribution of data. However, it is well known that high availability and low latency are not compatible with strong consistency guarantees. For application developers, the lack of strong consistency on the database layer can make it difficult to reason about their programs and ensure that applications work as intended.

We address this problem from the perspective of formal verification. We present a specification technique which allows specifying functional properties of the application. In addition to data invariants, we support history properties, which allow relating past events, including invocations of the application API and operations on the database.

To address the verification problem, we have developed a proof technique that handles concurrency using invariants and otherwise reduces the problem to sequential verification. The system semantics, technique and its soundness proof are all formalized in Isabelle/HOL. Additionally, we have developed a tool named Repliss which uses the proof technique to enable partially automated verification and testing of applications. For verification, Repliss generates verification conditions via symbolic execution and then uses an SMT solver to discharge them.

Alireza Koochali (DFKI, Prof. Dengel): The application of generative models in probabilistic machine learning

In many sensitive domains like finance, health care or climate prediction, it is vital to determine the statistical model uncertainty. Probabilistic machine learning aims to learn from data while quantifying model uncertainty. On the other hand, generative models are a class of statistical methods that can learn an unknown probability distribution from its samples to generate artificial data. Recently, with the introduction of neural network-based generative models like Variational Auto-Encoders (VAEs) and Generative Adversarial Networks (GANs), this field has drawn a lot of attention. However, most of the efforts are focused on generating realistic artificial data. In my Ph.D., I aim to investigate the possibility and the extent of utilizing the generative models' power for probabilistic machine learning. In this presentation, I am going to address my research in the direction of probabilistic forecasting and also present my recent publication on one-step-ahead probabilistic forecasting using GANs. Finally, I will discuss the further direction of research on using generative models for probabilistic machine learning.

Monday, 02.12.2019, 15:30 in 48-680

Mareike Bockholt (Algorithm Accountability Lab): Towards a process-driven network analysis

In the recent decades, there has been an increasing interest in analyzing the behaviour of complex systems. A complex system consists of independent entities interacting with each other such that the system shows a so-called emergent behaviour, a behaviour which cannot be explained by the behaviour of the single entities, but only by their interactions. A popular approach for analyzing such systems is a network analytic view where the system is represented by a graph structure: nodes represent the system's entities, edges their interactions. A large toolbox of network analytic methods, such as measures for structural properties, centrality measures, methods for identifying communities, etc., is readily available to be applied on any network structure -- and one is tempted to do so. However, it is often overlooked that a network representation of a system and the (technically applicable) methods contain assumptions which need to be met, otherwise the results are not interpretable or even misleading. The most important assumption of any network representation is the presence of indirect effects: if A has an impact on B, and B has an impact on C, a network representation assumes that also A has an impact on C. If such indirect effects are not present in the system, a network representation is meaningless. The presence of indirect effects, however, implies that "something" is flowing through the network; otherwise indirect effects are not explicable. Those network flows (we also call them network processes) can be the propagation of information in social networks, the spreading of infections, but also entities using the network as infrastructure as in transportation networks. For a meaningful network analysis, the network process, the network representation and the network measures cannot be chosen independently [Dorn2012]: the network representation and the network process need to match (investigating how an infection might spread by using an online social network such as Facebook is pointless), and the network method and the network process need to match (applying a measure assuming that the process uses shortest paths is pointless for the process of information spreading) [Borgatti2005].

We claim that the network process dictates the suitable network representations and the suitable network methods and call this approach "process-driven network analysis". In order to show the necessity of this approach, we use four data sets of real-world processes. In this work, as a first step, we show that the assumptions of standard network measures about the properties of a network process are not fulfilled by the real-world process data. As a second step, we compare the network usage pattern of real-world processes to the usage pattern of the corresponding shortest paths and random walks.

Monday, 16.12.2019, 15:30 in 48-680

Hannan Ejaz Keen (AG Robotik, Prof. Berns): Autonomous Navigation and Mapping in Disastrous Environment using Unmanned Aerial Vehicle

Unmanned aerial vehicles (UAVs) have been extensively encouraged for use in industrial, civil and defense applications. During natural disasters such as floods, earthquakes and wildfires, UAVs are used to monitor situations in real time. Autonomous UAVs for critical tasks in disastrous environments require precise navigation and mapping of the environment. Numerous Deep Reinforcement Learning (DRL) based navigation approaches and vision- or laser-based reactive techniques have been proposed in the literature. However, these approaches have their own shortcomings. Our current robotic system (Octocopter) has been upgraded to provide all necessary sensor data. This research proposes work towards efficient navigation through disastrous environments by optimizing current DRL techniques, and towards mapping the environment robustly in order to extract critical information about the calamity.

Yongzhi Su (DFKI, Prof. Stricker): Multi-State Object Pose Estimation for AR Assisted Assembly

The detection of objects in images and their classification into one of a number of predefined object classes has been one of the most researched topics in computer vision for several decades. A related problem with specific constraints and challenges is object state estimation, which deals with objects that consist of several removable or adjustable parts. Automatic recognition of an object's state along with its pose directly from camera images can enable AR applications that assist in the assembly/disassembly and maintenance of these objects while increasing safety and helping to detect and prevent human errors.

Traditionally, handcrafted features such as SIFT or HOG were used to train different types of classifiers for this task. However, industrial objects are usually textureless, and such features still perform poorly on them. The field was revolutionized by deep learning and Convolutional Neural Networks (CNNs), which can be trained to generate more complex features. The first task of my PhD is to design a CNN that is able to detect an object in multiple states and regress its pose. In this talk, I will first give a brief overview of CNN-based object detection and object pose estimation methods, and then present our first results as well as the remaining challenges.
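
As a rough illustration of such an architecture, the following sketch combines a shared convolutional backbone with a state-classification head and a pose-regression head; the layer sizes, the 7-dimensional pose parameterization (translation plus quaternion) and the number of states are hypothetical choices, not the network developed in this PhD.

    # Minimal sketch (assumption: a shared CNN backbone with one head classifying
    # the object state and one regressing a pose; the real architecture differs).
    import torch
    import torch.nn as nn

    class MultiStatePoseNet(nn.Module):
        def __init__(self, num_states: int):
            super().__init__()
            self.backbone = nn.Sequential(          # toy convolutional feature extractor
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.state_head = nn.Linear(32, num_states)  # which assembly state is visible
            self.pose_head = nn.Linear(32, 7)            # translation (3) + quaternion (4)

        def forward(self, x):
            f = self.backbone(x)
            return self.state_head(f), self.pose_head(f)

    net = MultiStatePoseNet(num_states=4)
    logits, pose = net(torch.randn(2, 3, 128, 128))
    print(logits.shape, pose.shape)   # torch.Size([2, 4]) torch.Size([2, 7])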

Monday, 06.01.2020, 15:30 in 48-680

Nico Schäfer (AG Databases and Information Systems, Prof. Michel): Partially Materializable Delta Trees for Efficient Data Wrangling of Semi-Structured Contents

We propose delta trees to boost efficiency and reduce the storage requirements of iterative data exploration and data wrangling tasks over massive, semi-structured datasets. During such tasks, data is filtered, projected, joined, and converted in multiple successive or independent steps, driven by data scientists or higher-level applications. While the original datasets can often not be discarded, delta trees represent only the changes to the original data instead of creating largely redundant copies. With delta trees, we are able to reduce storage requirements and query execution time for various data manipulation operations, while maintaining acceptable query times for others. We report on a first experimental study over a dataset of Twitter tweets, showing that the expected vast savings in storage consumption can be enjoyed with negligible computational overhead compared to full data duplication.
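
The core idea of keeping the base data untouched and materializing only the changes can be sketched as follows; the DeltaNode class and its key-level granularity are a simplified assumption, not the actual delta tree implementation.

    # Minimal sketch (assumption: a delta node stores only overridden/removed keys
    # of a JSON-like record and resolves reads against the unmodified base).
    class DeltaNode:
        _REMOVED = object()

        def __init__(self, base):
            self.base = base      # original record, never copied or modified
            self.delta = {}       # only the changes introduced by wrangling steps

        def set(self, key, value):
            self.delta[key] = value

        def remove(self, key):
            self.delta[key] = self._REMOVED

        def get(self, key, default=None):
            v = self.delta.get(key, self.base.get(key, default))
            return default if v is self._REMOVED else v

    tweet = {"id": 1, "text": "hello", "lang": "en", "user": "a"}
    view = DeltaNode(tweet)
    view.set("text", "hello, cleaned")   # projection/conversion step
    view.remove("user")                  # filtered-out attribute
    print(view.get("text"), view.get("user"), view.get("lang"))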

Emilia Cioroaica (Fraunhofer IESE, Prof. Liggesmeyer): Building Trust in Ecosystems and Ecosystem components

In the context of Smart Ecosystems, systems engage in dynamic cooperation with other systems to achieve their goals. Expedient operation is only possible when all systems cooperate as expected. This requires a level of trust between the components of the ecosystem. New systems that join the ecosystem therefore first need to build up a level of trust. Humans derive trust from behavioral reputation in key situations. In Smart Ecosystems (SES), the reputation of a system or system component can also be based on observation of its behavior. In my thesis I will introduce a method and a test platform that support the virtual evaluation of decisions at runtime, thereby supporting trust building within SES. The key idea behind the platform is that it employs and evaluates Digital Twins, which are executable models of system components, to learn about component behavior in observed situations. The trust in the Digital Twin then builds up over time based on the behavioral compliance of the real system component with its Digital Twin. The technical contribution is the development of a domain-specific language that does not allow the Digital Twins to discover that they are under evaluation.
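
A very rough sketch of trust building from behavioral compliance could look as follows; the numeric trust score, the tolerance threshold and the update rule are hypothetical simplifications for illustration, not the mechanism of the platform described above.

    # Minimal sketch (assumption: trust is a running score updated by how closely the
    # observed behaviour of a component matches the prediction of its Digital Twin).
    def update_trust(trust, twin_prediction, observed, tolerance=0.1, gain=0.05):
        """Increase trust when the real component complies with its twin, decrease otherwise."""
        compliant = abs(twin_prediction - observed) <= tolerance
        trust = trust + gain if compliant else trust - 2 * gain   # penalise deviations harder
        return min(1.0, max(0.0, trust))

    trust = 0.5                                   # neutral trust for a newly joined component
    observations = [(1.0, 1.02), (0.8, 0.79), (1.1, 1.6), (0.9, 0.93)]
    for predicted, observed in observations:
        trust = update_trust(trust, predicted, observed)
    print(round(trust, 2))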

Monday, 20.01.2020, 15:30 in 48-680

Dominique Mercier (DFKI, Prof. Dengel): Understanding DNNs: Towards interpretable neural network for time-series analysis

In recent years, deep neural networks have been used in different domains for different tasks. It has been shown that these networks can achieve very good results, but their use is limited by the lack of interpretability. Therefore, many resources have been invested to develop methods to interpret these networks. The main focus of this research, however, relates to applications in the field of image processing; in other areas such as time series analysis, there are significantly fewer methods that contribute to the interpretability of the networks.

During my PhD work I aim to develop methods which are suitable for interpreting networks for time series analysis. I will look at different perspectives, including intrinsic as well as post-hoc methods. Methods used in the image domain will also be investigated, and their applicability to time series analysis, after necessary modification and extension, will be considered. The aim of this work is to provide a set of methods that can be used to better understand the networks in the field of time series analysis. This not only serves to facilitate debugging but also targets the end user, who needs to understand the system in order to use it. One of the biggest challenges is to design the methods according to the user's needs. In this presentation, I will address the challenges and difficulties of the topic and present first results.
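
As one example of carrying a post-hoc method from the image domain over to time series, the following sketch adapts occlusion-based saliency to a univariate signal; the windowed mean-masking and the toy black-box model are illustrative assumptions, not a result of this work.

    # Minimal sketch (assumption: occlusion-based saliency, a post-hoc method from
    # the image domain, adapted to a univariate time series and a black-box model).
    import numpy as np

    def occlusion_saliency(model, series, window=10):
        """Importance of each time step = output drop when a window around it is masked."""
        baseline = model(series)
        saliency = np.zeros_like(series)
        for start in range(0, len(series), window):
            masked = series.copy()
            masked[start:start + window] = series.mean()   # occlude one segment
            saliency[start:start + window] = baseline - model(masked)
        return saliency

    # Toy black-box "model": responds to the amplitude of the second half of the signal.
    model = lambda s: float(np.abs(s[len(s) // 2:]).mean())
    t = np.linspace(0, 1, 200)
    signal = np.concatenate([np.zeros(100), np.sin(20 * t[:100])])
    print(occlusion_saliency(model, signal).round(2)[::20])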

Christopher Kohlstruck (AG Networked Systems, Prof. Gotzhein): rmin-Routing - Discovery and Operation of Routes in Wireless Ad-hoc Networks with Specified Statistical Minimum Reliabilities

We propose a new approach for Quality-of-Service routing in wireless ad-hoc networks called rmin-routing, with the provision of a statistical minimum route reliability as the main route selection criterion. The discovery of rmin-routes is based on a network model with statistical link reliabilities, which are combined into path reliabilities. The link reliabilities are obtained using a topology exploration algorithm based on packet probing. To achieve specified minimum route reliabilities, we improve the reliability of individual links by well-directed retransmissions, to be applied during the operation of routes. To select among a set of candidate routes, we define and apply route quality criteria concerning network load. Rmin-routing and the supporting layers are implemented in a simulation environment as well as on a real-world WLAN-based testbed. Both are used for the evaluation and further improvement of the protocol stack. The most recent contribution is a demonstrator application to control and observe experiments in real time.
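
The interplay of link reliabilities, path reliabilities and retransmissions can be sketched as follows; the independence of links, the uniform split of the reliability budget across links and the example values are simplifications for illustration only.

    # Minimal sketch (assumptions: independent links, path reliability as the product
    # of link reliabilities, and k transmissions raising a link from p to 1-(1-p)^k).
    import math

    def path_reliability(link_ps):
        r = 1.0
        for p in link_ps:
            r *= p
        return r

    def transmissions_needed(p, p_target):
        """Smallest k such that 1 - (1 - p)**k >= p_target for a single link."""
        return math.ceil(math.log(1 - p_target) / math.log(1 - p))

    links = [0.9, 0.8, 0.95]                      # measured statistical link reliabilities
    r_min = 0.9                                   # specified minimum route reliability
    per_link_target = r_min ** (1 / len(links))   # one simple way to split the budget
    plan = [transmissions_needed(p, per_link_target) for p in links]
    print(path_reliability(links), plan)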

Monday, 03.02.2020, 15:30 in 48-680

Steve Dias Da Cruz (DFKI, Prof. Stricker; IEE S.A.): Development and Evaluation of Deep Learning Methodologies for Safety Critical Sensor Applications in the Automotive Industry

For most automotive applications, the training data needed for deep learning methods implies very high measurement and annotation effort. Deep neural networks (DNNs) trained in a single environment take non-relevant characteristics into account in an uncontrolled way, and therefore data must be recorded repeatedly for different environments. Consequently, the available means to reduce the required training effort are limited. This project will investigate and develop methods for invariant-salient information separation, which will improve the robustness and invariance of DNNs to changes irrelevant to the application problem. The efficiency of the resulting background-invariant DNN will be tested on a camera system in the vehicle interior to classify and detect occupancy and passengers.

In this talk I will introduce the challenges for the vehicle interior regarding generalization and robustness of machine learning models. I will present SVIRO, a recently released synthetic dataset for the vehicle interior accepted for publication at WACV'20. Finally, I will talk about future investigations regarding autoencoders and disentangled latent space representations to mitigate the aforementioned challenges.
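
As a rough sketch of what a disentangled latent space representation could look like, the following autoencoder splits its code into a task-salient part and a background part; the architecture, dimensions and training setup are hypothetical and not the models investigated in this project.

    # Minimal sketch (assumption: an autoencoder whose latent code is split into a
    # task-salient part and a background/environment part; not the project's model).
    import torch
    import torch.nn as nn

    class SplitLatentAE(nn.Module):
        def __init__(self, dim_in=784, dim_salient=8, dim_background=8):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(dim_in, 128), nn.ReLU(),
                                         nn.Linear(128, dim_salient + dim_background))
            self.decoder = nn.Sequential(nn.Linear(dim_salient + dim_background, 128),
                                         nn.ReLU(), nn.Linear(128, dim_in))
            self.dim_salient = dim_salient

        def forward(self, x):
            z = self.encoder(x)
            z_salient, z_background = z[:, :self.dim_salient], z[:, self.dim_salient:]
            return self.decoder(z), z_salient, z_background

    # A downstream classifier (e.g. seat occupancy) would be trained on z_salient only,
    # while reconstruction keeps z_background responsible for the environment.
    recon, zs, zb = SplitLatentAE()(torch.randn(4, 784))
    print(recon.shape, zs.shape, zb.shape)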

Ahmet Firintepe (DFKI, Prof. Stricker): Deep learning based visual head and glasses tracking

Deep learning approaches show remarkable results and advance the state of the art to solve future problems with challenging requirements. For the head-pose estimation problem, current deep-learning techniques show high accuracy.

Dataglasses can be worn like typical glasses, but allow virtual content to be shown directly in front of the wearer's eyes. A convincing superimposition while driving requires tracking the orientation and position of the dataglasses inside the car, which is challenging due to high accuracy and low latency requirements. Therefore, a deep-learning-based pose and position estimation algorithm shall increase the accuracy by combining head pose estimation with object-based, i.e. glasses-based, tracking using visual data.

Schedule for Spring 2019

Tuesday, 26.02.2019, 13:45 in 48-680

Christian Jilek (DFKI GmbH & Knowledge-Based Systems Group, Prof. Dengel): Self-Organizing Context Spaces to Support Information Management and Knowledge Work

Most knowledge workers have already faced the problem of their personal information sphere (i.e. files, mails, bookmarks, folder hierarchies, etc.) becoming cluttered with information that has meanwhile become irrelevant, because it is used in many different contexts without time for rehashing or tidying up. Since 2005, Semantic Desktop (SD) research has focused on better supporting personal information management and knowledge work activities. For instance, by allowing resources on the user's computer to be easily interconnected, they are more closely aligned with the user's mental model. Nevertheless, there is still high potential for increasing the automation of such support systems.

In my PhD project, I intend to combine SD technology with Explicated User Context and measures of Managed Forgetting to investigate and provide new levels of user support inspired by human forgetting. By the latter, we understand an escalating set of measures overcoming the binary keep-or-delete paradigm: they range from temporal hiding, to data condensation, to adaptive synchronization, archiving and deletion. The ultimate goal is to develop a self-(re)organizing information system that supports knowledge workers depending on their different contexts, e.g. by allowing them to better focus on the current task or by making the large number of user contexts emerging and evolving over time (as well as their content) easier to handle.

In this talk, I will give insights into my PhD topic's challenges, which touch several areas of computer science and especially artificial intelligence. In addition, first results, also including proof-of-concept implementations, will be presented.

Tuesday, 12.03.2019, 13:45 in 48-680

Rodrigo Alves (AG Machine Learning, Prof. Kloft): Matrix Completion and Learning for the Sciences: When a blockbuster helps coefficient predictions

The aim of this project is to research, improve and apply machine learning methods based on matrix completion to pressing problems from the sciences. Scientific questions that we want to tackle are, for example: How can we predict the breeding value of plants in the plant sciences? How can we predict the activity of chemical compounds? These and many other problems in the sciences can be modelled through matrices or tensors of which only a subset of the entries is observed, because the remaining entries are expensive or impossible to obtain. In the PhD project, we are researching methods that predict the missing entries of the respective matrices or tensors and will investigate their application in the sciences, including plant sciences and chemistry. In this talk we will present first results and the research perspectives.
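
A minimal sketch of matrix completion on synthetic data is given below, fitting a low-rank factorization by gradient steps on the observed entries only; the rank, the observation ratio and the optimization scheme are illustrative assumptions, not the methods researched in this project.

    # Minimal sketch (assumption: low-rank matrix completion fitted by gradient steps
    # on the observed entries only; stand-in data, not the plant or chemistry data).
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, rank = 30, 20, 3
    truth = rng.normal(size=(m, rank)) @ rng.normal(size=(rank, n))  # ground-truth low-rank matrix
    mask = rng.random((m, n)) < 0.3                                   # only 30% of entries observed

    U, V = rng.normal(scale=0.1, size=(m, rank)), rng.normal(scale=0.1, size=(n, rank))
    lr = 0.01
    for _ in range(2000):
        residual = mask * (U @ V.T - truth)       # error on observed entries only
        U, V = U - lr * residual @ V, V - lr * residual.T @ U
    rmse = np.sqrt(((U @ V.T - truth)[~mask] ** 2).mean())
    print("RMSE on unobserved entries:", round(rmse, 3))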

Tuesday, 21.05.2019, 13:45 in 48-680

Rene Schuster (DFKI, Prof. Stricker): Advances in 3D Motion Estimation for Driving Scenarios

Intelligent vehicles for assisted and autonomous driving will define the future of transportation. Precise visual perception is a key challenge in enabling these technologies. Estimating the motion of the environment is one of the important core components of autonomous vehicles and highly assisted driving, e.g. to avoid collisions or to predict the actions of other traffic participants. The image-based full 6D estimation of 3D geometry and 3D motion is known as the scene flow problem. Scene flow provides a detailed and powerful representation of the environment. In the state of the art, due to its high complexity, scene flow is often replaced by lower-dimensional motion representations (e.g. optical flow). However, under special assumptions or in controlled indoor environments, some existing scene flow algorithms can achieve impressive results. This work is embedded in the conflict between speed, robustness, and accuracy, pushing the limits of 3D motion estimation, especially in the context of traffic. The efforts are centered around the following questions: How can successful concepts of 2D optical flow estimation be transferred to the more complex scene flow problem? Can deep learning improve scene flow estimation despite the very limited availability of data?

Moritz Lichter (AG Algorithms and Complexity, Prof. Schweitzer): Walk refinement, walk logic, and the iteration number of the Weisfeiler-Leman algorithm

The Weisfeiler-Leman algorithm is a combinatorial algorithm on graphs that can be used to test graphs for non-isomorphism. It is employed as a subroutine in Babai's quasipolynomial-time algorithm for graph isomorphism, which makes graph isomorphism one of the rare candidates for NP-intermediate problems. Beyond that, there is a strong connection between the Weisfeiler-Leman algorithm, a certain logic, and an associated game played on graphs. The algorithm repeatedly applies a refinement routine, the so-called Weisfeiler-Leman refinement, to a graph until this process stabilizes.

In this talk I will present results to appear at LICS '19 regarding the classical, 2-dimensional Weisfeiler-Leman refinement: We show that the classical Weisfeiler-Leman algorithm stabilizes n-vertex graphs after at most O(n*log(n)) iterations, reaching the best known lower bound of Ω(n) up to a logarithmic factor. This implies that formulas of quantifier depth O(n*log(n)) suffice to distinguish two graphs in 3-variable first-order logic with counting (given that the graphs are distinguishable at all). For this we exploit a new refinement based on counting walks and argue that its iteration number differs from the classical Weisfeiler-Leman refinement by at most a logarithmic factor. We then prove matching linear upper and lower bounds on the number of iterations of the walk refinement. This is achieved with an algebraic approach that exploits properties of semisimple matrix algebras. (Joint work with Ilia Ponomarenko and Pascal Schweitzer.)
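
For intuition on refinement-until-stabilization and iteration counting, the following sketch implements the simpler 1-dimensional colour refinement on a small graph; the talk itself concerns the 2-dimensional Weisfeiler-Leman refinement and the walk refinement, which are not reproduced here.

    # Minimal sketch (assumption: 1-dimensional colour refinement; the stabilisation
    # loop is analogous to the higher-dimensional refinements discussed in the talk).
    def colour_refinement(adj):
        """adj: dict node -> iterable of neighbours. Returns (stable colouring, #iterations)."""
        colours = {v: 0 for v in adj}                      # start with a uniform colouring
        for iteration in range(1, len(adj) + 1):
            signatures = {v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
                          for v in adj}
            # Relabel signatures with small integers to obtain the refined colouring.
            palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
            new_colours = {v: palette[signatures[v]] for v in adj}
            if len(set(new_colours.values())) == len(set(colours.values())):
                return colours, iteration - 1              # stable: no colour class was split
            colours = new_colours
        return colours, len(adj)

    path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}          # a path on four vertices
    print(colour_refinement(path))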

Tuesday, 04.06.2019, 13:45 in 48-680

Kilian Werner (AG Scientific Visualization, Prof. Garth): Task-Based Visualization Methods for Scalable Analysis of Large Data

With the increasing size and complexity of measurements and simulations of scientific phenomena, scientific visualization and visual analysis are becoming ever more important tools for scientific data exploration, understanding and interpretation. At the same time, well-established methods and implementations in this field fail to handle this increased size and complexity of data within feasible runtime and memory constraints. Efforts to scale these methods through parallelism and onto distributed memory systems, accelerators and clusters have therefore received increasing attention recently, but usually stayed at a machine-oriented low level which requires expensive custom implementations for each individual application. This project evaluates how well-established methods for visualizing scientific data can be tailored to and implemented in a high-level abstraction for formulating parallel and distributed algorithms: the task-parallel ParalleX paradigm. The ultimate goal of these efforts is to provide a proof of concept for a portable (because high-level) algorithmic structure for a task-parallel, scalable pipeline of common scientific visualization methods for large and complex data.

In this talk I will first explain one of these methods on which work has already started, namely contour tree construction, then the ParalleX paradigm and its implementation in the HPX framework, and finally my ongoing efforts to reinvent, tailor and implement the former within the latter.

Bo Zhou (DFKI, Embedded Intelligence, Prof. Paul Lukowicz): Textile Pressure Mapping (TPM) for Pervasive and Wearable Activity Recognition: Sensing, Framework and Applications

Planar force or pressure is a fundamental physical aspect of any people-vs-people and people-vs-environment activities and interactions. It is as significant as the more established linear and angular acceleration (normally acquired by inertial measurement units). However, studies involving planar pressure are still a niche within the discipline of activity recognition, using ad-hoc systems and data analysis methods.

This dissertation systematically investigates the use of planar pressure distribution sensing for pervasive and wearable activity recognition. We propose a generic Textile Pressure Mapping (TPM) framework that encapsulates (1) design knowledge and guidelines, (2) a multi-layered toolkit including hardware, software and algorithms, and (3) an ensemble of examples. Through validation with various empirical studies, the unified TPM framework covers the full scope of activity recognition including the ambient, object and wearable subspaces.

The hardware part defines a general architecture with separate implementations for the large-scale and mobile directions. The software toolkit consists of four heterogeneous tiers: driver, data processing, machine learning, and visualization and feedback. With the TPM framework, other researchers and developers can evaluate the TPM sensing modality in their application scenarios.

Tuesday, 16.07.2019, 13:45 in 48-680

Sarwar Hussain Paplu (AG Robotics, Prof. Berns): Emotional Intelligence in Human-Robot Interaction

Emotional Intelligence (EI) is a form of social intelligence involving the ability to monitor one's own and others' feelings and emotions. Existing AI systems lack the perception of human emotions, intentions, personality traits or behavioral patterns in a typical interaction process. Robotic systems employed to work alongside humans often cannot keep pace with them in terms of expressiveness and attention, due to the systems' inability to understand or decode the emotions of their co-workers. State-of-the-art machine learning approaches focusing on computer vision and pattern recognition can be of great use in recognizing psychological and emotional cues. In this talk, I will discuss ideas for the implementation of a system capable of understanding and expressing appropriate emotional states. The development of an emotion-based control architecture is vital in this regard. The perception and control system of our existing robotic system (Robin) plays an important role in establishing the interaction between the robot and a human. A short overview of my ongoing efforts in the direction of the proposed research will also be presented. Other areas I work on include the recognition of personality traits using nonverbal cues, the generation of suitable gestures and facial expressions, and social distance awareness. The combination of psychological, emotional and linguistic aspects in human-robot interaction paves the way for a robust social robotic system that exhibits more natural behavior.

Sungho Suh (AG Embedded Intelligence, Prof. Lukowicz): Improving Classification Performance under Imbalanced Data Conditions

The data imbalance problem in classification is frequent but challenging. In real-world datasets, the class distribution is often imbalanced, and classification under imbalanced data conditions induces a bias towards the majority class. The data imbalance problem appears in many domains such as computer vision, medical diagnosis, fault detection, and so on. For fault detection, normal-condition data are more common than faulty-condition data in real manufacturing environments. This dissertation investigates the use of data augmentation to improve classification performance under imbalanced data conditions. I propose a generative oversampling method for bearing fault detection in induction motors using Generative Adversarial Networks (GANs), as well as a data augmentation method for four benchmark datasets: MNIST, EMNIST, Fashion-MNIST, and CIFAR-10. Additionally, I propose robust shipping label recognition and validation for logistics using object detection. In this talk, I will explain the extended versions of GANs, the ongoing framework of the fault detection algorithm, the results of the proposed method, and the research plan.
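
The generative oversampling idea can be sketched on toy two-dimensional data: a small GAN is fitted to the scarce minority class and its samples are appended until the classes are balanced; the network sizes, the training loop and the data below are illustrative assumptions, not the extended GANs from the dissertation.

    # Minimal sketch (assumption: a tiny GAN learns the minority-class distribution and
    # its samples are added to balance the classes; toy 2-D data, not bearing signals).
    import torch
    import torch.nn as nn

    minority = torch.randn(100, 2) * 0.3 + torch.tensor([2.0, -1.0])   # scarce faulty-class data
    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
    D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        z = torch.randn(64, 8)
        fake, real = G(z), minority[torch.randint(0, 100, (64,))]
        # Discriminator: push real samples towards 1 and generated samples towards 0.
        loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()
        # Generator: try to fool the discriminator.
        loss_g = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()

    synthetic = G(torch.randn(900, 8)).detach()             # oversample the minority class
    balanced_minority = torch.cat([minority, synthetic])    # ~1000 samples for downstream training
    print(balanced_minority.shape)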

Jens Froemmer (Robert Bosch GmbH, Prof. Grimm): An Automotive CGRA and its Model-Based Configuration

Domain-specific coarse-grained reconfigurable architectures offer performance and efficiency close to hardwired implementations when accelerating multiple selected data-intensive algorithms. However, existing solutions to increase the performance of a microcontroller are either not automotive grade, not suitable for a subset of the expected algorithms, or too expensive in terms of area. Hence, we introduce the novel Data Flow Architecture (DFA) for embedded hardware accelerators. The DFA targets not only automotive microcontrollers, but also microprocessors, smart sensors and similar applications. In order to achieve the highest possible utilization of the DFA, a modeling framework enables algorithm experts and hardware experts to collaboratively and efficiently transfer algorithms onto the DFA.

Previous talks in the Talk Series