
Associated Projects

InCoRAP

Intent-oriented cooperative robot action planning and worker support in factory settings

Flexible production in modern smart factories requires effective support of human workers and smooth cooperation with assisting robots. However, in a factory scenario where the worker performs manual tasks, the hands are often not free to interact with assistance systems. Voice commands are not practical either due to factory noise. Hence, to provide well-adapted and acceptable support, both a worker assistance system and robotic action planning need to anticipate the worker's future activities. Existing approaches to worker assistance leverage comparatively coarse-grained information such as the worker's trajectory in the environment. In contrast, we base any assistive functionality (including the actions of the robot) on high-level intentions. To infer such high-level intentions, we regard sensor data in relation to the worker's current task. This requires a comprehensive, detailed model of the factory environment, integrating information from a rich variety of sensors as well as sophisticated process information stemming, e.g., from an ERP system. The environment model must be accessible for various purposes and on different levels of interpretation and detail. The detection of human intention from available sensor data, the generation and maintenance of such a multi-sensor-based hierarchical semantic model of the current environment, and their usage for action planning and situation-adequate assistance comprise the R&D focus of the project.
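As a rough illustration of how intent inference over sensor data and task context could look, the following minimal Python sketch maintains a belief over a few hypothetical high-level intentions and updates it with coarse sensor observations; the intention labels, observation categories, and likelihood values are invented for illustration and do not reflect the project's actual model.

```python
# Minimal sketch of intent inference from sensor observations (hypothetical
# intentions and likelihoods; not the project's actual model).

INTENTIONS = ["fetch_part", "assemble", "inspect"]

# Prior over intentions derived from the current task step (e.g., an ERP work plan).
prior = {"fetch_part": 0.5, "assemble": 0.3, "inspect": 0.2}

# P(observation | intention) for a coarse, discretized sensor reading,
# e.g. "near_shelf" from an indoor positioning system.
likelihood = {
    "near_shelf":   {"fetch_part": 0.7, "assemble": 0.2, "inspect": 0.1},
    "at_workbench": {"fetch_part": 0.1, "assemble": 0.6, "inspect": 0.3},
}

def update_belief(belief, observation):
    """One Bayesian update step: multiply by the likelihood and renormalize."""
    posterior = {i: belief[i] * likelihood[observation][i] for i in INTENTIONS}
    total = sum(posterior.values())
    return {i: p / total for i, p in posterior.items()}

belief = update_belief(prior, "near_shelf")
print(max(belief, key=belief.get), belief)  # most likely intention and full belief
```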

HyperMind - The Anticipating Textbook

The anticipating physics textbook, which we are developing in the HyperMind project, is a dynamically adaptive personal textbook that enables individual learning. HyperMind starts at the micro level of the physics textbook: the individual forms of presentation, so-called representations, that a textbook contains, such as text with a certain proportion of technical terms, formulas, diagrams, or images.

The static structure of the classic book is dissolved. Instead, the book's content is divided into portions, and the resulting knowledge modules are linked associatively. In addition, the modules are supplemented with multimedia learning content that can be called up on the basis of attention data obtained from eye tracking.
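As a loose sketch of what calling up content from attention data might look like, the following Python fragment maps hypothetical areas of interest (AOIs) on a textbook page to supplementary modules and triggers one once accumulated fixation time exceeds a threshold; the AOI names, threshold, and data format are assumptions for illustration, not the HyperMind implementation.

```python
# Hypothetical gaze-triggered content lookup (AOIs, threshold, and fixation
# format are illustrative assumptions, not the HyperMind implementation).

AOI_TO_MODULE = {
    "formula_newton_2": "video_force_acceleration.mp4",
    "diagram_free_fall": "interactive_free_fall_sim.html",
}

DWELL_THRESHOLD_MS = 1500  # offer extra material after 1.5 s of accumulated fixation

def triggered_modules(fixations):
    """fixations: list of (aoi_name, duration_ms); return modules to offer."""
    dwell = {}
    for aoi, duration in fixations:
        dwell[aoi] = dwell.get(aoi, 0) + duration
    return [AOI_TO_MODULE[aoi] for aoi, total in dwell.items()
            if total >= DWELL_THRESHOLD_MS and aoi in AOI_TO_MODULE]

print(triggered_modules([("formula_newton_2", 900), ("formula_newton_2", 800)]))
```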

The project page can be found here.

Be-greifen

The topic of this project is the exploration of innovative human-technology interactions (HTI) which, by merging the real and digital worlds (augmented reality), make the connection between experiment and theory comprehensible, tangible, and interactively explorable in real time for students of STEM subjects.

The project page can be found here.

ESPEDUCA - JST CREST (in cooperation with Osaka Prefecture University)

Knowledge can be shared via the Internet. However, most of the information shared is limited to "explicit knowledge" such as text and does not capture "implicit knowledge" such as the experience of doing something oneself. The aim of this project is to record these experiences with the help of sensors and to make them available to others.

Eyetifact and GazeSynthesizer (in cooperation with Osaka Prefecture University)

Deep learning technologies have significantly accelerated the processing of images, audio, and speech. Activity detection has not yet benefited from this to the same extent, because it is difficult to obtain large amounts of training data in this area: the more sensitive and complex the sensors, the harder it is to collect the required amount of data. Eyetifact combines sensor data from different sources into a training data set so that, with the help of deep learning, the data of individual sensors can be predicted when only the remaining sensors are used later.
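A minimal sketch of this idea, under the assumption that paired recordings from an easy-to-collect sensor (e.g., a wrist IMU) and a hard-to-collect sensor (e.g., an eye tracker) are available for training: a regression model learns to predict the second sensor's features from the first, so that later only the cheaper sensor needs to be worn. The feature layout and model choice are illustrative, not Eyetifact's actual pipeline.

```python
# Illustrative cross-sensor prediction on toy data (feature layout and model
# are assumptions, not Eyetifact's actual pipeline).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Paired training recordings: 500 windows of 6 IMU features and 2 gaze features.
imu_features = rng.normal(size=(500, 6))
gaze_features = imu_features[:, :2] * 0.8 + rng.normal(scale=0.1, size=(500, 2))

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(imu_features, gaze_features)

# At deployment time only the IMU is worn; gaze-like features are predicted.
new_imu_window = rng.normal(size=(1, 6))
print(model.predict(new_imu_window))
```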

In order to improve the readability of texts, they must first be read by a large group of people while their eye movements are recorded and evaluated. GazeSynthesizer builds a data set from measurements on a few pages and, with the help of deep learning, generates artificial eye movements in order to estimate the readability of unknown texts depending on parameters such as age or cultural background.
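As a rough illustration only, the sketch below synthesizes per-word fixation durations for an unseen text from simple text features and a reader parameter (here, age group), then aggregates them into a crude readability indicator; the features, coefficients, and scoring are invented and do not reflect GazeSynthesizer's model.

```python
# Toy synthesis of fixation durations for an unseen text, conditioned on a
# reader parameter (invented coefficients; not GazeSynthesizer's actual model).

AGE_FACTOR = {"young": 1.0, "older": 1.2}  # assumed slowdown for older readers

def synthesize_fixations(words, age_group):
    """Return one synthetic fixation duration (ms) per word."""
    base = 200  # assumed baseline fixation duration in milliseconds
    return [base * AGE_FACTOR[age_group] + 15 * len(word) for word in words]

def readability_score(words, age_group):
    """Lower mean synthetic fixation duration is taken as easier to read."""
    durations = synthesize_fixations(words, age_group)
    return sum(durations) / len(durations)

text = "Die Quantenmechanik beschreibt das Verhalten subatomarer Teilchen".split()
print(readability_score(text, "young"), readability_score(text, "older"))
```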

SUN - Analyzing student understanding of vector field plots with respect to divergence

Visual understanding of abstract mathematical concepts is crucial for scientific learning and a prerequisite for developing expertise. This project analyses students' understanding of vector field plots and the visual strategies they use when viewing them.
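For reference, the mathematical quantity that students are asked to judge visually in these plots is the divergence of a two-dimensional vector field; the example fields below are generic illustrations, not the project's actual stimuli.

```latex
% Divergence of a 2D vector field \vec{F}(x,y) = (F_x, F_y):
\[
  \operatorname{div}\vec{F} \;=\; \nabla\cdot\vec{F}
  \;=\; \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y}
\]
% Example: \vec{F}(x,y) = (x, y) has \operatorname{div}\vec{F} = 2 everywhere,
% whereas the pure rotation \vec{F}(x,y) = (-y, x) has zero divergence.
```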

The project page can be found here.