
Associated Projects

HyperMind - The Anticipating Textbook

The anticipating physics textbook being developed in the HyperMind project is a dynamically adaptive personal textbook that enables individual learning. HyperMind starts at the micro level of the physics textbook: the individual forms of presentation, so-called representations, of a textbook, such as text with a certain proportion of technical terms, formulas, diagrams, or images.

The static structure of the classic book is dissolved. Instead, the book's content is portioned into knowledge modules that are linked associatively. In addition, the modules are supplemented with multimedia learning content, which can be called up on the basis of attention (eye-tracking) data.
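The gaze-based mechanism can be pictured as a lookup from areas of interest (AOIs) on the page to linked multimedia modules, triggered once dwell time crosses a threshold. The following is a minimal illustrative sketch; all identifiers and the threshold value are assumptions, not part of the HyperMind implementation.

```python
# Hypothetical sketch: knowledge modules are linked to areas of interest
# (AOIs) on the page; a supplementary module is offered once the reader's
# accumulated dwell time on an AOI crosses a threshold. All names and the
# threshold are illustrative.
DWELL_THRESHOLD_MS = 1500

aoi_links = {
    "formula_newton2": "video_force_demo",
    "diagram_freefall": "interactive_plot_freefall",
}

def triggered_modules(fixations):
    """fixations: list of (aoi_id, duration_ms) tuples from the eye tracker."""
    dwell = {}
    for aoi, ms in fixations:
        dwell[aoi] = dwell.get(aoi, 0) + ms
    # Offer the linked module for every AOI the reader dwelt on long enough.
    return [aoi_links[a] for a, t in dwell.items()
            if a in aoi_links and t >= DWELL_THRESHOLD_MS]

print(triggered_modules([("formula_newton2", 900),
                         ("diagram_freefall", 400),
                         ("formula_newton2", 700)]))
# dwell on the formula totals 1600 ms, so only its linked video is suggested
```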

The project page can be found here.


The topic of this project is the exploration of innovative human-technology interaction (HTI) which, by merging the real and digital worlds (augmented reality), makes the connection between experiment and theory comprehensible, tangible, and interactively explorable in real time for students of STEM subjects.

The project page can be found here.

ESPEDUCA - JST CREST (in cooperation with the Osaka Prefecture University)

Knowledge can be shared via the Internet. However, most information is limited to "explicit knowledge" such as text, rather than "implicit knowledge" such as the experience of doing something oneself. The aim of this project is to record these experiences with the help of sensors and to make them available to others.

Eyetifact and GazeSynthesizer (in cooperation with the Osaka Prefecture University)

Deep learning technologies have significantly accelerated the processing of images, audio, and speech. Activity recognition has not yet benefited to the same degree, as it is difficult to obtain large amounts of training data for deep learning in this area; the more sensitive and complex the sensors, the harder it is to collect the required amount of data. With the help of Eyetifact, sensor data from different sources are combined into a training data set using deep learning, so that the readings of the remaining sensors can be predicted when only individual sensors are subsequently used.
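The cross-sensor idea above can be illustrated with a toy example: record a cheap sensor and an expensive one together, fit a model mapping one to the other, and later predict the expensive modality from the cheap one alone. A linear least-squares fit on synthetic data stands in here for the deep network; all dimensions and sensor names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: during data collection, a cheap sensor (say, an
# accelerometer) and a costly one (say, an eye tracker) are recorded
# together. We fit a model so the costly signal can later be predicted
# from the cheap one alone. A linear least-squares fit is a toy stand-in
# for the deep model used in the actual project.
n, d_cheap, d_costly = 200, 6, 2
X = rng.normal(size=(n, d_cheap))                       # cheap-sensor features
W_true = rng.normal(size=(d_cheap, d_costly))           # unknown relationship
Y = X @ W_true + 0.01 * rng.normal(size=(n, d_costly))  # paired recordings

W, *_ = np.linalg.lstsq(X, Y, rcond=None)               # fit on the paired set

# Deployment: only the cheap sensor is worn; predict the missing modality.
x_new = rng.normal(size=(1, d_cheap))
y_pred = x_new @ W
print(y_pred.shape)  # (1, 2)
```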

In order to improve the readability of texts, they must first be read by a large group of people, with their eye movements recorded and evaluated. GazeSynthesizer builds a data set from measurements on a few pages and uses deep learning to generate artificial eye movements, so that the readability of unknown texts can be simulated depending on parameters such as age or cultural background.

SUN - Analyzing student understanding of vector field plots with respect to divergence

Visual understanding of abstract mathematical concepts is crucial for scientific learning and a prerequisite for developing expertise. This project analyses students' understanding when viewing vector field plots using different visual strategies.

The project page can be found here.