Contact
| Name | Marcus Handte |
| --- | --- |
| Position | Senior Researcher |
| Phone | +49 176 63309480 |
| Fax | +49 201 183 4176 |
| Email | marcus.handte@uni-due.de |
| Address | Schützenbahn 70, Building SA, 45127 Essen |
| Room | SA-121 |

Education
- 2013 – Habilitation in Computer Science from Universität Duisburg-Essen (Topic: A Framework for Context-aware Applications)
- 2009 – PhD in Natural Sciences from Universität Stuttgart (Topic: System-support for Adaptive Pervasive Applications)
- 2003 – Diplom in Computer Science from Universität Stuttgart (Specializations: Software Engineering, Distributed Systems)
- 2002 – Master of Science in Computer Science from Georgia Institute of Technology in Atlanta (Specialization: Programming Languages)
- 1997 – Abitur from Albert-Schäffle-Schule in Nürtingen (Specializations: Mathematics, Business Administration)
Employment
- Since 11.2012 LocosLab GmbH, Consultant, Developer, and Co-Founder
- Since 11.2009 Universität Duisburg-Essen, Senior Researcher (Networked Embedded Systems)
- 08.2007–10.2009 Fraunhofer IAIS, Researcher and Project Leader (Cooperating Objects)
- 08.2003–07.2007 Universität Stuttgart, Institut für Parallele und Verteilte Systeme, Researcher (Distributed Systems)
- 09.2002–12.2002 T-Systems GEI GmbH, Intern (Software Consulting)
- 01.2002–07.2002 Georgia Institute of Technology, Research Assistant (Programming Languages)
- 07.2000–12.2000 debis Systemhaus MEB GmbH, Freelance Software Developer (Product Development)
Projects
- MOIN (BMVI, mFUND) – A Nationwide Mobility Index
- INNAMORUHR (VM NRW) – Integrated and Sustainable Mobility for the University Alliance Ruhr
- TALAKO (BMWi) – Inductive Taxi Charging Concept for Public Spaces
- SMATA (BMVI, mFUND) – Smart Platform for the Data-driven Networking of Taxi and Charging Operations
- FAIR (BMVI, mFUND) – User-friendly Provisioning of Climate and Weather Data
- SIMON (FP7, CIP) – Assisted Mobility for Older and Impaired Users
- GAMBAS (FP7, STREP) – Adaptive Data Acquisition, Privacy-preserving Sharing
- LIVING++ (BMWi, EraSME) – Automatic Activity Recognition, Communication
- WEBDA (BMBF, AAL) – RFID-based Object Localization and Person Tracking
- PECES (FP7, STREP) – Secure Communication, Trustworthy Context Management
- 3PC (DFG, SP1140) – Peer-based Communication and Distributed Application Configuration
Teaching
- Lecture: Net-based Applications, Universität Stuttgart, WT06/07
- Lab: Intelligent and Interactive Screens, Universität Bonn, ST08, ST09
- Lab: Web-Development with Typo3, Universität Bonn, WT07/08
- Lab: Distributed Application Development with Enterprise Java Beans, Universität Stuttgart, ST05, WT05/06
- Exercise: High Performance Networking, Universität Bonn, ST09
- Seminar: Pervasive Computing/Sensor Networks, Universität Bonn, WT07/08, ST08, WT08/09, ST09
- Lab: Microcomputer Systems, Universität Duisburg-Essen, WT09/10, ST10
- Lab: Computer Architecture, Universität Duisburg-Essen, WT10/11, WT11/12, WT12/13, WT13/14
- Project: Context Recognition with Mobile Devices, Universität Duisburg-Essen, WT09/10, ST10, WT10/11, ST11, WT11/12, ST12, WT12/13, ST13, WT13/14, WT14/15
- Project: Context Prediction, Universität Duisburg-Essen, WT13/14
- Project: Indoor Localization, Universität Duisburg-Essen, WT13/14
- Project: Augmented Reality Navigation, Universität Duisburg-Essen, ST16, ST17
- Project: Remote Rendering of Geospatial Data, Universität Duisburg-Essen, ST16/17
- Seminar: Context Recognition, Universität Duisburg-Essen, WT09/10, ST10, WT10/11, ST11, WT11/12, ST12, WT12/13, ST13, WT13/14, WT14/15, WT15/16, ST16, WT16/17
- Case Study: Location-based Services, Universität Duisburg-Essen, ST14, ST15, ST16, ST17, ST18
- Lecture: Pervasive Computing, WT15/16, WT16/17, WT17/18, WT18/19, ST19, ST20
- Project Group: Location-based Services, WT17/18
- Exercises: Programming Java, ST18, ST19, ST20
- Exercises: Programming C/C++, WT18/19, WT19/20
- Project: Android-based Robot Control, ST19
- Project Group: Crowd Sourcing of Temperature Data, WT19/20
Bachelor Theses
- Gathering and Matching of User Information Derived from Social Networks, March 2010
- A System for Inertia-based Distance Estimation using Mobile Phones, July 2012
- A System for Detecting the On-Body Placement of Mobile Phones, July 2012
- An Android-based Board Game with Board Recognition, October 2012
- A Visualization Tool for Localization Data, January 2013
- A System for the Recognition of the Mode of Transportation using Mobile Phones, January 2013
- A System for Audio-based Distributed Speaker Detection, May 2013
- A BASE Extension for Spontaneous Device Interaction using Wi-Fi Direct, July 2013
- A Framework for the Derivation of and Conflict Detection in Generic Privacy Policies from Social Networks, February 2014
- System Support for Offline Maps on Android Devices, August 2014
- A Smartphone-based Recognition System for Speed Limit Signs, August 2015
Master Theses
- A Component System for Resource-efficient Context Recognition, August 2010
- An Adaptive Protocol for Resource-efficient Data Synchronization, March 2012
- Reference-based Indoor Localization using Passive RFID Technology, April 2012
- Design and Evaluation of a Multi-modal Presence Detection System, May 2013
- An Accurate Passive WLAN-based Localization System, November 2014
- Automatic Detection of WLAN Signal Propagation Changes, January 2015
- Robust Localization of Objects using Passive RFID, March 2015
- An Extensible Engine for Adaptive Transit Routing, April 2015
- Precise Person Tracking with Active RFID, April 2015
Publications
2023
Peter Roch, Bijan Shahbaz Nejad, Marcus Handte, Pedro José Marrón: Positionierung induktiv geladener Fahrzeuge. In: Proff, Heike; Clemens, Markus; Marrón, Pedro José; Schmülling, Benedikt (Ed.): Induktive Taxiladung für den öffentlichen Raum: Technische und betriebswirtschaftliche Aspekte, pp. 93–142, 2023, ISBN: 978-3-658-39979-5.
Abstract (translated from German): The goal of the TALAKO project is to enable wireless charging of electric vehicles in public spaces. Inductive charging requires precise alignment of the vehicle to ensure an efficient charging process, as the vehicle's alignment directly affects the charging efficiency. Positioning can be challenging for the driver, who cannot perceive the offset between the charging components without further assistance. In addition to the inductive charging infrastructure itself, the developed installation therefore includes a camera-based driver assistance system that detects approaching vehicles and supports the driver during positioning. It consists of two components: a camera-based positioning system and a driver guidance application. The positioning system uses camera images to compute the position of vehicles with an accuracy of around 5 cm, from which the offset between vehicle and charging plate is derived. The driver guidance application interprets the position information and generates appropriate instructions for the driver. The positioning system is based on a neural network that detects the tires of the vehicle; since the distance between the tires is known, the position and rotation of the vehicle can be computed from the detections. Evaluations showed that the accuracy lies in the range of 5 cm. To operate the positioning system independently of vehicle type and installation site, it must be configured accordingly: the neural network has to be trained and the camera orientation calibrated. Training of the neural network is supplemented with synthetically generated images, which can be produced with a purpose-built image generator. The camera orientation is determined using a special pattern placed at various positions on the ground; since the real dimensions of the pattern are known, the geometry of the installation site can be derived from it. A user study investigated which screen modality is best suited for the driver guidance application under the given circumstances. It showed that users prefer an in-vehicle screen for the output of instructions, so the driver guidance application was realized as a mobile application that shows the driver the position of the vehicle relative to the charging station. Several visualizations of the spatial relations were compared; with several of them, users were able to position the vehicle within a tolerance of 5 cm, but most users preferred a bird's-eye view. The communication between the two components was implemented with Bluetooth Low Energy, which, unlike other wireless options such as WLAN, allows information to be sent to the mobile application without the delay of a connection setup, so the driver can start positioning immediately upon arrival. The overall system was deployed as a prototype at a taxi company in Mülheim an der Ruhr (Auto Stephany GmbH, https://taxi-stephany.de/, retrieved 04.08.2022) and iteratively optimized over several months. The experience gathered during this period led to continuous improvements of both the positioning system and the driver guidance application. After these optimizations, the system was successfully deployed as part of the pilot installation with several charging spots in Cologne. Since this pilot installation operates in a public space, the personality rights of individuals must be respected; however, explicit consent to data processing by the persons concerned is not practicable. An automated obfuscation was therefore employed that removes personal data such as license plates and faces from the camera images in order to avoid processing them.
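The abstract above derives a vehicle's position and rotation from the detected tires and the known distance between them. A minimal geometric sketch of this idea, assuming the tire ground-contact points have already been projected into world coordinates (function name and coordinate conventions are illustrative, not from the publication):

```python
import math

def pose_from_wheels(front, rear, wheelbase):
    """Estimate vehicle position and heading from the ground-contact
    points of the front and rear wheel on one side of the car.

    front, rear: (x, y) world coordinates of the detected wheel centers.
    wheelbase: known front-to-rear wheel distance in the same unit.
    Returns (center_x, center_y, heading_rad, scale_error).
    """
    dx, dy = front[0] - rear[0], front[1] - rear[1]
    # Heading points from the rear wheel towards the front wheel.
    heading = math.atan2(dy, dx)
    # Midpoint between the wheels as a simple reference position.
    cx, cy = (front[0] + rear[0]) / 2, (front[1] + rear[1]) / 2
    # Ratio of observed to known wheelbase hints at detection errors.
    scale_error = math.hypot(dx, dy) / wheelbase
    return cx, cy, heading, scale_error
```

For a car detected at `front=(2.7, 0.0)`, `rear=(0.0, 0.0)` with a 2.7 m wheelbase, this yields the midpoint (1.35, 0.0), a zero heading, and a scale error of 1.0 (i.e., a plausible detection).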
Sayedsepehr Mosavat, Matteo Zella, Marcus Handte, Alexander Julian Golkowski, Pedro José Marrón: Experience: ARISTOTLE: wAke-up ReceIver-based, STar tOpology baTteryLEss sensor network. In: Proceedings of the 22nd International Conference on Information Processing in Sensor Networks, ACM Digital Library, 2023, ISBN: 979-8-4007-0118-4.
Abstract: A truly ubiquitous, planet-wide Internet of Things requires ultra-low-power, long-lasting sensor nodes at its core so that it can be practically utilized in real-world scenarios without prohibitively high maintenance efforts. Recent advances in energy harvesting and low-power electronics have provided a solid foundation for the design of such sensor nodes. However, reliable two-way communication among such devices is still an active research undertaking due to the high energy footprint of traditional wireless transceivers. Although approaches such as radio duty cycling have proved beneficial for reducing the overall energy consumption of wireless sensor nodes, they come with trade-offs such as increased communication latency and complex protocols. To address these limitations, we propose ARISTOTLE, an ultra-low-power, wake-up receiver-based sensor node design employing a star network topology. We have deployed ARISTOTLE in two different venues for the task of weather data collection. In addition to reporting the results of the two deployments, we also evaluate several performance aspects of our proposed solution. ARISTOTLE has a mean power consumption of 236.67 µW while it is in sleep mode and monitoring the radio channel for incoming wake-up signals. Utilizing various sizes of supercapacitors, ARISTOTLE was able to reach system availabilities between 47.83% and 97.36% during our real-world deployments.
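To get a feel for the 236.67 µW sleep-mode figure reported above, a back-of-the-envelope sketch of how long a supercapacitor can sustain such a draw; the capacitance and voltage values here are illustrative assumptions, only the power figure is from the abstract:

```python
def sleep_time_seconds(capacitance_f, v_start, v_cutoff, power_w):
    """Estimate how long a supercapacitor can sustain a given mean
    power draw, using the usable energy between the start and the
    cutoff voltage: E = 1/2 * C * (v_start^2 - v_cutoff^2), t = E / P."""
    usable_energy_j = 0.5 * capacitance_f * (v_start**2 - v_cutoff**2)
    return usable_energy_j / power_w

# Hypothetical 1 F supercapacitor discharged from 3.3 V to a 1.8 V
# cutoff at the reported 236.67 uW mean sleep power.
hours = sleep_time_seconds(1.0, 3.3, 1.8, 236.67e-6) / 3600
```

Under these assumed values the node could monitor the channel for roughly four and a half hours without any harvested energy, which illustrates why the deployments pair the wake-up receiver with harvesting.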
Alexander Julian Golkowski, Marcus Handte, Pedro José Marrón, Alexander Wedemeyer, Jakob Robert: A Low-Cost Binaural Hearing Support System for Mobile Robots. In: Proceedings of the 2023 International Conference on Robotics, Control and Vision Engineering, pp. 18–24, Association for Computing Machinery, Tokyo, Japan, 2023, ISBN: 9798400707742.
Abstract: The ability to quickly obtain a comprehensive picture of the environment is an important prerequisite for robust and safe autonomous navigation with mobile robots. Due to their comparatively low computational requirements and cost, as well as their ability to provide omnidirectional perception, audio systems can be an excellent complement to more sophisticated or expensive sensing systems such as cameras or lidars. In this paper, we describe the development of a low-cost binaural audio system capable of interpreting the acoustic information present in an environment to assist a lidar system in the decisions a mobile robot needs to make. By taking advantage of the robot's mobility, the system can resolve the uncertainties inherent in the deliberate choice of only two off-the-shelf microphones and effectively realize 360-degree coverage. This opens up new possibilities, for example, to make room monitoring much more reliable.
2022
Alexander Julian Golkowski, Marcus Handte, Leon Alexander Marold, Pedro José Marrón: Simplifying the Control of Mobile Robots through Image-based Behaviors. In: 2022 2nd International Conference on Computer, Control and Robotics (ICCCR), pp. 52–57, 2022.
Abstract: The development and commissioning of mobile robots is usually a time-consuming and cost-intensive undertaking. Today, systems are realized primarily through the use of complex software and hardware architectures, which only partially meet modern requirements for mobile robots. The core discipline that a mobile robot must fulfill is the correct perception of its environment. To achieve this, various technologies are used that are useful for detecting obstacles or navigating to targets. An increasingly attractive technology in this domain is artificial neural networks. Existing literature focuses either on improving the performance of existing systems or on training for very specialized applications. In the field of mobile robotics, the focus is usually on realizing a specific task, while little attention is paid to generalizability, cost, and energy constraints. To fill this gap, this paper investigates the possibility of reducing the setup for a mobile robot application to a minimum while still enabling complex behaviors. For the implementation, we take a biologically inspired approach and investigate the usability of artificial neural networks, in this case YOLOv4, in a mobile robot application. In particular, we examine whether the currently available technologies meet today's requirements.
Marcus Handte, Lisa Kraus, Matteo Zella, Pedro José Marrón, Heike Proff, Michael Martin, Richard Figura: Visualizing Urban Mobility Options for InnaMoRuhr. In: Proff, Heike (Ed.): Transforming Mobility – What Next? Technische und betriebswirtschaftliche Aspekte, pp. 645–658, Springer Fachmedien Wiesbaden, Wiesbaden, 2022, ISBN: 978-3-658-36430-4.
Abstract: InnaMoRuhr (Integrierte und nachhaltige Mobilität für die Universitätsallianz Ruhr) is a multidisciplinary research project funded by the Ministry of Transport of NRW that investigates how the sustainability of mobility in the Ruhr area can be improved. The project develops concepts for integrated sustainable mobility among the members of the University Alliance Ruhr and validates them using simulation and field trials. In this paper, we describe some of the datasets as well as the visualizations that we have developed to study the mobility options available in the Ruhr area.
Bijan Shahbaz Nejad, Peter Roch, Marcus Handte, Pedro José Marrón: Enhancing Privacy in Computer Vision Applications: An Emotion Preserving Approach to Obfuscate Faces. In: Bebis, George; Li, Bo; Yao, Angela; Liu, Yang; Duan, Ye; Lau, Manfred; Khadka, Rajiv; Crisan, Ana; Chang, Remco (Ed.): Advances in Visual Computing, pp. 80–90, Springer Nature Switzerland, 2022, ISBN: 978-3-031-20716-7.
Abstract: Computer vision offers many techniques to facilitate the extraction of semantic information from images. If the images include persons, preserving privacy in computer vision applications is challenging, but undoubtedly desired. A common technique to prevent exposure of identities is to cover people's faces with, for example, a black bar. Although emotions are crucial for reasoning in many applications, facial expressions may be covered as well, which hinders the recognition of actual emotions. Thus, recorded images containing obfuscated faces may be useless for further analysis and investigation. We introduce an approach that enables automatic detection and obfuscation of faces. To avoid privacy conflicts, we use synthetically generated faces for obfuscation. Furthermore, we reconstruct the facial expressions of the original face, adjust the color of the new face, and seamlessly clone it to the original location. To evaluate our approach experimentally, we obfuscate faces from various datasets by applying blurring, pixelation, and the proposed technique. To determine the success of obfuscation, we verify whether the original and the resulting face represent the same person using a state-of-the-art matching tool. Our approach successfully obfuscates faces in more than 97% of the cases. This performance is comparable to blurring, which scores around 96%, and even better than pixelation (76%). Moreover, we analyze how effectively emotions can be preserved when obfuscating the faces. For this, we utilize emotion recognizers to recognize the depicted emotions before and after obfuscation. Regardless of the recognizer, our approach preserves emotions more effectively than the other techniques while preserving a convincingly natural appearance.
Peter Roch, Bijan Shahbaz Nejad, Marcus Handte, Pedro José Marrón: GUILD – A Generator for Usable Images in Large-Scale Datasets. In: Bebis, George; Li, Bo; Yao, Angela; Liu, Yang; Duan, Ye; Lau, Manfred; Khadka, Rajiv; Crisan, Ana; Chang, Remco (Ed.): Advances in Visual Computing, pp. 245–258, Springer Nature Switzerland, 2022, ISBN: 978-3-031-20716-7.
Abstract: Large image datasets are important for many different aspects of computer vision. However, creating datasets containing thousands or millions of labeled images is time-consuming. Instead of manually collecting a large dataset, we propose a framework for generating large-scale datasets synthetically. Our framework is capable of generating realistic-looking images with varying environmental conditions while automatically creating labels. To evaluate the usefulness of such a dataset, we generate two datasets containing vehicle images. Afterwards, we use these images to train a neural network. We then compare the detection accuracy to that of the same neural network trained with images from existing datasets. The experiments show that our generated datasets are well-suited to train neural networks and achieve accuracy comparable to existing datasets containing real photographs, while they are much faster to create.
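The key advantage of synthetic generation described above is that labels come for free: because the generator places the object, it knows the ground truth. A toy sketch of this principle (unrelated to GUILD's actual renderer, which produces realistic images):

```python
import random

def synth_sample(width=64, height=64):
    """Generate a toy binary 'image' (nested list of 0/1 pixels)
    containing one rectangular object at a random position, together
    with its automatically derived bounding-box label (x, y, w, h).

    Because the generator chooses the placement itself, the label is
    exact by construction -- no manual annotation is needed.
    """
    w, h = random.randint(8, 16), random.randint(8, 16)
    x = random.randint(0, width - w)
    y = random.randint(0, height - h)
    img = [[0] * width for _ in range(height)]
    for r in range(y, y + h):
        for c in range(x, x + w):
            img[r][c] = 1
    return img, (x, y, w, h)
```

Each call yields a new image/label pair, so arbitrarily large labeled datasets can be produced in a loop; a realistic generator additionally varies lighting, background, and viewpoint.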
2021
Alexander Julian Golkowski, Marcus Handte, Peter Roch, Pedro José Marrón: An Experimental Analysis of the Effects of Different Hardware Setups on Stereo Camera Systems. In: International Journal of Semantic Computing, vol. 15, no. 3, pp. 337–357, 2021, ISSN: 1793-7108.
Abstract: For many application areas such as autonomous navigation, the ability to accurately perceive the environment is essential. For this purpose, a wide variety of well-researched sensor systems are available that can be used to detect obstacles or navigation targets. Stereo cameras have emerged as a very versatile sensing technology in this regard due to their low hardware cost and high fidelity. Consequently, much work has been done to integrate them into mobile robots. However, the existing literature focuses on presenting the concepts and algorithms used to implement the desired robot functions on top of a given camera setup. As a result, the rationale and impact of choosing this camera setup are usually neither discussed nor described. Thus, when designing the stereo camera system for a mobile robot, there is not much general guidance beyond isolated setups that worked for a specific robot. To close the gap, this paper studies the impact of the physical setup of a stereo camera system in indoor environments. To do this, we present the results of an experimental analysis in which we use a given software setup to estimate the distance to an object while systematically changing the camera setup. Thereby, we vary the three main parameters of the physical camera setup, namely the angle and distance between the cameras and the field of view, as well as a softer parameter, the resolution. Based on the results, we derive several guidelines on how to choose the parameters for an application.
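The parameters varied in the study above enter distance estimation through the classic parallel-stereo relation between disparity and depth. A minimal sketch with illustrative values (the paper's actual pipeline involves calibration and matching, not just this formula):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Parallel pinhole stereo: a point at depth Z appears shifted by
    d = f * B / Z pixels between the two views, so Z = f * B / d.
    A larger baseline B increases depth resolution for the same
    disparity error, but shrinks the overlapping field of view."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, a 700 px focal length, a 12 cm baseline, and a measured disparity of 35 px place the object at 2.4 m; halving the baseline at the same depth halves the disparity, so the same ±1 px matching error costs twice the depth accuracy.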
Peter Roch, Bijan Shahbaz Nejad, Marcus Handte, Pedro José Marrón: Car Pose Estimation through Wheel Detection. In: Bebis, George; Athitsos, Vassilis; Yan, Tong; Lau, Manfred; Li, Frederick; Shi, Conglei; Yuan, Xiaoru; Mousas, Christos; Bruder, Gerd (Ed.): Advances in Visual Computing, pp. 265–277, Springer International Publishing, 2021, ISBN: 978-3-030-90439-5.
Abstract: Car pose estimation is an essential part of different applications, including traffic surveillance, Augmented Reality (AR) guides, or inductive charging assistance systems. For many systems, the accuracy of the determined pose is important. When displaying AR guides, a small estimation error can result in a different visualization, which will be directly visible to the user. Inductive charging assistance systems have to guide the driver as precisely as possible, as small deviations in the alignment of the charging coils can decrease charging efficiency significantly. For accurate pose estimation, matches between image coordinates and 3D real-world points have to be determined. Since wheels are a common feature of cars, we use the wheelbase and rim radius to compute those real-world points. The matching image coordinates are obtained by three different approaches, namely the circular Hough transform, ellipse detection, and a neural network. To evaluate the presented algorithms, we perform different experiments: First, we compare their accuracy and time performance regarding wheel detection on a subset of the images of The Comprehensive Cars (CompCars) dataset. Second, we capture images of a car at known positions and run the algorithms on these images to estimate the pose of the car. Our experiments show that the neural network based approach is the best in terms of accuracy and speed. However, if training a neural network is not feasible, both other approaches are accurate alternatives.
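The abstract above exploits the known rim radius to turn image coordinates into real-world points. The simplest instance of that idea is monocular distance from a known object size; a sketch with illustrative values (not the paper's full pose pipeline):

```python
def distance_from_rim(focal_px, rim_radius_m, rim_radius_px):
    """Pinhole projection: an object of known physical size R appears
    with image size r = f * R / Z, so its distance is Z = f * R / r.
    Here the known size is the rim radius of a detected wheel."""
    if rim_radius_px <= 0:
        raise ValueError("detected radius must be positive")
    return focal_px * rim_radius_m / rim_radius_px
```

With an assumed 1000 px focal length and a 22 cm rim radius detected as a 44 px circle (e.g., by the circular Hough transform), the wheel lies 5 m from the camera; doing this for two wheels of the same car yields the point matches needed for the pose.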
Bijan Shahbaz Nejad, Peter Roch, Marcus Handte, Pedro José Marrón: Evaluating User Interfaces for a Driver Guidance System to Support Stationary Wireless Charging of Electric Vehicles. In: Bebis, George; Athitsos, Vassilis; Yan, Tong; Lau, Manfred; Li, Frederick; Shi, Conglei; Yuan, Xiaoru; Mousas, Christos; Bruder, Gerd (Ed.): Advances in Visual Computing, pp. 183–196, Springer International Publishing, 2021, ISBN: 978-3-030-90439-5.