Two papers were accepted and published at the 17th International Symposium on Visual Computing:
- Title: Enhancing Privacy in Computer Vision Applications: An Emotion Preserving Approach to Obfuscate Faces
- Authors: Bijan Shahbaz Nejad, Peter Roch, Marcus Handte, Pedro José Marrón
- Abstract: Computer vision offers many techniques to facilitate the extraction of semantic information from images. If the images include persons, preserving privacy in computer vision applications is challenging, but undoubtedly desired. A common technique to prevent exposure of identities is to cover people's faces with, for example, a black bar. Although emotions are crucial for reasoning in many applications, covering the face also hides facial expressions, which hinders the recognition of the actual emotions. Thus, recorded images containing obfuscated faces may be useless for further analysis and investigation. We introduce an approach that enables automatic detection and obfuscation of faces. To avoid privacy conflicts, we use synthetically generated faces for obfuscation. Furthermore, we reconstruct the facial expressions of the original face, adjust the color of the new face, and seamlessly clone it to the original location. To evaluate our approach experimentally, we obfuscate faces from various datasets by applying blurring, pixelation, and the proposed technique. To determine the success of obfuscation, we verify whether the original and the resulting face represent the same person using a state-of-the-art matching tool. Our approach successfully obfuscates faces in more than 97% of the cases. This performance is comparable to blurring, which scores around 96%, and even better than pixelation (76%). Moreover, we analyze how effectively emotions can be preserved when obfuscating the faces. For this, we utilize emotion recognizers to recognize the depicted emotions before and after obfuscation. Regardless of the recognizer, our approach preserves emotions more effectively than the other techniques while maintaining a convincingly natural appearance.
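The color-adjustment step mentioned in the abstract can be illustrated with a minimal sketch: matching the per-channel mean and standard deviation of the synthetic face to the original region before blending, in the spirit of classic statistical color transfer. This is an assumption for illustration only, not the authors' actual implementation, and the function name `match_color` is hypothetical.

```python
import numpy as np

def match_color(synthetic_face: np.ndarray, original_region: np.ndarray) -> np.ndarray:
    """Shift each channel of the synthetic face so that its mean and
    standard deviation match those of the original face region."""
    result = synthetic_face.astype(np.float64)
    reference = original_region.astype(np.float64)
    for c in range(result.shape[2]):
        src = result[:, :, c]
        ref = reference[:, :, c]
        src_std = src.std() or 1.0  # guard against flat channels
        result[:, :, c] = (src - src.mean()) / src_std * ref.std() + ref.mean()
    return np.clip(result, 0, 255).astype(np.uint8)
```

After this adjustment, the recolored face could be blended into the original image, for example with a Poisson-blending routine such as OpenCV's `seamlessClone`.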
- Title: GUILD – A Generator for Usable Images in Large-Scale Datasets
- Authors: Peter Roch, Bijan Shahbaz Nejad, Marcus Handte, Pedro José Marrón
- Abstract: Large image datasets are important for many different aspects of computer vision. However, creating datasets containing thousands or millions of labeled images is time-consuming. Instead of manually collecting a large dataset, we propose a framework for generating large-scale datasets synthetically. Our framework is capable of generating realistic-looking images with varying environmental conditions, while automatically creating labels. To evaluate the usefulness of such a dataset, we generate two datasets containing vehicle images. Afterwards, we use these images to train a neural network. We then compare its detection accuracy to that of the same neural network trained with images from existing datasets. The experiments show that our generated datasets are well-suited to train neural networks and achieve accuracy comparable to existing datasets containing real photographs, while being much faster to create.
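The core idea of automatic labeling can be sketched in a few lines: when an object is placed into a scene programmatically, its bounding-box label is known exactly at generation time, so no manual annotation is needed. The following is a deliberately simplified stand-in (a flat paste of a patch onto a background), not GUILD's actual rendering pipeline, and the function name `generate_sample` is hypothetical.

```python
import numpy as np

def generate_sample(background: np.ndarray, patch: np.ndarray,
                    rng: np.random.Generator):
    """Paste `patch` (e.g. a vehicle image) onto a copy of `background`
    at a random position.

    Returns the composited image together with the bounding box
    (x, y, w, h) -- the label is exact because we placed the object."""
    bh, bw = background.shape[:2]
    ph, pw = patch.shape[:2]
    x = int(rng.integers(0, bw - pw + 1))
    y = int(rng.integers(0, bh - ph + 1))
    image = background.copy()
    image[y:y + ph, x:x + pw] = patch
    return image, (x, y, pw, ph)
```

Calling this in a loop over varied backgrounds, objects, and rendering conditions yields an arbitrarily large set of (image, label) pairs without any manual annotation.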