The very first pieces of clothing were made to protect us, for example from harsh weather conditions or injuries. Our project Ignotum (Latin for “the unknown”) serves a very similar purpose: it protects its wearer, but from a more modern hazard – CCTV cameras equipped with AI person-recognition algorithms.
The project was designed as part of the Re-FREAM consortium, a Horizon 2020 project funded by the EU. Re-FREAM brings together people from different fields and backgrounds to work on connecting fashion with technology. Together with our technical partners from the Fraunhofer IZM, Stratasys, Profactor and Empa, we created a garment that helps its wearer become invisible to the virtual domain.

Surveillance in Public Areas

Ignotum is about confusing the artificial intelligences (AIs) that are used to analyse CCTV footage. These systems extract personal information about the people being filmed, such as gender, age, emotional state and sexual preferences, some of it with very high accuracy. In recent years, these technologies have been finding their way into retail spaces, supposedly to improve the shopping experience. They can also be used to boost sales by analysing the emotional state of customers. We find this development questionable, and with our project the wearer gets to choose when to be visible to these technologies.

How AI CCTV systems work

For us, the first step in designing our project was to understand how these systems work. Together with Markus Mau from Studio Moux, we determined that these systems typically work in a three-step process: to begin with, the AI filters human beings out from the other objects in the scene by slicing the image into small tiles, analysing the contrast values within them and calculating a recognition probability. It then applies a virtual skeleton to each detected person for better movement tracking, after which high-quality facial images are sent to a server for deep analysis. The first two steps happen “on the edge”, meaning in real time within the camera itself; only the third one needs more processing power and data bandwidth.
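To make that division of labour concrete, here is a minimal sketch of the three-step process. Every function name in it is a hypothetical placeholder for illustration, not part of any real CCTV product:

```python
# Illustrative sketch of the three-step process described above.
# All function names are hypothetical placeholders, not a real CCTV API.

def analyse_frame(frame):
    # Step 1 (on the edge): slice the frame into small tiles, analyse the
    # contrast values within them and estimate a recognition probability.
    for box, probability in detect_persons(frame):
        if probability < 0.5:            # too uncertain to count as a person
            continue

        # Step 2 (on the edge): fit a virtual skeleton to the detected
        # person so their movement can be tracked across frames.
        skeleton = fit_skeleton(frame, box)
        track_movement(skeleton)

        # Step 3 (off the edge): only high-quality facial images leave the
        # camera and are sent to a server for the deep analysis.
        face = crop_face(frame, box)
        if face is not None and is_high_quality(face):
            send_to_server(face)
```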

Co-Research

Due to the pandemic, the co-researching phase was mainly done within our studio and shared with the partners via online meetings. Later on, we were able to work more closely together with our partners from the Fraunhofer IZM based in Berlin, especially with Christian Dils, Max Marwede and Robin Hoske.

Testing Rig & Virtual Prototyping

After the general research phase, the next step was to create our own AI-enhanced camera system, using the MobileNetV2 object detection model trained on the COCO dataset. This model allows us to recognise 90 different object classes (human beings/persons among them) and attach virtual skeletons to detected persons in real time. It runs on a Coral board and is fed by an attached HD camera. We were now able to point this camera at all kinds of images, like projections or screens, but also at real people, and see with what confidence the AI recognised a person.
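One way to query such a model for person detections on a Coral board is sketched below; it assumes the pycoral library and an Edge TPU-compiled detection model, and both file names are placeholders:

```python
# Hedged sketch of a person-detection query on a Coral board, assuming the
# pycoral library; model and label file names are placeholders.
from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter('ssd_mobilenet_v2_coco_edgetpu.tflite')
interpreter.allocate_tensors()
labels = read_label_file('coco_labels.txt')

image = Image.open('test_frame.jpg')
_, scale = common.set_resized_input(
    interpreter, image.size, lambda size: image.resize(size, Image.LANCZOS))
interpreter.invoke()

# Print the confidence the model assigns to each detected person.
for obj in detect.get_objects(interpreter, score_threshold=0.3, image_scale=scale):
    if labels.get(obj.id) == 'person':
        print(f'person detected with {obj.score:.0%} confidence at {obj.bbox}')
```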
We then moved our tests into the virtual domain to speed up development. We created a digital mannequin that we could “dress” with different glowing patterns, point the AI camera at (via the computer screen) and see how well they confused the AI. In a long trial-and-error phase we tested hundreds of patterns. To see how these were read on a moving person, we used a game engine to build a street environment and animate our mannequin.
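At its core, that trial-and-error phase is a scoring loop. In this hedged sketch, render_on_mannequin(), person_confidence() and candidate_patterns are hypothetical stand-ins for the mannequin renderer and the detection rig described above:

```python
# Hedged sketch of the virtual trial-and-error loop; all names below are
# hypothetical stand-ins for the renderer and detection rig.

def score(pattern):
    frame = render_on_mannequin(pattern)   # mannequin "dressed" in the pattern
    return person_confidence(frame)        # detector's person score, 0..1

# Lower score = better camouflage; keep the ten most confusing patterns.
best_patterns = sorted(candidate_patterns, key=score)[:10]
```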

Real Prototyping

Once we had a few promising patterns, we moved to prototyping: the first practical tests consisted of holding different kinds of samples in front of our bodies to see if the recognition rates changed. While it was generally quite hard to confuse the AI (unless we did a more or less full-body cover-up), active light sources like LED strips did seem to affect the numbers.
Our first prototypes tested the final pattern passively, by applying sticky tape to painters' overalls. These did not perform well at all. For the next ones we used LED strips, and with these we were able to reproduce the results of our virtual tests.

Evolutionary AI

In parallel, Markus Mau created an evolutionary AI setup that randomly shuffles a set of pattern modules, reads the detection results and then mates the best options with each other to breed a better-performing pattern. This way, any pattern can be automatically refined towards optimal performance. Because of technical constraints we were not able to prototype this design idea further; it would have required a fully working flexible screen applied to the garment.
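For illustration, here is a minimal sketch of such an evolutionary loop. It assumes a pattern is a flat grid of on/off modules and replaces the camera reading with a dummy fitness function so the sketch runs standalone; it is not the actual implementation:

```python
# Minimal evolutionary-pattern sketch: assumed grid size, population size
# and mutation rate are illustrative, and fitness() is a dummy stand-in.
import random

GRID = 8 * 8                      # a pattern = flat list of 64 on/off modules
POP, GENERATIONS = 20, 50

def fitness(pattern):
    # Dummy stand-in: the real setup would render the pattern, point the AI
    # camera at it and return the person confidence (lower = better).
    return sum(pattern) / GRID

def mate(a, b):
    # Single-point crossover with a small chance of mutation.
    cut = random.randrange(1, GRID)
    child = a[:cut] + b[cut:]
    if random.random() < 0.1:
        i = random.randrange(GRID)
        child[i] = 1 - child[i]
    return child

population = [[random.randint(0, 1) for _ in range(GRID)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness)                # best (lowest score) first
    parents = population[:POP // 2]             # keep the best half
    children = [mate(*random.sample(parents, 2))
                for _ in range(POP - len(parents))]
    population = parents + children

best_pattern = min(population, key=fitness)
```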

Planetary Design Circle

The Planetary Design Circle is a new design tool developed by the Fraunhofer IZM that takes the planet on board as an equal stakeholder. This way, eco-friendly design is encouraged by contextualising the impact of decisions on the user, business and design side with their impact on the planet. Once we were handed this tool, our design process changed significantly, resulting in a final prototype that is energy efficient, easier to repair (through modularity) and upgradable.

Co-Creation

After co-creating the electric concept with Christian Dils, he and his colleagues at the Fraunhofer IZM went into full action. Kamil Garbacz and Christian developed the PCB that distributes power to our 9 LEDs. Together with Lars Stagun (Fraunhofer IZM), we bonded all the electronic parts to the textile cable (Amohr Technische Textilien) prepared by Sebastian Hohner (Fraunhofer IZM). This process is called NCA bonding (non-conductive-adhesive bonding, internally at the IZM also known as “e-textile bonding”); it creates a very strong mechanical connection between electronic boards and textiles and, as a result, reliable electrical bonds.
Together with Rafael Jordan of the IZM, we also tested the performance of light guides 3D printed directly onto fabric. We decided to use the industrially made light fibres by Mentor for the prototype, while visualising the 3D-printed ones in our final design concept.

Prototyping / Garment

The first garment prototypes were sewn in-house. For the final prototype we worked with Mira Thul-Thellmann. The design was going to be a poncho/throw-over type of apparel.

Prototyping / Utilisation of 3D Printing

Since we needed to protect the electronic components built into Ignotum, we designed a lot of different housings that we refined in a rapid prototyping process, utilising our own 3D printers. All parts were made re-openable to ensure repairability. Special buttons were designed to connect the different layers as well as hold the light fibres in place.

Prototyping / Assembly

The final step was to assemble our kit, which was quite straightforward thanks to the modular nature of the design. We broke one of the LED boards during assembly, but thanks to our repairable approach we were able to fix it quickly.

Construction

Ignotum is designed in layers: a base layer, which provides basic stability, and a semi-transparent second layer that, in union with the base, creates a moiré effect (to further enhance the AI confusion), all interlaced with the technical components like power cables, PCBs, LEDs and light fibres.

Final Tests

After creating the prototype, we wanted to check whether it still worked. We tested it in different settings and against different backgrounds, and even in quite challenging environments (meaning with clear contrasts, as in the image below) the poncho did well in bringing the recognition numbers down to a level where they are no longer strong enough for further processing.

Ignotum – The Unknown

All in all, it was a fantastic journey and a great opportunity to work on the project. We learned a lot and are very thankful to our partners and collaborators as well as the Re-FREAM consortium.

Project assistants: Max Krenn, Anna Röder, Markus Mau, Tim Schütze

Re-FREAM is funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 825647