Más Que la Cara overview

zach lieberman
12 min read · Apr 3, 2017


Project overview video

Más Que la Cara, a Spanish phrase meaning “more than the face,” is an interactive installation we created in downtown Houston last year.

The studio I help run, YesYesNo, was approached by the Weingarten Art Group and the Downtown District with an interesting proposition: design an artwork for the public in downtown Houston at the old Sakowitz building, a former department store that currently operates as a parking garage. The facade of the building is still the same, but the inside is completely gutted.

The building was a staple of downtown Houston in the 1950s:

Sakowitz interiors

We loved the windows on the facade and proposed a street-level installation where we would take two of those windows, put monitors in them, and make interactive work that would invite the public to engage.

We really liked these huge windows

We immediately thought about masks. Several years ago, Kyle McDonald and I developed an interactive installation for a SXSW exhibition that replaced your face with chopped-up face parts we found in old photographs, in a kind of performative collage. We loved the way this looked.

Likewise, I had built an installation for the New York Hall of Science which cut up face parts of several previous participants and reassembled them on the current user’s face.

When I was working on these projects I was struck by how much these felt like masks, and I wondered if we could make an installation of living masks that people would perform.

For this particular installation in Houston, we were deeply inspired by the work of Bruno Munari, who, in his book Design as Art, has a layout showing how little you need to represent a face.

Bruno Munari — Design as Art

A similar idea is pareidolia, our tendency to see a face in almost anything.

We were also struck by the rich history of masks from different cultures. Masks are tools for role play and make-believe, and they help express what is meaningful to a culture. They allow you to transform and see yourself in a new way.

Increasingly, because of apps like Snapchat, people are familiar with face tracking and augmentation in the form of filters, and we wondered how we could make what we were doing different. Our solution was to focus on graphical augmentation of the face: we wanted this to feel like a poster based on your face rather than a gimmick or novelty. We started exploring poster graphics and discovered a rich visual language that inspired us. Our main goal: make this a living poster.

Design process

I think one of the challenges of a public art project is how to connect with the community you are working in, beyond just showing up and installing the work. We suggested a series of public workshops where students would make masks out of cardboard and help us brainstorm what our masks could look like. These workshops were super fun and also gave us many ideas about how things could assemble and move.

Another thing we found helpful was to build up a database of masks; these collections gave us plenty of inspiration as we were sketching.

mask database in pixave, a visual archive tool (we also used pinterest and slack)

We started by sketching in Illustrator; we would then save out SVGs to test against a moving face.
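
The round trip from Illustrator to the browser is pleasantly small. Here's a minimal sketch of the paper.js side, importing an exported SVG and wiggling it every frame to preview it in motion (the canvas id and file name are hypothetical):

```typescript
import paper from "paper";

// Attach paper.js to an existing canvas element.
paper.setup(document.getElementById("stage") as HTMLCanvasElement);

// Import a mask sketch exported from Illustrator ("mask.svg" is a
// hypothetical file name).
paper.project.importSVG("mask.svg", {
  onLoad: (mask: paper.Item) => {
    mask.position = paper.view.center;

    // Wiggle the mask every frame to see how it reads in motion.
    paper.view.onFrame = (event: any) => {
      mask.rotation = Math.sin(event.time * 2) * 10;
    };
  },
});
```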

In another round of sketching we started to look at how these graphical ideas would interact with the face; we were trying to figure out how to fade the face slightly so the graphics would pop.

Technical details

The graphical language we were excited about led us to explore paper.js as our front end. Paper.js is a JavaScript library that really feels like a programmatic version of Illustrator (it was created by two of the people who originally made Scriptographer, so this makes sense).

I had been excited about paper.js ever since I set up an installation next to the phenomenally gifted Lab212 at the Barbican. In their project, called Les métamorphoses de Mr. Kalia, they used an openFrameworks app to do Kinect skeleton tracking and sent that data to a browser, which ran paper.js and visualized the data. The visual style was unlike most OF or Processing apps, very graphical and magical:

I remember watching them make new scenes by sketching in Illustrator and importing SVGs, quickly and seamlessly integrating graphical ideas.

I can’t say enough good things about the kinds of graphical looks you can get with paper.js, which is built on top of canvas graphics. As someone who is used to OpenGL, I take it for granted that thick lines will mostly look terrible. Canvas is really good at things like drop shadows, gradients, and thick lines, which meant we could create poster-style graphics really easily.

Paper.js looks
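
To give a sense of why this was exciting: the poster-style marks we wanted are just a few property assignments in paper.js. A quick sketch (colors and sizes are purely illustrative, not from the project):

```typescript
import paper from "paper";

paper.setup(document.getElementById("stage") as HTMLCanvasElement);

// A thick, round-capped stroke: trivial on canvas, painful in OpenGL.
const brow = new paper.Path([
  new paper.Point(100, 120),
  new paper.Point(180, 90),
  new paper.Point(260, 120),
]);
brow.smooth();
brow.strokeColor = new paper.Color("#1d1d1b");
brow.strokeWidth = 26;
brow.strokeCap = "round";

// Drop shadows are one property assignment each.
brow.shadowColor = new paper.Color(0, 0, 0, 0.35);
brow.shadowBlur = 18;
brow.shadowOffset = new paper.Point(6, 8);

// Gradients are just another kind of color.
const cheek = new paper.Path.Circle(new paper.Point(180, 220), 60);
cheek.fillColor = new paper.Color({
  gradient: { stops: ["#ff5c5c", "#ffd23f"] },
  origin: cheek.bounds.topLeft,
  destination: cheek.bounds.bottomRight,
});
```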

As in the Mr. Kalia project, we used openFrameworks to do the tracking and sent the data to a browser running paper.js.

In our case we used CLM face tracking that sits on top of the dlib library. I have in the past used Jason Saragih’s face tracking code (and the super helpful openFrameworks wrapper ofxFaceTracker that Kyle McDonald created), but I had some reservations about it. In my experience it tends to slow down when it can’t find a face. I know there are threading solutions for that, but I wanted to find something more performant. In addition, I’ve had issues with ofxFaceTracker registering good mouth positions (on my face, my beard is pretty confusing to it) and with adverse lighting and contrast conditions. The interesting thing about dlib is that it uses HoG (histograms of oriented gradients) to find faces in the image, whereas ofxFaceTracker uses OpenCV’s Haar finder. I found HoG to work more reliably in adverse lighting conditions. Here’s a comparison video of the two detectors.

HoG face detection

Another thing that helped on the technical side is CLAHE (contrast limited adaptive histogram equalization), a local contrast adjustment. I found that in many instances we’d have poor contrast around the face, especially in situations where the figure was backlit. There was enough info in the camera image, but the kinds of contrast you want to see around facial features would be too flat to pick up. What CLAHE does is boost local contrast; in a way, it makes everything look like a charcoal sketch, since someone who is sketching focuses on tiny contrast differences as they draw. This amps up local contrast across the face and greatly improves the tracking. I recommend CLAHE to anyone experimenting with face tracking in natural (and adverse) lighting conditions. Some of our backlit situations were brutal and this algorithm was a lifesaver.

CLAHE makes everything look like a drawing!
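
In the installation this preprocessing happened on the openFrameworks side via OpenCV, but the technique is the same anywhere OpenCV runs. Here's a sketch of it with opencv.js, to stay in browser land (the canvas ids are hypothetical, and the clip limit and tile size are common starting values rather than our tuned ones):

```typescript
// opencv.js is assumed to be loaded globally via a <script> tag.
declare const cv: any;

function enhanceForTracking(inputCanvasId: string, outputCanvasId: string): void {
  const src = cv.imread(inputCanvasId); // RGBA pixels from a canvas
  const gray = new cv.Mat();
  const equalized = new cv.Mat();

  // The tracker cares about luminance, so work on a grayscale copy.
  cv.cvtColor(src, gray, cv.COLOR_RGBA2GRAY);

  // CLAHE: histogram equalization done per tile, with a clip limit that
  // keeps noise from being over-amplified. 2.0 and 8x8 tiles are common
  // starting values.
  const clahe = new cv.CLAHE(2.0, new cv.Size(8, 8));
  clahe.apply(gray, equalized);

  cv.imshow(outputCanvasId, equalized); // the "charcoal sketch" look

  // opencv.js objects are WASM-allocated and must be freed manually.
  src.delete();
  gray.delete();
  equalized.delete();
  clahe.delete();
}
```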

Here’s an example of some of the bad lighting situations we faced:

backlighting and side lighting were big issues for us

We send the data (the 68 points of the face, as well as the bounding circle and orientation) from OF, which finds the face, to paper.js, which visualizes it; the data travels over OSC on top of websockets.

face data on the openFrameworks side and in paper.js
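
The browser end of that pipeline is easy to sketch. The real installation spoke OSC on top of websockets; for brevity, this sketch ships an equivalent payload as JSON over a plain WebSocket, and the port and message shape are hypothetical:

```typescript
import paper from "paper";

paper.setup(document.getElementById("stage") as HTMLCanvasElement);

// A debug outline whose 68 segments get repositioned on every message.
const facePath = new paper.Path({
  segments: new Array(68).fill(null).map(() => new paper.Point(0, 0)),
  strokeColor: "black",
  closed: true,
});

// Hypothetical message shape from the OF app: 68 landmarks plus the
// bounding circle and orientation of the face.
interface FaceMessage {
  points: [number, number][];
  circle: { x: number; y: number; r: number };
  angle: number; // head orientation in degrees
}

const ws = new WebSocket("ws://localhost:8080/face"); // hypothetical port/path
ws.onmessage = (event: MessageEvent) => {
  const face: FaceMessage = JSON.parse(event.data);
  face.points.forEach(([x, y], i) => {
    facePath.segments[i].point = new paper.Point(x, y);
  });
  facePath.smooth();
};
```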

One technical challenge of using a browser-based front end (with openFrameworks on the backend) was that we wanted the video from the webcam in both places. On the OF side we needed video to do the tracking, but we also wanted to show the face live in the browser with paper.js as a kind of augmentation. We looked endlessly at solutions that would let us pipe the video from openFrameworks into the browser (WebRTC, streaming, grabbing the desktop, etc.), but we couldn’t find one that worked well. At some point we even tried CEF (the Chromium Embedded Framework), which embeds Chrome in openFrameworks as a texture, but we found it wasn’t very performant.

on the left, dat.gui running in CEF inside of openFrameworks — on the right, an OF app compiled to emscripten to run in the browser, running in CEF, in OF (inception!)

In the end we settled on a solution where the camera is open in both OF and the browser. The camera feed in the browser sits under the paper.js canvas:

Live video in the browser with paper.js on top; notice how the paper.js graphics blend nicely with the underlying video
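
The layering itself is just DOM and CSS: a video element filled by getUserMedia, with the paper.js canvas absolutely positioned over it. A sketch, with hypothetical element ids:

```typescript
// Assumed markup, two elements stacked in one container:
//   <div style="position: relative">
//     <video id="cam" autoplay playsinline></video>
//     <canvas id="overlay" style="position: absolute; top: 0; left: 0"></canvas>
//   </div>
const video = document.getElementById("cam") as HTMLVideoElement;

// The same physical camera is already open in the OF app for tracking;
// the browser opens it a second time purely for display.
navigator.mediaDevices
  .getUserMedia({ video: { width: 1280, height: 720 } })
  .then((stream) => {
    video.srcObject = stream;
  })
  .catch((err) => console.error("camera unavailable:", err));
```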

The camera is a UVC-style camera, and we send it commands from the OF app to change its exposure levels as the light changes (we found its own auto settings were too aggressive with the amount of backlight we were seeing at some points of the day). This isn’t 100% stable (the camera would freeze about once a day), so we wound up building a script that detects issues and restarts the software. I added some code to post to a Slack channel so I could monitor this; my phone now buzzes on every restart.
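
Our watchdog lived at the script level, but its shape is easy to sketch as a small Node process. Everything here is hypothetical (the health check URL, the restart command, and the Slack webhook), but it shows the pattern: poll, restart on failure, and post to Slack so a phone buzzes:

```typescript
import { exec } from "node:child_process";

const HEALTH_URL = "http://localhost:3000/health"; // hypothetical
const SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX"; // hypothetical
const RESTART_CMD = "launchctl kickstart -k gui/501/com.example.masks"; // hypothetical

// Slack incoming webhooks accept a simple JSON body.
async function notifySlack(text: string): Promise<void> {
  await fetch(SLACK_WEBHOOK, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
}

async function check(): Promise<void> {
  try {
    const res = await fetch(HEALTH_URL, { signal: AbortSignal.timeout(5000) });
    if (!res.ok) throw new Error(`status ${res.status}`);
  } catch (err) {
    exec(RESTART_CMD); // kick the frozen app
    await notifySlack(`masks app restarted: ${err}`); // this is the phone buzz
  }
}

setInterval(check, 30_000); // poll every 30 seconds
```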

There were some interesting pros and cons to using Chrome as the front end. One challenge is that Chrome tends to aggressively update itself, and at some point during the year our fullscreen / kiosk code broke because Chrome changed a menu item and our AppleScript code was out of date. I found out that you can use Chrome Canary, and there are ways to prevent auto-updating, but this is something we didn’t consider at the start. Another con is that when you use video in Chrome you need HTTPS, which adds a level of complexity for things like websockets and node.

On the plus side, there are great debugging tools built into Chrome, and as old-school C++ programmers we found that if we put “use strict” at the top of our JavaScript files, we could use ES6-style class constructs without a transpiler like Babel. There are also some interesting features we used, like CSS3 filters to make the video go “duotone” as the mask comes on top. We also found some helpful JavaScript libraries, like matter.js for physics and dat.gui for GUIs.
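
Here are two of those niceties in one small sketch: an ES6-style class (with “use strict” up top, per the note above) that pushes the live video toward a flat, two-tone look as a mask arrives. Grayscale plus sepia is just one way to approximate a duotone with CSS filters, and the class and element names are hypothetical:

```typescript
"use strict"; // modules are strict by default; shown here to echo the tip above

// Pushes the live video toward a flat, two-tone look whenever a mask
// is drawn on top of it, so the paper.js graphics pop.
class VideoTone {
  private video: HTMLVideoElement;

  constructor(videoId: string) {
    this.video = document.getElementById(videoId) as HTMLVideoElement;
  }

  maskOn(): void {
    // grayscale + sepia as a rough stand-in for a real duotone
    this.video.style.filter = "grayscale(1) sepia(0.5) contrast(0.9)";
  }

  maskOff(): void {
    this.video.style.filter = "none";
  }
}

const tone = new VideoTone("cam"); // "cam" is the hypothetical video element id
tone.maskOn();
```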

Other technical details: we used Dynascan ultra-bright TVs so the displays could work in full sunlight, and DMX-controlled photo lights at night to turn the window around each TV into a sort of soft lightbox. The software ran on OS X laptops, and we used Logitech C920 cameras.

Team

One of the joys of a project like this is the amazing team that comes together to make it possible. Molmol Kuo, my partner in YesYesNo, provided amazing art direction. Gordey Cherney was a huge help with creating masks and with the frameworks for the Illustrator-to-paper.js workflow. Matthias Dortfelt also contributed several generative masks.

We’re really thankful for the team from the Weingarten Art Group, including Lea Weingarten and Piper Faust, who were instrumental in making this a success.

From the Downtown District we’d like to thank Angie Bertinot, Jacqueline Longoria, Joe Maxwell, and Lonnie Hoogeboom, who were amazing to work with. Also thanks to Onézieme Mouton and team for building a great structure to house the gear.

One of our planning sessions in the Garage

Finally, some shots of it in action:

(Some screen caps from remote viewing; it was cool to log in and watch people use it.)
