Eyes Without A Face: How How-Old.net Showcases The Need for Context

June 9, 2015, in Blog

First off, if you haven’t tried the site How-Old.net you should. It’s a cool little display of facial recognition tech in an easy-to-use web app. But part of what makes it fun is how entertaining it is when it’s wrong. And what makes it wrong, and makes it fun to be wrong, is a direct analogy to the problems we see when firms or brands rely on dashboards for their social insights.

Incidentally, the story of the app is an interesting one, from the data behind it to the way word spread. What’s amusing, though, is that the focus of the project – and clearly its purpose – is correctly identifying age and gender based on facial recognition technology.

Not The Face

Now re-read that: facial recognition. As in, just the face.

Certainly the face can tell you a lot – from lines to shapes to all kinds of telling (and oft-lamented) features that change over time. And certainly there’s enough that is unique about the face to make facial recognition super-helpful in social (Facebook, iPhoto) or service (ID verification, security) settings. But using it as a sole source to detect age can be tricky. For example, one might think that balding is an indication of age – but How-Old and facial recognition don’t take the hairline, or the head at all, into account. In other words, they don’t take in the context around the face. Think of all the other visual cues of age beyond the face and you’d wonder why you wouldn’t take them into account – and the answer is the tech, of course. But it’s also because context is tough, and something humans are better at piecing together than algorithms (my favorite example is a dashboard demo for a firm we love, who had to laugh when the sales rep’s own demo showed that the top five Tweets marked “negative” were actually positive).

It’s not that facial recognition isn’t interesting or telling – it’s that it’s incomplete unless all you’re after is a match on the face’s features alone.


Santa’s Beard

But what if the face is covered? Which brings us to the Santa example seen here: a no-go for the data pile – it looks like Santa doesn’t show up to the detectors at all (or, as Jessica Hagy remarked, “Vampire!”). So in some cases, not being in a form the recognition can “see” means you don’t exist. We see this parallel in social listening dashboards as well – an image isn’t something they can read, meaning you could miss out on thousands of fans on a message board because a search engine can’t see the badges of loyalty to a brand that sit in their signatures.
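The trouble isn’t just that the detector misses Santa – it’s that the miss is silent, so every aggregate built on top of it quietly excludes him. A toy sketch (every name and number here is hypothetical, not How-Old’s actual pipeline) of how silently dropped faces skew a dashboard’s “average age”:

```python
# Toy illustration: a detector that silently skips faces it cannot
# parse, and what that does to the aggregate "insight".

def toy_detector(photo):
    """Pretend face detector: returns an estimated age, or None when
    the face is occluded (beard, mask, costume) and nothing is "seen"."""
    if photo["occluded"]:
        return None  # Santa simply doesn't exist to the algorithm
    return photo["true_age"]  # assume a perfect guess when the face is visible

photos = [
    {"true_age": 70, "occluded": True},   # Santa: dropped silently
    {"true_age": 72, "occluded": True},   # another bearded fan: dropped
    {"true_age": 25, "occluded": False},
    {"true_age": 30, "occluded": False},
]

# The dashboard averages only what it could "see"...
estimates = [age for p in photos
             if (age := toy_detector(p)) is not None]
dashboard_avg = sum(estimates) / len(estimates)

# ...while the real audience looks quite different.
actual_avg = sum(p["true_age"] for p in photos) / len(photos)

print(dashboard_avg)  # 27.5
print(actual_avg)     # 49.25
```

The dashboard reports a twenty-something audience; half the room is in their seventies. Nothing in the output even hints that two photos were thrown away.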

Amused To (Data) Death

What’s more, if you receive a result you’re displeased or amused by (read: not accurate), you’re more likely to keep trying the app. So the data becomes less an accurate assessment of users and more a behavioral study of error – which could be interesting in its own right, but isn’t the point of the exercise. When my first photo pegged me as 66 instead of 38, it became an entire game for me to screen-grab as many hilariously inaccurate results as I could.

So if you were trying to pull real insights out of the exercise, you would be without context, accuracy, or even honest results. Again, it’s not to say it isn’t fun or helpful – it’s just also a great example of how so-called “big data” can actually be missing the biggest insights, even by design. Think of the dashboards that skim Twitter but don’t understand ironic comments, or the ones that ignore most forums and message boards, let alone “read” the images within them. Without context you can grab all sorts of data – and be completely, dead wrong about the combined “insights.” It’s fun, it’s cool, it likely comes with really pretty graphs, and it doesn’t actually tell the story. You’re playing a game of telephone with your audience’s behavior – and you’re at the wrong end of the game.
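To make the irony problem concrete, here is a toy sketch of the keyword-counting “sentiment” approach many dashboards use (the word lists and scoring are hypothetical, not any real product’s):

```python
# Toy keyword-based sentiment scorer: counts positive vs. negative
# words with no understanding of tone or context.

POSITIVE = {"great", "love", "awesome", "thanks"}
NEGATIVE = {"hate", "awful", "broken", "worst"}

def keyword_sentiment(tweet):
    """Score a tweet by keyword hits alone - no irony detection."""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# Sarcasm: every keyword reads as praise, yet the meaning is a complaint.
tweet = "Oh great, the app crashed again. Thanks a lot, love this awesome service!"
print(keyword_sentiment(tweet))  # "positive" - the exact opposite of the intent
```

Four “positive” keywords, zero “negative” ones, and the complaint lands in the praise column – the same failure the sales rep’s own demo showed.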

It’s not unlike why we use real human beings at Feedback – people with sociology, anthropology, and psychology backgrounds using time-tested ethnographic techniques – rather than relying on algorithms. We look at the whole context of the picture. It’s why we can tell there’s sarcasm in a Tweet, and can tell the photo is of a little boy dressed up as George Washington.

by: Dean Browell, PhD — EVP and Co-founder of Feedback
