The last time you passed through the airport, did you use the automatic passport scanner? If so, you’ve benefited from facial recognition technology. If you’ve ever shopped at a Tesco petrol station in the UK and found that the digital signage is showing you advertising that’s strangely relevant to your interests – again, you’ve benefited from facial recognition technology. And if you haven’t yet used your face to unlock your smartphone, gain access to your office or home, or even pay for something – you soon will.
Of all the applications of artificial intelligence, machine vision and deep learning, the ability to recognise objects is perhaps the most exciting. It is, for example, a key enabling technology for driverless cars, and is widely used to automate many manufacturing processes.
Recognising faces, however, moves the technology to a whole new level. It’s relatively easy to teach a machine to recognise a banana, for example, or a helicopter – but every human face is different. That, however, is what makes facial recognition such a powerful tool.
Using computers to recognise faces has been the goal of much experimentation since the 1960s. The technology has progressed hugely since then, thanks to three things: very powerful computers, capable of processing huge amounts of data almost instantly; very high resolution sensors that can capture the smallest detail; and artificial intelligence software.
How does it work? The ‘landmarks’ of a face – the shape of the eyes, how far apart the eyes are, the outline of the chin and nose and so on – are, in combination, unique to each individual. Facial recognition technology will typically use around 80 of these reference points, captured by high resolution sensors. AI-based algorithms, running on powerful computers, will analyse the results; and those results will then be compared, in almost real time, with a database of ‘known’ faces. Just as no two fingerprints are the same, the combined values of those ~80 reference points are unique to one person.
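In code, that matching step can be sketched very simply. The example below is a hypothetical illustration, not a real system: it assumes each face has already been reduced to a short vector of landmark measurements (eye spacing, nose width and so on – real systems use around 80 reference points, and more sophisticated distance measures), and it simply finds the closest entry in a small database of known faces.

```python
import math

# Hypothetical database: each 'face' is a short vector of landmark
# measurements. Names and values are purely illustrative.
known_faces = {
    "Alice": [0.42, 0.31, 0.58, 0.27],
    "Bob":   [0.39, 0.45, 0.61, 0.33],
}

def identify(probe, database, threshold=0.1):
    """Return the name of the closest known face, or None if nothing
    in the database is near enough to count as a match."""
    best_name, best_dist = None, float("inf")
    for name, vector in database.items():
        d = math.dist(probe, vector)  # Euclidean distance between vectors
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

print(identify([0.41, 0.32, 0.57, 0.28], known_faces))  # close to Alice
print(identify([0.90, 0.90, 0.90, 0.90], known_faces))  # no match: None
```

The threshold is what separates “same person, slightly different photo” from “different person” – tuning it is one of the hard parts of a real deployment.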
Here’s where it gets interesting for the owners of film and video archives, though. The face doesn’t have to be ‘real’: it can be a photographic image. By scanning film or video, for example, powerful AI-based software can recognise that a face is present – and then compare its unique characteristics with a database of known faces.
At Vintage Cloud, our business is all about digitising those archives – passing analog material through a digitizer and turning it into a digital file that preserves the content forever. That’s only part of the magic, though. The real magic comes when, as we digitize it, we use AI-based software to search for objects – a fireman’s helmet, for example, or a 1972 Chevy Camaro. We call it ‘Smart Indexing’. The resulting metadata is automatically added to the file we’re creating – which makes it more easily searchable. That simple searchability adds real value to the content, because it makes it simpler to monetize.
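Conceptually, Smart Indexing boils down to attaching detected labels to each clip as searchable metadata. The sketch below is a simplified, hypothetical illustration – the filenames, field names and detection step are invented for the example; in reality the labels would come from an object-recognition model.

```python
# A toy in-memory 'archive': each entry pairs a clip with the labels an
# object-recognition pass (not shown here) detected in its frames.
archive = []

def index_clip(filename, detected_objects):
    """Store a clip together with its auto-detected labels as metadata."""
    archive.append({"file": filename, "tags": set(detected_objects)})

def search(tag):
    """Return every clip whose metadata contains the requested label."""
    return [clip["file"] for clip in archive if tag in clip["tags"]]

index_clip("reel_014.mov", ["fireman's helmet", "fire engine"])
index_clip("reel_022.mov", ["1972 Chevy Camaro", "petrol station"])

print(search("1972 Chevy Camaro"))  # → ['reel_022.mov']
```

In a production system the metadata would live in the file itself or a searchable database rather than a Python list, but the principle – labels in, instant search out – is the same.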
The next level
But now, we’ve taken that capability to the next level. Now, we can recognise not just objects but faces, using exactly the same facial recognition technology described earlier. When our software encounters a new name, it automatically searches Google; when it finds the matching name, it adds the associated face – and any other relevant information – to our face database. Very soon, that database will contain no fewer than 100,000 faces that our software can instantly recognise.
For content owners who regularly receive requests like “Do you have a clip of …?” or “I’m looking for an image of this famous actor in a movie…”, this functionality can be invaluable. And, so far as we know, the capability is unique in the broadcast industry – which is why some of the world’s biggest broadcasters (and thus content archive owners) are currently evaluating it. Using Vintage Cloud’s AI-based facial recognition software can transform the value of their archive – and, therefore, their ability to derive income from it.