
Facial recognition software should raise eyebrows

by Adam Henschke
18 September 2015
TECHNOLOGY
Our faces identify us and represent us to others – some think they are fair game for digital surveillance. Adam Henschke asks if there’s something creepy about this.
 
Computers have been able to recognise faces for a while. The technology is used in cameras, smartphones, Facebook and even advertising billboards. In a recent development, the Australian government announced that it aims to use facial recognition for national security purposes.
 
The government argues that facial recognition software, ominously called ‘The Capability’, is crucial for national security. Privacy advocates are critical of this, believing it to be invasive. Cybersecurity analyst Patrick Gray called it “a whole other league of creepy”.
 
But is it really creepy? Our faces are the prototypical means by which we are identified. The ability to recognise faces is a critical element of both human cognition and sociality. Prosopagnosia, the inability to recognise faces, can seriously damage a person’s quality of life.
 
Given that facial recognition is something we naturally do, and need to do, maybe critics of The Capability are simply confused. On this view, the software may feel creepy, but it isn’t: our faces aren’t private material – they’re how we present ourselves to others, the government included.
 
Opponents of facial recognition need to show how the government’s use of The Capability differs from the way humans use non-technological facial recognition every day. The key difference is that the former converts the transient act of seeing a face into a permanent piece of information.
 
To clarify, when I recognise a face I retain the visual information in my mind. Unless I act upon it, for example, by telling someone, “I saw Fred today”, the information has limited spread. What’s more, the less I know about you, the less information I can provide. “I saw a woman with blue eyes” isn’t particularly helpful or invasive information to share.
 
Technology is different.
 
Information technologies are unique partly because they share information. The internet’s revolutionary impact has been to enable and encourage the flow of information. Facial recognition software enables more than mere facial recognition. It enables the production, communication, aggregation, analysis and storage of that information.
 
Moreover, the information does not fade – if I see your face in a crowd, I’m likely to forget it over time. By contrast, if software recognises your face at a certain place and time, that record can be retained permanently.
 
It is possible for that information to be used, reused, altered and analysed in countless ways. And we rarely know who is accessing information about us, or under what conditions – a major criticism of other forms of digital surveillance.
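 
To make the contrast with human memory concrete, here is a minimal sketch of how a single recognition event becomes a permanent, queryable record. The schema and function names are hypothetical – this illustrates the general pattern, not any real system.

```python
# A minimal sketch of why machine recognition differs from human memory:
# each match becomes a permanent, queryable record. The schema and names
# are hypothetical -- an illustration of the pattern, not a real system.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("sightings.db")  # persistent storage: nothing is forgotten
conn.execute(
    "CREATE TABLE IF NOT EXISTS sightings (person_id TEXT, location TEXT, seen_at TEXT)"
)

def record_sighting(person_id: str, location: str) -> None:
    """Store one recognition event; it can be reused and analysed indefinitely."""
    conn.execute(
        "INSERT INTO sightings VALUES (?, ?, ?)",
        (person_id, location, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()

def movements(person_id: str) -> list[tuple[str, str]]:
    """Aggregation: reconstruct one person's movements at any later date."""
    rows = conn.execute(
        "SELECT location, seen_at FROM sightings WHERE person_id = ? ORDER BY seen_at",
        (person_id,),
    )
    return rows.fetchall()
```

Unlike a witness’s memory, a query over this table returns the same answer in ten years as it does today – the record does not fade, and anyone with access can aggregate it.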
 
I think this is part of the reason why facial recognition software seems so creepy and why we should be concerned about it. As Daniel Solove argued in his book The Digital Person, we often have no idea who is watching us, and what they’re doing with our personal information. This irony is perhaps the definitive ethical issue regarding information technology and privacy – those who have the capacity to watch and know us remain themselves faceless.


This is a form of informational inequality. They know something about us, but we know nothing about them. It becomes a moral concern when such tools are used in the national security context without sufficient oversight.
 
Privacy can only be overridden in particular circumstances – without sufficient reason, invasions of privacy are unjustified and unethical. This is why formal oversight is so important. Without it, some worry, our privacy will be violated without adequate justification.
 
A lack of oversight could also lead to information being used in ways that are both irresponsible and harmful to innocent people. Consider the danger of a false positive – where software wrongly identifies someone as a criminal and the person therefore suffers undue attention from law enforcement. The recent incident involving James Blake shows that such misidentification is both possible and dangerous.
 
We should also be mindful of false negatives. Installing facial recognition software in airports, for example, could be of limited use and might lull airport staff into a false sense of security – which has happened before.
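 
Both failure modes stem from the same mechanism: recognition software typically scores the similarity between numerical face ‘templates’ and declares a match above some threshold. The toy example below – the vectors and thresholds are made up for illustration – shows how tuning that threshold only trades one kind of error for the other.

```python
# A toy illustration of the false positive / false negative trade-off.
# Real systems compare face "embeddings" (numeric vectors); these vectors
# and thresholds are invented for the example.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

def is_match(probe: list[float], watchlist_entry: list[float], threshold: float) -> bool:
    # A lax threshold catches more real suspects but misidentifies more
    # innocent people (false positives); a strict threshold does the
    # reverse, letting genuine matches slip through (false negatives).
    return cosine_similarity(probe, watchlist_entry) >= threshold

innocent_traveller = [1.0, 0.0, 0.2]  # hypothetical template of a bystander
watchlist_suspect = [0.8, 0.5, 0.1]   # hypothetical template of a suspect

print(is_match(innocent_traveller, watchlist_suspect, threshold=0.8))  # True: a false positive
print(is_match(innocent_traveller, watchlist_suspect, threshold=0.9))  # False: stricter, but risks false negatives
```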
 
‘Creepy’ is a difficult term. Like many intuitive responses, it entails a degree of subjectivity. What’s more, there might be good reasons to use measures some find creepy. National security isn’t an issue to dismiss lightly.
 
However, in the case of facial recognition software, our intuitive response is onto something. There is something morally distinct about these sorts of informational tools, and as such, they require effective oversight.

 
Dr Adam Henschke is a Research Fellow at the National Security College at ANU’s College of Asia and the Pacific.
