Have you taken the 10-year challenge that has swept Facebook over the past few weeks? If so, you could be contributing to the spread of facial recognition technology.
Pairing a current profile photo with a shot of yourself from a decade ago may seem to be little more than some harmless fun. But this challenge also hands Facebook a set of data perfectly suited to training its facial recognition algorithms.
Though still in their relative infancy, facial recognition algorithms currently power everything from entertainment platforms (allowing you to log into your smartphone with a smile or, perhaps, see what you look like as an emoji unicorn) to security interfaces and customer service portals.
There are wider applications, too: The city of Atlanta recently opened the world’s first biometric airline terminal, using facial recognition technology to scan passengers for check-in, baggage and security clearance. Historians have suggested that the technology might help identify people from Civil War-era photographs. And tech entrepreneurs serving the law enforcement community are working on recognition tools that might preemptively identify potential global terrorists, school shooters or other bad actors.
The current spread of biometrics has a lot in common with the 19th-century phrenology craze. Each is, on its face, fun, diverting and convenient, but each has the potential to harm the vulnerable and the unwitting. As biometrics develops, we must insist upon safeguards that prevent it from following the path trod by phrenology.
At its simplest, the 19th-century pseudoscience known as phrenology held that the shape of a person’s head — their own special combination of curves, lumps and bumps — was a guide to their disposition and the proportionality of certain traits or tendencies.
The anatomist Franz Joseph Gall developed “cranioscopy” at the turn of the 19th century as an extension of his theories around localized brain function. If we have dozens of functions and skills that emanate from the brain, Gall reasoned, and if each of these is localized to a certain region of the organ, being able to read someone’s neural map would theoretically give you a lot of information about the sort of person they were.
Later scientists and practitioners — particularly the brothers Orson and Lorenzo Fowler, who did more than anyone else to popularize the discipline in America — built phrenology on the supposition that you didn’t even have to see the brain to draw those conclusions, but could map someone’s character just by examining the shape of the skull.
In New York City, the Fowlers’ American Phrenological Institute was a popular tourist destination, a studio and “phrenological cabinet” where visitors could read the latest phrenology journals, stop to take in the busts and skulls of “distinguished and notorious men” and even sit for their own examination.
With the Fowlers, phrenology was like a turbocharged horoscope, a way to know yourself, shore up your deficiencies and live your best life. The brothers believed — and promised their customers — that people could refine their skill and behavior and, thereby, change a phrenological reading.
They identified more than three dozen regions of the head that had an alleged effect on disposition and conduct, labeling each with grand terms like “philoprogenitiveness” (parental love), “alimentiveness” (appetite) and “approbativeness” (ambition). In diagrams and on porcelain bald heads decorated with a dense graphic forest of symbolic vignettes, phrenologists pointed out where one might find the root of every possible impulse: destructiveness wrapped around the top of one’s ear, while “inhabitiveness,” or love of home, sat at the back lower ridge of the skull.
The Fowlers advertised phrenology as a sword that could slice through the Gordian knot of human relations: a method for employers to choose the best hires or entrepreneurs to select their business partners; a means for doctors to properly diagnose mental illness; a tool for parents and teachers to evaluate the children in their care; and, yes, of course, a way to find Mr. or Mrs. Right.
But while phrenology served as casual amusement for the curious and well-to-do, like Edgar Allan Poe and Walt Whitman, it was also the supporting beam for problematic theories of criminology. According to phrenology’s advocates, criminal intent was not a question of free will, theology or philosophy: It was a person’s innate physiology and appearance that decided their propensity for criminal action. Enthusiasts documented the supposedly telltale lumps and bumps of criminals, people of color, immigrants and the mentally ill to demonstrate the biologically grounded superiority of wealthy white society and its preferred social reform agenda.
In 1846, prison matron Eliza Farnham created an illustrated American edition of a European text on phrenology and criminal justice. Farnham added clinical drawings depicting the revealing cranial contours of prisoners and, in an effort to make the work even more compelling, a gallery of engraved illustrations made from photographs of male, female and juvenile prisoners taken by Mathew Brady, later famous for his Civil War photography.
The “science” generally favored men and women of strong, classically white Western features: George Washington was often cited as a paragon of phrenological virtue, an “equally balanced” man with vital, well-boned features calling forth elegance and strength. By contrast, the Fowlers considered people of African descent “deficient in reasoning capacity,” although with “excellent memories and lingual and musical powers.” Farnham wrote of one prisoner, a former Irish prize fighter, that “he exhibited great energy of passion and purpose, but they were all of a low character, their sole bearing being to prove his own superiority as an animal.”
The racist and bigoted usage of phrenology should sound alarm bells as we consider biometrics today.
Only a handful of states currently regulate biometrics. Illinois was the first, in 2008, enacting a law that requires businesses using biometric data — whether for employee time management, tailored marketing, security measures or otherwise — to obtain consent for collection, and to protect that data in the same manner as they do more traditional stores of confidential and sensitive information. That more states have not followed suit is concerning, both because databases have been proved to reflect extant social biases (accuracy rates have been shown to be lower when systems identify people of color, for example), and because the potential for misuse or overreach is large and not yet fully understood. You can, after all, change a password after a data breach — but not your face or your fingerprint.
A hundred and fifty years ago, phrenology may have been a fun diversion if you had your head’s contours read on a lark; less so if its conclusory logic was being used to throw you in prison or mark you as a social malcontent. Today, facial recognition may be pleasantly useful when it can admit you to a baseball game, but may seem far less so if a distant database decides, on the basis of preprogrammed visual assumptions, that you are likely to be criminally violent.
A recent challenge to Illinois’s law suggested that plaintiffs should have to demonstrate actual harm to take legal action, but in an age of surveillance capitalism, traditional concepts of harm are inadequate to describe what may happen behind the analytical curtain. Whether it’s our own eyes or a computer’s that are drawing conclusions, it pays to be skeptical of what we think appearances can tell us, and to embody that skepticism in meaningful restrictions on business and government use of biometric data.