In the spring of 2016, the global press was abuzz over a new app, FindFace, which let users find someone’s profile on the Russian social media platform VKontakte simply by uploading a photo of that person. When Russian photographer Egor Tsvetkov heard about it, he had to try it.
He photographed strangers on the metro in Moscow and St. Petersburg, ran the photos through the app, and within seconds tracked down many of their profiles. For his series “Your Face Is Big Data,” Mr. Tsvetkov placed his photos from the metro alongside the images he found on their VKontakte profiles.
“This project could be named as a photo project,” he said via email, “but as for me it exists in the field of media activism.”
By documenting an individual’s swift transition from anonymity to identification, Mr. Tsvetkov wants his diptychs to demonstrate the power and accessibility of today’s facial recognition systems — and the threat they present to privacy. But his project doesn’t visualize a crucial midpoint in that transition — namely, the process by which facial recognition systems analyze individuals’ features in order to identify them. The omission is largely because of technological limitations: Facial recognition systems are software applications and operate out of sight.
Behind the scenes, however, those systems are playing a growing role in daily life. Researchers at Georgetown Law estimated in 2016 that half of all American adults are in law enforcement facial recognition databases. Stores use the technology to spot shoplifters. Facebook uses it to suggest the names of people in photos users upload. Apple relies on it to enable iPhone X owners to unlock their devices, and carmakers are beginning to use it to allow drivers to unlock their vehicles.
How does a facial recognition system determine what makes a face distinct, and how might that determination affect the way powerful public and private institutions see people? A handful of photographers have pushed the technological limits of photography to explore those questions visually. Their results might be viewed as representative of a new kind of portrait photography, one that fuses mathematics and aesthetics to bridge the gap between human and machine vision.
“If you’re in a department store that’s using facial recognition, it’s recording an image of your face but that image, as it were, is just a bunch of ones and zeros that’s used by an algorithm,” said Trevor Paglen, a conceptual artist and 2017 MacArthur Fellow. “At no place in that loop is an image in a native form that’s viewable by a human.”
But Mr. Paglen said people deserve the ability to visualize how facial recognition systems understand the human face, especially as those systems become more ubiquitous. Arguably, they are becoming the single most important way in which people are identified.
“The passport, the driver’s license, and the employee’s badge are relics of the analog era, tools of last resort by security personnel who otherwise passively oversee the invisible, electronic transaction between persons and machines,” wrote John P. Jacob in the catalog for “Trevor Paglen: Sites Unseen,” an exhibition of Mr. Paglen’s work at the Smithsonian American Art Museum through January.
In his series “It Began As a Military Experiment,” Mr. Paglen revisits a seminal moment in the development of facial recognition technology. The ten portraits of government employees in the work are drawn from a database compiled for the Defense Advanced Research Projects Agency’s Facial Recognition Technology program, or FERET.
Originally made in the 1990s, the photos were used to develop and test early facial recognition algorithms, including many that paved the way for today’s commercial systems. As an artist-in-residence at Stanford University last year, Mr. Paglen worked with researchers and students to identify and reproduce the keypoints — the locations of essential facial features — that facial recognition algorithms measure to distinguish one person from another.
“It’s more like fingerprinting than classical portraiture,” Mr. Paglen said.
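The "fingerprinting" analogy can be made concrete. The sketch below is not Mr. Paglen's or any production system's method; it is a minimal, hypothetical illustration of the idea that a face can be reduced to a vector of measurements between keypoints, and that two photos of the same person yield nearly the same vector. The landmark coordinates are invented toy data.

```python
import numpy as np

# Hypothetical keypoints: (x, y) pixel positions of five facial
# landmarks (eye corners, nose tip, mouth corners) in two photos.
face_a = np.array([[30, 40], [70, 40], [50, 60], [38, 80], [62, 80]], float)
face_b = np.array([[31, 41], [69, 40], [50, 61], [39, 79], [61, 80]], float)

def signature(points):
    """Pairwise distances between landmarks, normalized for scale.

    Like a fingerprint, the ratios of these distances stay roughly
    constant for one person across different photos.
    """
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    upper = np.triu_indices(len(points), k=1)  # each pair once
    vec = dists[upper]
    return vec / vec.sum()

# A small difference means the two photos likely show the same face
# under this toy metric.
score = np.linalg.norm(signature(face_a) - signature(face_b))
```

The point of the sketch is only that identification happens on the measurement vector, never on anything a human would recognize as a portrait.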
Around the same time, Mr. Paglen used the eigenface method of facial recognition to depict dead political, artistic and scientific radicals for a series called “Even the Dead Are Not Safe.” Through that method, a system subtracts from someone’s face the features it has in common with other faces in a database, storing the difference as a kind of personal bar code. Using that bar code, a system could easily recognize this same individual’s face in other images. A human viewer, however, would likely find the blurry, ghostly visualizations Mr. Paglen has created of those bar codes to bear only a modest resemblance to the person depicted.
“What you’re seeing is not a photorealistic image of the face,” he said. “You’re seeing an image that is the most probable statistical distribution of the values of the pixels in an image of that person.”
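The eigenface method Mr. Paglen drew on can be sketched in a few lines. The code below is a simplified toy version using random data in place of a real face database: the "mean face" stands in for the features all faces share, and a face's projection onto the leading components is the "bar code" the article describes. It is illustrative only, not the system used for "Even the Dead Are Not Safe."

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "database": 50 flattened 8x8 face images (synthetic stand-ins).
faces = rng.normal(size=(50, 64))

# Subtract the mean face -- the features all faces have in common.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# The principal components of the centered faces are the "eigenfaces".
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]  # keep the 10 strongest components

def barcode(face):
    """A face's 'bar code': its difference from the mean face,
    projected onto the eigenfaces."""
    return (face - mean_face) @ eigenfaces.T

# Recognition: find the database entry with the nearest bar code.
codes = centered @ eigenfaces.T
query = faces[7] + rng.normal(scale=0.05, size=64)  # noisy new photo
match = np.argmin(np.linalg.norm(codes - barcode(query), axis=1))
```

Reconstructing an image from such a bar code recovers only a statistical approximation of the face, which is why Mr. Paglen's visualizations look blurry and ghostly rather than photorealistic.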
Facial recognition technology, these images suggest, is best at recognizing a statistically sound identity rather than faithfully capturing a person’s likeness. The portraits in Oliver Chanarin and Adam Broomberg’s “Spirit is a Bone” offer further proof. To create them, the artists used a facial recognition system developed by software engineers in Moscow for public security and border control surveillance. In an instant, the system captured willing participants — including Yekaterina Samutsevich of Pussy Riot — from multiple angles in order to construct low-resolution 3-D models of their heads. Stripped of shadows and rendered through a rough patchwork of fragments, the models ultimately look more humanoid than human.
“This idea of photography as a kind of humanistic endeavor is gone,” Mr. Chanarin said.
Dissatisfaction with a photographic portrait’s ability to adequately grasp a subject’s essence dates back to the genre’s 19th-century beginnings. Jan von Brevern, an art history professor at the Free University of Berlin, said that dissatisfaction was rooted in the “gap between what was expected of the then-new medium of photography and what it actually delivered.”
“As a technical medium, it was considered to be ‘exact’ — and indeed most early accounts of photography talk about this supposed accuracy,” he said via email. “But actually, it produced a very specific kind of accuracy — one that wasn’t necessarily in line with what sitters expected from portraits at that time.”
Accounts of today’s facial recognition systems often elicit similar expectations of exactitude. But in “Machine Readable Hito,” a grid of portraits of the German artist Hito Steyerl, Mr. Paglen disabuses viewers of that notion. Beneath each photo of Ms. Steyerl, Mr. Paglen includes a facial recognition system’s analysis of her emotional state and gender, among other characteristics. The results vary significantly. The analysis of Ms. Steyerl’s gender, for instance, changes based on whether she’s smiling or frowning. Where exactly did this system get its ideas of masculinity and femininity?
By showing viewers in clear, visual terms how facial recognition systems read and understand people, Mr. Paglen, Mr. Chanarin and Mr. Broomberg want to help viewers question their perceptual accuracy. They ultimately want viewers to look beyond the systems’ own veneer of anonymity and to see those systems for what they are: the products of human beings with their own distinct prejudices and preferences.
“When we’re talking about technologies like this,” Mr. Paglen said, “the main questions you should be asking at the beginning and the end of the day is ‘Whose interests do these particular technologies serve and at whose expense do they come?’”